tExtractPositionalFields

Extracts data and generates multiple columns from a formatted string using positional fields.

tExtractPositionalFields generates multiple columns from one column using positional fields.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:

- Standard: see tExtractPositionalFields Standard properties. The component in this framework is available in all Talend products.
- MapReduce: see tExtractPositionalFields MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tExtractPositionalFields properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tExtractPositionalFields properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tExtractPositionalFields Standard properties

These properties are used to configure tExtractPositionalFields running in the Standard Job framework.

The Standard tExtractPositionalFields component belongs to the Processing family.

The component in this framework is available in all Talend products.
Basic settings
Field: Select an incoming field from the Field list to extract.
Ignore NULL as the source data: Select this check box to ignore the Null value in the source data. Clear this check box to generate the Null records that correspond to the Null value in the source data.
Customize: Select this check box to customize the data format of the positional file and define the table columns:
- Column: Select the column you want to customize.
- Size: Enter the column size.
- Padding char: Type in, between inverted commas, the padding character used, so that it can be removed from the field. A space by default.
- Alignment: Select the appropriate alignment parameter.
Pattern: Enter the pattern to use as basis for the extraction. A pattern is length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined.
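Outside Talend, the behavior that the Pattern and Padding char settings configure can be sketched in a few lines. The function and sample values below are illustrative only, not part of the component:

```python
# Illustrative sketch (not Talend code): extract positional fields from a
# fixed-width string using a comma-separated length pattern such as "3,5,2".
def extract_positional(line, pattern, padding=" "):
    sizes = [int(s) for s in pattern.split(",")]
    fields, start = [], 0
    for size in sizes:
        # Each field occupies a fixed-length slice; strip the padding character.
        fields.append(line[start:start + size].strip(padding))
        start += size
    return fields

print(extract_positional("USD00042NY", "3,5,2"))  # ['USD', '00042', 'NY']
```

A pattern of "3,5,2" thus cuts a 10-character line into three columns of 3, 5 and 2 characters, which is why the pattern must stay consistent with the schema.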
Die on error: Clear the check box to skip any rows on error and complete the process for error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.
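The difference between the two Die on error modes can be sketched as follows; the `process` function and its integer-parsing step are stand-ins, not component code:

```python
# Illustrative sketch (not Talend code) of the two error-handling modes:
# stop the whole Job on the first bad row, or skip it and collect rejects.
def process(rows, die_on_error=True):
    ok, rejects = [], []
    for row in rows:
        try:
            ok.append(int(row))  # stand-in for the positional-extraction step
        except ValueError as err:
            if die_on_error:
                raise            # Die on error selected: abort immediately
            rejects.append((row, str(err)))  # cleared: route row to a reject flow
    return ok, rejects

ok, rejects = process(["1", "x", "3"], die_on_error=False)
```

With the check box cleared, error-free rows still reach the output while the bad rows are kept aside, mirroring a Row > Reject link.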
Schema and Edit schema: A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
Click Sync columns to retrieve the schema from the previous component connected in the Job.
- Built-In: You create and store the schema locally for this component only.
- Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Advanced settings
Advanced separator (for number): Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
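What changing the separators means for parsing can be sketched outside Talend; the helper below is illustrative, not the component's implementation:

```python
# Illustrative sketch: normalize a number that uses non-default separators,
# e.g. the European format "1.234,56" (thousands ".", decimal ",").
def parse_number(text, thousands=",", decimal="."):
    # Drop grouping separators, then map the decimal mark to ".".
    return float(text.replace(thousands, "").replace(decimal, "."))

print(parse_number("1.234,56", thousands=".", decimal=","))  # 1234.56
print(parse_number("1,234.56"))                              # 1234.56
```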
Trim Column: Select this check box to remove leading and trailing whitespace from all columns.
Check each row structure against schema: Select this check box to check whether the total number of columns in the row is consistent with the schema. If not consistent, an error message will be displayed on the console.
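The row-structure check amounts to comparing each row's column count against the schema. A minimal sketch, with an invented three-column schema for illustration:

```python
# Illustrative sketch: verify each extracted row has as many columns as the
# schema declares before passing it downstream.
schema = ["currency", "amount", "state"]  # hypothetical schema

def check_row(row, schema):
    if len(row) != len(schema):
        raise ValueError(f"expected {len(schema)} columns, got {len(row)}")
    return dict(zip(schema, row))

print(check_row(["USD", "00042", "NY"], schema))
```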
tStatCatcher Statistics: Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
Usage

Usage rule: This component handles flow of data therefore it requires input and output, hence is defined as an intermediary step.
Related scenario
For a related scenario, see Extracting name, domain and TLD from e-mail addresses.
tExtractPositionalFields MapReduce properties (deprecated)
These properties are used to configure tExtractPositionalFields running in the MapReduce Job framework.
The MapReduce tExtractPositionalFields component belongs to the Processing family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Prev.Comp.Column list: Select an incoming field from the list to extract.
Customize: Select this check box to customize the data format of the positional file and define the table columns:
- Column: Select the column you want to customize.
- Size: Enter the column size.
- Padding char: Type in, between inverted commas, the padding character used, so that it can be removed from the field. A space by default.
- Alignment: Select the appropriate alignment parameter.
Pattern: Enter the pattern to use as basis for the extraction. A pattern is length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined.
Die on error: Clear the check box to skip any rows on error and complete the process for error-free rows.
Schema and Edit schema: A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
Click Sync columns to retrieve the schema from the previous component connected in the Job.
- Built-In: You create and store the schema locally for this component only.
- Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Property type: Either Built-In or Repository.
- Built-In: No property data stored centrally.
- Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide.
Advanced settings
Advanced separator (for number): Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
Trim Column: Select this check box to remove leading and trailing whitespace from all columns.
Check each row structure against schema: Select this check box to check whether the total number of columns in the row is consistent with the schema. If not consistent, an error message will be displayed on the console.
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
Usage
Usage rule: This component handles flow of data therefore it requires input and output components. In a Talend Map/Reduce Job, it is used as an intermediate step, and the other components used along with it must be Map/Reduce components too; they generate native Map/Reduce code that can be executed directly in Hadoop. You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. For further information about a Talend Map/Reduce Job, see the Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and non Map/Reduce Jobs.
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tExtractPositionalFields properties for Apache Spark Batch
These properties are used to configure tExtractPositionalFields running in the Spark Batch Job framework.
The Spark Batch tExtractPositionalFields component belongs to the Processing family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Prev.Comp.Column list: Select an incoming field from the list to extract.
Customize: Select this check box to customize the data format of the positional file and define the table columns:
- Column: Select the column you want to customize.
- Size: Enter the column size.
- Padding char: Type in, between inverted commas, the padding character used, so that it can be removed from the field. A space by default.
- Alignment: Select the appropriate alignment parameter.
Pattern: Enter the pattern to use as basis for the extraction. A pattern is length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined.
Die on error: Select the check box to stop the execution of the Job when an error occurs.
Schema and Edit schema: A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
Click Sync columns to retrieve the schema from the previous component connected in the Job.
- Built-In: You create and store the schema locally for this component only.
- Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Advanced settings
Advanced separator (for number): Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
Trim Column: Select this check box to remove leading and trailing whitespace from all columns.
Check each row structure against schema: Select this check box to check whether the total number of columns in the row is consistent with the schema. If not consistent, an error message will be displayed on the console.
Usage
Usage rule: This component is used as an intermediate step. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection: In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Spark Batch version of this component yet.
tExtractPositionalFields properties for Apache Spark Streaming
These properties are used to configure tExtractPositionalFields running in the Spark Streaming Job framework.
The Spark Streaming tExtractPositionalFields component belongs to the Processing family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Prev.Comp.Column list: Select an incoming field from the list to extract.
Customize: Select this check box to customize the data format of the positional file and define the table columns:
- Column: Select the column you want to customize.
- Size: Enter the column size.
- Padding char: Type in, between inverted commas, the padding character used, so that it can be removed from the field. A space by default.
- Alignment: Select the appropriate alignment parameter.
Pattern: Enter the pattern to use as basis for the extraction. A pattern is length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined.
Die on error: Clear the check box to skip any rows on error and complete the process for error-free rows.
Schema and Edit schema: A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
Click Sync columns to retrieve the schema from the previous component connected in the Job.
- Built-In: You create and store the schema locally for this component only.
- Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Advanced settings
Advanced separator (for number): Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
Trim Column: Select this check box to remove leading and trailing whitespace from all columns.
Check each row structure against schema: Select this check box to check whether the total number of columns in the row is consistent with the schema. If not consistent, an error message will be displayed on the console.
Usage
Usage rule: This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection: In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.