tFileOutputPositional
tFileOutputPositional writes a file row by row according to the length and the format of the fields or columns in a row.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tFileOutputPositional Standard properties. The component in this framework is available in all Talend products.
- MapReduce: see tFileOutputPositional MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tFileOutputPositional properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tFileOutputPositional properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tFileOutputPositional Standard properties
These properties are used to configure tFileOutputPositional running in the Standard Job framework.
The Standard
tFileOutputPositional component belongs to the File family.
The component in this framework is available in all Talend
products.
Basic settings
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. |
Use existing dynamic |
Select this check box to reuse an existing dynamic schema to handle data from unknown columns. When this check box is selected, a Component list appears, allowing you to select the component used to set the dynamic schema. |
Use Output Stream |
Select this check box to process the data flow of interest. Once you have selected it, an Output Stream field displays where you can type in the data flow of interest. The data flow to be processed must be added to the flow in order for this component to fetch the data via the corresponding representative variable. This variable could be already pre-defined in your Studio or provided by the context or the components you are using together; otherwise, you could define it manually and use it according to the design of your Job, for example, using tJava or tJavaFlex. In order to avoid the inconvenience of hand writing, you could select the variable of interest from the auto-completion list (Ctrl+Space) to fill the current field, provided that this variable has been properly defined. For further information about how to use a stream, see Reading data from a remote file in streaming mode. |
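For illustration, such a stream is typically created in a tJava component placed upstream in the Job. A minimal sketch, assuming a hypothetical globalMap key out_stream and a placeholder file path:

    // In a tJava component executed before tFileOutputPositional
    // (globalMap is provided by the Studio in the generated Job code):
    globalMap.put("out_stream", new java.io.FileOutputStream("/tmp/out_positional.txt"));

    // Then reference the stream in the Use Output Stream field as:
    // (java.io.OutputStream) globalMap.get("out_stream")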
File Name |
Name or path to the file to be processed and/or the variable to be used. This field becomes unavailable once you have selected the Use Output Stream check box. For further information about how to define and use a variable in a Job, see Using contexts and variables. Warning: Use absolute paths (instead of relative paths) in this field to avoid possible errors. |
Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
This component offers the dynamic schema feature, which is designed for retrieving unknown columns of a file and is recommended to be used for that purpose only. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Row separator |
The separator used to identify the end of a row. |
Append |
Select this check box to add the new rows at the end of the file. |
Include header |
Select this check box to include the column header to the file. |
Compress as zip file |
Select this check box to compress the output file in zip format. |
Formats |
Customize the positional file data format and fill in the columns in the Formats table:
Column: Select the column you want to customize.
Size: Enter the column size.
Padding char: Type in, between quotes, the padding character used to fill the empty part of the column. A space by default.
Alignment: Select the appropriate alignment parameter.
Keep: If the data in the column or in the field are too long, select the part you want to keep. |
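To make Size, Padding char, and Alignment concrete, here is a plain Java sketch of fixed-width formatting, with hypothetical column sizes and values (note that String.format pads with spaces only, whereas the component lets you choose the padding character):

    public class PositionalRowDemo {
        public static void main(String[] args) {
            String name = "Smith"; // column 1: size 10, left-aligned
            String id = "42";      // column 2: size 5, right-aligned
            // %-10s pads on the right with spaces, %5s pads on the left.
            String row = String.format("%-10s%5s", name, id);
            System.out.println("[" + row + "]"); // prints [Smith        42]
        }
    }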
Advanced settings
Advanced separator (for numbers) |
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
Thousands separator: define the separator to be used for thousands.
Decimal separator: define the separator to be used for decimals. |
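For reference, the effect of custom separators can be reproduced in plain Java; the space and comma chosen below are hypothetical:

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.util.Locale;

    public class SeparatorDemo {
        public static void main(String[] args) {
            DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ROOT);
            symbols.setGroupingSeparator(' '); // thousands separator
            symbols.setDecimalSeparator(',');  // decimal separator
            DecimalFormat df = new DecimalFormat("#,##0.00", symbols);
            System.out.println(df.format(1234567.89)); // prints 1 234 567,89
        }
    }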
Use byte length as the cardinality |
Select this check box to add support of double-byte characters to the file: the column sizes defined in the Formats table are then counted in bytes rather than in characters. |
Create directory if not exists |
This check box is selected by default. It creates the directory that holds the output file, if it does not already exist. |
Custom the flush buffer size |
Select this check box to define the number of lines to write before emptying the buffer.
Row Number: set the number of lines to write before the buffer is flushed. |
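The flush behavior can be pictured with this standalone sketch; the file path and the Row Number value of 1000 are placeholders:

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class FlushDemo {
        public static void main(String[] args) throws IOException {
            int rowNumber = 1000; // lines to write before emptying the buffer
            try (BufferedWriter writer = new BufferedWriter(new FileWriter("/tmp/out.txt"))) {
                for (int i = 1; i <= 10000; i++) {
                    writer.write("row " + i);
                    writer.newLine();
                    if (i % rowNumber == 0) {
                        writer.flush(); // empty the buffer every rowNumber lines
                    }
                }
            }
        }
    }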
Output in row mode |
Writes in row mode. |
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for file data handling. |
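In plain Java terms, selecting an encoding amounts to wrapping the output stream in a writer built with the chosen charset; the charset name below stands in for whatever you select or type after choosing Custom:

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.Charset;

    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            try (Writer w = new OutputStreamWriter(
                    new FileOutputStream("/tmp/out.txt"),
                    Charset.forName("ISO-8859-15"))) { // hypothetical custom encoding
                w.write("positional data\n");
            }
        }
    }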
Don’t generate empty file |
Select this check box if you do not want to generate empty files. |
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at a Job level as well as at each component level. |
Global Variables
Global Variables |
NB_LINE: the number of rows read by an input component or transferred to an output component. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Using contexts and variables. |
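For example, NB_LINE can be read back in a tJava component connected after this component with an OnSubjobOk trigger; the component name tFileOutputPositional_1 is hypothetical and must match the actual name in your Job:

    // In a tJava component (globalMap is provided by the Studio):
    Integer written = (Integer) globalMap.get("tFileOutputPositional_1_NB_LINE");
    System.out.println("Rows written: " + written);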
Usage
Usage rule |
Use this component to write a file row by row according to the defined lengths and formats of the fields. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your connection dynamically from multiple connections planned in your Job. For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide. |
Related scenario
For a related scenario, see Reading data using a Regex and outputting the result to Positional file.
For a scenario about the usage of the Use Output Stream
check box, see Utilizing Output Stream to save filtered data to a local file.
tFileOutputPositional MapReduce properties (deprecated)
These properties are used to configure tFileOutputPositional running in the MapReduce Job framework.
The MapReduce
tFileOutputPositional component belongs to the MapReduce family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
(Connection wizard icon) |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see the Talend Studio User Guide. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder |
Browse to, or enter the path pointing to the data to be used in the file system. This path must point to a folder rather than a file, because a Talend Map/Reduce Job needs to write in its target folder not only the final result but also the multiple part files generated in performing Map/Reduce computations. Note that you need to ensure that the connection to the Hadoop distribution to be used is properly configured in the Hadoop Configuration tab of the Run view. |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder field. |
Compress the data |
Select the Compress the data check box to compress the output data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer. |
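As a rough equivalent in plain Hadoop MapReduce code (a sketch only; the Studio generates the corresponding settings for you, and gzip is just one of the available formats):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressionDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "positional-output");
            FileOutputFormat.setCompressOutput(job, true);                   // enable output compression
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class); // choose a codec
        }
    }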
Formats |
Customize the positional file data format and fill in the columns in the Formats table:
Column: Select the column you want to customize.
Size: Enter the column size.
Padding char: Type in, between quotes, the padding character used to fill the empty part of the column. A space by default.
Alignment: Select the appropriate alignment parameter.
Keep: If the data in the column or in the field are too long, select the part you want to keep. |
Advanced settings
Use local timezone for date | Select this check box to use the local date of the machine in which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data. |
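The difference between the two behaviors can be reproduced in plain Java:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class TimezoneDemo {
        public static void main(String[] args) {
            Date now = new Date();
            SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            utc.setTimeZone(TimeZone.getTimeZone("UTC")); // check box clear: UTC formatting
            SimpleDateFormat local = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            local.setTimeZone(TimeZone.getDefault());     // check box selected: machine timezone
            System.out.println("UTC:   " + utc.format(now));
            System.out.println("Local: " + local.format(now));
        }
    }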
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Using contexts and variables. |
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as an end component and requires an input link. The other components used along with it must be Map/Reduce components, too; they generate native Map/Reduce code that can be executed directly in Hadoop. Once a Map/Reduce Job is opened in the workspace, tFileOutputPositional as well as the MapReduce family appears in the Palette of the Studio. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and non Map/Reduce Jobs. |
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tFileOutputPositional properties for Apache Spark Batch
These properties are used to configure tFileOutputPositional running in the Spark Batch Job framework.
The Spark Batch
tFileOutputPositional component belongs to the File family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Define a storage configuration |
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system. |
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
(Connection wizard icon) |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see the Talend Studio User Guide. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder |
Browse to, or enter the path pointing to the data to be used in the file system. This path must point to a folder rather than a file. The button for browsing does not work with the Spark Local mode; if you are using the other Spark modes that the Studio supports, ensure that the connection is properly configured in a configuration component present in the same Job, such as tHDFSConfiguration. |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder field. |
Compress the data |
Select the Compress the data check box to compress the output data. |
Row separator |
The separator used to identify the end of a row. |
Include header |
Select this check box to include the column header to the file. |
Custom encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list. Select the encoding from the list or select Custom and define it manually. |
Merge result to single file |
Select this check box to merge the final part files into a single file and put that file in a specified directory. Once selecting it, you need to enter the path to, or browse to, the folder you want to store the merged file in. The following check boxes are used to manage the source and the target files:
Remove source dir: select this check box to remove the source files after the merge.
Override target file: select this check box to override the file already existing in the target location. This option does not override the folder.
If this component is writing merged files with a Databricks cluster, add the required commit-protocol parameter to the Spark configuration console of your cluster. This option is not available for a Sequence file. |
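Conceptually, the merge corresponds to concatenating the part files into one target file, as in this Hadoop 2.x sketch (paths are hypothetical; FileUtil.copyMerge was removed in Hadoop 3):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class MergeDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path partsDir = new Path("/user/talend/out");      // folder holding the part files
            Path merged = new Path("/user/talend/merged.txt"); // single target file
            // Concatenate every part file into one file; false = keep the source folder.
            FileUtil.copyMerge(fs, partsDir, fs, merged, false, conf, null);
        }
    }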
Formats |
Customize the positional file data format and fill in the columns in the Formats table:
Column: Select the column you want to customize.
Size: Enter the column size.
Padding char: Type in, between quotes, the padding character used to fill the empty part of the column. A space by default.
Alignment: Select the appropriate alignment parameter.
Keep: If the data in the column or in the field are too long, select the part you want to keep. |
Advanced settings
Advanced separator (for numbers) |
Select this check box to modify the separators used for numbers:
Thousands separator: define separators for thousands.
Decimal separator: define separators for decimals. |
Use local timezone for date | Select this check box to use the local date of the machine in which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data. |
Usage
Usage rule |
This component is used as an end component and requires an input link. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Batch version of this component
yet.
tFileOutputPositional properties for Apache Spark Streaming
These properties are used to configure tFileOutputPositional running in the Spark Streaming Job framework.
The Spark Streaming
tFileOutputPositional component belongs to the File family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Define a storage configuration |
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system. |
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
(Connection wizard icon) |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see the Talend Studio User Guide. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder |
Browse to, or enter the path pointing to the data to be used in the file system. This path must point to a folder rather than a file. The button for browsing does not work with the Spark Local mode; if you are using the other Spark modes that the Studio supports, ensure that the connection is properly configured in a configuration component present in the same Job, such as tHDFSConfiguration. |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder field. |
Compress the data |
Select the Compress the data check box to compress the output data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer. |
Row separator |
The separator used to identify the end of a row. |
Include header |
Select this check box to include the column header to the file. |
Custom encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list. Select the encoding from the list or select Custom and define it manually. |
Formats |
Customize the data format of the positional file for each column in the Formats table:
Column: Select the column you want to customize.
Size: Enter the column size.
Padding char: Type in, between quotes, the padding character used to fill the empty part of the column. A space by default.
Alignment: Select the appropriate alignment parameter.
Keep: If the data in the column or in the field are too long, select the part you want to keep. |
Advanced settings
Advanced separator (for numbers) |
Select this check box to modify the separators used for numbers:
Thousands separator: define separators for thousands.
Decimal separator: define separators for decimals. |
Write empty batches | Select this check box to allow your Spark Job to create an empty batch when the incoming batch is empty. For further information about when this is desirable behavior, see the Spark documentation. |
Use local timezone for date | Select this check box to use the local date of the machine in which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data. |
Usage
Usage rule |
This component is used as an end component and requires an input link. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Streaming version of this component
yet.