tFileOutputParquet
tFileOutputParquet writes records into Parquet format files in a given distributed file system.
Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:
- MapReduce: see tFileOutputParquet MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tFileOutputParquet properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tFileOutputParquet properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tFileOutputParquet MapReduce properties (deprecated)
These properties are used to configure tFileOutputParquet running in the MapReduce Job framework.
The MapReduce
tFileOutputParquet component belongs to the MapReduce family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file in which the properties are stored. The properties are stored centrally under the Hadoop cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop cluster node, see the related Talend documentation. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder/File |
Browse to, or enter the path pointing to the data to be used in the file system. Note that you need to define the connection to the target Hadoop distribution in the Hadoop Configuration tab of the Run view (see the Hadoop Connection row in the Usage section below). |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder/File field. |
Compression |
By default, the Uncompressed option is active, but you can select a compression format for the output data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer. |
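For reference, Parquet output in a MapReduce Job relies on the standard parquet-mr output format. The following hand-written driver sketch, which is not the code Talend generates, shows how the Folder/File, Action and Compression settings map onto that API; the schema file employee.avsc and the output path are illustrative.

// Hand-written sketch of a MapReduce driver producing compressed Parquet output.
// Assumes the parquet-avro and parquet-hadoop libraries; employee.avsc and the
// output path are illustrative, and mapper/input setup is omitted.
import org.apache.avro.Schema;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.parquet.avro.AvroParquetOutputFormat;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

public class ParquetOutputDriver {
    public static void main(String[] args) throws Exception {
        // Avro schema describing the output records (the component's schema).
        Schema schema = new Schema.Parser()
                .parse(ParquetOutputDriver.class.getResourceAsStream("/employee.avsc"));

        Job job = Job.getInstance(new Configuration(), "write-parquet");
        job.setJarByClass(ParquetOutputDriver.class);
        job.setOutputFormatClass(AvroParquetOutputFormat.class);

        AvroParquetOutputFormat.setSchema(job, schema);
        AvroParquetOutputFormat.setCompression(job, CompressionCodecName.SNAPPY); // Compression setting
        AvroParquetOutputFormat.setOutputPath(job, new Path("/user/talend/out")); // Folder/File setting

        // Input format and a mapper emitting (Void, GenericRecord) pairs would be set here.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}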
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see the related Talend documentation. |
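For example, the ERROR_MESSAGE variable can be read from the globalMap in a tJava component placed downstream; the instance label tFileOutputParquet_1 below is illustrative.

// In a tJava component: read the After variable of the example instance
// tFileOutputParquet_1 once the Parquet output has been executed.
String errorMessage = (String) globalMap.get("tFileOutputParquet_1_ERROR_MESSAGE");
if (errorMessage != null) {
    System.err.println("Parquet output failed: " + errorMessage);
}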
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as an end component and requires an input link; the other components used along with it must be Map/Reduce components too, as they generate native Map/Reduce code that is executed directly in Hadoop. Once a Map/Reduce Job is opened in the workspace, tFileOutputParquet as well as the rest of the MapReduce family appears in the Palette of the Studio. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. |
Related scenario
This component is used in a similar way to the tAvroOutput
component. For a scenario using tAvroOutput, see Filtering Avro format employee data.
tFileOutputParquet properties for Apache Spark Batch
These properties are used to configure tFileOutputParquet running in the Spark Batch Job framework.
The Spark Batch
tFileOutputParquet component belongs to the File family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Define a storage configuration component |
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job; for example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system. |
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file in which the properties are stored. The properties are stored centrally under the Hadoop cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop cluster node, see the related Talend documentation. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
This component does not support the Object type or the List type. Spark automatically infers the data types of the columns in a Parquet schema. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder/File |
Browse to, or enter the path pointing to the data to be used in the file system. The button for browsing does not work with the Spark Local mode; if you are using another Spark mode, ensure that the connection is properly configured in a configuration component in the same Job, such as tHDFSConfiguration. |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder/File field. |
Compression |
By default, the Uncompressed option is active, but you can select a compression format for the output data. |
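These settings correspond closely to the options of the standard Spark DataFrame writer. The following stand-alone sketch, written against the plain Spark Java API rather than generated by Talend, shows a rough equivalent of the Action and Compression settings; the paths are illustrative.

// Stand-alone Spark Batch sketch (not Talend-generated code): write a dataset
// to Parquet with an explicit save mode and compression codec.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class ParquetBatchWrite {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("parquet-batch-write").getOrCreate();

        // Illustrative input; in a Job the rows come from the preceding component.
        Dataset<Row> df = spark.read().json("/user/talend/in/employees.json");

        df.write()
          .mode(SaveMode.Overwrite)               // Overwrite action; ErrorIfExists is closer to Create
          .option("compression", "snappy")        // Compression setting ("gzip", "snappy", "uncompressed", ...)
          .parquet("/user/talend/out/employees"); // Folder/File setting

        spark.stop();
    }
}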
Advanced settings
Define column partitions | Select this check box and complete the table that is displayed using columns from the schema of the incoming data. The records of the selected columns are used as keys to partition your data. |
Sort columns alphabetically | Select this check box to sort the schema columns in alphabetical order. If you leave this check box clear, these columns keep the order defined in the schema editor. |
Use Timestamp format for Date type |
Select the check box to output dates, hours, minutes and seconds contained in your data of the Date type. If you clear this check box, only years, months and days are output. The format used by Deltalake is yyyy-MM-dd HH:mm:ss. |
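The Define column partitions option corresponds to the partitionBy option of the Spark DataFrame writer. Continuing the sketch above, with an illustrative country column:

// Partitioned Parquet output: one sub-folder per distinct value of "country".
df.write()
  .mode(SaveMode.Overwrite)
  .partitionBy("country")
  .parquet("/user/talend/out/employees_by_country");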
Usage
Usage rule |
This component is used as an end component and requires an input link. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Batch version of this component
yet.
tFileOutputParquet properties for Apache Spark Streaming
These properties are used to configure tFileOutputParquet running in the Spark Streaming Job framework.
The Spark Streaming
tFileOutputParquet component belongs to the File family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Define a storage configuration component |
Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job; for example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write the result in a given HDFS system. |
Property type |
Either Built-In or Repository. |
 |
Built-In: No property data stored centrally. |
 |
Repository: Select the repository file in which the properties are stored. The properties are stored centrally under the Hadoop cluster node of the Repository tree. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop cluster node, see the related Talend documentation. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
This component does not support the Object type or the List type. Spark automatically infers the data types of the columns in a Parquet schema. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder/File |
Browse to, or enter the path pointing to the data to be used in the file system. The button for browsing does not work with the Spark Local mode; if you are using another Spark mode, ensure that the connection is properly configured in a configuration component in the same Job, such as tHDFSConfiguration. |
Action |
Select an operation for writing data:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder/File field. |
Compression |
By default, the Uncompressed option is active, but you can select a compression format for the output data. |
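As a rough stand-alone analogue, Spark's Structured Streaming file sink writes Parquet files continuously to a target folder. The following sketch is not the code Talend generates; the socket source, paths and checkpoint location are illustrative.

// Stand-alone Structured Streaming sketch (not Talend-generated code): write the
// records of each micro-batch to Parquet files under a target folder.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class ParquetStreamWrite {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("parquet-stream-write").getOrCreate();

        // Illustrative streaming source; in a Job the rows come from the preceding component.
        Dataset<Row> stream = spark.readStream()
                .format("socket")
                .option("host", "localhost")
                .option("port", 9999)
                .load();

        StreamingQuery query = stream.writeStream()
                .format("parquet")                                        // Parquet file sink
                .option("path", "/user/talend/out/stream")                // Folder/File setting
                .option("checkpointLocation", "/user/talend/checkpoints") // required for file sinks
                .start();

        query.awaitTermination();
    }
}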
Advanced settings
Write empty batches | Select this check box to allow your Spark Job to create an empty batch when the incoming batch is empty. For further information about when this is desirable behavior, see the Apache Spark Streaming documentation. |
Define column partitions | Select this check box and complete the table that is displayed using columns from the schema of the incoming data. The records of the selected columns are used as keys to partition your data. |
Sort columns alphabetically | Select this check box to sort the schema columns in alphabetical order. If you leave this check box clear, these columns keep the order defined in the schema editor. |
Use Timestamp format for Date type |
Select the check box to output dates, hours, minutes and seconds contained in your data of the Date type. If you clear this check box, only years, months and days are output. The format used by Deltalake is yyyy-MM-dd HH:mm:ss. |
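As in the batch case, column partitioning corresponds to the partitionBy option of the streaming writer. Continuing the sketch above, with an illustrative country column:

// Partitioned streaming output: one sub-folder per distinct value of "country".
stream.writeStream()
      .format("parquet")
      .partitionBy("country")
      .option("path", "/user/talend/out/stream_by_country")
      .option("checkpointLocation", "/user/talend/checkpoints_by_country")
      .start();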
Usage
Usage rule |
This component is used as an end component and requires an input link. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Streaming version of this component
yet.