tHDFSOutput
Writes the data flows it receives into a given Hadoop distributed file system (HDFS).
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tHDFSOutput Standard properties. The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
- MapReduce: see tHDFSOutput MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and in Talend Data Fabric.
tHDFSOutput Standard properties
These properties are used to configure tHDFSOutput running in the Standard Job framework.
The Standard tHDFSOutput component belongs to the Big Data and the File families.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Use an existing connection |
Select this check box and in the Component List, click the HDFS connection component from which you want to reuse the connection details already defined.
Note that when a Job contains the parent Job and the child Job, the Component List presents only the connection components in the same Job level. |
Distribution |
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using; among these options, some require specific configuration. |
Version |
Select the version of the Hadoop distribution you are using. The available versions vary depending on the distribution you select. |
Scheme |
Select the URI scheme of the file system to be used from the Scheme drop-down list. This scheme could be HDFS, WebHDFS or ADLS.
The schemes present on this list vary depending on the distribution you are using, and only a scheme that appears on this list with a given distribution is officially supported. Once a scheme is selected, the corresponding syntax, such as webhdfs://localhost:50070/ for WebHDFS, is displayed in the field for the NameNode location of your cluster.
If you have selected ADLS, the connection parameters to be defined become those of your Azure application, such as its client ID, client key and token endpoint. |
NameNode URI |
Type in the URI of the Hadoop NameNode, the master node of a Hadoop system. For example, if you have named the NameNode machine masternode, its location is hdfs://masternode:portnumber. |
Use kerberos authentication |
If you are accessing a Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.
This check box is available depending on the Hadoop distribution you are connecting to. |
Use a keytab to authenticate |
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. Enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates, but must have the right to read the keytab file being used. A minimal login sketch is shown after this table. |
User name |
The User name field is available when you are not using Kerberos to authenticate. In this field, enter the login user name for your distribution. If you leave it empty, the user name of the machine hosting Talend Studio will be used. |
Group |
Enter the membership, including the authentication user, under which the HDFS instances were started. This field is available depending on the distribution you are using. |
File Name |
Browse to, or enter the location of the file which you write data to. This file is created automatically if it does not exist. |
Type |
Select the type of the file to be processed. The type of the file may be:
Text file.
Sequence file: a Hadoop flat file consisting of binary key/value pairs. |
Action |
Select an operation in HDFS:
Create: Creates a file with data using the file name defined in the File Name field.
Overwrite: Overwrites the data in the file specified in the File Name field.
Append: Inserts the data into the file specified in the File Name field; the file is created automatically if it does not exist. An API-level sketch of these actions follows this table. |
Row separator |
The separator used to identify the end of a row. This field is not available for a Sequence file. |
Field separator |
Enter a character, a string or a regular expression to separate fields for the transferred data.
This field is not available for a Sequence file. |
Custom encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.
Select the encoding from the list or select Custom and define it manually.
This option is not available for a Sequence file. |
Compression |
Select the Compress the data check box to compress the output data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer.
Note that when the type of the file to be written is Sequence file, compression is handled by the Sequence file container itself. |
Include header |
Select this check box to output the header of the data. This option is not available for a Sequence file. |
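To make the relationship between these settings and the underlying file system concrete, the following is a minimal sketch of the equivalent calls in the Hadoop Java client API. It is not the code the Studio generates; the NameNode URI, Kerberos principal, keytab path and file name are placeholder assumptions.

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HdfsOutputSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode URI field (placeholder host and port).
        conf.set("fs.defaultFS", "hdfs://masternode:8020");

        // "Use kerberos authentication" with "Use a keytab to authenticate":
        // log in with a principal/keytab pair before opening the file system.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "talend@EXAMPLE.COM", "/etc/security/keytabs/talend.keytab");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/talend/out/result.csv"); // File Name field

        // Action Create/Overwrite: the boolean decides whether an existing
        // file is replaced or the call fails.
        try (OutputStream out = fs.create(file, true)) {
            // Field separator ";" and row separator "\n", as configured above.
            out.write("1;Alice\n".getBytes(StandardCharsets.UTF_8));
        }

        // Action Append: inserts data at the end of the existing file
        // (the cluster must allow appends).
        try (OutputStream out = fs.append(file)) {
            out.write("2;Bob\n".getBytes(StandardCharsets.UTF_8));
        }
        fs.close();
    }
}
```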
Advanced settings
Hadoop properties |
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties override the default ones.
For further information about the properties required by Hadoop and its related systems, such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and select the version of the documentation you want. A short illustration of such an override follows this table. |
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
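For instance, a hypothetical entry setting dfs.replication to 1 in the Hadoop properties table has the same effect as setting the property on the client configuration before the file system is obtained:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class CustomHadoopProperty {
    public static void main(String[] args) throws IOException {
        // A row in the Hadoop properties table, e.g. "dfs.replication" = "1",
        // overrides the client-side default just as Configuration.set(...) would.
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "1"); // keep a single replica of written blocks
        FileSystem fs = FileSystem.get(conf); // the custom value is now in effect
        fs.close();
    }
}
```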
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide. |
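For example, assuming an instance of the component labeled tHDFSOutput_1 (a hypothetical name), a tJava component triggered after it could read this variable through globalMap, the map of global variables that the generated Job code provides:

```java
// Inside a tJava triggered after the hypothetical tHDFSOutput_1 has run:
String error = (String) globalMap.get("tHDFSOutput_1_ERROR_MESSAGE");
if (error != null) {
    System.err.println("HDFS write failed: " + error);
}
```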
Usage
Usage rule |
This component needs an input component. |
Dynamic settings |
Click the [+] button to add a row in the table, then fill the Code field with a context variable to choose your HDFS connection dynamically from multiple connections planned in your Job.
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view.
For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide. |
Prerequisites |
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using. |
Limitations |
JRE 1.6+ is required. |
Related scenario
- For a related topic, see Writing data in a delimited file.
- For a related topic, see Computing data with Hadoop distributed file system.
tHDFSOutput MapReduce properties (deprecated)
These properties are used to configure tHDFSOutput running in the MapReduce Job framework.
The MapReduce tHDFSOutput component belongs to the MapReduce family.
The component in this framework is available in all subscription-based Talend products with Big Data and in Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Property type |
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
The properties are stored centrally under the Hadoop Cluster node of the Repository tree. For further information about the Hadoop Cluster node, see the Getting Started Guide. |
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Folder |
Browse to, or enter the path pointing to the data to be used in the file system. This path must point to a folder rather than a file, because a Talend Map/Reduce Job needs to write in its target folder not only the final result but also the multiple part- files generated during the Map/Reduce computations.
Note that you need to ensure that the connection to the Hadoop distribution to be used is properly configured in the Hadoop Configuration tab of the Run view. |
Type |
Select the type of the file to be processed. The type of the file may be:
Text file.
Sequence file: a Hadoop flat file consisting of binary key/value pairs. |
Action |
Select an operation in HDFS:
Create: Creates a file and writes data in it.
Overwrite: Overwrites the file existing in the directory specified in the Folder field. |
Row separator |
The separator used to identify the end of a row. This field is not available for a Sequence file. |
Field separator |
Enter a character, a string or a regular expression to separate fields for the transferred data.
This field is not available for a Sequence file. |
Include header |
Select this check box to output the header of the data. This option is not available for a Sequence file. |
Custom encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list.
Select the encoding from the list or select Custom and define it manually.
This option is not available for a Sequence file. |
Compression |
Select the Compress the data check box to compress the output data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer.
Note that when the type of the file to be written is Sequence file, compression is handled by the Sequence file container itself. |
Merge result to single file |
Select this check box to merge the final part files into a single file and put that file in a specified directory.
Once selecting it, you need to enter the path to, or browse to the folder you want to store the merged file in. This directory is created automatically if it does not exist.
The check boxes displayed with this option are used to manage the source and the target files, for example to remove the source folder after the merge or to override a target file that already exists.
If this component is writing merged files with a Databricks cluster, add the required commit protocol property to the Spark configuration of your cluster.
This option is not available for a Sequence file. A minimal merge sketch follows this table. |
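Functionally, this merge concatenates the part- files into one target file. A minimal sketch with the Hadoop client API follows; the paths are placeholder assumptions, and FileUtil.copyMerge, while removed from Hadoop 3, illustrates the mechanics on Hadoop 2.x:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergePartFiles {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path sourceDir = new Path("/user/talend/out");            // folder holding part- files
        Path mergedFile = new Path("/user/talend/merged/result"); // single target file
        boolean deleteSource = false; // keep the source folder after the merge
        FileUtil.copyMerge(fs, sourceDir, fs, mergedFile, deleteSource, conf, null);
        fs.close();
    }
}
```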
Advanced settings
Advanced separator (for number) |
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.). |
Use local timezone for date |
Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data. |
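To see the difference, a short sketch formatting the same Date value in UTC and in the local timezone (the pattern is a placeholder):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateTimezoneDemo {
    public static void main(String[] args) {
        Date now = new Date();
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        // Check box clear: Date-type data is formatted in UTC.
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println("UTC:   " + fmt.format(now));

        // Check box selected: the local timezone of the executing machine is used.
        fmt.setTimeZone(TimeZone.getDefault());
        System.out.println("Local: " + fmt.format(now));
    }
}
```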
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide. |
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as an end component and requires a transformation component as input link. The other components used along with it must be Map/Reduce components too; they generate native Map/Reduce code that can be executed directly in Hadoop.
Once a Map/Reduce Job is opened in the workspace, tHDFSOutput as well as the whole MapReduce family appears in the Palette of the Studio.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.
This connection is effective on a per-Job basis. |
Related scenario
If you are a subscription-based Big Data user, you can consult a scenario in which a Talend Map/Reduce Job uses the Map/Reduce version of tHDFSOutput.