tHDFSInput
Extracts data from an HDFS file for other components to process it.
tHDFSInput reads a file located on a given Hadoop distributed file system (HDFS), extracts the data of interest from this file into a Talend schema, and then passes the data to the component that follows.
Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tHDFSInput Standard properties. The component in this framework is available when you are using one of the Talend solutions with Big Data.
- MapReduce: see tHDFSInput MapReduce properties. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
tHDFSInput Standard properties
These properties are used to configure tHDFSInput running in the Standard Job framework.
The Standard tHDFSInput component belongs to the Big Data and the File families.
The component in this framework is available when you are using one of the Talend solutions with Big Data.
Basic settings
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Use an existing connection
Select this check box and in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined.
Note that when a Job contains the parent Job and the child Job, Component List presents only the connection components in the same Job level.
Distribution
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using; some of them require specific configuration.
Hadoop version
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using, and they evolve along with Hadoop itself.
Use kerberos authentication
If you are accessing a Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.
This check box is available depending on the Hadoop distribution you are using.
Use a keytab to authenticate
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file, then enter the principal to be used and the access path to the keytab file itself.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used.
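For reference, a keytab-based login corresponds to Hadoop's UserGroupInformation API. The following is a minimal sketch of such a login using Hadoop's Java client; the principal name and keytab path are placeholders, not values produced by the Studio:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Tell the Hadoop client to authenticate through Kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Log in with a principal/keytab pair; both values are placeholders.
            UserGroupInformation.loginUserFromKeytab(
                    "talend@EXAMPLE.COM",            // principal
                    "/etc/security/talend.keytab");  // keytab file path
        }
    }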
NameNode URI
Type in the URI of the Hadoop NameNode, the master node of a Hadoop system. For example, if a machine called masternode hosts the NameNode, the location is hdfs://masternode:portnumber.
User name
The User name field is available when you are not using Kerberos to authenticate. In this field, enter the login user name for your distribution. If you leave it empty, the user name of the machine hosting the Studio will be used.
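To illustrate what the NameNode URI and User name settings amount to, here is a minimal sketch that opens an HDFS file with Hadoop's Java client; the URI, user name, and file path are placeholders:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode URI and login user name, both placeholders.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://masternode:9000"), conf, "talend");
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                    fs.open(new Path("/user/talend/in/customers.csv"))))) {
                System.out.println(reader.readLine()); // first row of the file
            }
        }
    }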
Group
Enter the membership including the authentication user under which the HDFS instances were started. This field is available depending on the distribution you are using.
File Name
Browse to, or enter the path pointing to the data to be used in the file system. If the path you set points to a folder, this component will read all of the files stored in that folder.
Type
Select the type of the file to be processed:
- Text File.
- Sequence File: a Hadoop flat file consisting of binary key/value pairs.
Row separator
The separator used to identify the end of a row.
This field is not available for a Sequence file.
Field separator
Enter a character, string, or regular expression to separate fields for the transferred data.
This field is not available for a Sequence file.
Header
Set values to ignore the header of the transferred data. For example, enter 0 to ignore no rows of the data.
This field is not available for a Sequence file.
Custom encoding
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list, then select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.
This option is not available for a Sequence file.
Compression
Select the Uncompress the data check box to uncompress the input data. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer.
This option is not available for a Sequence file.
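As a point of reference, Hadoop's Java client selects a compression codec from the file extension when uncompressing such data. A minimal sketch, with placeholder URI and path:

    import java.io.InputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class UncompressSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("hdfs://masternode:9000"), conf);
            Path file = new Path("/user/talend/in/data.gz"); // placeholder path
            // The codec is inferred from the file extension (.gz, .bz2, .deflate, ...).
            CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
            try (InputStream in = codec.createInputStream(fs.open(file))) {
                System.out.println(in.read()); // first byte of the uncompressed stream
            }
        }
    }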
Advanced settings
Include sub-directories if path is folder
Select this check box to read not only the folder you have specified, but also the sub-folders it contains.
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones.
For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and select the version of the documentation you want.
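Each row of this table behaves like an override of Hadoop's Configuration object. For instance, a row with dfs.replication as the property and 1 as the value (an illustrative pair, not a recommendation) is equivalent to:

    import org.apache.hadoop.conf.Configuration;

    public class HadoopPropertySketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration(); // default configuration
            conf.set("dfs.replication", "1");         // customized row overrides the default
            System.out.println(conf.get("dfs.replication"));
        }
    }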
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Global Variables
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
Usage
Usage rule
This component needs an output link.
Dynamic settings
Click the [+] button to add a row in the table, then fill the Code field with a context variable to choose your HDFS connection dynamically from multiple connections planned in your Job. The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view.
For examples on using dynamic parameters, see Scenario: Reading data from databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Limitations
JRE 1.6+ is required.
Related scenario
- For a related topic, see Scenario 1: Writing data in a delimited file.
- For a related topic, see Scenario: Computing data with Hadoop distributed file system.
tHDFSInput MapReduce properties
These properties are used to configure tHDFSInput running in the MapReduce Job framework.
The MapReduce tHDFSInput component belongs to the MapReduce family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. The properties are stored centrally under the Hadoop Cluster node of the Repository tree. For further information about the Hadoop Cluster node, see the Getting Started Guide.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Folder/File
Browse to, or enter the path pointing to the data to be used in the file system. If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, they are automatically ignored unless you define the property mapreduce.input.fileinputformat.input.dir.recursive to be true in the Hadoop properties table.
If you want to specify more than one file or directory in this field, separate each path using a comma (,).
If the file to be read is a compressed one, enter the file name with its extension; tHDFSInput then automatically decompresses it at runtime.
Note that you need to ensure the connection to the Hadoop distribution to be used is properly configured in the Hadoop Configuration tab of the Run view.
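For reference, the multiple-path and recursive-folder behaviors described above mirror Hadoop's FileInputFormat API. A minimal sketch, with placeholder paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputPathSketch {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "input-path-sketch");
            // Comma-separated paths in Folder/File act like multiple input paths.
            FileInputFormat.addInputPath(job, new Path("/user/talend/in"));
            FileInputFormat.addInputPath(job, new Path("/user/talend/archive"));
            // Equivalent of setting mapreduce.input.fileinputformat.input.dir.recursive to true.
            FileInputFormat.setInputDirRecursive(job, true);
        }
    }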
Die on error
Select the check box to stop the execution of the Job when an error occurs.
Clear the check box to skip any rows on error and complete the process for error-free rows.
Type
Select the type of the file to be processed:
- Text File.
- Sequence File: a Hadoop flat file consisting of binary key/value pairs.
Row separator
The separator used to identify the end of a row.
This field is not available for a Sequence file.
Field separator
Enter a character, string, or regular expression to separate fields for the transferred data.
This field is not available for a Sequence file.
Header
Enter the number of rows to be skipped in the beginning of the file. For example, enter 0 to ignore no rows of the data.
This field is not available for a Sequence file.
Custom Encoding
You may encounter encoding issues when you process the stored data. In that situation, select this check box, then select the encoding to be used from the list or select Custom and define it manually.
This option is not available for a Sequence file.
Advanced settings
Advanced separator (for number)
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
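For example, with a period as thousands separator and a comma as decimal separator, a value such as 1.234,56 parses as shown in this short Java sketch (using the standard DecimalFormat class purely for illustration):

    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;

    public class SeparatorSketch {
        public static void main(String[] args) throws Exception {
            DecimalFormatSymbols symbols = new DecimalFormatSymbols();
            symbols.setGroupingSeparator('.'); // thousands separator
            symbols.setDecimalSeparator(',');  // decimal separator
            DecimalFormat format = new DecimalFormat("#,##0.00", symbols);
            System.out.println(format.parse("1.234,56")); // prints 1234.56
        }
    }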
Trim all columns
Select this check box to remove the leading and trailing whitespaces from all columns. When this check box is cleared, the Check column to trim table is displayed, which lets you select particular columns to trim.
Check column to trim
This table is filled automatically with the schema being used. Select the check box(es) corresponding to the column(s) to be trimmed.
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Global Variables
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
Usage
Usage rule
In a Talend Map/Reduce Job, this component is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too; they generate native Map/Reduce code that can be executed directly in Hadoop.
Once a Map/Reduce Job is opened in the workspace, tHDFSInput as well as the MapReduce family appears in the Palette of the Studio.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs.
Hadoop Connection
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.
This connection is effective on a per-Job basis.
Related scenarios
If you are a subscription-based Big Data user, you can consult a Talend Map/Reduce Job using the Map/Reduce version of tHDFSInput.