tHDFSPut
Connects to a Hadoop distributed file system (HDFS) to load large-scale files into it with optimized performance.
tHDFSPut copies files from a user-defined local directory, pastes them into a given HDFS and, if need be, renames these files.
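To make the behavior concrete, here is a minimal sketch of the equivalent operation written directly against the Hadoop Java API; the NameNode URI, user name, and paths are illustrative assumptions, not values prescribed by this component:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    import java.net.URI;

    public class HdfsPutSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Connect to HDFS as a given user (hypothetical host and user name).
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://masternode:8020"), conf, "hdfs_user");

            Path local = new Path("/tmp/input/in.csv");    // user-defined local directory
            Path remote = new Path("/user/talend/in.csv"); // target HDFS directory
            // delSrc=false keeps the local copy; overwrite=true mirrors "Overwrite file".
            fs.copyFromLocalFile(false, true, local, remote);
            fs.close();
        }
    }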
tHDFSPut Standard properties
These properties are used to configure tHDFSPut running in the Standard Job framework.
The Standard tHDFSPut component belongs to the Big Data and the File families.
The component in this framework is available when you are using one of the Talend solutions with Big Data.
Basic settings
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
Use an existing connection
Select this check box and in the Component List drop-down list, select the relevant connection component to reuse the connection details you already defined.
Note that when a Job contains the parent Job and the child Job, Component List presents only the connection components in the same Job level.
Distribution
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some require specific configuration.
Hadoop version
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using and evolve along with Hadoop itself.
Use kerberos authentication
If you are accessing a Hadoop cluster running with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.
This check box is available depending on the Hadoop distribution you are using.
Use a keytab to authenticate
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. Enter the principal to be used in the Principal field and the access path to the keytab file in the Keytab field.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used.
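As an illustration of what these two options amount to at the API level, here is a minimal sketch of a keytab-based Kerberos login through Hadoop's UserGroupInformation class; the principal and keytab path are assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Hypothetical principal and keytab file; the executing OS user only
            // needs read access to the keytab, it need not match the principal.
            UserGroupInformation.loginUserFromKeytab(
                    "hdfs_user@EXAMPLE.COM", "/etc/security/keytabs/hdfs_user.keytab");
            System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
        }
    }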
NameNode URI
Type in the URI of the Hadoop NameNode, the master node of a Hadoop system. For example, if you have chosen a machine called masternode as the NameNode, the location is hdfs://masternode:portnumber.
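The same URI can also be supplied programmatically through the fs.defaultFS property; a minimal sketch, assuming the hypothetical host masternode and port 8020:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class NameNodeUriSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Programmatic equivalent of the NameNode URI field;
            // host and port are assumptions.
            conf.set("fs.defaultFS", "hdfs://masternode:8020");
            FileSystem fs = FileSystem.get(conf);
            System.out.println("Default file system: " + fs.getUri());
            fs.close();
        }
    }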
User name
The User name field is available when you are not using Kerberos to authenticate. In the User name field, enter the login user name for your distribution. If you leave it empty, the user name of the machine hosting the Studio will be used.
Group
Enter the membership including the authentication user under which the HDFS instances were started. This field is available depending on the distribution you are using.
Local directory
Local directory where the files to be loaded into HDFS are stored.
HDFS directory
Browse to, or enter the path pointing to the data to be used in the file system.
Overwrite file
Options to overwrite or not the existing file with the new one.
Use Perl5 Regex Expression as Filemask
Select this check box if you want to use Perl5 regular expressions in the Files field as file masks. For information about Perl5 regular expression syntax, see Perl5 Regular Expression Syntax.
Files
In the Files area, the fields to be completed are:
File mask: type in the file name or file mask, using wildcards or regular expressions if need be (a mask example follows this entry).
New name: give a new name to the loaded file.
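As an illustration of a Perl5-style mask in the File mask field, the following sketch tests a mask against a few file names; it uses java.util.regex, whose syntax covers the Perl5 constructs shown here, although Talend Studio may rely on a different regex engine internally:

    import java.util.regex.Pattern;

    public class FileMaskSketch {
        public static void main(String[] args) {
            // A Perl5-style mask selecting CSV files only.
            Pattern mask = Pattern.compile(".*\\.csv");
            for (String name : new String[]{"orders.csv", "orders.txt", "2024_sales.csv"}) {
                String action = mask.matcher(name).matches() ? "transferred" : "skipped";
                System.out.println(name + " -> " + action);
            }
        }
    }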
Die on error
This check box is selected by default. Clear the check box to skip the rows on error and complete the process for error-free rows.
Advanced settings
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones.
For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
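For instance, overriding a standard HDFS property such as dfs.replication in code looks like the following sketch; the value 1 is an assumption suitable only for a single-node test cluster:

    import org.apache.hadoop.conf.Configuration;

    public class HadoopPropertiesSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // dfs.replication is a standard HDFS property; the value 1 is an
            // assumption, overriding the cluster's default replication factor.
            conf.set("dfs.replication", "1");
            System.out.println("dfs.replication = " + conf.get("dfs.replication"));
        }
    }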
Global Variables
NB_FILE: the number of files processed. This is an After variable and it returns an integer.
TRANSFER_MESSAGES: file transferred information. This is an After variable.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
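These variables can be read from a downstream component such as tJava once tHDFSPut has finished; a minimal sketch, assuming the component label is tHDFSPut_1:

    // Code typed into a tJava component connected with an OnSubjobOk link.
    // The label tHDFSPut_1 is an assumption; use your Job's actual label.
    Integer nbFile = (Integer) globalMap.get("tHDFSPut_1_NB_FILE");
    String errorMessage = (String) globalMap.get("tHDFSPut_1_ERROR_MESSAGE");
    System.out.println("Files transferred: " + nbFile);
    if (errorMessage != null) {
        System.out.println("Last error: " + errorMessage);
    }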
Usage
Usage rule
This component combines HDFS connection and data loading, and is thus usually used as a single-component subJob. Different from the tHDFSInput and the tHDFSOutput components, it moves whole files rather than a row-based data flow, so it does not require a schema. It is often connected to the rest of the Job using an OnSubjobOk or OnComponentOk link, depending on the context.
Dynamic settings
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your HDFS connection dynamically from multiple connections planned in your Job. The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. For examples on using dynamic parameters, see Scenario: Reading data from databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Limitations
JRE 1.6+ is required.
Related scenario
For a related scenario, see Scenario: Computing data with Hadoop distributed file system.