
tHDFSGet – Docs for ESB 5.x

tHDFSGet


Warning

This component will be available in the Palette of Talend Studio on the
condition that you have subscribed to one of the Talend solutions with Big
Data.

tHDFSGet properties

Component family

Big Data / Hadoop

 

Function

tHDFSGet copies files from the Hadoop Distributed File System (HDFS), pastes
them into a user-defined directory and, if need be, renames them.

Purpose

tHDFSGet connects to the Hadoop Distributed File System to retrieve
large-scale files with optimized performance.

Basic settings

Property type

Either Built-in or Repository

Built-in: No property data stored
centrally.

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Use an existing connection

Select this check box and, in the Component List, click the HDFS connection
component from which you want to reuse the connection details already
defined.

Note

When a Job contains a parent Job and a child Job, the Component list
presents only the connection components at the same Job level.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary
depending on the component you are using. Among these options, the following ones require
specific configuration:

  • If available in this Distribution drop-down list, the
    Microsoft HD Insight option allows you to use a
    Microsoft HD Insight cluster. For this purpose, you need to configure the
    connections to the WebHCat service, the HD Insight service and the Windows Azure
    Storage service of that cluster in the areas that are displayed. A demonstration
    video about how to configure this connection is available in the following link:
    https://www.youtube.com/watch?v=A3QTT6VsNoM

  • The Custom option allows you to connect to a
    cluster different from any of the distributions given in this list, that is to
    say, to connect to a cluster not officially supported by Talend.

In order to connect to a custom distribution, once you have selected Custom, click the three-dot [...] button to display the dialog box in which you can
alternatively:

  1. Select Import from existing version to import an
    officially supported distribution as base and then add other required jar files
    which the base distribution does not provide.

  2. Select Import from zip to import a custom
    distribution zip that, for example, you can download from http://www.talendforge.org/exchange/index.php.

    Note

    In this dialog box, the active check box must be kept selected so as to import
    the jar files pertinent to the connection to be created between the custom
    distribution and this component.

    For a step-by-step example of how to connect to a custom distribution and
    share this connection, see Connecting to a custom Hadoop distribution.

 

Hadoop version

Select the version of the Hadoop distribution you are using. The available options vary
depending on the component you are using. Along with the evolution of Hadoop, please note
the following changes:

  • If you use Hortonworks Data Platform V2.2, the
    configuration files of your cluster might be using environment variables such as
    ${hdp.version}. If this is your situation, you
    need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component with the path value
    explicitly pointing to the MapReduce framework archive of your cluster. For
    example:
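
    A minimal illustration (the version tag in the archive path depends on your
    HDP build and must be verified against your own cluster):

        mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework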

  • If you use Hortonworks Data Platform V2.0.0, the type of the operating
    system for running the distribution and a Talend Job must be the same, such
    as Windows or Linux. Otherwise, you have to use Talend JobServer to execute
    the Job in the same type of operating system in which the Hortonworks Data
    Platform V2.0.0 distribution you are using is run. For further information
    about Talend JobServer, see the Talend Installation and Upgrade Guide.

 

Use kerberos authentication

If you are accessing the Hadoop cluster running with Kerberos security, select this check
box, then enter the Kerberos principal name for the NameNode in the field displayed. This
enables you to use your user name to authenticate against the credentials stored in
Kerberos.

This check box is available depending on the Hadoop distribution you are connecting
to.

  Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log
into a Kerberos-enabled Hadoop system using a given keytab file. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used
in the Principal field and the access path to the keytab
file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a
principal designates but must have the right to read the keytab file being used. For
example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
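
For illustration, the following minimal sketch shows the same keytab login performed directly with the Hadoop client API; it assumes hadoop-common is on the classpath, and the principal and keytab path are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Tell the Hadoop client that the cluster is secured with Kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // The OS user running this code only needs read access to the keytab;
            // it does not have to match the principal (for example, user1 reading
            // the keytab of the principal guest).
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM",               // principal (hypothetical)
                    "/security/keytabs/guest.keytab"); // keytab path (hypothetical)
        }
    }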

 Connection

NameNode URI

Type in the URI of the Hadoop NameNode. The NameNode is the master node of a Hadoop system.
For example, we assume that you have chosen a machine called masternode as the NameNode of an Apache Hadoop distribution, then the
location is hdfs://masternode:portnumber.

 

User name

Enter the user authentication name of HDFS.

 

Group

Enter the group to which the authentication user belongs and under which the HDFS instances
were started. This field is available depending on the distribution you are using.

HDFS directory

Browse to, or enter, the directory in HDFS where the data you need to use is stored.

 

Local directory

Browse to, or enter, the local directory in which to store the files
obtained from HDFS.

 

Overwrite file

Select an option to determine whether or not to overwrite the existing file
with the new one.

 

Append

Select this check box to add the new rows at the end of the
records.

 

Include subdirectories

Select this check box if the selected input source type includes
sub-directories.

 

Files

In the Files area, the fields to
be completed are:

File mask: type in the name of the file(s) to be selected from HDFS. Regular
expressions are supported.

New name: give a new name to
the obtained file.
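
As an illustration of how a mask such as *.txt selects files, the following minimal sketch uses the glob matching of the Hadoop FileSystem API; this is a sketch of the matching idea only, not necessarily the component's exact mechanism, and the URI, user and paths are hypothetical (hadoop-common and hadoop-hdfs assumed on the classpath):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileMaskSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the NameNode as a given HDFS user.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://masternode:9000"), new Configuration(), "user1");
            // globStatus expands the mask much as a File mask like *.txt does.
            FileStatus[] matches = fs.globStatus(new Path("/testFile/*.txt"));
            if (matches != null) {
                for (FileStatus status : matches) {
                    System.out.println(status.getPath());
                }
            }
            fs.close();
        }
    }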

 

Die on error

This check box is selected by default. Clear the check box to skip
the row on error and complete the process for error-free
rows.

 Advanced settings

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

 

Hadoop properties

Talend Studio uses a default configuration for its engine to perform
operations in a Hadoop distribution. If you need to use a custom configuration in a specific
situation, complete this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override those default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the
    properties defined in that metadata and becomes uneditable unless you change the
    Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such
as HDFS and Hive, see the documentation of the Hadoop distribution you
are using, or see Apache’s Hadoop documentation at http://hadoop.apache.org/docs and then select the version of the documentation you want.
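
For example, an entry such as the following (illustrative only; verify any property name and value against the documentation of your distribution) lowers the HDFS block replication factor, which can be useful on a single-node test cluster:

    Property             Value
    "dfs.replication"    "1"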

Dynamic settings

Click the [+] button to add a row in the table and fill the
Code field with a context variable to choose your HDFS
connection dynamically from multiple connections planned in your Job. This feature is useful
when you need to access files in different HDFS systems or different distributions,
especially when you are working in an environment where you cannot change your Job settings,
for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the
Use an existing connection check box is selected in the
Basic settings view. Once a dynamic parameter is
defined, the Component List box in the Basic settings view becomes unusable.

For more information on Dynamic settings and context
variables, see Talend Studio User Guide.

Global Variables

NB_FILE: the number of files processed. This is an After
variable and it returns an integer.

CURRENT_STATUS: the execution result of the component.
This is a Flow variable and it returns a string.

TRANSFER_MESSAGES: file transferred information. This is
an After variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to
access the variable list and choose the variable to use from it.
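
For illustration, these variables can also be read in a tJava component once the subjob has completed; a minimal sketch, assuming the component instance is named tHDFSGet_1 (globalMap is the map the generated Job code provides to every component):

    // Read the After variables of the tHDFSGet_1 instance from globalMap.
    Integer nbFile = (Integer) globalMap.get("tHDFSGet_1_NB_FILE");
    String messages = (String) globalMap.get("tHDFSGet_1_TRANSFER_MESSAGES");
    System.out.println("Files transferred: " + nbFile);
    System.out.println("Transfer details: " + messages);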

For further information about variables, see Talend Studio
User Guide.

Usage

This component combines HDFS connection and data extraction, and is thus
used as a single-component subjob to move data from HDFS to a
user-defined local directory.

Unlike the tHDFSInput and tHDFSOutput components, it runs
standalone and does not generate an input or output flow for the other
components.

It is often connected to the Job using an OnSubjobOk or OnComponentOk link, depending on the context.
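
Conceptually, the operation this component performs is comparable to the following minimal sketch against the Hadoop FileSystem API (a sketch only, not the component's actual generated code; hadoop-common and hadoop-hdfs are assumed on the classpath, and the URI, user and paths are hypothetical, reusing the values from the scenario below):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsGetSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the NameNode as a given HDFS user.
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://masternode:9000"), new Configuration(), "user1");
            // Copy a file out of HDFS into a local directory, as tHDFSGet does.
            fs.copyToLocalFile(
                    new Path("/testFile/in.txt"),         // file in the HDFS directory
                    new Path("C:/hadoopfiles/getFile/")); // local directory
            fs.close();
        }
    }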

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio. The following list presents MapR related information for
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is \lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the [Preferences] dialog box. This argument provides the
    Studio with the path to the native library of that MapR client. This allows the
    subscription-based users to make full use of the Data viewer to view,
    locally in the Studio, the data stored in MapR. For further information about how to
    set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.
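
    For illustration only (the actual path depends on the MapR client version
    and the operating system, so verify it in your MapR installation):

        -Djava.library.path=/opt/mapr/hadoop/hadoop-0.20.2/lib/native/Linux-amd64-64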

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User
Guide
.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Limitations

JRE 1.6+ is required.

Scenario: Computing data with Hadoop distributed file system

The following scenario describes a simple Job that creates a file in a defined
directory, puts it into and then gets it out of HDFS, stores it in another local
directory, and reads it at the end of the Job.

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tFileOutputDelimited, tHDFSPut, tHDFSGet,
    tFileInputDelimited and tLogRow.

  2. Connect tFixedFlowInput to tFileOutputDelimited using a Row > Main
    connection.

  3. Connect tFileInputDelimited to tLogRow using a Row > Main
    connection.

  4. Connect tFixedFlowInput to tHDFSPut using an OnSubjobOk connection.

  5. Connect tHDFSPut to tHDFSGet using an OnSubjobOk connection.

  6. Connect tHDFSGet to tFileInputDelimited using an OnSubjobOk connection.

    Use_Case_tHDFSGet1.png

Configuring the input component

  1. Double-click tFixedFlowInput to define
    the component in its Basic settings
    view.

  2. Set the Schema to Built-In and click the three-dot […] button next to Edit
    Schema to describe the data structure you want to create from internal
    variables. In this scenario, the schema contains one column:
    content.

    Use_Case_tHDFSGet3.png
  3. Click the plus button to add the parameter line.

  4. Click OK to close the dialog box and accept the propagation of the
    changes when prompted by the Studio.

  5. In Basic settings, define the
    corresponding value in the Mode area using
    the Use Single Table option. In this
    scenario, the value is “Hello world!”.

    Use_Case_tHDFSGet2.png

Configuring the tFileOutputDelimited component

  1. Double-click tFileOutputDelimited to
    define the component in its Basic settings
    view.

    Use_Case_tHDFSGet4.png
  2. Click the […] button next to the
    File Name field and browse to the
    output file you want to write data in, in.txt in this
    example.

Loading the data from the local file

  1. Double-click tHDFSPut to define the
    component in its Basic settings
    view.

    Use_Case_tHDFSGet5.png
  2. Select, for example, Apache 0.20.2 from the Hadoop
    version
    list.

  3. In the NameNode URI, the Username and the Group fields, enter the connection parameters to the
    HDFS.

  4. Next to the Local directory field, click
    the three-dot […] button to browse to the
    folder with the file to be loaded into the HDFS. In this scenario, the
    directory has been specified while configuring tFileOutputDelimited:
    C:/hadoopfiles/putFile/.

  5. In the HDFS directory field, type in the
    intended location in HDFS to store the file to be loaded. In this example,
    it is /testFile.

  6. Click the Overwrite file field to expand the drop-down list.

  7. From the menu, select always.

  8. In the Files area, click the plus button
    to add a row in which you define the file to be loaded.

  9. In the File mask column, enter *.txt between the quotation marks,
    replacing the default newLine value, and leave the New name column as it
    is. This allows you to load all the .txt files in the specified directory
    into the HDFS without changing their names. In this example, the file is
    in.txt.

Getting the data from the HDFS

  1. Double-click tHDFSGet to define the
    component in its Basic settings
    view.

    Use_Case_tHDFSGet6.png
  2. Select, for example, Apache 0.20.2 from the Hadoop
    version
    list.

  3. In the NameNode URI, the Username and the Group fields, enter the connection parameters to the
    HDFS.

  4. In the HDFS directory field, type in the
    location storing the loaded file in HDFS. In this example, it is
    /testFile.

  5. Next to the Local directory field, click
    the three-dot […] button to browse to the
    folder intended to store the files that are extracted out of the HDFS. In
    this scenario, the directory is:
    C:/hadoopfiles/getFile/.

  6. Click the Overwrite file field to expand the drop-down list.

  7. From the menu, select always.

  8. In the Files area, click the plus button
    to add a row in which you define the file to be extracted.

  9. In the File mask column, enter *.txt between the quotation marks,
    replacing the default newLine value, and leave the New name column as it
    is. This allows you to extract all the .txt files from the specified
    directory in the HDFS without changing their names. In this example, the
    file is in.txt.

Reading data from the HDFS and saving the data locally

  1. Double-click tFileInputDelimited to
    define the component in its Basic settings
    view.

    Use_Case_tHDFSGet7.png
  2. Set the Property Type to Built-In.

  3. Next to the File Name/Stream field, click
    the three-dot button to browse to the file you have obtained from the HDFS.
    In this scenario, the directory is
    C:/hadoopfiles/getFile/in.txt.

  4. Set Schema to Built-In and click Edit schema to define the data to pass
    on to the tLogRow component.

    Use_Case_tHDFSGet8.png
  5. Click the plus button to add a new column.

  6. Click OK to close the dialog box and accept the propagation of the
    changes when prompted by the Studio.

Executing the Job

Save the Job and press F6 to execute it.

The in.txt file is created and loaded into the HDFS.

Use_Case_tHDFSGet9.png

The file is also extracted from the HDFS by tHDFSGet and is read by tFileInputDelimited.

Use_Case_tHDFSGet10.png

Document sourced from Talend: https://help.talend.com