
tHDFSConnection – Docs for ESB 7.x


Connects to a given HDFS so that the other Hadoop components can reuse the
connection it creates to communicate with this HDFS.

tHDFSConnection provides a
connection to the Hadoop Distributed File System (HDFS) of interest
at runtime.

tHDFSConnection Standard properties

These properties are used to configure tHDFSConnection running in the Standard Job framework.

The Standard
tHDFSConnection component belongs to the Big Data and the File families.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Property type

Either Built-in or Repository

Built-in: No property data stored centrally.

Repository: Select the repository file in which the
properties are stored. The fields that follow are completed automatically using
the data retrieved.


Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down
    list, the Microsoft HD Insight
    option allows you to use a Microsoft HD Insight
    cluster. For this purpose, you need to configure the connections to the HD
    Insight cluster and the Windows Azure Storage service of that cluster in the
    areas that are displayed. For a
    detailed explanation about these parameters, search for configuring the
    connection manually on Talend Help Center (https://help.talend.com).

  • If you select Amazon EMR, see how to get started with Amazon EMR on
    Talend Help Center (https://help.talend.com).

  • The Custom option
    allows you to connect to a cluster different from any of the distributions
    given in this list, that is to say, to connect to a cluster not officially
    supported by Talend. To do this, use one of the following import options:

  1. Select Import from existing
    version to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.


    In Talend
    Exchange, members of the Talend
    community have shared some ready-for-use configuration zip files
    which you can download from this Hadoop configuration
    list and directly use them in your connection accordingly. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution from this
    list; it is then recommended to use the Import from
    existing version
    option to take an existing distribution as base
    to add the jars required by your distribution.

    Note that custom versions are not officially supported by
    Talend. Talend
    and its community provide you with the opportunity to connect to
    custom versions from the Studio but cannot guarantee that the configuration of
    whichever version you choose will be easy, due to the wide range of different
    Hadoop distributions and versions that are available. As such, you should only
    attempt to set up such a connection if you have sufficient Hadoop experience to
    handle any issues on your own.


    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.


Version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Scheme

Select the URI scheme of the file system to be used from the
Scheme drop-down list. This scheme could be:

  • HDFS
  • WebHDFS. WebHDFS with SSL is not supported yet.
  • ADLS. Only Azure Data Lake Storage Gen1 is supported.

The schemes present on this list vary depending on the distribution you
are using and only the scheme that appears on this list with a given
distribution is officially supported by Talend.

Once a scheme is
selected, the corresponding syntax such as
webhdfs://localhost:50070/ for WebHDFS is displayed in the
field for the NameNode location of your cluster.

If you have selected
ADLS, the connection parameters to be defined become:

  • In the
    Client ID and the Client
    key fields, enter, respectively, the authentication
    ID and the authentication key generated upon the registration of the
    application that the current Job you are developing uses to access
    Azure Data Lake Storage.

    Ensure that the application to be used has appropriate
    permissions to access Azure Data Lake. You can check this on the
    Required permissions view of this application on Azure. For further
    information, see the Azure documentation Assign the Azure AD application to
    the Azure Data Lake Storage account file or folder.

  • In the
    Token endpoint field, copy-paste the
    OAuth 2.0 token endpoint that you can obtain from the
    Endpoints list accessible on the
    App registrations page of your Azure
    portal.

For a
video demonstration, see Configure and use Azure in a
Job.

NameNode URI

Type in the URI of the Hadoop NameNode, the master node of a
Hadoop system. For example, if you have chosen a machine called masternode as the NameNode, the location is
hdfs://masternode:portnumber. If you are using WebHDFS, the location should be
webhdfs://masternode:portnumber; WebHDFS with SSL is not
supported yet.
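For illustration, the minimal Java sketch below shows what such a URI means to Hadoop's Java client API, which Talend Jobs use under the hood; the host masternode and port 8020 are placeholders for your cluster's actual values:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NameNodeUriSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode URI; replace the host and port with your cluster's values.
            FileSystem fs = FileSystem.get(URI.create("hdfs://masternode:8020/"), conf);
            // Simple sanity check: the file system root should always exist.
            System.out.println("HDFS root exists: " + fs.exists(new Path("/")));
            fs.close();
        }
    }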

Inspect the classpath for configurations

Select this check box to allow the component to check the configuration
files in the directory you have set with the $HADOOP_CONF_DIR
variable and directly read parameters from these files in this directory. This
feature allows you to easily change the Hadoop configuration for the component to
switch between different environments, for example, from a test environment to a
production environment.

In this situation, the fields or options used to configure Hadoop
connection and/or Kerberos security are hidden.

If you want to use certain parameters such as the Kerberos parameters but
these parameters are not included in these Hadoop configuration files, you need to
create a file called talend-site.xml and put this file into the
same directory defined with $HADOOP_CONF_DIR. This talend-site.xml file should read as follows:
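A minimal sketch of such a file, assuming the parameters to add are Kerberos ones; the talend.* property names are indicative, so verify the exact names on Talend Help Center (https://help.talend.com):

    <!-- talend-site.xml: place this file in the directory set with $HADOOP_CONF_DIR. -->
    <configuration>
        <property>
            <name>talend.kerberos.authentication</name>
            <!-- Indicative property: the Kerberos method to use, kinit or keytab. -->
            <value>kinit</value>
        </property>
        <property>
            <name>talend.kerberos.keytab.principal</name>
            <!-- Indicative property: the principal name of the keytab. -->
            <value>user@EXAMPLE.COM</value>
        </property>
        <property>
            <name>talend.kerberos.keytab.path</name>
            <!-- Indicative property: the path to the keytab file. -->
            <value>/kdc/user.keytab</value>
        </property>
    </configuration>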

The parameters read from these configuration files override the default
ones used by the Studio. When a parameter does not exist in these configuration
files, the default one is used.

Use kerberos authentication

If you are accessing the Hadoop cluster running
with Kerberos security, select this check box, then enter the Kerberos
principal name for the NameNode in the field displayed. This enables you to use
your user name to authenticate against the credentials stored in Kerberos.

  • If this cluster is a MapR cluster of version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by following
    the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username
    defined in the Job in each execution. If you need to reuse an existing ticket issued for the
    same username, leave both the Force MapR ticket
    authentication check box and the Use Kerberos
    authentication check box clear, and then MapR should be able to automatically
    find that ticket on the fly.

This check box is available depending on the Hadoop distribution you are
connecting to.

Use a keytab to authenticate

Select the Use a keytab to authenticate
check box to log into a Kerberos-enabled system using a given keytab file. A keytab
file contains pairs of Kerberos principals and encrypted keys. You need to enter the
principal to be used in the Principal field and
the access path to the keytab file itself in the Keytab field. This keytab file must be stored in the machine in
which your Job actually runs, for example, on a Talend Jobserver.

Note that the user that executes a keytab-enabled Job is not necessarily
the one a principal designates but must have the right to read the keytab file being
used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
situation, ensure that user1 has the right to read the keytab
file to be used.
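For context, this kind of keytab login corresponds to Hadoop's UserGroupInformation API; the sketch below (the principal and keytab path are placeholder values) shows the equivalent programmatic login:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Switch Hadoop from simple authentication to Kerberos.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Placeholder principal and keytab path; the OS user running the Job
            // only needs read access to this keytab file, as noted above.
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM", "/etc/security/keytabs/guest.keytab");
            System.out.println("Logged in as: " + UserGroupInformation.getCurrentUser());
        }
    }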

User name

User authentication name of HDFS.


Group

Enter the membership including the authentication user under which the HDFS
instances were started. This field is available depending on the distribution
you are using.

Hadoop properties

Talend Studio
uses a default configuration for its engine to perform
operations in a Hadoop distribution. If you need to use a custom configuration in a specific
situation, complete this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override those default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the
    properties defined in that metadata and becomes uneditable unless you change the
    Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such
as HDFS and Hive, see the documentation of the Hadoop distribution you
are using or see Apache’s Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want. For demonstration purposes, the links to some properties are listed below:

  • core-default.xml: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
  • hdfs-default.xml: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
  • mapred-default.xml: https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
  • yarn-default.xml: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
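For example, to lower the replication factor for files written through this connection, a row such as the following could be added to the Hadoop properties table; this is an illustration, and in the Studio both columns are entered as double-quoted strings:

    Property             Value
    "dfs.replication"    "1"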

Use datanode hostname

Select the Use datanode hostname check box
to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true.
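Selecting the check box has the same effect as declaring this standard HDFS client property yourself, for example in the Hadoop properties table or in hdfs-site.xml:

    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <!-- Connect to datanodes using their hostnames instead of their IP addresses. -->
        <value>true</value>
    </property>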

Setup HDFS encryption configurations

If the HDFS transparent encryption has been enabled in your cluster, select
the Setup HDFS encryption configurations check
box and in the HDFS encryption key provider field
that is displayed, enter the location of the KMS proxy.

For further information about the HDFS transparent encryption and its KMS proxy, see Transparent Encryption in HDFS.
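The KMS proxy location follows Hadoop's standard kms:// URI format; for example, with a placeholder host name and the default KMS port 16000:

    kms://http@kms-host.example.com:16000/kms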

Advanced settings

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.
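For illustration, this variable can be read from the globalMap in a code component such as tJava; the component label tHDFSConnection_1 below is a placeholder for the label of your own component:

    // Runs inside a Talend code component (for example, tJava), where globalMap is in scope.
    // "tHDFSConnection_1" is a placeholder; use your own component's label.
    String errorMessage = (String) globalMap.get("tHDFSConnection_1_ERROR_MESSAGE");
    if (errorMessage != null) {
        System.err.println("tHDFSConnection failed: " + errorMessage);
    }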

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space to access the variable list and choose the variable to use from it.

For further information about variables, see the
Talend Studio User Guide.


Usage rule

This component is generally used with other Hadoop
components.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio. The following list presents MapR related information as an
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under the MapR client installation directory
    (MAPR_INSTALL). For example, the library for
    Windows is located under lib in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box in the Window menu. This argument provides the Studio with the path to the
    native library of that MapR client. This allows subscription-based users to make
    full use of the Data viewer to view locally in the
    Studio the data stored in MapR.
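For example, on Windows the argument might look like the following; the path is a placeholder for the native-library directory of your actual MapR client installation:

    -Djava.library.path="C:\opt\mapr\hadoop\lib\native"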

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.


JRE 1.6+ is required.

Related scenarios

No scenario is available for the Standard version of this component yet.
