August 17, 2023

tHDFSOutput – Docs for ESB 5.x

tHDFSOutput


Warning

This component will be available in the Palette of
Talend Studio on the condition that you have subscribed to one of
the Talend
solutions with Big Data.

tHDFSOutput properties

Component family

Big Data / Hadoop

 

Function

tHDFSOutput writes data flows it
receives into a given Hadoop distributed file system (HDFS).

If you have subscribed to one of the Talend solutions with Big Data, you are
able to use this component in a Talend Map/Reduce Job to generate
Map/Reduce code. For further information, see tHDFSOutput in Talend Map/Reduce
Jobs
.

Purpose

tHDFSOutput transfers data flows
into a given HDFS file system.

Basic settings

Property type

Either Built-in or Repository

Built-in: No property data stored
centrally.

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

   

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

   

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

 

Use an existing connection

Select this check box and in the Component List click the
HDFS connection component from which you want to reuse the connection details already
defined.

Note

When a Job contains a parent Job and a child Job, the Component list presents only the
connection components at the same Job level.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary
depending on the component you are using. Among these options, the following ones require
specific configuration:

  • If available in this Distribution drop-down list, the
    Microsoft HD Insight option allows you to use a
    Microsoft HD Insight cluster. For this purpose, you need to configure the
    connections to the WebHCat service, the HD Insight service and the Windows Azure
    Storage service of that cluster in the areas that are displayed. A demonstration
    video about how to configure this connection is available in the following link:
    https://www.youtube.com/watch?v=A3QTT6VsNoM

  • The Custom option allows you to connect to a
    cluster different from any of the distributions given in this list, that is to
    say, to connect to a cluster not officially supported by Talend.

In order to connect to a custom distribution, once you have selected Custom, click the [...] button to display the dialog box in which you can
alternatively:

  1. Select Import from existing version to import an
    officially supported distribution as base and then add other required jar files
    which the base distribution does not provide.

  2. Select Import from zip to import a custom
    distribution zip that, for example, you can download from http://www.talendforge.org/exchange/index.php.

    Note

    In this dialog box, the active check box must be kept selected so as to import
    the jar files pertinent to the connection to be created between the custom
    distribution and this component.

    For a step-by-step example of how to connect to a custom distribution and
    share this connection, see Connecting to a custom Hadoop distribution.

 

Hadoop version

Select the version of the Hadoop distribution you are using. The available options vary
depending on the component you are using. Along with the evolution of Hadoop, please note
the following changes:

  • If you use Hortonworks Data Platform V2.2, the
    configuration files of your cluster might be using environment variables such as
    ${hdp.version}. If this is your situation, you
    need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component, with a path value
    that explicitly points to the MapReduce framework archive of your cluster (see the
    illustrative sketch after this list).

  • If you use Hortonworks Data Platform V2.0.0, the
    type of the operating system for running the distribution and a Talend
    Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend
    Jobserver to execute the Job in the same type of operating system in which the
    Hortonworks Data Platform V2.0.0 distribution you
    are using is run. For further information about Talend Jobserver, see
    Talend Installation and Upgrade Guide.
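
As an illustration only, the following minimal sketch shows such an explicit value set through the Hadoop Configuration API; the /hdp/apps/... archive path and the HDP build number are assumptions about a typical Hortonworks layout and must be replaced with the actual location on your cluster. In the Studio itself, the same property name and value are simply entered as one row of the Hadoop properties table.

    import org.apache.hadoop.conf.Configuration;

    public class FrameworkPathSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Assumption: a typical HDP 2.2 archive location; verify the real path on your cluster.
            conf.set("mapreduce.application.framework.path",
                    "/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework");
            System.out.println(conf.get("mapreduce.application.framework.path"));
        }
    }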

Authentication

Use kerberos authentication

If you are accessing the Hadoop cluster running with Kerberos security, select this check
box, then enter the Kerberos principal name for the NameNode in the field displayed. This
enables you to use your user name to authenticate against the credentials stored in
Kerberos.

This check box is available depending on the Hadoop distribution you are connecting
to.

  Use a keytab to authenticate

Select the Use a keytab to authenticate check box to log
into a Kerberos-enabled Hadoop system using a given keytab file. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used
in the Principal field and the access path to the keytab
file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a
principal designates but must have the right to read the keytab file being used. For
example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
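
Conceptually, this keytab login corresponds to the Hadoop UserGroupInformation API. The following minimal sketch, with a hypothetical principal and keytab path, illustrates the mechanism; it is not the component's actual generated code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Hypothetical principal and keytab path; replace both with your own values.
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM", "/etc/security/keytabs/guest.keytab");
            System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
        }
    }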

NameNode URI

Type in the URI of the Hadoop NameNode. The NameNode is the master node of a Hadoop system.
For example, if you have chosen a machine called masternode as the NameNode of an Apache Hadoop distribution, the
location is hdfs://masternode:portnumber.
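
For illustration, a minimal HDFS write using such a URI with the Hadoop FileSystem API could look like the sketch below; the host name masternode, the port 8020, the user name and the file path are assumptions to adapt to your own cluster.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder NameNode URI and user name.
            URI nameNode = URI.create("hdfs://masternode:8020");
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(nameNode, conf, "hdfs_user");
                 FSDataOutputStream out = fs.create(new Path("/user/hdfs_user/out.csv"))) {
                out.writeBytes("id;name\n1;foo\n");
            }
        }
    }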

 

User name

Enter the user authentication name of HDFS.

 

Group

Enter the group under which the HDFS instances were started and of which the authentication
user is a member. This field is available depending on the distribution you are using.

 

File Name

Browse to, or enter the location of the file to which you write data.
This file is created automatically if it does not exist.

File type

Type

Select the type of the file to be processed. The type of the file may be:

  • Text file.

  • Sequence file: a Hadoop sequence file
    consists of binary key/value pairs and is suitable for the Map/Reduce framework.
    For further information, see http://wiki.apache.org/hadoop/SequenceFile.

    Once you select the Sequence file format, the
    Key column list and the Value column list appear to allow you to select the
    keys and the values of that Sequence file to be processed.
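
To make the key/value structure concrete, here is a minimal sketch that writes a Hadoop sequence file with the standard SequenceFile API; the path and the key and value types (IntWritable/Text) are illustrative choices, not a requirement of the component.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path("/user/hdfs_user/out.seq"); // placeholder location
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class))) {
                // Each record is a binary key/value pair, which is what the
                // Key column and Value column lists of the component map to.
                writer.append(new IntWritable(1), new Text("first record"));
                writer.append(new IntWritable(2), new Text("second record"));
            }
        }
    }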

 

Action

Select an operation in HDFS:

Create: Creates a file with data
using the file name defined in the File
Name
field.

Overwrite: Overwrites the data in
the file specified in the File Name
field.

Append: Inserts the data into the
file specified in the File Name
field. The specified file is created automatically if it does not
exist.

 

Row separator

Enter the separator used to identify the end of a row.

This field is not available for a Sequence file.

 

Field separator

Enter a character, a string, or a regular expression to separate fields in the transferred
data.

This field is not available for a Sequence file.

 

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select
this check box to display the Encoding list.

Select the encoding from the list or select Custom and
define it manually. This field is compulsory for database data handling.

This option is not available for a Sequence file.

 

Compression

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.
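
As an illustration of what compressing the output means at the HDFS level, the following sketch writes a gzip-compressed file with the Hadoop compression codec API; the codec choice, URI and paths are assumptions, and the component handles this for you when the check box is selected.

    import java.io.OutputStream;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CompressedWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Gzip is only one of the codecs Hadoop ships with (bzip2, Snappy, ... also exist).
            CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://masternode:8020"), conf);
                 OutputStream out = codec.createOutputStream(
                         fs.create(new Path("/user/hdfs_user/out.csv.gz")))) {
                out.write("id;name\n1;foo\n".getBytes(StandardCharsets.UTF_8));
            }
        }
    }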

 

Include header

Select this check box to output the header of the data.

This option is not available for a Sequence file.

Advanced settings

Hadoop properties

Talend Studio uses a default configuration for its engine to perform
operations in a Hadoop distribution. If you need to use a custom configuration in a specific
situation, complete this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override those default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the
    properties defined in that metadata and becomes uneditable unless you change the
    Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such
as HDFS and Hive, see the documentation of the Hadoop distribution you
are using or see Apache’s Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want. For demonstration purposes, an illustrative example of overriding such properties is given below.
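
The property names used here, dfs.replication and dfs.blocksize, are standard HDFS properties, but the values are assumptions, not recommendations; entering such name/value pairs in the Hadoop properties table has the same effect as the following Configuration calls.

    import org.apache.hadoop.conf.Configuration;

    public class CustomHadoopPropertiesSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("dfs.replication", "2");        // replication factor for the files written
            conf.set("dfs.blocksize", "134217728");  // HDFS block size in bytes (here 128 MB)
            System.out.println(conf.get("dfs.replication") + " / " + conf.get("dfs.blocksize"));
        }
    }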

 

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Dynamic settings

Click the [+] button to add a row in the table and fill the
Code field with a context variable to choose your HDFS
connection dynamically from multiple connections planned in your Job. This feature is useful
when you need to access files in different HDFS systems or different distributions,
especially when you are working in an environment where you cannot change your Job settings,
for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the
Use an existing connection check box is selected in the
Basic settings view. Once a dynamic parameter is
defined, the Component List box in the Basic settings view becomes unusable.

For more information on Dynamic settings and context
variables, see Talend Studio User Guide.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.
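
For instance, assuming the component instance is labeled tHDFSOutput_1 (the label depends on your Job), the variable can be read in a tJava component as in the following sketch; globalMap is the map that the generated Job code provides to every component.

    // Inside a tJava component; tHDFSOutput_1 is a hypothetical component label.
    String hdfsError = (String) globalMap.get("tHDFSOutput_1_ERROR_MESSAGE");
    if (hdfsError != null) {
        System.err.println("tHDFSOutput reported: " + hdfsError);
    }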

Usage

This component needs an input component.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is \lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the [Preferences] dialog box. This argument provides the
    Studio with the path to the native library of that MapR client. This allows
    subscription-based users to make full use of the Data viewer to view
    locally in the Studio the data stored in MapR. For further information about how to
    set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User
Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
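
For illustration only, the log4j 1.2 levels referenced above can be manipulated programmatically as in the sketch below; the logger name is hypothetical, and in practice you normally set the level through the Studio or a log4j configuration file rather than in code.

    import org.apache.log4j.BasicConfigurator;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class Log4jLevelSketch {
        public static void main(String[] args) {
            BasicConfigurator.configure(); // simple console appender for the demo
            Logger logger = Logger.getLogger("sample.hdfs.output"); // hypothetical logger name
            logger.setLevel(Level.DEBUG);  // levels range from TRACE up to OFF
            logger.debug("debug-level details are now visible");
            logger.warn("warnings are always visible at this level");
        }
    }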

Limitations

JRE 1.6+ is required.

tHDFSOutput in Talend Map/Reduce
Jobs

Warning

The information in this section is only for users that have subscribed to one of
the Talend solutions with Big Data and is not applicable to
Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tHDFSOutput, as well as the other Map/Reduce components preceding it,
generates native Map/Reduce code. This section presents the specific properties of
tHDFSOutput when it is used in that situation. For
further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce / Output

 

Basic settings

Property type

Either Built-in or Repository.

   

Built-in: no property data stored
centrally.

   

Repository: reuse properties
stored centrally under the Hadoop
Cluster
node of the Repository tree.

The fields that follow are pre-filled with the fetched
data.

For further information about the Hadoop
Cluster
node, see the Talend Big Data Getting Started Guide.

 

Schema and Edit
Schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

   

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

   

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

 

Folder

Browse to, or enter the directory in HDFS where the data you need to use is.

This path must point to a folder rather than a file, because a
Talend Map/Reduce
Job needs to write in its target folder not only the final result but
also multiple part- files
generated in performing Map/Reduce computations.

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.
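
A minimal sketch of what such a target folder typically ends up containing (part- files plus a _SUCCESS marker), listed with the FileSystem API; the NameNode URI and folder path are placeholders.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListOutputFolderSketch {
        public static void main(String[] args) throws Exception {
            try (FileSystem fs = FileSystem.get(
                    URI.create("hdfs://masternode:8020"), new Configuration())) {
                // Typically prints entries such as _SUCCESS, part-00000, part-00001, ...
                for (FileStatus status : fs.listStatus(new Path("/user/hdfs_user/output"))) {
                    System.out.println(status.getPath().getName());
                }
            }
        }
    }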

File type

Type

Select the type of the file to be processed. The type of the file may be:

  • Text file.

  • Sequence file: a Hadoop sequence file
    consists of binary key/value pairs and is suitable for the Map/Reduce framework.
    For further information, see http://wiki.apache.org/hadoop/SequenceFile.

    Once you select the Sequence file format, the
    Key column list and the Value column list appear to allow you to select the
    keys and the values of that Sequence file to be processed.

 

Action

Select an operation in HDFS:

Create: Creates a file and writes
data to it.

Overwrite: Overwrites the file
existing in the directory specified in the Folder field.

 

Row separator

Enter the separator used to identify the end of a row.

This field is not available for a Sequence file.

 

Field separator

Enter a character, a string, or a regular expression to separate fields in the transferred
data.

This field is not available for a Sequence file.

 

Include header

Select this check box to output the header of the data.

This option is not available for a Sequence file.

 

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select
this check box to display the Encoding list.

Select the encoding from the list or select Custom and
define it manually. This field is compulsory for database data handling.

This option is not available for a Sequence file.

 

Compression

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.

 

Merge result to single file

Select this check box to merge the final part files into a single file and put that file in a
specified directory.

Once you select it, enter the path to, or browse to, the
folder in which you want to store the merged file. This directory is
created automatically if it does not exist.

The following check boxes are used to manage the source and the target files:

  • Remove source dir: select this check box to remove the source
    files after the merge.

  • Override target file: select this check box to override the
    file already existing in the target location. This option does not override the
    folder.

This option is not available for a Sequence file.
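
Conceptually, this merge is similar to what the Hadoop 2.x FileUtil.copyMerge utility does, as in the minimal sketch below; the URI and paths are placeholders, and the boolean argument mirrors the Remove source dir check box. This is only an analogy, not the component's actual implementation.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class MergePartFilesSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://masternode:8020"), conf)) {
                // Concatenate all part- files of the source folder into one target file;
                // "false" keeps the source folder (set to true to remove it after the merge).
                FileUtil.copyMerge(fs, new Path("/user/hdfs_user/output"),
                        fs, new Path("/user/hdfs_user/merged/result.csv"),
                        false, conf, null);
            }
        }
    }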

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By
default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.

Usage

In a Talend Map/Reduce Job, it is used as an end component and requires
a transformation component as input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tHDFSOutput, as well as the whole MapReduce
family, appears in the Palette of
the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data
integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenario

If you are a subscription-based Big Data user, you can also consult a Talend
Map/Reduce Job that uses the Map/Reduce version of tHDFSOutput.

Source: Talend documentation, https://help.talend.com