
tHBaseOutput – Docs for ESB 5.x

tHBaseOutput


Warning

This component is available in the Palette of Talend Studio only if you have subscribed to one of the Talend solutions with Big Data.


tHBaseOutput properties

Component family

Big Data / HBase

 

Function

tHBaseOutput receives data from
its preceding component, creates a table in a given HBase database
and writes the received data into this HBase table.

If you have subscribed to one of the Talend solutions with Big Data, you can also use this component in a Talend Map/Reduce Job to generate Map/Reduce code. In that situation, tHBaseOutput belongs to the MapReduce component family and can only write data to an existing HBase table. For further information, see tHBaseOutput in Talend Map/Reduce Jobs.

Purpose

tHBaseOutput writes columns of data into a given HBase database.

Basic settings

Property type

Either Built-in or Repository.

Built-in: No property data stored centrally.

Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved.

 


Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

 

Use an existing connection

Select this check box and in the Component List click the
relevant connection component to reuse the connection details you already defined.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM

  • The Custom option allows you to connect to a
    cluster different from any of the distributions given in this list, that is to
    say, to connect to a cluster not officially supported by Talend.

In order to connect to a custom distribution, once you have selected Custom, click the [...] button to display the dialog box in which you can alternatively:

  1. Select Import from existing version to import an
    officially supported distribution as base and then add other required jar files
    which the base distribution does not provide.

  2. Select Import from zip to import a custom
    distribution zip that, for example, you can download from http://www.talendforge.org/exchange/index.php.

    Note

    In this dialog box, the active check box must be kept selected so as to import
    the jar files pertinent to the connection to be created between the custom
    distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

 

HBase version

Select the version of the Hadoop distribution you are using. The available options vary
depending on the component you are using. Along with the evolution of Hadoop, please note
the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component, with the path value explicitly pointing to the MapReduce framework archive of your cluster. An illustrative value is given after this list.

  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. For further information about Talend JobServer, see the Talend Installation and Upgrade Guide.
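For example, assuming an HDP 2.2 build number of 2.2.0.0-2041 (a placeholder; substitute the build string actually deployed on your cluster), the entry to add to the Hadoop properties table would look like this:

    mapreduce.application.framework.path = /hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework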

 

Hadoop version of the distribution

This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet officially supported by the Studio. In this situation, you need to select the Hadoop version of this custom cluster, that is to say, Hadoop 1 or Hadoop 2.

 

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between Talend and HBase. Note that when you configure Zookeeper, you might need to set the zookeeper.znode.parent property to define the root of the relative path to the HBase files in Zookeeper; in that case, select the Set Zookeeper znode parent check box to define this property.

 

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.
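For illustration, the Zookeeper quorum, the client port and the optional znode parent map onto the standard HBase client properties. The following is a minimal sketch in Java, assuming an HBase 1.x client library and placeholder host names; it is not the code the Studio generates:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HBaseConnectionSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Zookeeper quorum: comma-separated host names (placeholder values).
            conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
            // Zookeeper client port: 2181 is the common default.
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            // Root znode of HBase in Zookeeper; the value /hbase-unsecure is an
            // assumption that applies to some Hortonworks setups only.
            conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        }
    }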

 

Use kerberos authentication

If you are accessing an HBase database running with Kerberos security, select this check box, then enter the HBase related principal names in the corresponding fields. You should be able to find this information in the hbase-site.xml file of the cluster to be used.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field.

Note that the user that executes a keytab-enabled Job is not necessarily the one a
principal designates but must have the right to read the keytab file being used. For
example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
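As a hedged illustration of this mechanism, the sketch below logs in from a keytab with the standard Hadoop security API (HBase 1.x client assumed; all principal names and the keytab path are placeholders, to be replaced with the values from your hbase-site.xml and your KDC):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hbase.security.authentication", "kerberos");
            // HBase principal names, normally found in hbase-site.xml (placeholders).
            conf.set("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
            conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
            UserGroupInformation.setConfiguration(conf);
            // The OS user running the Job (for example user1) only needs read access
            // to the keytab file; the login identity becomes the principal (guest).
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM",
                    "/path/to/guest.keytab");
        }
    }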

 

Schema and Edit
schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

 

 

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

 

 

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

When the schema to be reused has default values that are integers or functions, ensure that
these default values are not enclosed within quotation marks. If they are, you must remove
the quotation marks manually.

For more details, see https://help.talend.com/display/KB/Verifying+default+values+in+a+retrieved+schema.

  Table name

Type in the name of the HBase table you need to create.

  Action on table

Select the action you need to take for creating an HBase
table.

  Custom Row Key

Select this check box to use the customized row keys. Once
selected, the corresponding field appears. Then type in the
user-defined row key to index the rows of the HBase table being
created.

For example, you can type in
"France"+Numeric.sequence("s1",1,1) to produce the
row key series: France1,
France2, France3 and so on.

  Families

Complete this table to specify the column or columns to be created
and the corresponding column family or families they belong to
respectively. The Column column of
this table is automatically filled once you have defined the
schema.
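To make the table-creation behavior concrete, the following is a minimal sketch of the equivalent operation against the HBase 1.x Admin API, with an assumed table name (customers) and column family (France); it is not the code the component generates:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateTableSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // The table name and the column family are placeholder values.
                HTableDescriptor table =
                        new HTableDescriptor(TableName.valueOf("customers"));
                table.addFamily(new HColumnDescriptor("France"));
                admin.createTable(table);
            }
        }
    }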

 

Die on error

This check box is cleared by default, meaning that rows in error are skipped and the process is completed for error-free rows.

Advanced settings

Properties

If you need to use custom configuration for your HBase, complete
this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override the
corresponding ones used by the Studio for its HBase engine.

For example, if you need to define the value of the dfs.replication property as 1 for the HBase configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.
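Conceptually, each row of this table behaves like one extra property set on top of the configuration the Studio builds, as in this hedged fragment (HBase 1.x client assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CustomPropertySketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Equivalent of adding the row dfs.replication = 1 to the Properties
            // table: the custom value overrides the default used by the HBase engine.
            conf.set("dfs.replication", "1");
        }
    }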

Note

This table is not available when you are using an existing connection by selecting the Use an existing connection check box in the Basic settings view.

 

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Note

Only available when creating an HBase table.

Family parameters

Type in the names and, if need be, the custom performance options of the column families to be created. These options are all attributes defined by the HBase data model; for further explanation about these options, see Apache's HBase documentation.

Note

The parameter Compression type allows you to select
the format for output data compression.
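For illustration, the same family attributes exist on the HBase client API; here is a hedged fragment setting the compression type of a column family (HBase 1.x assumed, GZ chosen arbitrarily):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.io.compress.Compression;

    public class FamilyParametersSketch {
        public static void main(String[] args) {
            HColumnDescriptor family = new HColumnDescriptor("France");
            // Compression format applied to the data files of this column family.
            family.setCompressionType(Compression.Algorithm.GZ);
        }
    }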

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.
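In a Job, these variables are read through the usual globalMap pattern; for example, the following line in a tJava component prints the row count, assuming the component is labeled tHBaseOutput_1 (globalMap is provided by the generated Job code):

    // NB_LINE is an After variable: read it once tHBaseOutput_1 has finished.
    System.out.println("Rows written: "
            + ((Integer) globalMap.get("tHBaseOutput_1_NB_LINE")));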

Usage

This component is normally an end component of a Job and always
needs an input link.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by HBase.
For further information, see Apache’s HBase documentation on http://hbase.apache.org/.

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client in the machine where the Studio is, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see the Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

tHBaseOutput in Talend
Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of
the Talend solutions with Big Data and is not applicable to
Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tHBaseOutput, as well as the whole Map/Reduce Job using it, generates
native Map/Reduce code. This section presents the specific properties of tHBaseOutput when it is used in that situation. For further
information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce / Output

 

Function

In a Map/Reduce Job, tHBaseOutput
receives data from a transformation component and writes the data in
an existing HBase table.

Basic settings

Property type

Either Built-in or Repository.

Built-in: No property data stored centrally.

Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

 


Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Version

Distribution

Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, the following ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight cluster. For this purpose, you need to configure the connections to the WebHCat service, the HD Insight service and the Windows Azure Storage service of that cluster in the areas that are displayed. A demonstration video about how to configure this connection is available at the following link: https://www.youtube.com/watch?v=A3QTT6VsNoM

  • The Custom option allows you to connect to a
    cluster different from any of the distributions given in this list, that is to
    say, to connect to a cluster not officially supported by Talend.

In order to connect to a custom distribution, once you have selected Custom, click the [...] button to display the dialog box in which you can alternatively:

  1. Select Import from existing version to import an
    officially supported distribution as base and then add other required jar files
    which the base distribution does not provide.

  2. Select Import from zip to import a custom
    distribution zip that, for example, you can download from http://www.talendforge.org/exchange/index.php.

    Note

    In this dialog box, the active check box must be kept selected so as to import
    the jar files pertinent to the connection to be created between the custom
    distribution and this component.

    For a step-by-step example about how to connect to a custom distribution and share this connection, see Connecting to a custom Hadoop distribution.

In the Map/Reduce version of this component, the distribution you
select must be the same as the one you need to define in the
Hadoop Configuration view for
the whole Job.

 

HBase version

Select the version of the Hadoop distribution you are using. The available options vary
depending on the component you are using. Along with the evolution of Hadoop, please note
the following changes:

  • If you use Hortonworks Data Platform V2.2, the configuration files of your cluster might be using environment variables such as ${hdp.version}. If this is your situation, you need to set the mapreduce.application.framework.path property in the Hadoop properties table of this component, with the path value explicitly pointing to the MapReduce framework archive of your cluster. An illustrative value is given under the same property of the standard component above.

  • If you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run. For further information about Talend JobServer, see the Talend Installation and Upgrade Guide.

 

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between Talend and HBase. Note that when you configure Zookeeper, you might need to set the zookeeper.znode.parent property to define the root of the relative path to the HBase files in Zookeeper; in that case, select the Set Zookeeper znode parent check box to define this property.

 

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.

 

Schema and Edit
schema

A schema is a row description. It defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

 

 

Built-in: The schema is created
and stored locally for this component only. Related topic: see
Talend Studio User Guide.

 

 

Repository: The schema already
exists and is stored in the Repository, hence can be reused. Related topic: see
Talend Studio User Guide.

  Table name

Type in the name of the HBase table in which you need to write
data. This table must already exist.

 

Row key column

Select the column used as the row key column of the HBase
table.

Then, if need be, select the Store row key column to HBase column check box to make the row key column an HBase column belonging to a specific column family.

  Families

Complete this table to map the columns of the HBase table to be used with the schema
columns you have defined for the data flow to be processed.

The Column column of this table is automatically filled
once you have defined the schema; the syntax of the Column
family:qualifier
column requires each HBase column name (qualifier) to be
paired with its corresponding family name, for example, in an HBase table, if a Paris column belongs to a France family, then you need to write it as France:Paris.
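To make the family:qualifier pairing concrete, the following hedged sketch writes the France:Paris cell of the example through the HBase 1.x client API (the table name, row key and cell value are placeholders; this is not the code the component generates):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamilyQualifierSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("customers"))) {
                Put put = new Put(Bytes.toBytes("France1"));
                // "France" is the column family, "Paris" the qualifier.
                put.addColumn(Bytes.toBytes("France"), Bytes.toBytes("Paris"),
                        Bytes.toBytes("some value"));
                table.put(put);
            }
        }
    }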

Advanced settings

Properties

If you need to use custom configuration for your HBase, complete
this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override the
corresponding ones used by the Studio for its HBase engine.

For example, if you need to define the value of the dfs.replication property as 1 for the HBase configuration, add one row to this table using the plus button and type in the name and the value of this property in that row.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.

Usage

In a Talend Map/Reduce Job, it is used as an end component and requires
a transformation component as input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

The Hadoop configuration you use for the whole Job and the Hadoop distribution you use for
the HBase components must be the same. Actually, an HBase component requires that its Hadoop
distribution parameter be defined separately so as to launch its HBase driver only when that
component is used.

Once a Map/Reduce Job is opened in the workspace, tHBaseOutput as well as the MapReduce
family appears in the Palette of
the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by HBase.
For further information, see Apache’s HBase documentation on http://hbase.apache.org/.

The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client in the machine where the Studio is, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is \lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR. For further information about how to set this argument, see the section describing how to view data in the Talend Big Data Getting Started Guide.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Related scenario

For a scenario related to tHBaseOutput, see Scenario: Exchanging customer data with HBase.

