
tHBaseConfiguration

Enables the reuse of the connection configuration to HBase in the same
Job.

tHBaseConfiguration provides HBase
connection information for the HBase components used in the same Spark
Job. The Spark cluster to be used reads this configuration to eventually
connect to HBase.

Depending on the Talend product you are using, this component can be used in one or
both of the following Job frameworks: Spark Batch and Spark Streaming.
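
For orientation, the connection information this component carries corresponds to
standard HBase client properties that a Spark job passes to the HBase input format.
The following Java sketch shows that underlying pattern; it is not the code the
Studio generates, and the Zookeeper host and table name are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class HBaseSparkSketch {
        public static void main(String[] args) {
            // local[*] is for local testing only; a real Job runs against a cluster.
            JavaSparkContext jsc = new JavaSparkContext(
                    new SparkConf().setAppName("hbase-config-sketch").setMaster("local[*]"));

            // The same kind of information tHBaseConfiguration asks for in its Basic settings:
            Configuration hbaseConf = HBaseConfiguration.create();
            hbaseConf.set("hbase.zookeeper.quorum", "zk1.example.com");   // placeholder host
            hbaseConf.set("hbase.zookeeper.property.clientPort", "2181");
            hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table");      // placeholder table

            // Spark reads HBase through the standard MapReduce input format.
            JavaPairRDD<ImmutableBytesWritable, Result> rows = jsc.newAPIHadoopRDD(
                    hbaseConf, TableInputFormat.class,
                    ImmutableBytesWritable.class, Result.class);

            System.out.println("Rows read from HBase: " + rows.count());
            jsc.stop();
        }
    }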

tHBaseConfiguration properties for Apache Spark Batch

These properties are used to configure tHBaseConfiguration running in the Spark Batch Job framework.

The Spark Batch
tHBaseConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight
    option allows you to use a Microsoft HD Insight cluster. For this purpose, you
    need to configure the connections to the HD Insight cluster and the Windows
    Azure Storage service of that cluster in the areas that are displayed. For a
    detailed explanation of these parameters, search for configuring the connection
    manually on Talend Help Center (https://help.talend.com).

  • If you select Amazon EMR, find more details about Amazon EMR getting started in
    Talend Help Center (https://help.talend.com).

  • The Custom option allows you to connect to a cluster different from any of the
    distributions given in this list, that is to say, a cluster not officially
    supported by Talend.

  1. Select Import from existing version to import an officially supported
     distribution as the base, and then add the other required jar files that the
     base distribution does not provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and directly use in your connection. However, because of the
    ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from existing version
    option to take an existing distribution as the base and add the jars required
    by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and
    its community provide you with the opportunity to connect to custom versions
    from the Studio but cannot guarantee that the configuration of whichever
    version you choose will be easy, due to the wide range of different Hadoop
    distributions and versions that are available. As such, you should only attempt
    to set up such a connection if you have sufficient Hadoop experience to handle
    any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

HBase version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.
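
For reference, the Zookeeper quorum, Zookeeper client port, and Set Zookeeper znode
parent settings map onto standard HBase client properties. A minimal Java sketch of
the equivalent programmatic configuration, with placeholder host names and an
illustrative znode path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZookeeperSettingsSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Zookeeper quorum: comma-separated list of Zookeeper hosts (placeholders)
            conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com");
            // Zookeeper client port
            conf.set("hbase.zookeeper.property.clientPort", "2181");
            // Equivalent of the Set Zookeeper znode parent check box: the root znode
            // used by the database; /hbase-unsecure is only an illustrative value.
            conf.set("zookeeper.znode.parent", "/hbase-unsecure");
            System.out.println(conf.get("zookeeper.znode.parent"));
        }
    }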

Use Kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then enter the principal names in the displayed fields. You should be
able to find this information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by
    following the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for
    the username defined in the Job at each execution. If you need to reuse an
    existing ticket issued for the same username, leave both the Force MapR ticket
    authentication check box and the Use Kerberos authentication check box clear;
    MapR should then be able to automatically find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to
authenticate. A keytab file contains pairs of Kerberos principals and encrypted
keys. You need to enter the principal to be used in the Principal field and the
access path to the keytab file itself in the Keytab field. This keytab file must be
stored on the machine where your Job actually runs, for example, on a Talend
JobServer.

Note that the user that executes a keytab-enabled Job is not necessarily the one the
principal designates, but must have the right to read the keytab file being used.
For example, if the user name you are using to execute a Job is user1 and the
principal to be used is guest, ensure that user1 has the right to read the keytab
file to be used.
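
For orientation, the keytab fields correspond to a standard Hadoop keytab login. The
following Java sketch, with placeholder principals and paths, approximates the
mechanism; it is not the Studio-generated code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hbase.security.authentication", "kerberos");
            // Principal names normally found in the cluster's hbase-site.xml (placeholders):
            conf.set("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
            conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");

            UserGroupInformation.setConfiguration(conf);
            // Equivalent of Use a keytab to authenticate: whatever the principal is,
            // the OS user running the Job only needs read access to the keytab file.
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM", "/path/to/guest.keytab");
        }
    }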

HBase parameters

If you need to use a custom configuration for your database, complete this table
with the property or properties to be customized. At runtime, the customized
property or properties override the corresponding ones used by the Studio.
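
As an illustration, each row of this table behaves like a plain property override on
the HBase client configuration. A minimal Java sketch; the property is a real HBase
setting but the value is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HBaseParamOverrideSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // A table row such as "hbase.client.scanner.caching" -> "1000" amounts to
            // this kind of override, which takes precedence over the default value.
            conf.set("hbase.client.scanner.caching", "1000");
            System.out.println(conf.get("hbase.client.scanner.caching"));
        }
    }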

Usage

Usage rule

This component can be used without being connected to other components.

You must drop tHBaseConfiguration along with the
HBase-related Subjob to be run in the same Job so that the configuration is used by the
whole Job at runtime.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client on the machine where the Studio
    is, and added the MapR client library to the PATH variable of that machine.
    According to MapR's documentation, the library or libraries of a MapR client
    corresponding to each OS version can be found under
    MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for
    Windows is lib\native\MapRClient.dll in the MapR client jar file. For further
    information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments
    area of the Run/Debug view in the Preferences dialog box in the Window menu.
    This argument provides the Studio with the path to the native library of that
    MapR client (see the example after this list). This allows subscription-based
    users to make full use of the Data viewer to view, locally in the Studio, the
    data stored in MapR.
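
For example, assuming the MapR client's native libraries are installed under
/opt/mapr/lib (the path is illustrative and depends on your installation), the
argument would look like:

    -Djava.library.path=/opt/mapr/lib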

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent
jar files for execution, you must specify the directory in the file system to which
these jar files are transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    When using on-premise
    distributions, use the configuration component corresponding
    to the file system your cluster is using. Typically, this
    system is HDFS and so use tHDFSConfiguration.

    This configuration component is relevant only to the traditional on-premise
    Hadoop distributions.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tHBaseConfiguration properties for Apache Spark Streaming

These properties are used to configure tHBaseConfiguration running in the Spark Streaming Job framework.

The Spark Streaming
tHBaseConfiguration component belongs to the Storage and the Databases families.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down list, the Microsoft HD Insight
    option allows you to use a Microsoft HD Insight cluster. For this purpose, you
    need to configure the connections to the HD Insight cluster and the Windows
    Azure Storage service of that cluster in the areas that are displayed. For a
    detailed explanation of these parameters, search for configuring the connection
    manually on Talend Help Center (https://help.talend.com).

  • If you select Amazon EMR, find more details about Amazon EMR getting started in
    Talend Help Center (https://help.talend.com).

  • The Custom option allows you to connect to a cluster different from any of the
    distributions given in this list, that is to say, a cluster not officially
    supported by Talend.

  1. Select Import from existing version to import an officially supported
     distribution as the base, and then add the other required jar files that the
     base distribution does not provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and directly use in your connection. However, because of the
    ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from existing version
    option to take an existing distribution as the base and add the jars required
    by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and
    its community provide you with the opportunity to connect to custom versions
    from the Studio but cannot guarantee that the configuration of whichever
    version you choose will be easy, due to the wide range of different Hadoop
    distributions and versions that are available. As such, you should only attempt
    to set up such a connection if you have sufficient Hadoop experience to handle
    any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

HBase version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.

Use Kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then enter the principal names in the displayed fields. You should be
able to find this information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by
    following the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for
    the username defined in the Job at each execution. If you need to reuse an
    existing ticket issued for the same username, leave both the Force MapR ticket
    authentication check box and the Use Kerberos authentication check box clear;
    MapR should then be able to automatically find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to
authenticate. A keytab file contains pairs of Kerberos principals and encrypted
keys. You need to enter the principal to be used in the Principal field and the
access path to the keytab file itself in the Keytab field. This keytab file must be
stored on the machine where your Job actually runs, for example, on a Talend
JobServer.

Note that the user that executes a keytab-enabled Job is not necessarily the one the
principal designates, but must have the right to read the keytab file being used.
For example, if the user name you are using to execute a Job is user1 and the
principal to be used is guest, ensure that user1 has the right to read the keytab
file to be used.

HBase parameters

If you need to use a custom configuration for your database, complete this table
with the property or properties to be customized. At runtime, the customized
property or properties override the corresponding ones used by the Studio.

Usage

Usage rule

This component can be used without being connected to other components.

You must drop tHBaseConfiguration along with the
HBase-related Subjob to be run in the same Job so that the configuration is used by the
whole Job at runtime.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client on the machine where the Studio
    is, and added the MapR client library to the PATH variable of that machine.
    According to MapR's documentation, the library or libraries of a MapR client
    corresponding to each OS version can be found under
    MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for
    Windows is lib\native\MapRClient.dll in the MapR client jar file. For further
    information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments
    area of the Run/Debug view in the Preferences dialog box in the Window menu.
    This argument provides the Studio with the path to the native library of that
    MapR client. This allows subscription-based users to make full use of the Data
    viewer to view, locally in the Studio, the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent
jar files for execution, you must specify the directory in the file system to which
these jar files are transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    When using on-premise
    distributions, use the configuration component corresponding
    to the file system your cluster is using. Typically, this
    system is HDFS and so use tHDFSConfiguration.

    This configuration component is relevant only to the traditional on-premise
    Hadoop distributions.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

