
tHBaseInput

Reads data from a given HBase database and extracts the selected
columns.

HBase is a distributed, column-oriented database that hosts very large,
sparsely populated tables on clusters.

tHBaseInput extracts the columns corresponding to the schema
definition, then passes these columns to the next component via a Main row link.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks: Standard, MapReduce (deprecated), and Spark Batch.

HBase filters

The following list presents the HBase filters available in
Talend Studio, the parameters required by each filter, and its objective.

Single Column Value Filter

  Requires: Filter column, Filter family, Filter operation, Filter value,
  Filter comparator type.

  Objective: compares the values of a given column against the value defined
  for the Filter value parameter. If the filtering condition is met, all
  columns of the row are returned.

Family filter

  Requires: Filter family, Filter operation, Filter comparator type.

  Objective: returns the columns of the column family that meets the
  filtering condition.

Qualifier filter

  Requires: Filter column, Filter operation, Filter comparator type.

  Objective: returns the columns whose column qualifiers match the filtering
  condition.

Column prefix filter

  Requires: Filter column, Filter family.

  Objective: returns all columns whose qualifiers have the prefix defined for
  the Filter column parameter.

Multiple column prefix filter

  Requires: Filter column (multiple prefixes are separated by commas, for
  example id,id_1,id_2), Filter family.

  Objective: works the same way as a Column prefix filter but allows
  specifying multiple prefixes.

Column range filter

  Requires: Filter column (the two ends of the range are separated by a
  comma), Filter family.

  Objective: allows intra-row scanning and returns all matching columns of a
  scanned row.

Row filter

  Requires: Filter operation, Filter value, Filter comparator type.

  Objective: filters on row keys and returns all rows that match the
  filtering condition.

Value filter

  Requires: Filter operation, Filter value, Filter comparator type.

  Objective: returns only the columns that have a specific value.

The behavior of the listed HBase filters is subject to revisions made by
Apache in its Apache HBase project; therefore, to fully understand how to use
these HBase filters, we recommend reading Apache’s HBase documentation.
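
For reference, the hedged sketch below shows how two of the filter types listed above, combined with the And/Or logical operations described later under Is by filter, map onto the Apache HBase client API. HBase 1.x signatures are assumed, and the table name, column family and qualifier are illustrative values borrowed from the scenario later on this page, not values imposed by tHBaseInput.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.filter.BinaryComparator;
    import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
    import org.apache.hadoop.hbase.filter.CompareFilter;
    import org.apache.hadoop.hbase.filter.FilterList;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseFilterSketch {
        public static void main(String[] args) throws Exception {
            // Relies on an hbase-site.xml on the classpath for the Zookeeper settings.
            Configuration conf = HBaseConfiguration.create();

            // "And" corresponds to MUST_PASS_ALL, "Or" to MUST_PASS_ONE.
            FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);

            // Single Column Value Filter: compares the value of a given column
            // (family + qualifier) against a defined value using an operation and
            // a comparator type; if the condition is met, all columns of the row
            // are returned. How "20" compares depends on how the data was stored.
            filters.addFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("family1"),                    // Filter family
                    Bytes.toBytes("age"),                        // Filter column
                    CompareFilter.CompareOp.GREATER,             // Filter operation
                    new BinaryComparator(Bytes.toBytes("20")))); // comparator type + Filter value

            // Column prefix filter: returns all columns whose qualifiers start
            // with the prefix given as the Filter column parameter.
            filters.addFilter(new ColumnPrefixFilter(Bytes.toBytes("id")));

            Scan scan = new Scan();
            scan.setFilter(filters);

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customer"));
                 ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            }
        }
    }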

tHBaseInput Standard properties

These properties are used to configure tHBaseInput running in the Standard Job framework.

The Standard
tHBaseInput component belongs to the Big Data and the Databases NoSQL families.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tHBaseInput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings
view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down
    list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight
    cluster. For this purpose, you need to configure the connections to the HD
    Insight cluster and the Windows Azure Storage service of that cluster in the
    areas that are displayed. For a detailed explanation about these parameters,
    search for configuring the connection manually on Talend Help Center
    (https://help.talend.com).

  • If you select Amazon EMR, find more details about Amazon EMR getting started in
    Talend Help Center (https://help.talend.com).

  • The Custom option allows you to connect to a cluster different from any of the
    distributions given in this list, that is to say, to connect to a cluster not
    officially supported by Talend.

  1. Select Import from existing
    version
    to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and use directly in your connection. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from
    existing version option to take an existing distribution as a base and add the
    jars required by your distribution.

    Note that custom versions are not officially supported by Talend.
    Talend and its community provide you with the opportunity to connect to
    custom versions from the Studio but cannot guarantee that the configuration of
    whichever version you choose will be easy, due to the wide range of different
    Hadoop distributions and versions that are available. As such, you should only
    attempt to set up such a connection if you have sufficient Hadoop experience to
    handle any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

HBase version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Hadoop version of the
distribution

This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet
officially supported by the Studio. In this situation, you need to select the Hadoop
version of this custom cluster, that is to say, Hadoop
1
or Hadoop 2.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.
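
As an illustration of what these two fields correspond to on the HBase client side, the hedged sketch below sets the standard Zookeeper-related configuration keys programmatically. The quorum name, port and znode parent shown here are example values only (the quorum name and port match the scenario later on this page).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZookeeperConnectionSketch {
        // Illustrative only: the information entered in the Zookeeper quorum and
        // Zookeeper client port fields maps to these standard HBase client keys.
        public static Configuration build() {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "hbase");             // Zookeeper quorum
            conf.set("hbase.zookeeper.property.clientPort", "2181"); // Zookeeper client port
            conf.set("zookeeper.znode.parent", "/hbase-unsecure");   // Set Zookeeper znode parent (optional)
            return conf;
        }
    }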

Use kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then enter the principal names in the displayed fields. You should be
able to find the information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by following
    the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username
    defined in the Job in each execution. If you need to reuse an existing ticket issued for the
    same username, leave both the Force MapR ticket
    authentication
    check box and the Use Kerberos
    authentication
    check box clear, and then MapR should be able to automatically
    find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to
be used in the Principal field and the access
path to the keytab file itself in the Keytab
field. This keytab file must be stored in the machine in which your Job actually
runs, for example, on a Talend
Jobserver.

Note that the user that executes a keytab-enabled Job is not necessarily
the one a principal designates but must have the right to read the keytab file being
used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
situation, ensure that user1 has the right to read the keytab
file to be used.
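
For reference, the hedged sketch below shows how the principal names found in the hbase-site.xml file and a keytab login are typically expressed with the Hadoop and HBase client APIs. The realm, principals and keytab path are placeholders, not values prescribed by this component.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberosLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Kerberos settings and principal names normally found in the
            // hbase-site.xml file of the cluster to be used.
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hbase.security.authentication", "kerberos");
            conf.set("hbase.master.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
            conf.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@EXAMPLE.COM");
            UserGroupInformation.setConfiguration(conf);

            // "Use a keytab to authenticate": the keytab file must be readable by
            // the user actually running the Job (user1 in the example above), even
            // if the principal designates another user (guest).
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM", "/path/to/guest.keytab");
        }
    }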

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Set table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Table name

Type in the name of the table from which you need to extract columns.

Define a row selection

Select this check box and then in the Start row and the
End row fields, enter the corresponding row keys to
specify the range of the rows you want the current component to extract.

Unlike the filters you can set using Is by
filter, which require loading all records before filtering out the ones to be
used, this feature allows you to directly select only the rows to be used.
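
For reference, the hedged sketch below shows the HBase 1.x Scan calls that such a start/end row selection corresponds to; because the row-key range is applied by the scan itself, only the selected rows are read. The row keys passed in are placeholders.

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowSelectionSketch {
        // Illustrative only: restrict the scan to a row-key range instead of
        // filtering records after they have been loaded.
        public static Scan between(String startRowKey, String endRowKey) {
            Scan scan = new Scan();
            scan.setStartRow(Bytes.toBytes(startRowKey)); // Start row
            scan.setStopRow(Bytes.toBytes(endRowKey));    // End row (exclusive in the HBase Scan API)
            return scan;
        }
    }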

Mapping

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Properties

If you need to use custom configuration for your database, complete this table with the
property or properties to be customized. Then at runtime, the customized property or
properties will override the corresponding ones used by the Studio.

For example, you need to define the value of the dfs.replication property as 1 for the
database configuration. Then you need to add one row to this table using the plus button and
type in the name and the value of this property in this row.

Note:

This table is not available when you are using an existing
connection by selecting the Use an existing
connection check box in the Basic settings view.
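
As an illustration, a row of this Properties table amounts to overriding one Hadoop/HBase configuration key on the client side; the hedged sketch below applies the dfs.replication example above (the class name is illustrative).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CustomPropertySketch {
        public static Configuration withOverrides() {
            Configuration conf = HBaseConfiguration.create();
            // One row of the Properties table: property name and value.
            // The customized value overrides the one otherwise used by the Studio.
            conf.set("dfs.replication", "1");
            return conf;
        }
    }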

Is by filter

Select this check box to use filters to perform fine-grained data selection from your
database, such as selection of keys, or values, based on regular expressions.

Once you select it, the Filter table used to
define filtering conditions becomes available.

This feature leverages filters provided by HBase and is subject to constraints explained in
the Apache HBase documentation. Therefore, advanced knowledge of HBase is required to make full
use of these filters.

Logical operation
Select the operator you need to use to define the logical relation between filters. The
available operators are:

  • And: every defined filtering condition must
    be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering
    conditions must be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ONE.

Filter
Click the button under this table to add as many rows as required, each row representing a
filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents
    pre-existing filter types that are already defined by HBase. Select the type of
    the filter you need to use.

  • Filter column: enter the column qualifier on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter family: enter the column family on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter operation: select from the drop-down
    list the operation to be used for the active filter.

  • Filter Value: enter the value on which you
    want to use the operator selected from the Filter
    operation
    drop-down list.

  • Filter comparator type: select the type of
    the comparator to be combined with the filter you are using.

Depending on the Filter type you are using,
some or all of these parameters become mandatory. For further information, see HBase filters.

Retrieve timestamps

Select this check box to load the timestamps of an HBase column into the
data flow.

  • Retrieve from an HBase column: select
    the HBase column which is tracked for changes in order to
    retrieve its corresponding timestamps.

  • Write to a schema column: select the column you have defined
    in the schema to store the retrieved timestamps.

    The type of this column must be
    Long.
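
For reference, the hedged sketch below (HBase 1.x client API assumed; the family and qualifier parameters are placeholders) shows how a cell timestamp is exposed as a Java long, which is why the schema column receiving it must be of the Long type.

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimestampSketch {
        // Illustrative only: every HBase cell carries a timestamp returned as a
        // long, hence the Long schema column required above.
        public static long latestTimestamp(Result row, String family, String qualifier) {
            Cell cell = row.getColumnLatestCell(Bytes.toBytes(family), Bytes.toBytes(qualifier));
            return cell == null ? -1L : cell.getTimestamp();
        }
    }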

Global Variables

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space to access the variable list and choose the variable to use from it.
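
For example, in a tJava component placed after the tHBaseInput subJob, you can read these After variables from the Job's globalMap as shown in the hedged sketch below; the component label tHBaseInput_1 is an assumption that matches the scenario later on this page.

    // Code for a tJava component: read the After variables once the
    // tHBaseInput subJob has finished.
    Integer nbLine = (Integer) globalMap.get("tHBaseInput_1_NB_LINE");
    String errorMessage = (String) globalMap.get("tHBaseInput_1_ERROR_MESSAGE");
    System.out.println("Rows read: " + nbLine);
    if (errorMessage != null) {
        System.out.println("Last error: " + errorMessage);
    }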

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

This component is a start component of a Job and always needs an
output link.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with
Talend Studio
. The following list presents MapR related information for
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR:
    http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box in the Window menu. This argument provides to the Studio the path to the
    native library of that MapR client. This allows the subscription-based users to make
    full use of the Data viewer to view locally in the
    Studio the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Exchanging customer data with HBase

This scenario applies only to Talend products with Big Data.

In this scenario, a six-component Job is used to exchange customer data with a given
HBase.

tHBaseInput_2.png
The six components are:

  • tHBaseConnection: creates a connection to
    your HBase database.

  • tFixedFlowInput: creates the data to be
    written into your HBase. In a real use case, this component could be
    replaced by other input components such as tFileInputDelimited.

  • tHBaseOutput: writes the data it receives
    from the preceding component into your HBase.

  • tHBaseInput: extracts the columns of
    interest from your HBase.

  • tLogRow: presents the execution
    result.

  • tHBaseClose: closes the
    transaction.

To replicate this scenario, proceed as the following sections illustrate.

Note:

Before starting the replication, your HBase and Zookeeper services should have been
correctly installed and configured. This scenario explains only how to use the
Talend solution to exchange data with a given HBase database.

Dropping and linking the components

To do this, proceed as follows:

  1. Drop tHBaseConnection, tFixedFlowInput, tHBaseOutput, tHBaseInput, tLogRow and
    tHBaseClose from Palette onto the
    Design workspace.
  2. Right-click tHBaseConnection to open its
    contextual menu and select the Trigger >
    On Subjob Ok link from this menu to
    connect this component to tFixedFlowInput.
  3. Do the same to create the OnSubjobOk link
    from tFixedFlowInput to tHBaseInput and then to tHBaseClose.
  4. Right-click tFixedFlowInput and select
    the Row > Main link to connect this component to tHBaseOutput.
  5. Do the same to create the Main link from
    tHBaseInput to tLogRow.

The components to be used in this scenario are all placed and linked. You now need
to continue to configure them successively.

Configuring the connection

To configure the connection to your Zookeeper service and thus to the HBase of
interest, proceed as follows:

  1. On the Design workspace of your Studio, double-click the tHBaseConnection component to open its Component view.

    tHBaseInput_3.png

  2. Select Hortonworks Data Platform 1.0 from
    the HBase version list.
  3. In the Zookeeper quorum field, type in
    the name or the URL of the Zookeeper service you are using. In this example,
    the name of the service in use is hbase.
  4. In the Zookeeper client port field, type
    in the number of the client listening port. In this example, it is 2181.
  5. If the Zookeeper znode parent location has been defined in the Hadoop
    cluster you are connecting to, you need to select the Set zookeeper znode parent check box and enter the value of
    this property in the field that is displayed.

Configuring the process of writing data into the HBase

To do this, proceed as follows:

  1. On the Design workspace, double-click the tFixedFlowInput component to open its Component view.

    tHBaseInput_4.png

  2. In this view, click the three-dot button next to Edit schema to open the schema editor.

    tHBaseInput_5.png

  3. Click the plus button three times to add three rows and in the Column column, rename the three rows respectively
    as: id,
    name
    and age.
  4. In the Type column, click each of these
    rows and from the drop-down list, select the data type of every row. In this
    scenario, they are Integer for id and age,
    String for name.
  5. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.
  6. In the Mode area, select Use Inline Content (delimited file) to display
    the fields for editing.
  7. In the Content field, type in the
    delimited data to be written into the HBase, with the fields separated by the
    semicolon ";". In this example, they are:

  8. Double-click tHBaseOutput to open its
    Component view.

    Note: If this component does not have the same schema of the preceding
    component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema
    from the preceding one and once done, the warning icon disappears.

    tHBaseInput_6.png

  9. Select the Use an existing connection
    check box and then select the connection you have configured earlier. In
    this example, it is tHBaseConnection_1.
  10. In the Table name field, type in the name
    of the table to be created in the HBase. In this example, it is customer.
  11. In the Action on table field, select the
    action of interest from the drop-down list. In this scenario, select
    Drop table if exists and create. This
    way, if a table named customer already exists in the HBase, it is
    disabled and deleted before the current table is created.
  12. Click the Advanced settings tab to open
    the corresponding view.

    tHBaseInput_7.png

  13. In the Family parameters table, add two
    rows by clicking the plus button, rename them as family1 and family2
    respectively and then leave the other columns empty. These two column
    families will be created in the HBase using the default family performance
    options.

    Note: The Family parameters table is
    available only when the action you have selected in the Action on table field is to create a table in
    HBase. For further information about this Family
    parameters
    table, see tHBaseOutput.

  14. In the Families table of the Basic settings view, enter the family names in
    the Family name column, each corresponding
    to the column this family contains. In this example, the id and the age columns belong to family1 and the name
    column to family2.

    Note: These column families should already exist in the HBase to be
    connected to; if not, you need to define them in the Family parameters table of the Advanced settings view for creating them at
    runtime.

Configuring the process of extracting data from the HBase

To do this, perform the following operations:

  1. Double-click tHBaseInput to open its
    Component view.

    tHBaseInput_8.png

  2. Select the Use an existing connection
    check box and then select the connection you have configured earlier. In
    this example, it is tHBaseConnection_1.
  3. Click the three-dot button next to Edit
    schema
    to open the schema editor.

    tHBaseInput_9.png

  4. Click the plus button three times to add three rows and rename them as
    id, name and age respectively
    in the Column column. This means that you
    extract these three columns from the HBase.
  5. Select the types for each of the three columns. In this example, Integer for id
    and age, String for name.
  6. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.
  7. In the Table name field, type in the
    table from which you extract the columns of interest. In this scenario, the
    table is customer.
  8. In the Mapping table, the Column column has already been filled in
    automatically since the schema is defined, so simply enter the name of
    every family in the Column family column,
    each corresponding to the column it contains.
  9. Double-click tHBaseClose to open its
    Component view.

    tHBaseInput_10.png

  10. In the Component List field, select the
    connection you need to close. In this example, this connection is tHBaseConnection_1.

Executing the Job

To execute this Job, press F6.

Once done, the Run view is opened automatically,
where you can check the execution result.

tHBaseInput_11.png

These columns of interest are extracted and you can process them according to
your needs.

Log in to your HBase database to check the customer table this Job has created.
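
If you prefer checking the table from code rather than from an HBase client shell, the hedged sketch below (HBase 1.x client API assumed; it reuses the Zookeeper settings configured earlier and assumes the cell values were stored as strings) scans the customer table and prints the three columns of the scenario.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckCustomerTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "hbase");             // same Zookeeper service as the Job
            conf.set("hbase.zookeeper.property.clientPort", "2181");

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customer"));
                 ResultScanner scanner = table.getScanner(new Scan())) {
                for (Result row : scanner) {
                    // id and age were written to family1, name to family2.
                    String id = Bytes.toString(row.getValue(Bytes.toBytes("family1"), Bytes.toBytes("id")));
                    String age = Bytes.toString(row.getValue(Bytes.toBytes("family1"), Bytes.toBytes("age")));
                    String name = Bytes.toString(row.getValue(Bytes.toBytes("family2"), Bytes.toBytes("name")));
                    System.out.println(id + ";" + name + ";" + age);
                }
            }
        }
    }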

tHBaseInput MapReduce properties (deprecated)

These properties are used to configure tHBaseInput running in the MapReduce Job framework.

The MapReduce
tHBaseInput component belongs to the MapReduce and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tHBaseInput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings
view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down
    list, the Microsoft HD Insight option allows you to use a Microsoft HD Insight
    cluster. For this purpose, you need to configure the connections to the HD
    Insight cluster and the Windows Azure Storage service of that cluster in the
    areas that are displayed. For a detailed explanation about these parameters,
    search for configuring the connection manually on Talend Help Center
    (https://help.talend.com).

  • If you select Amazon EMR, find more details about Amazon EMR getting started in
    Talend Help Center (https://help.talend.com).

  • The Custom option allows you to connect to a cluster different from any of the
    distributions given in this list, that is to say, to connect to a cluster not
    officially supported by Talend.

  1. Select Import from existing
    version
    to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and use directly in your connection. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from
    existing version option to take an existing distribution as a base and add the
    jars required by your distribution.

    Note that custom versions are not officially supported by Talend.
    Talend and its community provide you with the opportunity to connect to
    custom versions from the Studio but cannot guarantee that the configuration of
    whichever version you choose will be easy, due to the wide range of different
    Hadoop distributions and versions that are available. As such, you should only
    attempt to set up such a connection if you have sufficient Hadoop experience to
    handle any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

In the Map/Reduce version of this component, the distribution you select must be the same
as the one you need to define in the Hadoop configuration
view for the whole Job.

HBase version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Hadoop version of the
distribution

This list is displayed only when you have selected Custom from the distribution list to connect to a cluster not yet
officially supported by the Studio. In this situation, you need to select the Hadoop
version of this custom cluster, that is to say, Hadoop
1
or Hadoop 2.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.

Use kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then enter the principal names in the displayed fields. You should be
able to find the information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by following
    the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username
    defined in the Job in each execution. If you need to reuse an existing ticket issued for the
    same username, leave both the Force MapR ticket
    authentication
    check box and the Use Kerberos
    authentication
    check box clear, and then MapR should be able to automatically
    find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to
be used in the Principal field and the access
path to the keytab file itself in the Keytab
field. This keytab file must be stored in the machine in which your Job actually
runs, for example, on a Talend
Jobserver.

Note that the user that executes a keytab-enabled Job is not necessarily
the one a principal designates but must have the right to read the keytab file being
used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
situation, ensure that user1 has the right to read the keytab
file to be used.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Set table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Table name

Type in the name of the table from which you need to extract columns.

Mapping

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Advanced settings

Properties

If you need to use custom configuration for your database, complete this table with the
property or properties to be customized. Then at runtime, the customized property or
properties will override the corresponding ones used by the Studio.

For example, you need to define the value of the dfs.replication property as 1 for the
database configuration. Then you need to add one row to this table using the plus button and
type in the name and the value of this property in this row.

Is by filter

Select this check box to use filters to perform fine-grained data selection from your
database, such as selection of keys, or values, based on regular expressions.

Once you select it, the Filter table used to
define filtering conditions becomes available.

This feature leverages filters provided by HBase and is subject to constraints explained in
the Apache HBase documentation. Therefore, advanced knowledge of HBase is required to make full
use of these filters.

Logical operation
Select the operator you need to use to define the logical relation between filters. The
available operators are:

  • And: every defined filtering condition must
    be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering
    conditions must be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ONE.

Filter
Click the button under this table to add as many rows as required, each row representing a
filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents
    pre-existing filter types that are already defined by HBase. Select the type of
    the filter you need to use.

  • Filter column: enter the column qualifier on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter family: enter the column family on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter operation: select from the drop-down
    list the operation to be used for the active filter.

  • Filter Value: enter the value on which you
    want to use the operator selected from the Filter
    operation
    drop-down list.

  • Filter comparator type: select the type of
    the comparator to be combined with the filter you are using.

Depending on the Filter type you are using,
some or all of these parameters become mandatory. For further information, see HBase filters.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

In a
Talend
Map/Reduce Job, it is used as a start component and requires
a transformation component as output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

The Hadoop configuration you use for the whole Job and the Hadoop distribution you use for
this component must be the same. Actually, this component requires that its Hadoop
distribution parameter be defined separately so as to launch its database driver only when
that component is used.

For further information about a
Talend
Map/Reduce Job, see the sections
describing how to create, convert and configure a
Talend
Map/Reduce Job of the

Talend Open Studio for Big Data Getting Started Guide
.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional Talend data integration Jobs, and
non-Map/Reduce Jobs.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with
Talend Studio
. The following list presents MapR related information for
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR:
    http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box in the Window menu. This argument provides to the Studio the path to the
    native library of that MapR client. This allows the subscription-based users to make
    full use of the Data viewer to view locally in the
    Studio the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tHBaseInput properties for Apache Spark Batch

These properties are used to configure tHBaseInput running in the Spark Batch Job framework.

The Spark Batch
tHBaseInput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Storage configuration

Select the tHBaseConfiguration component from which the
Spark system to be used reads the configuration information to connect to HBase.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tHBaseInput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings
view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Type in the name of the table from which you need to extract columns.

Mapping

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

Is by filter

Select this check box to use filters to perform fine-grained data selection from your
database, such as selection of keys, or values, based on regular expressions.

Once you select it, the Filter table used to
define filtering conditions becomes available.

This feature leverages filters provided by HBase and is subject to constraints explained in
the Apache HBase documentation. Therefore, advanced knowledge of HBase is required to make full
use of these filters.

Logical operation
Select the operator you need to use to define the logical relation between filters. The
available operators are:

  • And: every defined filtering condition must
    be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ALL.

  • Or: at least one of the defined filtering
    conditions must be satisfied. It represents the relationship
    FilterList.Operator.MUST_PASS_ONE.

Filter
Click the button under this table to add as many rows as required, each row representing a
filter. The parameters you may need to set for a filter are:

  • Filter type: the drop-down list presents
    pre-existing filter types that are already defined by HBase. Select the type of
    the filter you need to use.

  • Filter column: enter the column qualifier on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter family: enter the column family on
    which you need to apply the active filter. This parameter becomes mandatory
    depending on the type of the filter and of the comparator you are using. For
    example, it is not used by the Row Filter type
    but is required by the Single Column Value
    Filter
    type.

  • Filter operation: select from the drop-down
    list the operation to be used for the active filter.

  • Filter Value: enter the value on which you
    want to use the operator selected from the Filter
    operation
    drop-down list.

  • Filter comparator type: select the type of
    the comparator to be combined with the filter you are using.

Depending on the Filter type you are using,
some or all of these parameters become mandatory. For further information, see HBase filters.

Die on HBase error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Usage

Usage rule

This component is used as a start component and requires an output
link.

This component uses a tHBaseConfiguration component present in the same Job to connect to
HBase.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

