
tHiveInput

Extracts data from Hive and sends the data to the component that
follows.

tHiveInput is the
dedicated component to the Hive database (the Hive data warehouse system). It can execute a
given HiveQL query in order to extract the data from Hive.

When ACID is enabled on the Hive side, a Spark Job cannot delete or
update a table, and unless the data is compacted, this Job cannot correctly read
aggregated data from a Hive table either. This is a known limitation described in
the Spark bug tracking system: https://issues.apache.org/jira/browse/SPARK-15348.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

  • Standard: see tHiveInput Standard properties.

  • Spark Batch: see tHiveInput properties for Apache Spark Batch.

  • Spark Streaming: see tHiveInput properties for Apache Spark Streaming.

tHiveInput Standard properties

These properties are used to configure tHiveInput running in the Standard Job framework.

The Standard
tHiveInput component belongs to the Big Data and the Databases families.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Connection configuration:

  • When you use this component with Qubole on AWS:
    API Token

    Click the button next to the
    API Token field to enter the
    authentication token generated for the Qubole user account
    to be used. For further information about how to obtain this
    token, see Manage Qubole
    account
    from the Qubole documentation.

    This
    token allows you to specify the user account you want to
    use to access Qubole. Your Job automatically uses
    the rights and permissions granted to this user account
    in Qubole.

    Cluster label

    Select the Cluster label check
    box and enter the name of the Qubole cluster to be used. If
    you leave this check box clear, the default cluster is
    used.

    If you need details about your default cluster,
    ask the administrator of your Qubole service. You can
    also read this article
    from the Qubole documentation to find more information
    about configuring a default Qubole cluster.

    Change API endpoint

    Select the Change API endpoint
    check box and select the region to be used. If you leave this
    check box clear, the default region is used.

    For further
    information about the Qubole Endpoints supported on
    QDS-on-AWS, see Supported Qubole
    Endpoints on Different Cloud
    Providers
    .

  • When you use this component with Google Dataproc:

    Project identifier

    Enter the ID of your Google Cloud Platform project.

    If you are not certain about your project ID, check it in the Manage
    Resources page of your Google Cloud Platform services.

    Cluster identifier

    Enter the ID of your Dataproc cluster to be used.

    Region

    From this drop-down list, select the Google Cloud region to
    be used.

    Google Storage staging bucket

    As a Talend Job expects its
    dependent jar files for execution, specify the Google Storage directory to
    which these jar files are transferred so that your Job can access these
    files at execution.

    The directory to be entered must end with a slash (/). If the directory does not
    exist, it is created on the fly, but the bucket to be used must already
    exist.

    Database

    Fill this field with the name of the database.

    Access Key and Secret Key

    Enter the authentication information obtained from Google for
    tHiveInput to read temporary data from Google
    Storage.

    These keys can be consulted on the Interoperable Access
    tab view under the Google Cloud Storage tab of the project from the
    Google APIs Console.

    To enter the secret key, click the […] button next to
    the secret key field, and then in the pop-up dialog box enter the password between double
    quotes and click OK to save the settings.

    For more information about the access key and secret key, go
    to https://developers.google.com/storage/docs/reference/v1/getting-startedv1?hl=en/
    and see the description about developer keys.

    Provide Google Credentials in file

    Leave this check box clear when you
    launch your Job from a given machine in which Google Cloud SDK has been
    installed and authorized to use your user account credentials to access
    Google Cloud Platform. In this situation, this machine is often your
    local machine.

    When you launch your Job from a remote
    machine, such as a Jobserver, select this check box and in the
    Path to Google Credentials file field that is
    displayed, enter the directory in which this JSON file is stored in the
    Jobserver machine.

    For further information about this Google
    Credentials file, see the administrator of your Google Cloud Platform or
    visit Google Cloud Platform Auth
    Guide
    .

  • When you use this component with HDInsight:

    WebHCat configuration

    Enter the address and the authentication information of the Microsoft HD
    Insight cluster to be used. For example, the address could be
    your_hdinsight_cluster_name.azurehdinsight.net and the
    authentication information is your Azure account name: ychen.
    The Studio uses this service to submit the Job to the HD Insight cluster.

    In the Job result folder field, enter
    the location in which you want to store the execution result of a Job in the Azure
    Storage to be used.

    HDInsight
    configuration

    • The Username is the one defined when
      creating your cluster. You can find it in the SSH
      + Cluster login
      blade of your cluster.
    • The Password is defined when creating your HDInsight
      cluster for authentication to this cluster.

    Windows Azure Storage
    configuration

    Enter the address and the authentication information of the Azure Storage
    account to be used. In this configuration, you do not define where to read or write
    your business data but define where to deploy your Job only. Therefore always use
    the Azure
    Storage
    system for this configuration.

    In the Container field, enter the name
    of the container to be
    used. You can
    find the available containers in the Blob blade of the Azure
    Storage account to be used.

    In the Deployment Blob field, enter the
    location in which you want to store the current Job and its dependent libraries in
    this Azure Storage account.

    In the Hostname field, enter the
    Primary Blob Service Endpoint of your Azure Storage account without the https:// part. You can find this endpoint in the Properties blade of this storage account.

    In the Username field, enter the name of the Azure Storage account to be used.

    In the Password field, enter the access key of the Azure Storage account to be used. This key can be found in the Access keys blade of this storage account.

    Database

    Fill this field with the name of the database.

  • When you use the other distributions:

    Connection mode

    Select a connection mode from the list. The
    options vary depending on the distribution you are
    using.

    Hive server

    Select the Hive server through which you want the Job using this component
    to execute queries on Hive.

    This Hive server list is available
    only when the Hadoop distribution to be used, such as HortonWorks Data Platform V1.2.0 (Bimota), supports HiveServer2. It
    allows you to select HiveServer2 (Hive 2), the
    server that better supports concurrent connections of multiple clients than
    HiveServer (Hive 1).

    For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.

    Host

    Database server IP address.

    Port

    Listening port number of DB server.

    Database

    Fill this field with the name of the
    database.

    Note:

    This field is not available when you
    select Embedded
    from the Connection
    mode
    list.

    Username and
    Password

    DB user authentication data.

    To enter the password, click the […] button next to the
    password field, and then in the pop-up dialog box enter the password between double quotes
    and click OK to save the settings.

    Use kerberos authentication

    If you are accessing a Hive Metastore running with Kerberos security,
    select this check box and then enter the relevant parameters in the fields that
    appear.

    • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
      MapR ticket authentication configuration in addition or as an alternative by following
      the explanation in Connecting to a security-enabled MapR.

      Keep in mind that this configuration generates a new MapR security ticket for the username
      defined in the Job in each execution. If you need to reuse an existing ticket issued for the
      same username, leave both the Force MapR ticket
      authentication
      check box and the Use Kerberos
      authentication
      check box clear, and then MapR should be able to automatically
      find that ticket on the fly.

    The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.

    1. Hive principal uses the value
      of hive.metastore.kerberos.principal. This is
      the service principal of the Hive Metastore.

    2. HiveServer2 local user
      principal
      uses the value of hive.server2.authentication.kerberos.principal.

    3. HiveServer2 local user keytab
      uses the value of hive.server2.authentication.kerberos.keytab

    4. Metastore URL uses the value of
      javax.jdo.option.ConnectionURL. This is the
      JDBC connection string to the Hive Metastore.

    5. Driver class uses the value of
      javax.jdo.option.ConnectionDriverName. This
      is the name of the driver for the JDBC connection.

    6. Username uses the value of javax.jdo.option.ConnectionUserName. This, as
      well as the Password parameter, is the user credential for connecting to
      the Hive Metastore.

    7. Password uses the value of javax.jdo.option.ConnectionPassword.

    For the other parameters that are displayed, please consult the Hadoop
    configuration files they belong to. For example, the Namenode
    principal
    can be found in the hdfs-site.xml file
    or the hdfs-default.xml file of the distribution you are
    using.

    This check box is available depending on the Hadoop distribution you are
    connecting to.

    Use a keytab to authenticate

    Select the Use a keytab to authenticate
    check box to log into a Kerberos-enabled system using a given keytab file (a
    minimal login sketch is given at the end of this connection configuration
    list). A keytab file contains pairs of Kerberos principals and encrypted keys.
    You need to enter the principal to be used in the Principal field and
    the access path to the keytab file itself in the Keytab field. This keytab file must be stored in the machine in
    which your Job actually runs, for example, on a Talend Jobserver.

    Note that the user that executes a keytab-enabled Job is not necessarily
    the one a principal designates but must have the right to read the keytab file being
    used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
    situation, ensure that user1 has the right to read the keytab
    file to be used.

    Use SSL encryption

    Select this check box to enable the SSL or TLS encrypted connection.

    Then in the fields that are displayed, provide the authentication
    information:

    • In the Trust store
      path
      field, enter the path, or browse to the TrustStore
      file to be used. By default, the supported TrustStore types are JKS and PKCS 12.

    • To enter the password, click the […] button next to the
      password field, and then in the pop-up dialog box enter the password between double quotes
      and click OK to save the settings.

    This feature is available only to the HiveServer2 in the Standalone mode of the following distributions:

    • Hortonworks Data Platform 2.0 +

    • Cloudera CDH4 +

    • Pivotal HD 2.0 +

    • Amazon EMR 4.0.0 +

    Set Resource Manager

    Select this check box and in the displayed field, enter the location of the
    ResourceManager of your distribution. For example, tal-qa114.talend.lan:8050.

    Then you can continue to set the following parameters depending on the
    configuration of the Hadoop cluster to be used (if you leave the check box of a
    parameter clear, then at runtime, the configuration about this parameter in the
    Hadoop cluster to be used will be ignored):

    1. Select the Set resourcemanager
      scheduler address
      check box and enter the Scheduler address in
      the field that appears.

    2. Select the Set jobhistory
      address
      check box and enter the location of the JobHistory
      server of the Hadoop cluster to be used. This allows the metrics information of
      the current Job to be stored in that JobHistory server.

    3. Select the Set staging
      directory
      check box and enter this directory defined in your
      Hadoop cluster for temporary files created by running programs. Typically, this
      directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files
      such as yarn-site.xml or mapred-site.xml of your distribution.

    4. Allocate proper memory volumes to the Map and the Reduce
      computations and the ApplicationMaster
      of YARN by selecting the Set memory
      check box in the Advanced settings
      view.

    5. Select the Set Hadoop
      user
      check box and enter the user name under which you
      want to execute the Job. Since a file or a directory in Hadoop has its
      specific owner with appropriate read or write rights, this field allows
      you to execute the Job directly under the user name that has the
      appropriate rights to access the file or directory to be processed.

    6. Select the Use datanode hostname check box to allow the
      Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname
      property to true. When connecting to an
      S3N filesystem, you must select this check box.

    For further information about these parameters, see the documentation or
    contact the administrator of the Hadoop cluster to be used.

    For further information about the Hadoop Map/Reduce framework, see the
    Map/Reduce tutorial in Apache’s Hadoop documentation on http://hadoop.apache.org.

    Set NameNode URI

    Select this check box and in the displayed field, enter the URI of the
    Hadoop NameNode, the master node of a Hadoop system. For example, assuming that you
    have chosen a machine called masternode as the NameNode, then
    the location is hdfs://masternode:portnumber. If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.

    For further information about the Hadoop Map/Reduce framework, see the
    Map/Reduce tutorial in Apache’s Hadoop documentation on http://hadoop.apache.org.
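
    The following is a minimal, illustration-only sketch of what the keytab-based
    login described in the Use a keytab to authenticate field above amounts to with
    the standard Hadoop client API. The principal, realm and keytab path are
    hypothetical examples; at runtime the component performs this login for you.

    // Illustrative sketch only (hypothetical principal and keytab path). The OS
    // user running the Job does not have to match the principal, but it must be
    // able to read the keytab file.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLoginSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                    "guest@EXAMPLE.COM", "/etc/security/keytabs/guest.keytab");
            System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
        }
    }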

The other properties:

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains the parent Job and the child Job, if you
need to share an existing connection between the two levels, for example, to share the
connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example about how to share a database connection
across Job levels, see

Talend Studio
User Guide
.

Distribution

Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, the following
ones require specific configuration:

  • If available in this Distribution drop-down
    list, the Microsoft HD
    Insight
    option allows you to use a Microsoft HD Insight
    cluster. For this purpose, you need to configure the connections to the HD
    Insight cluster and the Windows Azure Storage service of that cluster in the
    areas that are displayed. For
    detailed explanation about these parameters, search for configuring the
    connection manually on Talend Help Center (https://help.talend.com).

  • If you select Amazon EMR, find more details about Amazon EMR getting started in
    Talend Help Center (https://help.talend.com).

  • The Custom option
    allows you to connect to a cluster different from any of the distributions
    given in this list, that is to say, to connect to a cluster not officially
    supported by
    Talend
    .

  1. Select Import from existing
    version
    to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared ready-for-use
    configuration zip files, which you can download from this Hadoop configuration
    list and use directly in your connection. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from
    existing version
    option to take an existing distribution as base
    and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend. Talend and
    its community provide you with the opportunity to connect to custom versions
    from the Studio but cannot guarantee that the configuration of whichever version
    you choose will be easy, due to the wide range of different Hadoop distributions
    and versions that are available. As such, you should only attempt to set up such
    a connection if you have sufficient Hadoop experience to handle any issues on
    your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Hive version

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema is created and
stored locally for this component only. Related topic: see

Talend Studio


User
Guide
.

 

Repository: The schema already exists
and is stored in the Repository, hence can be reused. Related topic: see


Talend Studio


User
Guide
.

Table Name

Name of the table to be processed.

Query type

Either Built-in or Repository.

 

Built-in: Fill in the query
statement manually or build it graphically using SQLBuilder.

 

Repository: Select the relevant query
stored in the Repository. The Query field gets accordingly filled
in.

Guess Query

Click the Guess Query button to
generate the query which corresponds to your table schema in the Query
field.

Guess schema

Click this button to retrieve the schema from the table.

This query uses Parquet objects

When available, select this check box to indicate that the table to be handled
uses the PARQUET format and thus make the component call the required jar file.

Note that when the file format to be used is PARQUET, you might be prompted to find the specific
PARQUET jar file and install it into the Studio.

  • When the connection mode to Hive is Embedded, the Job is run in your local
    machine and calls this jar installed in the Studio.

  • When the connection mode to Hive is Standalone, the Job is run in the server
    hosting Hive and this jar file is sent to the HDFS system of the cluster you are
    connecting to. Therefore, ensure that you have properly defined the NameNode URI in the
    corresponding field of the Basic
    settings
    view.

This jar file can be downloaded from Apache’s site.
You can find more details about how to install
external modules in Talend Help Center (https://help.talend.com).

Query

Enter your DB query, paying particular attention to properly sequencing
the fields so that they match the schema definition.

For further information about the Hive query language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual.

Note: Compressed data in the form of Gzip or Bzip2 can be processed through the query
statements. For details, see https://cwiki.apache.org/confluence/display/Hive/CompressedStorage.

Hadoop provides different compression formats that help reduce the space
needed for storing files and speed up data transfer. When reading a compressed file, the
Studio needs to uncompress it before being able to feed it to the input flow.
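
For illustration only (this is not the code the component generates), the sketch below
runs a comparable query against HiveServer2 over JDBC; it assumes the Hive JDBC driver is
on the classpath, and the host, port, database, credentials, table and column names are
hypothetical. The point is that the SELECT clause lists the fields in the same order as
the schema defined on the component.

// Illustrative sketch with hypothetical connection details and table.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuerySketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://hiveserver-host:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive_user", "hive_password");
             Statement stmt = conn.createStatement();
             // Fields are selected in the same order as a three-column schema: id, name, hire_date.
             ResultSet rs = stmt.executeQuery("SELECT id, name, hire_date FROM employees")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + ";" + rs.getString(2) + ";" + rs.getString(3));
            }
        }
    }
}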

Execution engine

Select this check box and from the drop-down list, select the framework you need to use to
run the Job.

This list is available only when you are using the Embedded mode for the Hive connection and the distribution you are working
with is:

  • Custom: this option allows you to connect to a distribution
    supporting Tez but not officially supported by Talend.

Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You will need to configure the access to the relevant Tez libraries via the Advanced settings view of this component.

For further information about Hive on Tez, see Apache’s related documentation in https://cwiki.apache.org/confluence/display/Hive/Hive+on+Tez. Some examples are presented there to show how Tez can be used to gain performance over MapReduce.

Advanced settings

Tez lib

Select how the Tez libraries are accessed:

  • Auto install: at runtime, the Job uploads and deploys the Tez
    libraries provided by the Studio into the directory you specified in the Install folder in HDFS field, for example, /tmp/usr/tez.

    If you have set the tez.lib.uris property in the properties
    table, this directory overrides the value of that property at runtime. But the other
    properties set in the properties table are still effective.

  • Use exist: the Job accesses the Tez libraries already
    deployed in the Hadoop cluster to be used. You need to enter the path pointing to
    those libraries in the Lib path (folder or file)
    field.

  • Lib jar: this table appears when you have selected Auto install from the Tez
    lib
    list and the distribution you are using is Custom. In this table, you need to add the Tez libraries to be
    uploaded.

Temporary path

If you do not want to set the Jobtracker and the NameNode when you execute
the query select * from your_table_name, you need to
set this temporary path. For example, /C:/select_all in
Windows.

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from
all the String/Char columns.

Trim column

Remove leading and trailing whitespace from defined columns.

Note: Clear the Trim all the String/Char columns check box to enable Trim column in this field.

Hadoop properties


Talend Studio
uses a default configuration for its engine to perform
operations in a Hadoop distribution. If you need to use a custom configuration in a specific
situation, complete this table with the property or properties to be customized. Then at
runtime, the customized property or properties will override those default ones.

  • Note that if you are using the centrally stored metadata from the Repository, this table automatically inherits the
    properties defined in that metadata and becomes uneditable unless you change the
    Property type from Repository to Built-in.

For further information about the properties required by Hadoop and its related systems such
as HDFS and Hive, see the documentation of the Hadoop distribution you
are using or see Apache’s Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want. For demonstration purposes, the links to some properties are listed below:

Hive properties

Talend Studio uses a default configuration for its engine to
perform operations in a Hive database. If you need to use a custom configuration in
a specific situation, complete this table with the property or properties to be
customized. Then at runtime, the customized property or properties will override
those default ones. For further information for Hive dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.

  • If you need to use Tez to run your Hive Job, add
    hive.execution.engine to the
    Properties column and Tez to the
    Value column, enclosing both of these strings in
    double quotation marks.
  • Note that if you are using the centrally stored metadata
    from the Repository, this
    table automatically inherits the properties defined in that metadata and
    becomes uneditable unless you change the Property type from Repository to Built-in.
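
For illustration, a property added to this table acts as a session-level override. The
hive.execution.engine / Tez entry mentioned above is roughly equivalent to the sketch
below, which issues a SET statement at the start of a Hive session over JDBC; the
connection details and table name are hypothetical, and the actual override is applied
for you by the component at runtime.

// Illustrative sketch only: session-level equivalent of the "hive.execution.engine" / "Tez" entry.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HivePropertyOverrideSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hiveserver-host:10000/default", "hive_user", "hive_password");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SET hive.execution.engine=tez");   // override for this session only
            stmt.execute("SELECT COUNT(*) FROM employees");  // subsequent statements run on Tez
        }
    }
}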

Mapred job map memory mb and
Mapred job reduce memory mb

You can tune the map and reduce computations by
selecting the Set memory check box to set proper memory allocations
for the computations to be performed by the Hadoop system.

In that situation, you need to enter the values you need in the Mapred
job map memory mb
and the Mapred job reduce memory
mb
fields, respectively. By default, the values are both 1000 which are normally appropriate for running the
computations.

The memory parameters to be set are Map (in Mb),
Reduce (in Mb) and ApplicationMaster (in Mb). These fields allow you to dynamically allocate
memory to the map and the reduce computations and the ApplicationMaster of YARN.

Path separator in server

Leave the default value of the Path separator in
server
as it is, unless you have changed the separator used by your
Hadoop distribution’s host machine for its PATH variable; in other words, unless that
separator is not a colon (:). In that situation, you must change this value to the
one you are using in that host.

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Global Variables

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

QUERY: the query statement being processed. This is a Flow
variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.
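
As an illustration, these variables are typically read from globalMap in a downstream
component such as tJava. The snippet below is a minimal sketch that assumes the component
is named tHiveInput_1 (adjust the prefix to the actual component name); it is meant to run
inside the Job's generated code, where globalMap is available.

// Minimal sketch for a tJava component placed after tHiveInput_1.
// NB_LINE is an After variable; QUERY is a Flow variable holding the statement processed.
Integer nbLine = (Integer) globalMap.get("tHiveInput_1_NB_LINE");
String lastQuery = (String) globalMap.get("tHiveInput_1_QUERY");
System.out.println("Rows read: " + nbLine);
System.out.println("Query executed: " + lastQuery);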

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

This component offers the benefit of flexible DB queries and covers
all possible Hive QL queries.

If the Studio used to connect to a Hive database is operated on Windows,
you must manually create a folder called tmp in the root of the
disk where this Studio is installed.

HBase Configuration

Note: Available only when the Use an existing
connection
check box is clear.

Store by HBase

Zookeeper quorum

Zookeeper client port

Define the jars to register for HBase

Register jar for HBase

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independent of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic
settings
and context variables, see Talend Studio
User Guide.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with
Talend Studio
. The following list presents MapR related information for
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR’s documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is \lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box in the Window menu. This argument provides the Studio with the path to the
    native library of that MapR client. This allows the subscription-based users to make
    full use of the Data viewer to view locally in the
    Studio the data stored in MapR.
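
    For illustration, the argument could take the following form, reusing the path
    structure mentioned above; MAPR_INSTALL and VERSION are placeholders to replace
    with your actual MapR client installation directory and Hadoop version.

    -Djava.library.path="MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native"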

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Related scenarios

For a scenario about how an input component is used in a Job, see Writing columns from a MySQL database to an output file using tMysqlInput.

You need to keep in mind the parameters required by Hadoop, such as NameNode and
Jobtracker, when configuring this component since the component needs to connect to a
Hadoop distribution.

tHiveInput properties for Apache Spark Batch

These properties are used to configure tHiveInput running in the Spark Batch Job framework.

The Spark Batch
tHiveInput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Hive storage configuration

Select the tHiveConfiguration
component from which you want Spark to use the configuration details to connect to
Hive.

HDFS Storage configuration

Select the tHDFSConfiguration component from which you want Spark to use the configuration
details to connect to a given HDFS system and transfer the dependent jar files to this HDFS
system. This field is relevant only when you are using an on-premises distribution.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Always use lowercase when naming a field because the processing behind the scenes could force the field names to be lowercase.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Input source

Select the type of the input data you want tHiveInput to read:

  • Hive table: the Database field and the Table name field are displayed. You need to
    enter the related information about the Hive database to be connected to and
    the Hive table from which you need to read data.

  • Hive query: the Hive query field is displayed. You need to
    enter the Hive query statement you want to use to select the data to be
    used.

  • ORC file: the Input file name field is displayed and the
    Hive storage configuration list is deactivated, because the ORC file should
    be stored in your HDFS system hosting Hive. You need to enter the directory
    where the file to be used is stored.

For further information about the Hive query language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual.

Note: Compressed data in the form of Gzip or Bzip2 can be processed through the query
statements. For details, see https://cwiki.apache.org/confluence/display/Hive/CompressedStorage.

Hadoop provides different compression formats that help reduce the space
needed for storing files and speed up data transfer. When reading a compressed file, the
Studio needs to uncompress it before being able to feed it to the input flow.
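
For illustration only (this is not the code the Job generates), reading from Hive in a
Spark Batch Job is conceptually similar to the sketch below; the application name,
database, table and column names are hypothetical, and in a real Job the Hive connection
details come from the tHiveConfiguration component selected above.

// Illustrative sketch of the "Hive query" input source with Spark SQL.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HiveInputSparkSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("tHiveInput sketch")
                .enableHiveSupport()   // stands in for the tHiveConfiguration details
                .getOrCreate();
        Dataset<Row> rows = spark.sql("SELECT id, name FROM mydb.customers");
        rows.show();
        spark.stop();
    }
}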

Advanced settings

Register Hive UDF jars

Add the Hive user-defined function (UDF) jars you want tHiveInput to use. Note that you must define a function
alias for each UDF to be used in the Temporary UDF
functions
table.

Once you add one row to this table, click it to display the […] button and then click this button to display the
jar import wizard. Through this wizard, import the UDF jar files you want to
use.

A registered function is often used in a Hive query that you edit in the
Hive Query field in the Basic settings view. Note that this Hive Query field is displayed only when you select
Hive query from the Input source list.

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function
name to be used in the Hive query in the current tHiveInput component.
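
For illustration, registering a UDF jar and giving its class a temporary function name
corresponds to the Hive statements in the sketch below, which is why the alias can then be
called directly in the Hive query of this component; the jar path, class name and alias
are hypothetical examples.

// Illustrative sketch, hypothetical names: what the Register Hive UDF jars and
// Temporary UDF functions settings amount to before the Hive query runs.
import org.apache.spark.sql.SparkSession;

public class HiveUdfSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("tHiveInput UDF sketch")
                .enableHiveSupport()
                .getOrCreate();
        spark.sql("ADD JAR /tmp/udfs/mask-email-udf.jar");
        spark.sql("CREATE TEMPORARY FUNCTION mask_email AS 'com.example.udf.MaskEmail'");
        // The alias can now be used in the Hive query defined in the Basic settings view.
        spark.sql("SELECT id, mask_email(email) FROM mydb.customers").show();
        spark.stop();
    }
}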

Usage

Usage rule

This component is used as a start component and requires an output
link.

This component should use a tHiveConfiguration component present in the same Job to connect to
Hive.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tHiveInput properties for Apache Spark Streaming

These properties are used to configure tHiveInput running in the Spark Streaming Job framework.

The Spark Streaming
tHiveInput component belongs to the Databases family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Hive storage configuration

Select the tHiveConfiguration
component from which you want Spark to use the configuration details to connect to
Hive.

HDFS Storage configuration

Select the tHDFSConfiguration component from which you want Spark to use the configuration
details to connect to a given HDFS system and transfer the dependent jar files to this HDFS
system. This field is relevant only when you are using an on-premises distribution.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Always use lowercase when naming a field because the processing behind the scenes could force the field names to be lowercase.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Input source

Select the type of the input data you want tHiveInput to read:

  • Hive table: the Database field and the Table name field are displayed. You need to
    enter the related information about the Hive database to be connected to and
    the Hive table from which you need to read data.

  • Hive query: the Hive query field is displayed. You need to
    enter the Hive query statement you want to use to select the data to be
    used.

  • ORC file: the Input file name field is displayed and the
    Hive storage configuration list is deactivated, because the ORC file should
    be stored in your HDFS system hosting Hive. You need to enter the directory
    where the file to be used is stored.

For further information about the Hive query language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual.

Note: Compressed data in the form of Gzip or Bzip2 can be processed through the query
statements. For details, see https://cwiki.apache.org/confluence/display/Hive/CompressedStorage.

Hadoop provides different compression formats that help reduce the space
needed for storing files and speed up data transfer. When reading a compressed file, the
Studio needs to uncompress it before being able to feed it to the input flow.

Advanced settings

Register Hive UDF jars

Add the Hive user-defined function (UDF) jars you want tHiveInput to use. Note that you must define a function
alias for each UDF to be used in the Temporary UDF
functions
table.

Once you add one row to this table, click it to display the […] button and then click this button to display the
jar import wizard. Through this wizard, import the UDF jar files you want to
use.

A registered function is often used in a Hive query that you edit in the
Hive Query field in the Basic settings view. Note that this Hive Query field is displayed only when you select
Hive query from the Input source list.

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function
name to be used in the Hive query in the current tHiveInput component.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component should use a tHiveConfiguration component present in the same Job to connect to
Hive.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

