
tHiveConfiguration

Enables the reuse of the connection configuration to Hive in the same
Job.

tHiveConfiguration provides Hive
connection information for the Hive related components used in the same
Spark Job. The Spark cluster to be used reads this configuration to
eventually connect to Hive.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:

  • Spark Batch: see tHiveConfiguration properties for Apache Spark Batch.

  • Spark Streaming: see tHiveConfiguration properties for Apache Spark Streaming.

tHiveConfiguration properties for Apache Spark Batch

These properties are used to configure tHiveConfiguration running in the Spark Batch Job framework.

The Spark Batch
tHiveConfiguration component belongs to the Storage family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Distribution and Version

Select the Hadoop distribution you are using for Hive.

Note that the Hive version required by Spark must be 0.13+.

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Hive thrift metastore

Enter the location of the metastore of the Hive system to be used by specifying the name of its Host and the number of its listening Port. If an HA metastore has been defined for this Hive system, select the Enable high availability check box and, in the field that is displayed, enter the URIs of the multiple remote metastore services, separated by a comma (,).
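For reference, the Host and Port values presumably feed the standard Hive property hive.metastore.uris, which tHiveConfiguration sets for you. The following minimal sketch shows the equivalent configuration on a plain Spark session; the host names and ports are hypothetical placeholders, not values from this documentation:

```java
import org.apache.spark.sql.SparkSession;

public class HiveMetastoreSketch {
    public static void main(String[] args) {
        // Hypothetical metastore hosts, for illustration only. A single
        // metastore is "thrift://<Host>:<Port>"; with high availability,
        // list the remote metastore URIs separated by commas.
        SparkSession spark = SparkSession.builder()
                .appName("HiveMetastoreSketch")
                .config("hive.metastore.uris",
                        "thrift://metastore1.example.com:9083,"
                        + "thrift://metastore2.example.com:9083")
                .enableHiveSupport()
                .getOrCreate();

        spark.sql("SHOW DATABASES").show();
        spark.stop();
    }
}
```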

Use Kerberos authentication

If you are accessing a Hive Metastore running with Kerberos security,
select this check box.

Then you need to enter the Hive principal that should have been defined in
the hive-site.xml file of the cluster to be used.

Hive principal uses the value
of hive.metastore.kerberos.principal. This is
the service principal of the Hive Metastore.
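As a hedged sketch of what a Kerberos-secured metastore connection involves at the Hive property level, the example below sets the two standard properties hive.metastore.sasl.enabled and hive.metastore.kerberos.principal on a plain Spark session. It assumes a valid Kerberos ticket (for example, obtained with kinit) is already available on the client; the URI and principal are hypothetical:

```java
import org.apache.spark.sql.SparkSession;

public class KerberizedMetastoreSketch {
    public static void main(String[] args) {
        // Hypothetical values; the principal must match the
        // hive.metastore.kerberos.principal entry in the cluster's
        // hive-site.xml. "_HOST" is expanded to the metastore host name.
        SparkSession spark = SparkSession.builder()
                .appName("KerberizedMetastoreSketch")
                .config("hive.metastore.uris",
                        "thrift://metastore1.example.com:9083")
                .config("hive.metastore.sasl.enabled", "true")
                .config("hive.metastore.kerberos.principal",
                        "hive/_HOST@EXAMPLE.COM")
                .enableHiveSupport()
                .getOrCreate();

        spark.sql("SHOW DATABASES").show();
        spark.stop();
    }
}
```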

Force MapR Ticket authentication

If this cluster is a MapR cluster of version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.

Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job at each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication and the Use Kerberos authentication check boxes clear; MapR is then able to find that ticket on the fly.

Usage

Usage rule

This component is used standalone, without being connected to other components.

You need to drop tHiveConfiguration
along with the Hive-related Subjob to be run in the same Job so that the
configuration is used by the whole Job at runtime.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.
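To illustrate the staging requirement described above, the sketch below sets the file system and the staging directory explicitly on a plain Spark session in Yarn mode. The host and paths are hypothetical, and spark.yarn.stagingDir is the generic Spark-on-YARN property; in a Talend Job, tHDFSConfiguration and the Spark configuration tab supply the equivalent values, so this is an assumption about the underlying mechanism rather than what the Job generates verbatim:

```java
import org.apache.spark.sql.SparkSession;

public class YarnStagingSketch {
    public static void main(String[] args) {
        // Hypothetical HDFS locations, for illustration only.
        SparkSession spark = SparkSession.builder()
                .appName("YarnStagingSketch")
                .master("yarn")
                // File system the cluster uses (here, HDFS).
                .config("spark.hadoop.fs.defaultFS",
                        "hdfs://namenode.example.com:8020")
                // Directory to which the dependent jar files are transferred
                // so that Spark can access them at execution time.
                .config("spark.yarn.stagingDir",
                        "hdfs://namenode.example.com:8020/user/talend/staging")
                .getOrCreate();

        spark.stop();
    }
}
```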

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tHiveConfiguration properties for Apache Spark Streaming

These properties are used to configure tHiveConfiguration running in the Spark Streaming Job framework.

The Spark Streaming
tHiveConfiguration component belongs to the Storage family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Distribution and Version

Select the Hadoop distribution you are using for Hive.

Note that the Hive version required by Spark must be 0.13+.

Select the version of the Hadoop distribution you are using. The available
options vary depending on the component you are using.

Hive thrift metastore

Enter the location of the metastore of the Hive system to be used by specifying the name of its Host and the number of its listening Port. If an HA metastore has been defined for this Hive system, select the Enable high availability check box and, in the field that is displayed, enter the URIs of the multiple remote metastore services, separated by a comma (,).
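The same underlying properties apply in a Spark Streaming Job. As a hedged, streaming-flavored sketch, the example below opens a session with Hive support and appends each micro-batch to a Hive table; the metastore URI and table name are hypothetical, the built-in rate source stands in for a real stream, and Structured Streaming is used here only for brevity, not as a claim about what a Talend Streaming Job generates:

```java
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class StreamingHiveSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical metastore URI, as in the batch sketch above.
        SparkSession spark = SparkSession.builder()
                .appName("StreamingHiveSketch")
                .config("hive.metastore.uris",
                        "thrift://metastore1.example.com:9083")
                .enableHiveSupport()
                .getOrCreate();

        // Built-in rate source, standing in for a real streaming input.
        Dataset<Row> stream = spark.readStream()
                .format("rate")
                .option("rowsPerSecond", "5")
                .load();

        // Append each micro-batch to a Hive-managed table.
        VoidFunction2<Dataset<Row>, Long> writeBatch = (batch, batchId) ->
                batch.write().mode("append").saveAsTable("demo_db.rate_events");

        stream.writeStream()
                // Hypothetical checkpoint path for restart recovery.
                .option("checkpointLocation", "/tmp/hive-sketch-ckpt")
                .foreachBatch(writeBatch)
                .start()
                .awaitTermination();
    }
}
```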

Use Kerberos authentication

If you are accessing a Hive Metastore running with Kerberos security,
select this check box.

Then you need to enter the Hive principal that should have been defined in
the hive-site.xml file of the cluster to be used.

Hive principal uses the value
of hive.metastore.kerberos.principal. This is
the service principal of the Hive Metastore.

Force MapR Ticket authentication

If this cluster is a MapR cluster of version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.

Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job at each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication and the Use Kerberos authentication check boxes clear; MapR is then able to find that ticket on the fly.

Usage

Usage rule

This component is used standalone, without being connected to other components.

You need to drop tHiveConfiguration
along with the Hive-related Subjob to be run in the same Job so that the
configuration is used by the whole Job at runtime.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.

    • When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

