
tHiveConfiguration – Docs for ESB 6.x

tHiveConfiguration

Enables the reuse of a Hive connection configuration in the same Job.

tHiveConfiguration provides Hive connection information to the Hive-related components used in the same Spark Job. The Spark cluster to be used reads this configuration to connect to Hive.
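
For readers more familiar with hand-written Spark code, the following Java sketch illustrates roughly what tHiveConfiguration amounts to at runtime: one Hive-enabled Spark session whose connection properties are shared by every Hive read or write in the Job. The metastore URI and table name are hypothetical placeholders; in Talend Studio you do not write this code yourself, as the component generates the equivalent configuration.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class HiveConfigurationSketch {
        public static void main(String[] args) {
            // Placeholder metastore URI; in a real Job this value would come
            // from the tHiveConfiguration component settings.
            SparkSession spark = SparkSession.builder()
                    .appName("HiveConfigurationSketch")
                    .config("hive.metastore.uris", "thrift://metastore-host:9083")
                    .enableHiveSupport() // expose the shared Hive configuration
                    .getOrCreate();

            // Every Hive-related step in the same Job reuses this one configuration.
            Dataset<Row> rows = spark.sql("SELECT * FROM default.sample_table");
            rows.show();

            spark.stop();
        }
    }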

Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:

  • Spark Batch: see tHiveConfiguration properties for Apache Spark Batch.

  • Spark Streaming: see tHiveConfiguration properties for Apache Spark Streaming.

tHiveConfiguration properties for Apache Spark Batch

These properties are used to configure tHiveConfiguration running in the Spark Batch Job framework.

The Spark Batch
tHiveConfiguration component belongs to the Storage family.

The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

Usage

Usage rule

This component is used standalone; it does not need to be connected to other components.

You need to drop tHiveConfiguration in the same Job as the Hive-related Subjob to be run, so that its configuration is used by the whole Job at runtime.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job requires its dependent JAR files at execution time, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
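
As an illustration only, the Spark properties below sketch what the staging-directory requirement corresponds to in a hand-written Yarn job; the host name and path are hypothetical placeholders, and in Talend Studio these values come from the Spark configuration tab or a tHDFSConfiguration component rather than from code.

    import org.apache.spark.SparkConf;

    // Minimal sketch, assuming a Yarn cluster backed by HDFS (values are placeholders):
    SparkConf conf = new SparkConf()
            .setMaster("yarn")
            // File system that hosts the transferred JAR files:
            .set("spark.hadoop.fs.defaultFS", "hdfs://namenode-host:8020")
            // Directory to which the Job's dependent JAR files are staged:
            .set("spark.yarn.stagingDir", "hdfs://namenode-host:8020/user/talend/staging");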

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tHiveConfiguration properties for Apache Spark Streaming

These properties are used to configure tHiveConfiguration running in the Spark Streaming Job framework.

The Spark Streaming
tHiveConfiguration component belongs to the Storage family.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Usage

Usage rule

This component is used standalone; it does not need to be connected to other components.

You need to drop tHiveConfiguration in the same Job as the Hive-related Subjob to be run, so that its configuration is used by the whole Job at runtime.
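
As a rough analogy in hand-written Spark code (not the code Talend generates), a streaming Job that reuses a single Hive configuration could look like the sketch below: Hive support is enabled once, and each micro-batch is appended to a Hive table through that shared configuration. The socket source, metastore URI, table name, and checkpoint path are all hypothetical placeholders.

    import org.apache.spark.api.java.function.VoidFunction2;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class HiveStreamingSketch {
        public static void main(String[] args) throws Exception {
            // Hive support is configured once and shared by the whole streaming Job.
            SparkSession spark = SparkSession.builder()
                    .appName("HiveStreamingSketch")
                    .config("hive.metastore.uris", "thrift://metastore-host:9083") // placeholder
                    .enableHiveSupport()
                    .getOrCreate();

            Dataset<Row> lines = spark.readStream()
                    .format("socket") // placeholder source
                    .option("host", "localhost")
                    .option("port", 9999)
                    .load();

            // Append each micro-batch to a Hive table via the shared configuration.
            VoidFunction2<Dataset<Row>, Long> writeBatch = (batch, batchId) ->
                    batch.write().mode("append").saveAsTable("default.stream_sink");

            lines.writeStream()
                    .foreachBatch(writeBatch)
                    .option("checkpointLocation", "/tmp/hive-stream-checkpoint") // placeholder
                    .start()
                    .awaitTermination();
        }
    }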

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job requires its dependent JAR files at execution time, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.


Source: Talend documentation, https://help.talend.com