
tHiveOutput – Docs for ESB 6.x

tHiveOutput

Connects to a given Hive database and writes the data it receives into a given Hive
table or a directory in HDFS.

Depending on the Talend solution you are using, this component can be used in one or both of the following Job frameworks: Spark Batch and Spark Streaming.

tHiveOutput properties for Apache Spark Batch

These properties are used to configure tHiveOutput running in the Spark Batch Job framework.

The Spark Batch tHiveOutput component belongs to the Databases family.

The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

Basic settings

Hive storage configuration

Select the tHiveConfiguration component
from which you want Spark to use the configuration details to connect to Hive.

HDFS Storage configuration

Select the tHDFSConfiguration component from
which you want Spark to use the configuration details to connect to a given HDFS system and
transfer the dependent jar files to this HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.
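
For orientation, a component schema of this kind corresponds, at the Spark level, to a row structure. The following is a minimal sketch in Spark (Scala); the field names and types are hypothetical examples, not taken from any particular Job:

    import org.apache.spark.sql.types._

    // Hypothetical illustration: a three-column schema such as one you might
    // define in the schema editor, expressed as the Spark StructType it
    // roughly corresponds to at run time.
    val customerSchema = StructType(Seq(
      StructField("id",      IntegerType, nullable = false),
      StructField("name",    StringType,  nullable = true),
      StructField("country", StringType,  nullable = true)
    ))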

Output source

Select the type of output you want tHiveOutput to write data to:

  • Hive table: the Database field, the Table name field, the Table format list and the Enable Hive partitions check box are displayed. You need to enter the related information about the Hive database to be connected to and the Hive table you need to modify.

    By default, the format of the output data is JSON, but you can change it to ORC or Parquet by selecting the corresponding option from the Table format list.

  • ORC file: the Output folder field is displayed and the Hive storage configuration list is deactivated, because the ORC file should be stored in your HDFS system hosting Hive. You need to enter the directory in which the output data is written (see the sketch after this list).
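
As an illustration only, the following minimal Spark (Scala) sketch shows the kind of calls that writing to a Hive table in a chosen table format boils down to. The session, database, and table names are hypothetical; tHiveOutput generates the equivalent logic from its settings:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-output-sketch")
      .enableHiveSupport()                 // requires a Hive-enabled Spark build
      .getOrCreate()

    val df = spark.table("staging.customers")  // hypothetical input data

    df.write
      .format("parquet")                   // or "orc": corresponds to the Table format list
      .saveAsTable("mydb.customers")       // corresponds to Database and Table name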

Save mode

Select the type of changes you want to make to the target Hive table.
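
In Spark terms, this list plausibly maps to the save modes of DataFrameWriter. A hedged sketch, continuing the hypothetical names from the previous example:

    df.write.mode("append").saveAsTable("mydb.customers")     // add rows to an existing table
    df.write.mode("overwrite").saveAsTable("mydb.customers")  // replace the table contents
    // Spark also defines "error" (fail if the table exists) and
    // "ignore" (do nothing if it exists).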

Enable Hive partitions

Select the Enable Hive partitions check box and, in the Partition keys table, define partitions for the Hive table you are creating or changing by selecting columns from the input schema of tHiveOutput to use as partition keys.

Bear in mind that:

  • When the Save mode to be used is Append, meaning that you are adding data to an existing Hive table, the partition columns you select in the Partition keys table must already be partition keys of the Hive table to be updated.

  • A partitioned Hive table created by tHiveOutput can only be read by tHiveInput, due to Spark-specific limitations. If you need to read your partitioned table through Hive itself, it is recommended to use tHiveRow or tHiveCreateTable in a Standard Job to create this table and then use tHiveOutput to append data to it.

  • Defining columns as partition keys does not alter your data; it only creates subfolders named after the partition keys and puts the data in them (see the sketch after this list).
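
Continuing the earlier sketch, the following shows how a partition key translates into a folder layout at the Spark level; all names remain hypothetical:

    df.write
      .mode("append")               // the partition column must already be a key of the table
      .partitionBy("country")       // column selected in the Partition keys table
      .format("orc")
      .saveAsTable("mydb.customers_by_country")
    // On disk this produces one subfolder per key value, for example:
    //   .../customers_by_country/country=FR/
    //   .../customers_by_country/country=US/
    // The rows themselves are unchanged; only the folder layout reflects the keys.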

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tHiveConfiguration component present in the same Job to connect to
Hive.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
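
For orientation only, the following sketch shows a rough, hypothetical equivalent of what such a connection amounts to in Spark (Scala) code when running on YARN. In practice the Studio builds this from the Spark configuration tab, and the property values below are placeholders:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("talend-spark-batch-job")
      .master("yarn")                      // the cluster defined in the Spark configuration tab
      .config("spark.yarn.jars",
              "hdfs:///user/talend/lib/*.jar") // placeholder directory for the dependent jar files
      .enableHiveSupport()
      .getOrCreate()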

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tHiveOutput properties for Apache Spark Streaming

These properties are used to configure tHiveOutput running in the Spark Streaming Job framework.

The Spark Streaming tHiveOutput component belongs to the Databases family.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Basic settings

Hive storage configuration

Select the tHiveConfiguration component
from which you want Spark to use the configuration details to connect to Hive.

HDFS Storage configuration

Select the tHDFSConfiguration component from
which you want Spark to use the configuration details to connect to a given HDFS system and
transfer the dependent jar files to this HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

 

Built-In: You create and store the schema locally for this component only. Related topic: see Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see Talend Studio User Guide.

Output source

Select the type of output you want tHiveOutput to write data to:

  • Hive table: the Database field, the Table name field, the Table format list and the Enable Hive partitions check box are displayed. You need to enter the related information about the Hive database to be connected to and the Hive table you need to modify.

    By default, the format of the output data is JSON, but you can change it to ORC or Parquet by selecting the corresponding option from the Table format list.

  • ORC file: the Output folder field is displayed and the Hive storage configuration list is deactivated, because the ORC file should be stored in your HDFS system hosting Hive. You need to enter the directory in which the output data is written.

Save mode

Select the type of changes you want to make to the target Hive table.

Enable Hive partitions

Select the Enable Hive partitions check box and, in the Partition keys table, define partitions for the Hive table you are creating or changing by selecting columns from the input schema of tHiveOutput to use as partition keys.

Bear in mind that:

  • When the Save mode to be used is Append, meaning that you are adding data to an existing Hive table, the partition columns you select in the Partition keys table must already be partition keys of the Hive table to be updated.

  • A partitioned Hive table created by tHiveOutput can only be read by tHiveInput, due to Spark-specific limitations. If you need to read your partitioned table through Hive itself, it is recommended to use tHiveRow or tHiveCreateTable in a Standard Job to create this table and then use tHiveOutput to append data to it.

  • Defining columns as partition keys does not alter your data; it only creates subfolders named after the partition keys and puts the data in them (see the sketch after this list).
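
As a rough illustration of the micro-batch pattern a streaming write to Hive follows, here is a Spark (Scala) sketch using the Structured Streaming API; the component may generate a different mechanism internally, and all source, table, and column names below are hypothetical:

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions.col

    val spark = SparkSession.builder()
      .appName("hive-streaming-output-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical streaming source; a real Job would read from Kafka, files, etc.
    val events = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "5")
      .load()
      .withColumn("bucket", col("value") % 10)      // hypothetical partition key

    events.writeStream
      .option("checkpointLocation", "/tmp/hive-stream-checkpoint")
      .foreachBatch { (batch: DataFrame, batchId: Long) =>
        batch.write
          .mode("append")                           // Save mode: append each micro-batch
          .partitionBy("bucket")
          .format("orc")
          .saveAsTable("mydb.rate_events")
      }
      .start()
      .awaitTermination()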

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tHiveConfiguration component present in the same Job to connect to
Hive.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

