tHiveOutput writes data into a given Hive table or a directory in HDFS.
Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:
- Spark Batch: see tHiveOutput properties for Apache Spark Batch. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Streaming: see tHiveOutput properties for Apache Spark Streaming. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
tHiveOutput properties for Apache Spark Batch
These properties are used to configure tHiveOutput running in the Spark Batch Job framework.
The Spark Batch tHiveOutput component belongs to the Databases family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Hive storage configuration | Select the tHiveConfiguration component from which Spark reads the configuration details used to connect to Hive.

HDFS Storage configuration | Select the tHDFSConfiguration component from which Spark reads the configuration details used to connect to a given HDFS system.

Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs.

Output source | Select the type of the output data you want tHiveOutput to change: a Hive table, or a directory of files in HDFS.

Save mode | Select the type of changes you want to make regarding the target Hive table, for example whether new records are appended to it or the existing data is overwritten (see the sketch after this table).

Enable Hive partitions | Select the Enable Hive partitions check box and, in the table that appears, define the partition columns for the target Hive table.
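Talend generates this write logic for you, but it can help to see the equivalent hand-written Spark code. The following minimal Java sketch, with hypothetical table, path, and column names, illustrates the kind of write that the Output source, Save mode, and Enable Hive partitions settings drive; it shows the underlying Spark API, not the code Talend actually produces.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HiveOutputSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("tHiveOutput-style write")
                .enableHiveSupport()   // counterpart of the Hive storage configuration
                .getOrCreate();

        // Any upstream dataset; here, an existing staging table (hypothetical name).
        Dataset<Row> rows = spark.table("staging.customers");

        // Output source = Hive table: apply a save mode and optional partition keys.
        rows.write()
            .mode(SaveMode.Append)      // Spark's save modes: Append, Overwrite, ErrorIfExists, Ignore
            .partitionBy("country")     // Enable Hive partitions: partition key column(s)
            .saveAsTable("prod.customers");

        // Output source = directory in HDFS: write files (for example ORC) instead.
        rows.write()
            .mode(SaveMode.Overwrite)
            .orc("hdfs:///user/talend/customers_orc");

        spark.stop();
    }
}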
Usage
Usage rule | This component is used as an end component and requires an input link. This component should use a tHiveConfiguration component present in the same Job to connect to Hive. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection | You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
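As a rough, hand-written counterpart to what the Spark Configuration tab sets up, the Java sketch below expresses the same two ideas directly: the target cluster master and a staging directory to which dependency jar files are transferred. The directory path is a placeholder, not a Talend default.

import org.apache.spark.sql.SparkSession;

public class SparkConnectionSketch {
    public static void main(String[] args) {
        // Connect to the cluster for the whole Job; the staging directory is
        // where dependency jar files are transferred so Spark can access them.
        SparkSession spark = SparkSession.builder()
                .appName("per-job Spark connection")
                .master("yarn")
                .config("spark.yarn.stagingDir", "hdfs:///user/talend/staging")
                .enableHiveSupport()
                .getOrCreate();

        // ... the Job's components would run here ...

        spark.stop();
    }
}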
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.
tHiveOutput properties for Apache Spark Streaming
These properties are used to configure tHiveOutput running in the Spark Streaming Job framework.
The Spark Streaming tHiveOutput component belongs to the Databases family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
Basic settings
Hive storage configuration | Select the tHiveConfiguration component from which Spark reads the configuration details used to connect to Hive.

HDFS Storage configuration | Select the tHDFSConfiguration component from which Spark reads the configuration details used to connect to a given HDFS system.

Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs.

Output source | Select the type of the output data you want tHiveOutput to change: a Hive table, or a directory of files in HDFS.

Save mode | Select the type of changes you want to make regarding the target Hive table, for example whether new records are appended to it or the existing data is overwritten (the sketch after this table shows how this applies to each micro-batch).

Enable Hive partitions | Select the Enable Hive partitions check box and, in the table that appears, define the partition columns for the target Hive table.
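In a streaming Job, the data arriving during each interval is written to Hive micro-batch by micro-batch. Talend generates its own streaming code; the following Java sketch (with a toy rate source and a hypothetical table name) only illustrates that micro-batch write pattern, using Spark Structured Streaming's foreachBatch.

import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StreamingHiveOutputSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("streaming write to Hive")
                .enableHiveSupport()
                .getOrCreate();

        // Toy source emitting (timestamp, value) rows; stands in for real input.
        Dataset<Row> stream = spark.readStream()
                .format("rate")
                .option("rowsPerSecond", "5")
                .load();

        // foreachBatch hands over a plain Dataset per micro-batch, so the same
        // batch-style write (save mode, partitions) applies at each interval.
        VoidFunction2<Dataset<Row>, Long> writeBatch = (batch, batchId) ->
                batch.write().mode(SaveMode.Append).saveAsTable("prod.events");

        StreamingQuery query = stream.writeStream()
                .foreachBatch(writeBatch)
                .start();

        query.awaitTermination();
    }
}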
Usage
Usage rule | This component is used as an end component and requires an input link. This component should use a tHiveConfiguration component present in the same Job to connect to Hive. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection | You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Streaming Job, see Reading and writing data in MongoDB using a Spark Streaming Job.