tTeradataConfiguration
Defines a connection to Teradata and enables the reuse of the connection
configuration in the same Job.
tTeradataConfiguration provides Teradata connection information for the
Teradata components used in the same Spark Job. The Spark cluster to be used reads this
configuration to eventually connect to Teradata.
Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:
- Spark Batch: see tTeradataConfiguration properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tTeradataConfiguration properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tTeradataConfiguration properties for Apache Spark Batch
These properties are used to configure tTeradataConfiguration running in the Spark Batch Job framework.
The Spark Batch
tTeradataConfiguration component belongs to the Storage and the Databases families.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. |
Host |
Enter the IP address of the database server. |
Database |
Enter the name of the database to be used. |
Username and Password |
Enter the database user authentication data. To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings. |
Additional JDBC parameters |
Specify additional connection properties for the database connection you are creating. For example, you can enter CHARSET=KANJISJIS_OS to get support of Japanese characters. Note:
You can set the encoding parameters through this field. |
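As an illustration of where such a parameter ends up, the sketch below appends CHARSET=KANJISJIS_OS to a Teradata JDBC URL. This is not Studio-generated code; the host, database, and credentials are placeholders, and it assumes the Teradata JDBC driver (terajdbc4.jar) is on the classpath.

```java
// Illustrative sketch only: additional JDBC parameters are appended to the
// Teradata JDBC URL as comma-separated key=value pairs after the host.
import java.sql.Connection;
import java.sql.DriverManager;

public class TeradataCharsetExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host/database; CHARSET=KANJISJIS_OS enables Japanese
        // character support, as in the example above.
        String url = "jdbc:teradata://teradata.example.com/"
                + "DATABASE=mydb,CHARSET=KANJISJIS_OS";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```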
Advanced settings
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. |
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The following parameters are displayed once you have selected this check box. |
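To make the pooling and eviction settings concrete, here is a minimal sketch using Apache Commons DBCP2 as a stand-in for the per-executor connection pool. The library choice and all values are assumptions for illustration, not what the Studio generates.

```java
// Minimal sketch of a JDBC connection pool with eviction criteria.
// DBCP2 is an assumption here, used only to illustrate the concepts behind
// the component's Connection pool and Evict connections settings.
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolEvictionExample {
    public static void main(String[] args) {
        BasicDataSource pool = new BasicDataSource();
        pool.setUrl("jdbc:teradata://teradata.example.com/DATABASE=mydb");
        pool.setUsername("user");        // placeholder credentials
        pool.setPassword("password");
        pool.setMaxTotal(8);             // max connections open simultaneously
        pool.setMaxIdle(4);              // max idle connections kept in the pool
        // Eviction criteria: check idle connections every 60 s and destroy
        // those that have been idle for more than 10 minutes.
        pool.setTimeBetweenEvictionRunsMillis(60_000L);
        pool.setMinEvictableIdleTimeMillis(600_000L);
    }
}
```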
Usage
Usage rule |
This component is used with no need to be connected to other components. You need to drop tTeradataConfiguration along with the Teradata-related subJob to be run in the same Job so that the configuration is used by the whole Job at runtime. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. For a conceptual sketch of the resulting JDBC access from Spark, see the example after this table.
This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find out and add all missing JARs easily on the Modules tab in the Integration perspective of your Studio. |
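Since it is the Spark cluster, not the Studio, that ultimately opens the connections, the following sketch shows roughly what a Spark Batch read from Teradata over JDBC amounts to. This uses the generic Spark API rather than Studio-generated code; the host, table, and credentials are placeholders.

```java
// Hedged sketch: a plain Spark JDBC read against Teradata, illustrating how
// the connection information provided by tTeradataConfiguration is consumed.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class TeradataBatchRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("teradata-batch-read")
                .getOrCreate();
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:teradata://teradata.example.com/DATABASE=mydb")
                .option("driver", "com.teradata.jdbc.TeraDriver")
                .option("dbtable", "mytable")        // placeholder table
                .option("user", "user")
                .option("password", "password")
                .load();
        df.show();
        spark.stop();
    }
}
```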
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.
tTeradataConfiguration properties for Apache Spark Streaming
These properties are used to configure tTeradataConfiguration running in the Spark Streaming Job framework.
The Spark Streaming
tTeradataConfiguration component belongs to the Storage and the Databases families.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository. Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored. |
Host |
Enter the IP address of the database server. |
Database |
Enter the name of the database to be used. |
Username and Password |
Enter the database user authentication data. To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings. |
Additional JDBC parameters |
Specify additional connection properties for the database connection you are creating. For example, you can enter CHARSET=KANJISJIS_OS to get support of Japanese characters. Note:
You can set the encoding parameters through this field. |
Advanced settings
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. |
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The following parameters are displayed once you have selected this check box. |
Usage
Usage rule |
This component is used with no need to be connected to other components. You need to drop tTeradataConfiguration along with the Teradata-related subJob to be run in the same Job so that the configuration is used by the whole Job at runtime. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. For a conceptual sketch of writing micro-batches to Teradata over JDBC, see the example after this table.
This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. You can also find out and add all missing JARs easily on the Modules tab in the Integration perspective of your Studio. |
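For the streaming case, one common pattern is to write each micro-batch to Teradata through the same JDBC settings. The sketch below uses the generic Structured Streaming API with the built-in rate test source; it is an assumption for illustration, not the code the Studio generates, and the table and credentials are placeholders.

```java
// Hedged sketch: writing micro-batches of a streaming Dataset to Teradata
// over JDBC via foreachBatch. Source, table, and credentials are placeholders.
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class TeradataStreamingWrite {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("teradata-streaming-write")
                .getOrCreate();
        // Built-in test source emitting timestamped rows; stands in for any
        // real streaming input.
        Dataset<Row> stream = spark.readStream().format("rate").load();
        VoidFunction2<Dataset<Row>, Long> writeBatch = (batch, batchId) ->
                batch.write()
                     .mode(SaveMode.Append)
                     .format("jdbc")
                     .option("url", "jdbc:teradata://teradata.example.com/DATABASE=mydb")
                     .option("driver", "com.teradata.jdbc.TeraDriver")
                     .option("dbtable", "mytable")
                     .option("user", "user")
                     .option("password", "password")
                     .save();
        stream.writeStream().foreachBatch(writeBatch).start().awaitTermination();
    }
}
```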
Related scenarios
For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.