tRedshiftConfiguration
Reuses the connection configuration to a Redshift database in the same
Job.
tRedshiftConfiguration provides the connection information to a given Redshift database for the Redshift-related components used in the same Spark Job. The Spark cluster to be used reads this configuration to eventually connect to Redshift.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Spark Batch: see tRedshiftConfiguration properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tRedshiftConfiguration properties for Apache Spark Streaming. The streaming version of this component does not support Spark 1.3. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tRedshiftConfiguration properties for Apache Spark Batch
These properties are used to configure tRedshiftConfiguration running in the Spark Batch Job framework.
The Spark Batch
tRedshiftConfiguration component belongs to the Storage and the Databases families.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Property type | Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored.
Host | Enter the endpoint of the database you need to connect to in Redshift.
Port | Enter the port number of the database you need to connect to in Redshift. The related information can be found in the Cluster Database Properties area of the Redshift Web console. For further information, see Managing clusters using the console.
Username and Password | Enter the authentication information to the Redshift database you need to connect to. To enter the password, click the […] button next to the Password field, then enter the password between double quotes in the pop-up dialog box and click OK to save the settings.
Database | Enter the name of the database you need to connect to in Redshift. The related information can be found in the Cluster Database Properties area of the Redshift Web console. For further information, see Managing clusters using the console. The S3 bucket and the Redshift database to be used must be located in the same region on Amazon.
Schema | Enter the name of the database schema to be used in Redshift. The default schema is called PUBLIC. A schema in terms of Redshift is similar to an operating system directory.
Additional JDBC Parameters | Specify additional JDBC properties for the connection you are creating. The properties are separated by ampersand (&) and each property is a key-value pair, for example, ssl=true & sslfactory=com.amazon.redshift.ssl.NonValidatingFactory, which means the connection will be created using SSL.
S3 configuration | Select the tS3Configuration component from which you want Spark to use the configuration details to connect to S3. You need to drop the tS3Configuration component to be used alongside tRedshiftConfiguration in the same Job so that it is displayed on the S3 configuration list.
S3 temp path | Enter the location in S3 in which the data to be transferred from or to Redshift is temporarily stored. This path is independent of the temporary path you need to set in the Basic settings tab of tS3Configuration. A rough sketch of how these Basic settings are typically combined in a Spark read from Redshift follows this table.
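The Basic settings above are the ingredients a Spark job needs to reach Redshift: a JDBC endpoint assembled from Host, Port, Database and the additional JDBC parameters, and an S3 location used as a staging area for the data exchanged with Redshift. Purely as an illustration, not the code Talend generates, the following sketch shows how such values are typically combined when reading from Redshift with the spark-redshift connector; the cluster endpoint, credentials, bucket and table names are placeholders.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class RedshiftReadSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("redshift-read-sketch")
                    .getOrCreate();

            // JDBC URL built from Host, Port, Database and Additional JDBC Parameters
            // (endpoint, credentials and ssl settings are placeholders).
            String jdbcUrl = "jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev"
                    + "?user=myuser&password=mypassword&ssl=true";

            Dataset<Row> orders = spark.read()
                    .format("com.databricks.spark.redshift")                // Spark <-> Redshift connector
                    .option("url", jdbcUrl)
                    .option("dbtable", "public.orders")                     // schema-qualified table to read
                    .option("tempdir", "s3a://my-bucket/redshift-temp/")    // S3 temp path used for staging
                    .option("forward_spark_s3_credentials", "true")         // reuse the S3 credentials known to Spark
                    .load();

            orders.show();
            spark.stop();
        }
    }

In a Talend Job you do not write this code yourself; tRedshiftConfiguration, together with a tS3Configuration component, supplies the equivalent information to the Redshift components at runtime.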
Usage
Usage rule | This component is used with no need to be connected to other components. You need to drop tRedshiftConfiguration alongside the other Redshift-related Subjobs to be run in the same Job so that the configuration is used by the whole Job at runtime. Since Redshift uses S3 to store temporary data, you also need to drop a tS3Configuration component in the same Job so that this S3 configuration can be used. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.
tRedshiftConfiguration properties for Apache Spark Streaming
These properties are used to configure tRedshiftConfiguration running in the Spark Streaming Job framework.
The Spark Streaming
tRedshiftConfiguration component belongs to the Storage and the Databases families.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Property type | Either Built-In or Repository. Built-In: No property data stored centrally. Repository: Select the repository file where the properties are stored.
Host | Enter the endpoint of the database you need to connect to in Redshift.
Port | Enter the port number of the database you need to connect to in Redshift. The related information can be found in the Cluster Database Properties area of the Redshift Web console. For further information, see Managing clusters using the console.
Username and Password | Enter the authentication information to the Redshift database you need to connect to. To enter the password, click the […] button next to the Password field, then enter the password between double quotes in the pop-up dialog box and click OK to save the settings.
Database | Enter the name of the database you need to connect to in Redshift. The related information can be found in the Cluster Database Properties area of the Redshift Web console. For further information, see Managing clusters using the console.
Schema | Enter the name of the database schema to be used in Redshift. The default schema is called PUBLIC. A schema in terms of Redshift is similar to an operating system directory.
Additional JDBC Parameters | Specify additional JDBC properties for the connection you are creating. The properties are separated by ampersand (&) and each property is a key-value pair, for example, ssl=true & sslfactory=com.amazon.redshift.ssl.NonValidatingFactory, which means the connection will be created using SSL.
S3 configuration | Select the tS3Configuration component from which you want Spark to use the configuration details to connect to S3. You need to drop the tS3Configuration component to be used alongside tRedshiftConfiguration in the same Job so that it is displayed on the S3 configuration list.
S3 temp path | Enter the location in S3 in which the data to be transferred from or to Redshift is temporarily stored. This path is independent of the temporary path you need to set in the Basic settings tab of tS3Configuration. A rough sketch of how a micro-batch is written to Redshift through this staging area follows this table.
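In a Spark Streaming Job the same settings serve the write direction as well: each micro-batch headed for Redshift is first staged under the S3 temp path and then loaded into the target table. As a rough, hypothetical sketch only (table, bucket and connection details are placeholders, and this is not the code Talend generates), a per-batch write with the spark-redshift connector could look like this:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;

    public class RedshiftWriteSketch {
        // Writes one micro-batch to Redshift through the S3 staging area.
        public static void writeBatchToRedshift(Dataset<Row> microBatch) {
            microBatch.write()
                    .format("com.databricks.spark.redshift")
                    .option("url", "jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev"
                            + "?user=myuser&password=mypassword")
                    .option("dbtable", "public.events")                   // target table (placeholder)
                    .option("tempdir", "s3a://my-bucket/redshift-temp/")  // S3 temp path (placeholder bucket)
                    .option("forward_spark_s3_credentials", "true")
                    .mode(SaveMode.Append)                                // append each micro-batch
                    .save();
        }
    }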
Advanced settings
Connection pool | In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the connection pool parameters are good enough for most use cases.
Evict connections | Select this check box to define criteria to destroy (evict) connections in the connection pool. The related eviction fields are displayed once you have selected the check box. An illustrative sketch of what such pool and eviction parameters control follows this table.
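The pool and eviction fields above echo parameters found in common Java connection pools. Purely to illustrate what such parameters govern, and not as a description of Talend's internal implementation, here is how analogous settings look on an Apache Commons DBCP2 BasicDataSource; the URL, credentials and numeric values are arbitrary.

    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolSettingsSketch {
        public static BasicDataSource buildPool() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev"); // placeholder
            ds.setUsername("myuser");
            ds.setPassword("mypassword");

            ds.setMaxTotal(8);                           // upper bound on simultaneously open connections
            ds.setMaxWaitMillis(30000);                  // how long a caller waits for a free connection
            ds.setTimeBetweenEvictionRunsMillis(60000);  // how often idle connections are checked for eviction
            ds.setMinEvictableIdleTimeMillis(300000);    // idle time after which a connection may be destroyed
            return ds;
        }
    }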
Usage
Usage rule | This component is used with no need to be connected to other components. You need to drop tRedshiftConfiguration alongside the other Redshift-related Subjobs to be run in the same Job so that the configuration is used by the whole Job at runtime. Since Redshift uses S3 to store temporary data, you also need to drop a tS3Configuration component in the same Job so that this S3 configuration can be used. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.