tRedshiftOutputBulk
Prepares a delimited/CSV file that can be used by tRedshiftBulkExec to feed Amazon Redshift.
The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process to load data into Amazon Redshift from a delimited/CSV file on Amazon S3. In the first step, a delimited/CSV file is generated. In the second step, this file is used in the INSERT statement that feeds Amazon Redshift. These two steps are fused together in the tRedshiftOutputBulkExec component. The advantage of using two separate steps is that the data can be transformed before it is loaded into Amazon Redshift.
This component receives data from the preceding component, generates a single delimited/CSV file, and then uploads the file to Amazon S3.
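To make the two-step flow concrete, here is a minimal sketch of the equivalent manual process, assuming the AWS SDK for Java (v1) and the Redshift JDBC driver are on the classpath. All names (bucket, key, table, connection URL, credentials) are hypothetical placeholders, and Redshift's COPY bulk-load statement stands in for the load step.

    import java.io.File;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class TwoStepLoad {
        public static void main(String[] args) throws Exception {
            // Step 1: upload a locally generated delimited/CSV file to Amazon S3
            // (what tRedshiftOutputBulk does after writing the file).
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withCredentials(new AWSStaticCredentialsProvider(
                            new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                    .build();
            s3.putObject("my-bucket", "staging/person.csv", new File("/tmp/person.csv"));

            // Step 2: load the staged file into Redshift (what tRedshiftBulkExec does).
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:redshift://example.cluster.us-east-1.redshift.amazonaws.com:5439/dev",
                    "dbuser", "dbpassword");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("COPY person FROM 's3://my-bucket/staging/person.csv' "
                        + "CREDENTIALS 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' "
                        + "DELIMITER ';'");
            }
        }
    }

Splitting the flow like this is what makes it possible to transform or inspect the staged file between the two steps.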
tRedshiftOutputBulk Standard properties
These properties are used to configure tRedshiftOutputBulk running in the Standard
Job framework.
The Standard tRedshiftOutputBulk component belongs to the Cloud and the Databases families.
The component in this framework is available in all Talend products.
This component is a dynamic database connector. The properties related to database settings vary depending on your database type selection. For more information about dynamic database connectors, see Dynamic database components.
Basic settings
Database
Select a type of database from the list and click Apply.
Data file path at local
Specify the local path to the file to be generated. Note that the file is generated on the same machine where the Studio is installed or where the Job using this component is deployed.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Click Edit schema to make changes to the schema.
Compress the data file
Select this check box and select a compression type from the list to compress the data file. This check box disappears when the Append the local file check box is selected.
Encrypt
Select this check box to generate and upload the data file to Amazon S3 using client-side encryption. By default, this check box is cleared and the data file is uploaded using server-side encryption.
Note: This option is available when Use an existing S3 connection is not selected.
For more information about client-side and server-side encryption, see Protecting Data Using Encryption.
Access Key
Specify the Access Key ID that uniquely identifies an AWS account.
Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.
Secret Key
Specify the Secret Access Key, constituting the security credentials in combination with the Access Key. To enter the secret key, click the [...] button next to the Secret Key field, enter the key between double quotes in the dialog box, and click OK.
Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.
Inherit credentials from AWS role
Select this check box to obtain AWS security credentials from the Amazon EC2 instance metadata. To use this option, your Job must be running on Amazon EC2.
Note: This option is available when Use an existing S3 connection is not selected.
Assume role
If you temporarily need some access permissions associated with an AWS IAM role that is not granted to your user account, select this check box to assume that role. Ensure that access to this role has been granted to your user account by the trust policy associated with the role.
Note: This option is available when Use an existing S3 connection is not selected.
For an example of an IAM role and its related policy types, see Create and Manage AWS IAM Roles in the AWS documentation.
Region
Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example).
Note: This option is available when Use an existing S3 connection is not selected.
STS Endpoint
Select this check box and, in the field displayed, specify the AWS Security Token Service (STS) endpoint from which the session credentials are retrieved.
This check box is available only when the Assume role check box is selected.
Bucket
Type in the name of the Amazon S3 bucket, namely the top-level folder, to which the file is uploaded. The bucket and the Redshift database to be used must be in the same region on Amazon; this avoids the S3ServiceException errors known to Amazon.
Key
Type in an object key to assign to the file uploaded to Amazon S3. (The bucket, key, and credential settings are illustrated in the sketch after this table.)
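As a minimal sketch, the S3-related settings above map onto an AWS SDK for Java (v1) client roughly as follows; the role ARN, session name, region, bucket, and key are hypothetical placeholders, and the client parameters correspond to the Config client table described under Advanced settings.

    import java.io.File;
    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class S3UploadSketch {
        public static void main(String[] args) {
            // Assume role: temporary credentials for an IAM role whose trust
            // policy grants access to your user account (hypothetical ARN).
            STSAssumeRoleSessionCredentialsProvider provider =
                    new STSAssumeRoleSessionCredentialsProvider.Builder(
                            "arn:aws:iam::123456789012:role/redshift-loader", "talend-session")
                            .build();

            // Config client: tuning parameters for the S3 client.
            ClientConfiguration config = new ClientConfiguration()
                    .withConnectionTimeout(50_000)   // connection timeout, in milliseconds
                    .withMaxErrorRetry(3);           // maximum retry count on errors

            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")          // Region
                    .withCredentials(provider)
                    .withClientConfiguration(config)
                    .build();

            // Bucket and Key: where the generated file lands on Amazon S3.
            s3.putObject("my-bucket", "staging/person.csv", new File("/tmp/person.csv"));
        }
    }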
Advanced settings
Field Separator
Enter the character used to separate fields.
Text enclosure
Select the character used, in pairs, to enclose fields.
Delete local file after putting it to s3
Select this check box to delete the local file after it is uploaded to Amazon S3.
Create directory if not exists
Select this check box to create the directory specified in the Data file path at local field if it does not exist.
Encoding
Select an encoding type for the data in the file to be generated (see the file-writing sketch after this table).
Config client
Select this check box to configure client parameters for Amazon S3 in the table that appears.
For information about S3 client parameters, go to Client Configuration.
tStatCatcher Statistics
Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
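Below is a minimal sketch of how the Field Separator, Text enclosure, and Encoding settings shape the generated file, using plain Java I/O; the path, delimiter, enclosure, encoding, and sample rows are hypothetical.

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.nio.charset.Charset;

    public class DelimitedFileSketch {
        public static void main(String[] args) throws Exception {
            String separator = ";";    // Field Separator
            String enclosure = "\"";   // Text enclosure
            Charset encoding = Charset.forName("ISO-8859-15"); // Encoding

            try (BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                    new FileOutputStream("/tmp/person.csv"), encoding))) {
                String[][] rows = {{"1", "Andrew"}, {"2", "John"}};
                for (String[] row : rows) {
                    StringBuilder line = new StringBuilder();
                    for (int i = 0; i < row.length; i++) {
                        if (i > 0) line.append(separator);
                        // Wrap each field in the enclosure character pair.
                        line.append(enclosure).append(row[i]).append(enclosure);
                    }
                    out.write(line.toString());
                    out.newLine();
                }
            }
        }
    }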
Global Variables
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
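As a usage sketch, these variables can be read from the globalMap in a downstream component such as tJava once this component has finished; the component label tRedshiftOutputBulk_1 is a hypothetical example.

    // Inside a tJava component connected after tRedshiftOutputBulk_1:
    Integer nbLine = (Integer) globalMap.get("tRedshiftOutputBulk_1_NB_LINE");
    String error = (String) globalMap.get("tRedshiftOutputBulk_1_ERROR_MESSAGE");
    System.out.println("Rows written: " + nbLine
            + (error != null ? ", error: " + error : ""));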
Usage
Usage rule
This component is more commonly used with the tRedshiftBulkExec component. Used together, they offer gained performance while feeding Amazon Redshift.
Related scenario
For a related scenario, see Loading/unloading data to/from Amazon S3.