The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process
to load data to Amazon Redshift from a delimited/CSV file on Amazon S3. In the first
step, a delimited/CSV file is generated and uploaded to Amazon S3. In the second step, this file is
used by the COPY command that feeds Amazon Redshift. These two steps are fused together in the
tRedshiftOutputBulkExec component. The advantage of
using two separate steps is that the data can be transformed before it is loaded to
Amazon Redshift.
Component family
Databases/Amazon Redshift

Function
This component loads data into a table on Amazon Redshift from a delimited/CSV, JSON, or
fixed-width file on Amazon S3.

Purpose
This component allows you to load data to Amazon Redshift from a delimited/CSV, JSON, or
fixed-width file on Amazon S3.
Basic settings

Property Type
Either Built-In or Repository. Since version 5.6, both the Built-In mode and the Repository
mode are available in any of the Talend solutions.

Built-In: No property data stored centrally.

Repository: Select the repository file where the properties are stored.
Database settings

Use an existing connection
Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Host
Type in the IP address or hostname of the database server.

Port
Type in the listening port number of the database server.

Database
Type in the name of the database.

Schema
Type in the name of the schema.

Username and Password
Type in the database user authentication data. To enter the password, click the […] button
next to the password field, type the password between double quotes in the pop-up dialog
box, and click OK to save it.

Table Name
Specify the name of the table to be written. Note that only one table can be written at a time.
Action on table
On the table defined, you can perform one of the available operations, for example
Drop table if exists and create.

Schema and Edit schema
A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Built-In: You create and store the schema locally for this component only.

Repository: You have already created the schema and stored it in the Repository. You can
reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If the current schema is of the Repository
type, three options are available: View schema, Change to built-in property, and Update
repository connection.
S3 Setting

Access Key
Specify the Access Key ID that uniquely identifies an AWS Account.

Secret Key
Specify the Secret Access Key, constituting the security credentials in combination with the
Access Key. To enter the secret key, click the […] button next to the Secret Key field, type
the key between double quotes in the pop-up dialog box, and click OK to save it.

Bucket
Type in the name of the Amazon S3 bucket in which the file is located.

Key
Type in the object key assigned to the file on Amazon S3 to be loaded.
Advanced settings

File type
Select the type of the file on Amazon S3 from the list: Delimited file or CSV, JSON, or
Fixed width.

Fields terminated by
Enter the character used to separate fields. This field appears only when Delimited file or
CSV is selected from the File type list.

Enclosed by
Select the character in which the fields are enclosed. This list appears only when Delimited
file or CSV is selected from the File type list.

JSON mapping
Specify how to map the data elements in the JSON source file on Amazon S3 to the columns of
the target table on Amazon Redshift. This field appears only when JSON is selected from the
File type list.

Fixed width mapping
Enter a string that specifies a user-defined column label and column width for each column,
in the format "colLabel1:colWidth1,colLabel2:colWidth2,…". Note that the column label in the
string has no relation to the table column name and it can be either a text string or an
integer. This field appears only when Fixed width is selected from the File type list.

Compressed by
Select this check box and from the list displayed select the compression type of the file.

Decrypt
Select this check box if the file is encrypted using Amazon S3 client-side encryption.

Encryption key
Specify the encryption key that was used to encrypt the file. This field appears only when
the Decrypt check box is selected.

Encoding
Select the encoding type of the data to be loaded from the list.

Date format
Select an item from the list to specify the date format used in the source data.

Time format
Select an item from the list to specify the time format used in the source data.

Settings
Click the [+] button below the table to add rows, and in each row specify a parameter of the
Amazon Redshift COPY command and its value.
For more information about the parameters, see
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
tStatCatcher Statistics
Select this check box to gather the Job processing metadata at the Job level as well as at
each component level.

Dynamic settings
Click the [+] button to add a row in the table and fill the Code field with a context
variable to choose your database connection dynamically among the connections planned in
your Job. The Dynamic settings table is available only when the Use an existing connection
check box is selected in the Basic settings view. For more information on Dynamic settings
and context variables, see Talend Studio User Guide.

Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an
After variable and it returns a string. A Flow variable functions during the execution of a
component while an After variable functions after the execution of the component. To fill up
a field or expression with a variable, press Ctrl + Space to access the variable list and
choose the variable to use from it. For further information about variables, see Talend
Studio User Guide.

Usage
The tRedshiftBulkExec component supports loading data to Amazon Redshift from a
delimited/CSV, JSON, or fixed-width file on Amazon S3.

Log4j
The activity of this component can be logged using the log4j feature. For more information
on this feature, see Talend Studio User Guide. For more information on the log4j logging
levels, see the Apache documentation at
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
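As an illustration of the Settings table described above, each row added there becomes one
more clause of the COPY command that the component runs. The short Java sketch below is an
assumption-based example rather than the component's exact generated code: the parameters
DATEFORMAT and IGNOREHEADER and their values are hypothetical choices, and the table, bucket,
key, and credentials are placeholders.

    // Hypothetical illustration: each Settings row maps to one extra COPY clause.
    String copy =
        "COPY person FROM 's3://<bucket>/person_load' "                              // table and S3 object
      + "CREDENTIALS 'aws_access_key_id=<access key>;aws_secret_access_key=<secret key>' "
      + "DATEFORMAT 'YYYY-MM-DD' "                                                   // Settings row 1
      + "IGNOREHEADER 1";                                                            // Settings row 2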
This scenario describes a Job that generates a delimited file and uploads the file to S3,
loads data from the file on S3 to Redshift and displays the data on the console, then
unloads the data from Redshift to files on S3 per slice of the Redshift cluster, and
finally lists and gets the unloaded files on S3.
Prerequisites:
The following context variables have been created and saved in the Repository tree view. For more information about context
variables, see Talend Studio User Guide.
- redshift_host: the connection endpoint URL of the Redshift cluster.
- redshift_port: the listening port number of the database server.
- redshift_database: the name of the database.
- redshift_username: the username for the database authentication.
- redshift_password: the password for the database authentication.
- redshift_schema: the name of the schema.
- s3_accesskey: the access key for accessing Amazon S3.
- s3_secretkey: the secret key for accessing Amazon S3.
- s3_bucket: the name of the Amazon S3 bucket.
Note that all context values above are for demonstration purposes only.
- Create a new Job and apply all context variables listed above to the new Job.
- Add the following components by typing their names in the design workspace or dropping them
  from the Palette: a tRowGenerator component, a tRedshiftOutputBulk component, a
  tRedshiftBulkExec component, a tRedshiftInput component, a tLogRow component, a
  tRedshiftUnload component, a tS3List component, and a tS3Get component.
- Link tRowGenerator to tRedshiftOutputBulk using a Row > Main connection.
- Do the same to link tRedshiftInput to tLogRow.
- Link tS3List to tS3Get using a Row > Iterate connection.
- Link tRowGenerator to tRedshiftBulkExec using a Trigger > On Subjob Ok connection.
- Do the same to link tRedshiftBulkExec to tRedshiftInput, tRedshiftInput to tRedshiftUnload,
  and tRedshiftUnload to tS3List.
Preparing a file and uploading the file to S3
- Double-click tRowGenerator to open its RowGenerator Editor.
- Click the [+] button to add two columns: ID of Integer type and Name of String type.
- Click the cell in the Functions column and select a function from the list for each column.
  In this example, select Numeric.sequence to generate sequence numbers for the ID column and
  select TalendDataGenerator.getFirstName to generate random first names for the Name column.
- In the Number of Rows for RowGenerator field, enter the number of data rows to generate. In
  this example, it is 20.
- Click OK to close the schema editor and accept the propagation prompted by the pop-up
  dialog box.
- Double-click tRedshiftOutputBulk to open its Basic settings view on the Component tab.
- In the Data file path at local field, specify the local path for the file to be generated.
  In this example, it is E:/Redshift/redshift_bulk.txt.
- In the Access Key field, press Ctrl + Space and from the list select context.s3_accesskey
  to fill in this field. Do the same to fill the Secret Key field with context.s3_secretkey
  and the Bucket field with context.s3_bucket.
- In the Key field, enter a new name for the file to be generated after being uploaded to
  Amazon S3. In this example, it is person_load.
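The preparation steps above amount to generating a local delimited file and uploading it to
Amazon S3. The following standalone Java sketch shows roughly the same two actions, assuming
the AWS SDK for Java 1.x is on the classpath; the field separator, generated names, bucket,
and credentials are assumptions or placeholders, not values produced by the components.

    import java.io.File;
    import java.io.PrintWriter;

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class UploadBulkFile {
        public static void main(String[] args) throws Exception {
            // Generate a local delimited file (what tRowGenerator + tRedshiftOutputBulk produce).
            File bulkFile = new File("E:/Redshift/redshift_bulk.txt");
            try (PrintWriter writer = new PrintWriter(bulkFile, "UTF-8")) {
                for (int id = 1; id <= 20; id++) {
                    writer.println(id + ";Name_" + id); // ";" as field separator is an assumption
                }
            }
            // Upload the file to Amazon S3 under the key used in the Job.
            AmazonS3 s3 = new AmazonS3Client(
                    new BasicAWSCredentials("<access key>", "<secret key>"));
            s3.putObject("<bucket>", "person_load", bulkFile);
        }
    }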
Loading data from the file on S3 to Redshift
- Double-click tRedshiftBulkExec to open its Basic settings view on the Component tab.
- In the Host field, press Ctrl + Space and from the list select context.redshift_host to
  fill in this field. Do the same to fill:
  - the Port field with context.redshift_port,
  - the Database field with context.redshift_database,
  - the Schema field with context.redshift_schema,
  - the Username field with context.redshift_username,
  - the Password field with context.redshift_password,
  - the Access Key field with context.s3_accesskey,
  - the Secret Key field with context.s3_secretkey, and
  - the Bucket field with context.s3_bucket.
- In the Table Name field, enter the name of the table to be written. In this example, it is
  person.
- From the Action on table list, select Drop table if exists and create.
- In the Key field, enter the name of the file on Amazon S3 to be loaded. In this example, it
  is person_load.
- Click the […] button next to Edit schema and in the pop-up window define the schema by
  adding two columns: ID of Integer type and Name of String type.
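Under the hood, this loading step comes down to a Redshift COPY command that reads the S3
object into the person table. The sketch below is a minimal JDBC illustration of that
command, assuming a Redshift JDBC driver is on the classpath; the endpoint, database, schema,
credentials, and bucket are placeholders, and the DELIMITER clause is an assumption.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CopyFromS3 {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:redshift://<endpoint>:5439/<database>";
            try (Connection conn = DriverManager.getConnection(url, "<username>", "<password>");
                 Statement stmt = conn.createStatement()) {
                // Load the S3 object person_load into the person table; extra COPY
                // parameters (DATEFORMAT, GZIP, ...) would be appended here as well.
                stmt.execute(
                    "COPY <schema>.person FROM 's3://<bucket>/person_load' "
                  + "CREDENTIALS 'aws_access_key_id=<access key>;aws_secret_access_key=<secret key>' "
                  + "DELIMITER ';'");
            }
        }
    }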
Retrieving data from the table on Redshift
- Double-click tRedshiftInput to open its Basic settings view on the Component tab.
- Fill the Host, Port, Database, Schema, Username, and Password fields with their
  corresponding context variables.
- In the Table Name field, enter the name of the table to be read. In this example, it is
  person.
- Click the […] button next to Edit schema and in the pop-up window define the schema by
  adding two columns: ID of Integer type and Name of String type.
- In the Query field, enter the following SQL statement based on which the data are
  retrieved.
  "SELECT * FROM " + context.redshift_schema + ".person ORDER BY \"ID\""
- Double-click tLogRow to open its Basic settings view on the Component tab.
- In the Mode area, select Table (print values in cells of a table) for a better display of
  the result.
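At run time, the query expression above is simply Java string concatenation. Assuming the
redshift_schema context variable holds the hypothetical value public, it resolves as follows:

    String redshiftSchema = "public"; // stands in for context.redshift_schema (hypothetical value)
    String query = "SELECT * FROM " + redshiftSchema + ".person ORDER BY \"ID\"";
    // query now equals: SELECT * FROM public.person ORDER BY "ID"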
Unloading data from Redshift to file(s) on S3
- Double-click tRedshiftUnload to open its Basic settings view on the Component tab.
- Fill the Host, Port, Database, Schema, Username, and Password fields with their
  corresponding context variables. Fill the Access Key, Secret Key, and Bucket fields also
  with their corresponding context variables.
- In the Table Name field, enter the name of the table from which the data will be read. In
  this example, it is person.
- Click the […] button next to Edit schema and in the pop-up window define the schema by
  adding two columns: ID of Integer type and Name of String type.
- In the Query field, enter the following SQL statement based on which the result will be
  unloaded.
  "SELECT * FROM person"
- In the Key prefix field, enter the name prefix for the unload files. In this example, it is
  person_unload_.
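For reference, the settings above correspond roughly to the Redshift UNLOAD statement
sketched below, which could be run over the same kind of JDBC connection as the COPY sketch
earlier; the bucket and credentials are placeholders. Redshift writes one output file per
slice, which is why the person_unload_ prefix later yields files such as
person_unload_0000_part_00.

    String unload =
        "UNLOAD ('SELECT * FROM person') "
      + "TO 's3://<bucket>/person_unload_' "
      + "CREDENTIALS 'aws_access_key_id=<access key>;aws_secret_access_key=<secret key>'";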
Retrieving files unloaded to Amazon S3
- Double-click tS3List to open its Basic settings view on the Component tab.
- Fill the Access Key and Secret Key fields with their corresponding context variables.
- From the Region list, select the AWS region where the unload files are created. In this
  example, it is US Standard.
- Clear the List all buckets objects check box, and click the [+] button under the table
  displayed to add one row. Fill in the Bucket name column with the name of the bucket in
  which the unload files are created. In this example, it is the context variable
  context.s3_bucket. Fill in the Key prefix column with the name prefix for the unload files.
  In this example, it is person_unload_.
- Double-click tS3Get to open its Basic settings view on the Component tab.
- Fill the Access Key field and Secret Key field with their corresponding context variables.
- From the Region list, select the AWS region where the unload files are created. In this
  example, it is US Standard.
- In the Bucket field, enter the name of the bucket in which the unload files are created. In
  this example, it is the context variable context.s3_bucket. In the Key field, enter the
  name of the unload files by pressing Ctrl + Space and from the list selecting the global
  variable ((String)globalMap.get("tS3List_1_CURRENT_KEY")).
- In the File field, enter the local path where the unload files are saved. In this example,
  it is "E:/Redshift/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY")).
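The listing and retrieval performed by tS3List and tS3Get correspond roughly to the
standalone sketch below, assuming the AWS SDK for Java 1.x; the bucket name, access keys, and
local folder are placeholders or example values.

    import java.io.File;

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    public class GetUnloadedFiles {
        public static void main(String[] args) {
            AmazonS3 s3 = new AmazonS3Client(
                    new BasicAWSCredentials("<access key>", "<secret key>"));
            // List every object whose key starts with the unload prefix ...
            for (S3ObjectSummary summary
                    : s3.listObjects("<bucket>", "person_unload_").getObjectSummaries()) {
                String key = summary.getKey(); // e.g. person_unload_0000_part_00
                // ... and download each one to the local folder used in the Job.
                s3.getObject(new GetObjectRequest("<bucket>", key),
                             new File("E:/Redshift/" + key));
            }
        }
    }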
- Press Ctrl + S to save the Job.
- Execute the Job by pressing F6 or clicking Run on the Run tab.

When the Job is executed, the generated data is written into the local file
redshift_bulk.txt, the file is uploaded to S3 with the new name person_load, and then the
data is loaded from the file on S3 to the table person in Redshift and displayed on the
console. After that, the data is unloaded from the table person in Redshift to two files,
person_unload_0000_part_00 and person_unload_0001_part_00, on S3 per slice of the Redshift
cluster, and finally the unloaded files on S3 are listed and retrieved in the local folder.