tRedshiftBulkExec – Docs for ESB 7.x

tRedshiftBulkExec

Loads data into Amazon Redshift from Amazon S3, Amazon EMR cluster, Amazon
DynamoDB, or remote hosts.

The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process to load data into Amazon Redshift from a delimited/CSV file on Amazon S3. In the first step, a delimited/CSV file is generated. In the second step, this file is used in the COPY statement that feeds Amazon Redshift. These two steps are fused together in the tRedshiftOutputBulkExec component. The advantage of using two separate steps is that the data can be transformed before it is loaded into Amazon Redshift.

tRedshiftBulkExec loads data into an Amazon Redshift
table from an Amazon DynamoDB table or from data files located in an
Amazon S3 bucket, an Amazon EMR cluster, or a remote host that is
accessed using an SSH connection.

tRedshiftBulkExec Standard properties

These properties are used to configure tRedshiftBulkExec running in the Standard
Job framework.

The Standard tRedshiftBulkExec component belongs to the Cloud and the Databases families.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property Type

Either Built-In or Repository.

  • Built-In: No property data stored centrally.

  • Repository: Select the repository file where the
    properties are stored.

Use an existing
connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Host

Type in the IP address or hostname of the
database server.

Port

Type in the listening port number of the
database server.

Database

Type in the name of the database.

Schema

Type in the name of the schema.

Username and Password

Type in the database user authentication
data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Additional JDBC Parameters

Specify additional JDBC properties for the connection you are creating. The properties are separated by an ampersand (&) and each property is a key-value pair, for example, ssl=true & sslfactory=com.amazon.redshift.ssl.NonValidatingFactory, which means the connection will be created using SSL.
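
For illustration, a full Redshift JDBC URL carrying such additional parameters could look like the following; the host, port, and database names are placeholders:

    jdbc:redshift://redshift-cluster.example.us-east-1.redshift.amazonaws.com:5439/mydb?ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory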

Table Name

Specify the name of the table to be written.
Note that only one table can be written at a time.

Action on table

On the table defined, you can perform one of the following operations:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table does not exist and gets created.

  • Create table if not exists: The table is created if it does not exist.

  • Drop table if exists and create: The table is removed if it already exists and created again.

  • Clear table: The table content is deleted. You can roll back this operation.
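
As a rough illustration, Drop table if exists and create corresponds to a SQL sequence similar to the following; the table name and column definitions below are placeholders derived from the component schema:

    DROP TABLE IF EXISTS person;
    CREATE TABLE person (
      "ID" INTEGER,
      "Name" VARCHAR(255)
    );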

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Data source type

Select the location of the source data to be
loaded.

  • S3: load data
    from a file in an Amazon S3 bucket.

  • EMR: load data
    from an Amazon EMR cluster.

  • DynamoDB: load
    data from an existing DynamoDB table.

  • Remote host: load
    data from one or more remote hosts, such as Amazon Elastic Compute Cloud
    (Amazon EC2) instances or other computers.

For more information, see Data Sources.

Use an existing S3 connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

This option is available when S3 is selected from the
Data source type drop-down list.

Access Key/S3 Access Key

Specify the Access Key ID that uniquely identifies an AWS account. For how to get your Access Key and Secret Access Key, visit Getting Your AWS Access Keys.

Note:

  • This option is not available if Use an existing S3 connection is selected.
  • This option appears as S3 Access Key if you select Remote host from the Data source type drop-down list.

Secret Key/S3 Secret Key

Specify the Secret Access Key, constituting the security credentials in combination with the Access Key.

To enter the secret key, click the […] button next to the secret key field, and then in the pop-up dialog box enter the key between double quotes and click OK to save the settings.

Note:

  • This option is not available if Use an existing S3 connection is selected.
  • This option appears as S3 Secret Key if you select Remote host from the Data source type drop-down list.

Assume Role

Select this check box and specify the values for the following parameters used to create a new assumed role session.

  • IAM Role ARNs chains: a series of chained roles, which may belong to other accounts, that your cluster can assume to access resources. You can chain a maximum of 10 roles.

  • Role ARN: the Amazon Resource Name (ARN) of the role to assume.

This option is not available if Use an existing S3 connection is selected.

For more information on IAM Role ARNs chains, see Authorizing Redshift service.
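
For reference, a Role ARN follows the standard Amazon Resource Name format; the account ID and role name below are placeholders:

    arn:aws:iam::123456789012:role/MyRedshiftAccessRole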

Bucket/S3 bucket

Specify the name of the Amazon S3 bucket in which the file is located.

This field is available only when S3 or Remote host is selected from the Data source type drop-down list.

Note: This field appears as Bucket if you select S3 from the Data source type drop-down list; it appears as S3 bucket if you select Remote host from the drop-down list.

The bucket and the Redshift database to be used must be in the same Amazon region. Keeping them in the same region avoids the S3ServiceException errors known to Amazon. For further information about these errors, see S3ServiceException Errors.

Key

Specify the path to the file that contains the
data to be loaded.

This field is available only when S3 is selected from the Data source type drop-down list.

Cluster id

Specify the ID of the cluster that stores the
data to be loaded.

This field is available only when EMR is selected from the Data source type drop-down list.

HDFS path

Specify the HDFS file path that references the
data file.

This field is available only when EMR is selected from the Data source type drop-down list.

Table

Specify the name of the DynamoDB table that
contains the data to be loaded.

This field is available only when DynamoDB is selected from the Data source type drop-down list.

Read ratio

Specify the percentage of the DynamoDB table’s
provisioned throughput to use for the data load.

This field is available only when DynamoDB is selected from the Data source type drop-down list.

SSH manifest file

Specify the object key for the SSH manifest
file that provides the information used to open SSH connections and execute
remote commands.

This field is available only when Remote host is selected from the Data source type drop-down list.
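
The SSH manifest itself is a small JSON object stored on Amazon S3. A minimal sketch, assuming a single EC2 host and following the format documented for the Redshift COPY from remote hosts feature, might look like this; the endpoint, command, public key, and username are placeholders:

    {
      "entries": [
        {
          "endpoint": "ec2-12-34-56-78.compute-1.amazonaws.com",
          "command": "cat /data/person.txt",
          "mandatory": true,
          "publickey": "<public key of the remote host>",
          "username": "ec2-user"
        }
      ]
    }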

Advanced settings

File type

Select the type of the file that contains the
data to be loaded.

  • Delimited file or CSV: a delimited/CSV file.

  • JSON: a JSON file.

  • AVRO: an Avro file.

  • Fixed width: a fixed-width file.

This list is available when S3, EMR, or Remote host is selected from the Data source type drop-down list.

Fields terminated by

Enter the character used to separate
fields.

This field is available only when Delimited file or CSV is selected from the File type list.

Enclosed by

Select the character in which the fields are
enclosed.

This list is available only when Delimited file or CSV is selected from the File type list.

JSON mapping

Specify how to map the data elements in the
source file to the columns in the target table on Amazon Redshift. The valid
values are:

  • auto: Map the
    data by matching object keys or names in the source name/value pairs for
    a JSON file or field names in the Avro schema for an Avro file to the
    names of columns in the target table. The argument is case-sensitive and
    must be enclosed in double quotation marks.

  • s3://jsonpaths_file: Map the data using the named JSONPaths file. The parameter must be an Amazon S3 object key that is enclosed in double quotation marks and explicitly references a single file, for example, s3://mybucket/jsonpaths.txt. For more information, see Data Format Parameters.

This field is available only when JSON or AVRO is selected from the
File type list.
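
As an illustration of the s3://jsonpaths_file option, a minimal JSONPaths file for a two-column table such as the one used in the scenario below (ID and Name) could look like this; the element names in the source JSON are assumptions:

    {
      "jsonpaths": [
        "$.id",
        "$.name"
      ]
    }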

Fixed width mapping

Enter a string that specifies a user-defined
column label and column width between double quotation marks. The format of the
string is:

ColumnLabel1:ColumnWidth1,ColumnLabel2:ColumnWidth2,....

Note that the column label in the string has
no relation to the table column name and it can be either a text string or an
integer. The order of the label/width pairs must match the order of the table
columns exactly.

This field is available only when Fixed width is selected from the File type list.
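
For example, a mapping for a table with an integer ID column followed by a Name column could be written as follows; the widths are placeholders:

    "ID:6,Name:20"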

Compressed by

Select this check box and from the list
displayed select the compression type of the source file.

This check box is available when S3, EMR, or Remote host is selected from the Data source type drop-down list.

Decrypt

Select this check box if the file is encrypted using Amazon S3 client-side encryption. In the Encryption key field displayed, specify the encryption key used to encrypt the file. Note that only a base64 encoded AES 128-bit or AES 256-bit envelope key is supported. For more information, see Loading Encrypted Data Files from Amazon S3.

This check box is available when S3 is selected from the Data source type drop-down list and Use an existing S3 connection is not selected in the Basic settings view.

Encoding

Select the encoding type of the data to be
loaded from the list.

This list is available when S3, EMR, or Remote host is selected from the Data source type drop-down list.

Date format

Select one of the following items from the
list to specify the date format in the source data:

  • NONE: No date
    format is specified.

  • PATTERN: Select
    this item and specify the date format in the field displayed. The default
    date format is YYYY-MM-DD.

  • AUTO: Select this item if you want Amazon Redshift to recognize and convert the date format automatically.

Time format

Select one of the following items from the
list to specify the time format in the source data:

  • NONE: No time
    format is specified.

  • PATTERN: Select
    this item and specify the time format in the field displayed. The default
    time format is YYYY-MM-DD HH:MI:SS.

  • AUTO: Select this item if you want Amazon Redshift to recognize and convert the time format automatically.

  • EPOCHSECS:
    Select this item if the source data is represented as epoch time, the
    number of seconds since Jan 1, 1970 00:00:00 UTC.

  • EPOCHMILLISECS:
    Select this item if the source data is represented as epoch time, the
    number of milliseconds since Jan 1, 1970 00:00:00 UTC.

Settings

Click the [+] button below the table to specify more
parameters for loading the data.

  • Parameter: Click
    the cell and select a parameter from the drop-down list.

  • Value: Set the
    value for the corresponding parameter. Note that you cannot set the value
    for a parameter (such as IGNOREBLANKLINES) that does not need a value.

For more information about the parameters,
see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
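
To make the relation between these parameters and what Amazon Redshift actually runs concrete, the load performed by this component boils down to a COPY statement. A rough sketch, with placeholder table, bucket, key, delimiter, and credentials, and with IGNOREBLANKLINES added through the Settings table, might look like this; the statement the component actually generates may differ:

    COPY person
    FROM 's3://mybucket/person_load'
    CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
    DELIMITER ';'
    IGNOREBLANKLINES;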

JDBC url

Select a way to access an Amazon Redshift database from the JDBC url drop-down list.

  • Standard: Use the standard way to access the Redshift database.
  • SSO: Use IAM single sign-on (SSO) authentication to access the Redshift database. Before selecting this option, ensure that the IAM role added to your Redshift cluster has appropriate access rights and permissions to this cluster. You can ask the administrator of your AWS services for more details.

    This option is available only when the Use an existing connection check box is not selected in the Basic settings view.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level
as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
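
For example, to reuse the error message of this component in a later component, assuming the component instance is named tRedshiftBulkExec_1 (adjust the name to match your Job), you could select or type an expression such as:

    ((String)globalMap.get("tRedshiftBulkExec_1_ERROR_MESSAGE"))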

For further information about variables, see Talend Studio User Guide.

Note: This component does not support the Row > Reject link.

Usage

Usage rule

The tRedshiftBulkExec component supports loading data into Amazon Redshift from a delimited/CSV, JSON, or fixed-width file on Amazon S3, whereas the tRedshiftOutputBulk component now only supports generating and uploading a delimited/CSV file to Amazon S3. When you need to load data from a JSON or fixed-width file, use the tFileOutputJSON or tFileOutputPositional component together with the tS3Put component, instead of the tRedshiftOutputBulk component, to generate and upload the file to Amazon S3.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independent of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.

Loading/unloading data to/from Amazon S3

This scenario describes a Job that generates a delimited file and uploads the file to S3,
loads data from the file on S3 to Redshift and displays the data on the console, then
unloads the data from Redshift to files on S3 per slice of the Redshift cluster, and
finally lists and gets the unloaded files on S3.

tRedshiftBulkExec_1.png

Prerequisites:

The following context variables have been created and saved in the Repository tree view. For more information about context variables, see Talend Studio User Guide.

  • redshift_host: the connection endpoint URL of the
    Redshift cluster.

  • redshift_port: the listening port number of the
    database server.

  • redshift_database: the name of the database.

  • redshift_username: the username for the database
    authentication.

  • redshift_password: the password for the database
    authentication.

  • redshift_schema: the name of the schema.

  • s3_accesskey: the access key for accessing Amazon
    S3.

  • s3_secretkey: the secret key for accessing Amazon
    S3.

  • s3_bucket: the name of the Amazon S3 bucket.

tRedshiftBulkExec_2.png

Note that all context values in the above screenshot are for demonstration purposes
only.

Adding and linking components

  1. Create a new Job and apply all context variables listed above to the new
    Job.
  2. Add the following components by typing their names in the design workspace or dropping
    them from the Palette: a tRowGenerator component, a tRedshiftOutputBulk component, a tRedshiftBulkExec component, a tRedshiftInput component, a tLogRow component, a tRedshiftUnload component, a tS3List component, and a tS3Get component.
  3. Link tRowGenerator to tRedshiftOutputBulk using a Row > Main
    connection.
  4. Do the same to link tRedshiftInput to
    tLogRow.
  5. Link tS3List to tS3Get using a Row >
    Iterate connection.
  6. Link tRowGenerator to tRedshiftBulkExec using a Trigger > On Subjob Ok
    connection.
  7. Do the same to link tRedshiftBulkExec to
    tRedshiftInput, link tRedshiftInput to tRedshiftUnload, link tRedshiftUnload to tS3List.

Configuring the components

Preparing a file and uploading the file to S3

  1. Double-click tRowGenerator to open its
    RowGenerator Editor.

    tRedshiftBulkExec_3.png

  2. Click the [+] button to add two columns: ID of Integer type and Name of String type.
  3. Click the cell in the Functions column and select a
    function from the list for each column. In this example, select Numeric.sequence to generate sequence numbers for
    the ID column and select TalendDataGenerator.getFirstName to generate
    random first names for the Name
    column.
  4. In the Number of Rows for RowGenerator field, enter the
    number of data rows to generate. In this example, it is 20.
  5. Click OK to close the schema editor and
    accept the propagation prompted by the pop-up dialog box.
  6. Double-click tRedshiftOutputBulk to open its Basic settings view on the Component tab.

    tRedshiftBulkExec_4.png

  7. In the Data file path at local field,
    specify the local path for the file to be generated. In this example, it is
    E:/Redshift/redshift_bulk.txt.
  8. In the Access Key field, press Ctrl + Space and from the list select context.s3_accesskey to fill in this
    field.

    Do the same to fill the Secret Key field with context.s3_secretkey and the Bucket field with context.s3_bucket.
  9. In the Key field, enter a new name for the file to be
    generated after being uploaded on Amazon S3. In this example, it is
    person_load.

Loading data from the file on S3 to Redshift

  1. Double-click tRedshiftBulkExec to open its Basic settings view on the Component tab.

    tRedshiftBulkExec_5.png

  2. In the Host field, press Ctrl + Space and from the list select context.redshift_host to fill in this
    field.

    Do the same to fill:
    • the Port field with context.redshift_port,

    • the Database field with context.redshift_database,

    • the Schema field with context.redshift_schema,

    • the Username field with context.redshift_username,

    • the Password field with context.redshift_password,

    • the Access Key field with
      context.s3_accesskey,

    • the Secret Key field with context.s3_secretkey, and

    • the Bucket field with context.s3_bucket.

  3. In the Table Name field, enter the name
    of the table to be written. In this example, it is person.
  4. From the Action on table list, select
    Drop table if exists and create.
  5. In the Key field, enter the name of the file on Amazon
    S3 to be loaded. In this example, it is person_load.
  6. Click the […] button next to Edit schema and in the pop-up window define the schema by
    adding two columns: ID of Integer type
    and Name of String type.

    tRedshiftBulkExec_6.png

Retrieving data from the table on Redshift

  1. Double-click tRedshiftInput to open its
    Basic settings view on the Component tab.

    tRedshiftBulkExec_7.png

  2. Fill the Host, Port, Database, Schema, Username, and Password fields
    with their corresponding context variables.
  3. In the Table Name field, enter the name
    of the table to be read. In this example, it is person.
  4. Click the […] button next to Edit schema and in the pop-up window define the
    schema by adding two columns: ID of
    Integer type and Name of String
    type.
  5. In the Query field, enter the SQL statement based on which the data are retrieved (a sample query is shown after this list).

  6. Double-click tLogRow to open its
    Basic settings view on the Component tab.

    tRedshiftBulkExec_8.png

  7. In the Mode area, select Table (print values in cells of a table) for a
    better display of the result.
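
The exact query of the original example is not reproduced above. A plausible Java string expression for the Query field, assuming the person table created earlier and the schema held in the redshift_schema context variable, would be:

    "SELECT * FROM " + context.redshift_schema + ".person"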

Unloading data from Redshift to file(s) on S3

  1. Double-click tRedshiftUnload to open its
    Basic settings view on the Component tab.

    tRedshiftBulkExec_9.png

  2. Fill the Host, Port, Database, Schema, Username, and Password fields
    with their corresponding context variables.

    Fill the Access Key, Secret Key, and Bucket fields also with their corresponding context
    variables.
  3. In the Table Name field, enter the name
    of the table from which the data will be read. In this example, it is
    person.
  4. Click the […] button next to Edit schema and in the pop-up window define the
    schema by adding two columns: ID of
    Integer type and Name of String
    type.
  5. In the Query field, enter the SQL statement based on which the result will be unloaded (a sample query is shown after this list).

  6. In the Key prefix field, enter the name
    prefix for the unload files. In this example, it is person_unload_.
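
As with tRedshiftInput, the exact query is not reproduced above. A plausible Java string expression for the Query field, assuming the same person table and schema context variable, would be:

    "SELECT * FROM " + context.redshift_schema + ".person"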

Retrieving files unloaded to Amazon S3

  1. Double-click tS3List to open its
    Basic settings view on the Component tab.

    tRedshiftBulkExec_10.png

  2. Fill the Access Key and Secret
    Key
    fields with their corresponding context variables.
  3. From the Region list, select the AWS
    region where the unload files are created. In this example, it is US Standard.
  4. Clear the List all buckets objects check
    box, and click the [+] button under the
    table displayed to add one row.

    Fill in the Bucket name column with the name of the
    bucket in which the unload files are created. In this example, it is the
    context variable context.s3_bucket.
    Fill in the Key prefix column with the name prefix for
    the unload files. In this example, it is person_unload_.
  5. Double-click tS3Get to open its Basic settings view on the Component tab.

    tRedshiftBulkExec_11.png

  6. Fill the Access Key field and Secret Key field with their corresponding context
    variables.
  7. From the Region list, select the AWS
    region where the unload files are created. In this example, it is US Standard.
  8. In the Bucket field, enter the name of
    the bucket in which the unload files are created. In this example, it is the
    context variable context.s3_bucket.

    In the Key field, enter the name of the unload files by pressing Ctrl + Space and from the list selecting the global variable ((String)globalMap.get("tS3List_1_CURRENT_KEY")).
  9. In the File field, enter the local path where the unload files are saved. In this example, it is "E:/Redshift/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY")).

Saving and executing the Job

  1. Press Ctrl + S to save the Job.
  2. Execute the Job by pressing F6 or
    clicking Run on the Run tab.

    tRedshiftBulkExec_12.png

    tRedshiftBulkExec_13.png

    tRedshiftBulkExec_14.png

    As shown above, the generated data is written into the local file redshift_bulk.txt, the file is uploaded on S3 with the new
    name person_load, and then the data is
    loaded from the file on S3 to the table person in Redshift and displayed on the console. After that,
    the data is unloaded from the table person in Redshift to two files person_unload_0000_part_00 and person_unload_0001_part_00 on S3 per slice of the Redshift
    cluster, and finally the unloaded files on S3 are listed and retrieved in
    the local folder.

Document retrieved from Talend: https://help.talend.com