July 30, 2023

tRedshiftOutputBulkExec – Docs for ESB 7.x

tRedshiftOutputBulkExec

Executes the Insert action on the data provided.

As a dedicated component, it allows gains in performance during Insert
operations to Amazon Redshift.

The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process
to load data to Amazon Redshift from a delimited/CSV file on Amazon S3. In the first
step, a delimited/CSV file is generated. In the second step, this file is used in the
INSERT statement that feeds Amazon Redshift. These two steps are fused together in the
tRedshiftOutputBulkExec component. The
advantage of using two separate steps is that the data can be transformed before it is
loaded to Amazon Redshift.

This component receives data from the preceding component, generates a single
delimited/CSV file, uploads the file to Amazon S3, and finally loads the data
from Amazon S3 into Redshift.
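
The sketch below illustrates, outside of Talend, the three steps this component automates: generate a delimited file, upload it to Amazon S3 with the AWS SDK for Java, and load it into Redshift with a COPY statement over JDBC. It is not the component's internal code; the bucket, object key, table, file path, host, and credentials are placeholder assumptions.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Arrays;

    public class RedshiftBulkLoadSketch {
        public static void main(String[] args) throws Exception {
            // Step 1: generate a delimited/CSV file locally.
            Files.write(Paths.get("/tmp/out.csv"), Arrays.asList("1;Alice", "2;Bob"));

            // Step 2: upload the file to Amazon S3 (placeholder bucket and key).
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withCredentials(new AWSStaticCredentialsProvider(
                            new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                    .build();
            s3.putObject("my-bucket", "staging/out.csv", new File("/tmp/out.csv"));

            // Step 3: load the S3 object into Redshift with a COPY statement.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:redshift://redshift-host:5439/mydb", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("COPY my_schema.my_table FROM 's3://my-bucket/staging/out.csv' "
                        + "CREDENTIALS 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' "
                        + "DELIMITER ';'");
            }
        }
    }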

tRedshiftOutputBulkExec Standard properties

These properties are used to configure tRedshiftOutputBulkExec running in the Standard Job framework.

The Standard
tRedshiftOutputBulkExec component belongs to the Cloud and the Databases families.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property Type

Either Built-In or Repository.

 

Built-In: No property
data stored centrally.

 

Repository: Select the
repository file in which the properties are stored. The database connection
fields that follow are completed automatically using the data retrieved.

Use an existing
connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Host

Type in the IP address or hostname of the
database server.

Port

Type in the listening port number of the
database server.

Database

Type in the name of the database.

Schema

Type in the name of the schema.

Username and Password

Type in the database user authentication
data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Additional JDBC
Parameters

Specify additional JDBC properties for the connection you are creating. The
properties are separated by ampersand & and each property is a key-value pair. For
example, ssl=true & sslfactory=com.amazon.redshift.ssl.NonValidatingFactory, which
means the connection will be created using SSL.
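
As a hypothetical illustration, the Host, Port, Database, and additional JDBC parameters above combine into a Redshift JDBC URL along these lines (all values are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Additional JDBC parameters appended as a query string to the URL.
    String url = "jdbc:redshift://redshift-host:5439/mydb"
            + "?ssl=true&sslfactory=com.amazon.redshift.ssl.NonValidatingFactory";
    Connection conn = DriverManager.getConnection(url, "user", "password");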

Table Name

Specify the name of the table to be written.
Note that only one table can be written at a time.

Action on table

On the table defined, you can perform one of
the following operations:

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created again.

  • Create table: The table does not exist and gets created.

  • Create table if not exists: The table is created if it does not exist.

  • Drop table if exists and create: The table is removed if it already exists and
    created again.

  • Clear table: The table content is deleted. You can roll back the operation.
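
As a rough, assumed illustration (not the SQL the component actually generates), the table actions above correspond to statements along these lines, for a hypothetical table my_schema.my_table and an open JDBC Statement stmt:

    stmt.execute("DROP TABLE IF EXISTS my_schema.my_table");        // Drop table if exists and create
    stmt.execute("CREATE TABLE IF NOT EXISTS my_schema.my_table ("  // Create table if not exists
            + "id INT, name VARCHAR(100))");
    stmt.execute("DELETE FROM my_schema.my_table");                 // Clear table (can be rolled back)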

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Data file path at local

Specify the local path to the file to be
generated.

Note that the file is generated on the same
machine where the Studio is installed or where the Job using this component is
deployed.

Append the local file

Select this check box to append data to the
specified local file if it already exists, instead of overwriting it.

Create directory if not
exists

Select this check box to create the directory
specified in the Data file path at
local
field if it does not exist. By default, this check box is
selected.

Use an existing S3
connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Access Key

Specify the Access Key ID that uniquely
identifies an AWS Account. For how to get your Access Key and Access Secret,
visit Getting Your AWS Access Keys.

Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.

Secret Key

Specify the Secret Access Key, which constitutes
the security credentials in combination with the Access Key.

To enter the secret key, click the […] button next to
the secret key field, and then in the pop-up dialog box enter the password between double
quotes and click OK to save the settings.

Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.

Inherit credentials from AWS
role

Select this check box to obtain AWS security credentials
from Amazon EC2 instance metadata. To use this option, the Amazon EC2 instance must
be started and your Job must be running on Amazon EC2. For more information, see
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.

Note: This option is available when
Use an existing S3 connection is not
selected.
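
For context, a minimal AWS SDK for Java sketch (not the component's internals) of the difference between providing an explicit Access Key/Secret Key and inheriting credentials from the EC2 instance role; the region and key values are placeholders:

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // Explicit Access Key / Secret Key (placeholders).
    AmazonS3 s3WithKeys = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
            .build();

    // Inherit credentials from the AWS role attached to the EC2 instance.
    AmazonS3 s3FromRole = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .withCredentials(InstanceProfileCredentialsProvider.getInstance())
            .build();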

S3 Assume Role

If you temporarily need some access permissions associated
to an AWS IAM role that is not granted to your user account, select this check box to
assume that role. Then specify the values for the following parameters to create a new
assumed role session.

Ensure that access to this role has been
granted to your user account by the trust policy associated to this role. If you are not
certain about this, ask the owner of this role or your AWS administrator.

Note: This option is available when
Use an existing S3 connection is not
selected.
  • Role ARN: the Amazon Resource Name (ARN) of the role to assume. You
    can find this ARN name on the Summary page
    of the role to be used on your AWS portal, for example, this role ARN could read
    like arn:aws:iam::[aws_account_number]:role/[role_name].

  • Role session name: enter the name you want to use to uniquely
    identify your assumed role session. This name can contain upper- and lower-case
    alphanumeric characters with no spaces. You can also include underscores or any of
    the following characters: =,.@-.

  • Session duration (minutes): the duration (in minutes) for which you
    want the assumed role session to be active. This duration cannot exceed the
    maximum duration which your AWS administrator has set.

For an example about an IAM role and its related policy types, see Create and Manage AWS IAM Roles from the AWS
documentation.
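
A minimal sketch, using the AWS SDK for Java STS credentials provider, of how such an assumed role session could be created; the role ARN and session name are placeholders, and the duration is expressed in seconds here while the component's field is in minutes:

    import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // Assume the role for a 15-minute session and use it for S3 access.
    STSAssumeRoleSessionCredentialsProvider provider =
            new STSAssumeRoleSessionCredentialsProvider.Builder(
                    "arn:aws:iam::123456789012:role/my-load-role",
                    "my_bulk_load_session")
                    .withRoleSessionDurationSeconds(15 * 60)
                    .build();
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .withCredentials(provider)
            .build();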

Region

Specify the AWS region by selecting a region name from the
list or entering a region between double quotation marks (e.g. “us-east-1”) in the list. For more information about the AWS
Region, see Regions and Endpoints.

Note: This field is available when Use an existing S3
connection
is not selected.

Bucket

Type in the name of the Amazon S3 bucket,
namely the top level folder, to which the file is uploaded.

The bucket and the Redshift database to be used
must be in the same region on Amazon. This helps avoid the S3ServiceException errors known to
Amazon. For further information about these errors, see S3ServiceException Errors.

Key

Type in an object key to assign to the file
uploaded to Amazon S3.
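
For illustration, with an S3 client built as in the sketches above, uploading the generated file to the given bucket and object key could look like the following (bucket, key, and local path are placeholder assumptions):

    import java.io.File;

    // Upload the locally generated delimited file to s3://my-bucket/staging/out.csv.
    s3.putObject("my-bucket", "staging/out.csv", new File("/tmp/out.csv"));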

Redshift Assume Role

Select this check box and specify the
values for the following parameters used to create a new assumed role session.

  • IAM Role ARNs chains: a series of chained roles, which
    may belong to other accounts, that your cluster can assume to access resources.
    You can chain a maximum of 10 roles.

  • Role ARN: the Amazon Resource Name (ARN) of the role
    to assume.

For more information on IAM Role ARNs
chains, see Authorizing Redshift service.
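
As an assumed illustration (not the component's generated SQL), a COPY that authorizes through a chain of IAM roles rather than access keys could look like this, with placeholder ARNs, table, and object key and an open JDBC Statement stmt:

    stmt.execute("COPY my_schema.my_table FROM 's3://my-bucket/staging/out.csv' "
            + "IAM_ROLE 'arn:aws:iam::111111111111:role/RoleA,"
            + "arn:aws:iam::222222222222:role/RoleB' "
            + "DELIMITER ';'");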

Advanced settings

Fields terminated by

Enter the character used to separate
fields.

Enclosed by

Select the character used to enclose the fields.

Compressed by

Select this check box and select a compression
type from the list displayed to compress the data file.

This field disappears when the Append the local file check box
is selected.

Encrypt

Select this check box to generate and upload
the data file to Amazon S3 using client-side encryption. In the Encryption key field displayed,
specify the encryption key used to encrypt the file. Note that only a base64
encoded AES 128-bit or AES 256-bit envelope key is supported. For more
information, see Loading Encrypted Data Files from Amazon S3.

By default, this check box is cleared and the
data file will be uploaded to Amazon S3 using server-side encryption.

For more information about the client-side and
server-side encryption, see Protecting Data Using Encryption.

Note: This field is available when Use an existing S3
connection
is not selected.
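
One possible way (a sketch, not a requirement of the component) to produce a base64-encoded AES 256-bit key suitable for the Encryption key field, using the standard Java crypto API:

    import javax.crypto.KeyGenerator;
    import java.util.Base64;

    // Generate a random AES-256 key and print it base64-encoded.
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(256);
    String base64Key = Base64.getEncoder()
            .encodeToString(keyGen.generateKey().getEncoded());
    System.out.println(base64Key);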

Encoding

Select an encoding type for the data in the
file to be generated.

Delete local file after putting it
to s3

Select this check box to delete the local file
after being uploaded to Amazon S3. By default, this check box is selected.

Date format

Select one of the following items from the
list to specify the date format in the source data:

  • NONE: No date
    format is specified.

  • PATTERN: Select
    this item and specify the date format in the field displayed. The default
    date format is YYYY-MM-DD.

  • AUTO: Select this
    item if you want Amazon Redshift to recognize and convert automatically
    the date format.

Time format

Select one of the following items from the
list to specify the time format in the source data:

  • NONE: No time
    format is specified.

  • PATTERN: Select
    this item and specify the time format in the field displayed. The default
    time format is YYYY-MM-DD HH:MI:SS.

  • AUTO: Select this
    item if you want Amazon Redshift to recognize and convert automatically
    the time format.

  • EPOCHSECS: Select
    this item if the source data is represented as epoch time, the number of
    seconds since Jan 1, 1970 00:00:00 UTC.

  • EPOCHMILLISECS:
    Select this item if the source data is represented as epoch time, the
    number of milliseconds since Jan 1, 1970 00:00:00 UTC.

Settings

Click the [+] button below the table to specify more
parameters for loading the data.

  • Parameter: Click
    the cell and select a parameter from the drop-down list.

  • Value: Set the
    value for the corresponding parameter. Note that you cannot set the value
    for a parameter (such as IGNOREBLANKLINES) that does not need a value.

For more information about the parameters,
see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
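
As an assumed illustration only, the field delimiter, compression, date/time formats, and extra parameters described above surface as COPY options along these lines (placeholder table, key, and credentials, with an open JDBC Statement stmt):

    stmt.execute("COPY my_schema.my_table FROM 's3://my-bucket/staging/out.csv.gz' "
            + "CREDENTIALS 'aws_access_key_id=ACCESS_KEY;aws_secret_access_key=SECRET_KEY' "
            + "DELIMITER ';' GZIP "
            + "DATEFORMAT 'YYYY-MM-DD' TIMEFORMAT 'auto' "
            + "IGNOREBLANKLINES");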

Config client

Select this check box to configure client
parameters for Amazon S3. Click the [+] button below the table displayed to add as
many rows as needed, each row for a client parameter, and set the following
attributes for each parameter:

  • Client Parameter: Click the cell and select a parameter from the
    drop-down list.

  • Value: Enter the
    value for the corresponding client parameter.

For information about S3 client parameters, go to Client Configuration.
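
For context, a minimal AWS SDK for Java sketch of how such client parameters (here the connection timeout and retry count, with arbitrary example values) are typically applied to an S3 client:

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    // Client parameters applied through a ClientConfiguration object.
    ClientConfiguration cfg = new ClientConfiguration()
            .withConnectionTimeout(30_000)   // connection timeout in milliseconds
            .withMaxErrorRetry(5);           // maximum retries on errors
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            .withRegion("us-east-1")
            .withClientConfiguration(cfg)
            .build();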

JDBC url

Select a way to access an Amazon Redshift database from the
JDBC url drop-down list.

  • Standard: Use the
    standard way to access the Redshift database.
  • SSO: Use the IAM
    Single Sign-On (SSO) authentication to access the Redshift database. Before selecting
    this option, ensure that the IAM role added to your Redshift cluster has appropriate
    access rights and permissions to this cluster. You can ask the administrator of your AWS
    services for more details.

    This option is available only when the Use an existing connection check box is not selected in
    the Basic settings view.
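
For illustration (placeholder endpoints), the two access modes roughly correspond to the following Redshift JDBC URL forms, where the IAM form lets the driver obtain temporary cluster credentials through the IAM role:

    // Standard access: plain Redshift JDBC URL.
    String standardUrl = "jdbc:redshift://redshift-host:5439/mydb";

    // SSO / IAM access: the driver's IAM URL form (cluster endpoint, region and
    // database are placeholders).
    String iamUrl = "jdbc:redshift:iam://examplecluster.abc123xyz789"
            + ".us-east-1.redshift.amazonaws.com:5439/mydb";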

tStatCatcher
Statistics

Select this check box to gather the Job
processing metadata at the Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the
Talend Studio User Guide.
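
As a hypothetical example, after the component runs, its error message could be read in a Java expression elsewhere in the Job through the global map; "tRedshiftOutputBulkExec_1" is an example component instance name:

    // Read the After variable ERROR_MESSAGE of the component instance.
    String error = (String) globalMap.get("tRedshiftOutputBulkExec_1_ERROR_MESSAGE");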

Usage

Usage rule

This component is mainly used when no
particular transformation is required on the data to be loaded to Amazon
Redshift.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independently of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide.

Related scenario

For a related scenario, see Loading/unloading data to/from Amazon S3.


Document retrieved from Talend: https://help.talend.com