tRedshiftOutputBulkExec


tRedshiftOutputBulkExec properties

The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a two-step process
to load data to Amazon Redshift from a delimited/CSV file on Amazon S3. In the first
step, a delimited/CSV file is generated. In the second step, this file is used in the
INSERT statement that feeds Amazon Redshift. These two steps are fused together in the
tRedshiftOutputBulkExec component. The advantage of
using two separate steps is that the data can be transformed before it is loaded to
Amazon Redshift.
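As a rough illustration of what the two fused steps amount to outside the Studio, the
sketch below uploads a delimited file to Amazon S3 and then loads it into Redshift with a
COPY statement. It assumes the Python boto3 and psycopg2 libraries; all hosts,
credentials, bucket, file and table names are placeholders, and the component relies on
its own internal mechanisms rather than this code.

    import boto3
    import psycopg2

    ACCESS_KEY = "my-access-key"        # placeholder
    SECRET_KEY = "my-secret-key"        # placeholder
    BUCKET = "my-bucket"                # placeholder S3 bucket
    KEY = "staging/customers.csv"       # placeholder S3 object key

    # Step 1: upload the generated delimited/CSV file to Amazon S3.
    s3 = boto3.client(
        "s3", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY
    )
    s3.upload_file("/tmp/customers.csv", BUCKET, KEY)

    # Step 2: load the S3 object into the target Redshift table with COPY.
    conn = psycopg2.connect(
        host="redshift-host", port=5439, dbname="mydb",
        user="myuser", password="mypassword"
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            f"COPY public.customers FROM 's3://{BUCKET}/{KEY}' "
            f"CREDENTIALS 'aws_access_key_id={ACCESS_KEY};"
            f"aws_secret_access_key={SECRET_KEY}' "
            "DELIMITER ';'"
        )
    conn.close()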

Component family

Databases/Amazon Redshift

 

Function

This component receives data from the preceding component, generates a single
delimited/CSV file, uploads the file to Amazon S3, and finally
loads the data from Amazon S3 into Amazon Redshift.

Purpose

As a dedicated component, it improves performance during
Insert operations to Amazon Redshift.

Basic settings

Property Type

Either Built-In or Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

 

 

Built-In: No property data stored
centrally.

 

 

Repository: Select the repository file in which the
properties are stored. The database connection fields that follow
are completed automatically using the data retrieved.

Database settings

Use an existing connection

Select this check box and, in the Component List, click the
relevant connection component to reuse the connection details you have already defined.

 

Host

Type in the IP address or hostname of the database server.

 

Port

Type in the listening port number of the database server.

 

Database

Type in the name of the database.

 

Schema

Type in the name of the schema.

 

Username and Password

Type in the database user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

 

Table Name

Specify the name of the table to be written. Note that only one
table can be written at a time.

 

Action on table

On the table defined, you can perform one of the following
operations (rough SQL equivalents are sketched after this list):

  • None: No operation is carried out.

  • Drop and create table: The table is removed and created
    again.

  • Create table: The table is created; it must not already
    exist.

  • Create table if not exists: The table is created if it does
    not exist.

  • Drop table if exists and create: The table is removed if it
    already exists and created again.

  • Clear table: The table content is deleted. You can roll back
    this operation.
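As a rough guide only (the component generates its own DDL and derives the column
definitions from the schema; the table name and columns here are placeholders), the
options correspond approximately to statements such as:

    # Approximate SQL behind each "Action on table" option (illustrative only).
    table_actions = {
        "Drop and create table": [
            "DROP TABLE public.customers",
            "CREATE TABLE public.customers (id INT, name VARCHAR(100))",
        ],
        "Create table": [
            "CREATE TABLE public.customers (id INT, name VARCHAR(100))",
        ],
        "Create table if not exists": [
            "CREATE TABLE IF NOT EXISTS public.customers (id INT, name VARCHAR(100))",
        ],
        "Drop table if exists and create": [
            "DROP TABLE IF EXISTS public.customers",
            "CREATE TABLE public.customers (id INT, name VARCHAR(100))",
        ],
        # DELETE (unlike TRUNCATE) can be rolled back, matching the note above.
        "Clear table": [
            "DELETE FROM public.customers",
        ],
    }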

 

Schema and Edit schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

 

 

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

 

 

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

 

 

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository Content] window.

File Generate Setting

Data file path at local

Specify the local path to the file to be generated.

Note that the file is generated on the same machine where the
Studio is installed or where the Job using this component is
deployed.

 

Append the local file

Select this check box to append data to the specified local file
if it already exists, instead of overwriting it.

 

Create directory if not exists

Select this check box to create the directory specified in the
Data file path at local field
if it does not exist. By default, this check box is selected.
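A minimal sketch of what these file-generation options amount to, assuming Python's
standard csv and os modules; the path is a placeholder, and the field separator and
enclosure character used here correspond to the Fields terminated by and
Enclosed by settings under Advanced settings:

    import csv
    import os

    data_file = "/tmp/redshift_bulk/customers.csv"   # "Data file path at local"
    append = True                                    # "Append the local file"

    # "Create directory if not exists": create the parent directory when missing.
    os.makedirs(os.path.dirname(data_file), exist_ok=True)

    # Append to the existing file instead of overwriting it when requested.
    with open(data_file, "a" if append else "w", newline="") as f:
        writer = csv.writer(f, delimiter=";", quotechar='"')
        writer.writerow([1, "Alice"])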

S3 Setting

Access Key

Specify the Access Key ID that uniquely identifies an AWS account.
For information about how to get your Access Key and Secret Key, visit Getting Your AWS Access Keys.

 

Secret Key

Specify the Secret Access Key, which together with the Access
Key makes up the security credentials.

To enter the secret key, click the […] button next to
the secret key field, and then in the pop-up dialog box enter the secret key between
double quotes and click OK to save the settings.

 

Bucket

Type in the name of the Amazon S3 bucket to which the file is
uploaded.

 

Key

Type in an object key to assign to the file uploaded to Amazon
S3.

Advanced settings

Fields terminated by

Enter the character used to separate fields.

 

Enclosed by

Select the character within which the fields are enclosed; the
sketch below shows how this and the field separator carry over to the load statement.
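On the load side, these two settings would typically surface as the delimiter and quote
character of the COPY statement, roughly like this (bucket, key, table name and
credentials are placeholders):

    field_separator = ";"   # "Fields terminated by"
    enclosure = '"'         # "Enclosed by"

    copy_stmt = (
        "COPY public.customers FROM 's3://my-bucket/staging/customers.csv' "
        "CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...' "
        f"CSV DELIMITER '{field_separator}' QUOTE AS '{enclosure}'"
    )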

 

Compressed by

Select this check box and select a compression type from the list
displayed to compress the data file.

This field disappears when the Append the local file check box is selected.

 

Encoding

Select an encoding type for the data in the file to be
generated.

 

Delete local file after putting it to s3

Select this check box to delete the local file after it has been
uploaded to Amazon S3. By default, this check box is
selected.

 

Date format

Select one of the following items from the list to specify the
date format in the source data (the matching COPY option is sketched after the
Time format list below):

  • NONE: No date format is
    specified.

  • PATTERN: Select this item and specify the date
    format in the field displayed. The default date format is
    YYYY-MM-DD.

  • AUTO: Select this item if you want Amazon Redshift to
    recognize and convert the date format automatically.

 

Time format

Select one of the following items from the list to specify the
time format in the source data (the corresponding COPY options are sketched after
this list):

  • NONE: No time format is
    specified.

  • PATTERN: Select this item and specify the time
    format in the field displayed. The default time format is
    YYYY-MM-DD HH:MI:SS.

  • AUTO: Select this item if you want Amazon Redshift to
    recognize and convert the time format automatically.

  • EPOCHSECS: Select this item if the source data is represented
    as epoch time, the number of seconds since Jan 1, 1970
    00:00:00 UTC.

  • EPOCHMILLISECS: Select this item if the source data is
    represented as epoch time, the number of milliseconds since
    Jan 1, 1970 00:00:00 UTC.
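A sketch of the Redshift COPY date/time options these choices presumably map to; the
table, bucket, key and credentials are placeholders, and only the option strings
themselves come from the COPY documentation:

    # PATTERN: explicit format strings for DATEFORMAT / TIMEFORMAT.
    copy_options = " DATEFORMAT 'YYYY-MM-DD' TIMEFORMAT 'YYYY-MM-DD HH:MI:SS'"
    # AUTO: let Redshift recognize the formats itself.
    # copy_options = " DATEFORMAT 'auto' TIMEFORMAT 'auto'"
    # EPOCHSECS / EPOCHMILLISECS: source values are epoch seconds / milliseconds.
    # copy_options = " TIMEFORMAT 'epochsecs'"
    # copy_options = " TIMEFORMAT 'epochmillisecs'"

    copy_stmt = (
        "COPY public.events FROM 's3://my-bucket/staging/events.csv' "
        "CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...' "
        "DELIMITER ';'" + copy_options
    )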

 

Settings

Click the [+] button below the
table to specify more parameters for loading the data.

  • Parameter: Click the cell
    and select a parameter from the drop-down list.

  • Value: Set the value for the corresponding parameter. Note
    that you cannot set a value for a parameter (such as
    IGNOREBLANKLINES) that does not take one; see the sketch
    below.

For more information about the parameters, see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html.
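As an illustration of how such Parameter / Value rows could end up as extra options
appended to the generated COPY statement (an assumed mapping; MAXERROR is simply an
example of a parameter that does take a value):

    # Each row is (parameter, value); IGNOREBLANKLINES takes no value,
    # while a parameter such as MAXERROR does.
    parameters = [
        ("IGNOREBLANKLINES", None),
        ("MAXERROR", "10"),
    ]
    extra_options = " ".join(
        name if value is None else f"{name} {value}" for name, value in parameters
    )
    # extra_options == "IGNOREBLANKLINES MAXERROR 10",
    # appended to the generated COPY statement.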

S3 Setting

Config client

Select this check box to configure client parameters for Amazon S3 (a rough analogue is
sketched after this list). Click the [+] button below the table displayed to
add as many rows as needed, each row for a client parameter, and set
the following attributes for each parameter:

  • Client Parameter: Click
    the cell and select a parameter from the drop-down
    list.

  • Value: Enter the value
    for the corresponding client parameter.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the
Job level as well as at each component level.

Dynamic settings

Click the [+] button to add a row in the table and fill
the Code field with a context variable to choose your
database connection dynamically from multiple connections planned in your Job. This feature
is useful when you need to access database tables having the same data structure but in
different databases, especially when you are working in an environment where you cannot
change your Job settings, for example, when your Job has to be deployed and executed
independent of Talend Studio.

The Dynamic settings table is available only when the
Use an existing connection check box is selected in the
Basic settings view. Once a dynamic parameter is
defined, the Component List box in the Basic settings view becomes unusable.

For more information on Dynamic settings and context
variables, see Talend Studio User Guide.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.

Usage

This component is mainly used when no particular transformation is required on the data
to be loaded to Amazon Redshift.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Related scenario

For a related scenario, see Loading/unloading data from/to Amazon S3.


Document from Talend: https://help.talend.com