
tRedshiftOutputBulk

Prepares a delimited/CSV file that can be used by tRedshiftBulkExec to feed Amazon Redshift.

The tRedshiftOutputBulk and tRedshiftBulkExec components can be used together in a
two-step process to load data into Amazon Redshift from a delimited/CSV file on Amazon
S3. In the first step, a delimited/CSV file is generated. In the second step, this file
is used in the COPY statement that feeds Amazon Redshift. These two steps are fused
together in the tRedshiftOutputBulkExec component. The advantage of using two separate
steps is that the data can be transformed before it is loaded into Amazon Redshift.

This component receives data from the preceding component,
generates a single delimited/CSV file and then uploads the file to
Amazon S3.
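
For illustration, here is a minimal sketch of the same two-step flow written directly
against the AWS SDK for Java and JDBC, outside the Studio. This is not Talend's
generated code; the file path, separator, bucket, key, cluster address, table, and IAM
role below are hypothetical placeholders.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    import java.io.File;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TwoStepLoadSketch {
        public static void main(String[] args) throws Exception {
            // Step 1: generate a delimited file locally (the tRedshiftOutputBulk part).
            File dataFile = new File("/tmp/redshift_bulk.csv");       // hypothetical path
            try (PrintWriter out = new PrintWriter(dataFile, "UTF-8")) {
                out.println("1;\"Alice\";\"2023-07-30\"");            // ';' separator, '"' enclosure
                out.println("2;\"Bob\";\"2023-07-29\"");
            }

            // ...then upload it to Amazon S3 (also the tRedshiftOutputBulk part).
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")                          // hypothetical region
                    .build();                                         // default credential chain
            s3.putObject("my-bucket", "bulk/redshift_bulk.csv", dataFile);

            // Step 2: load the S3 file with a COPY statement (the tRedshiftBulkExec part).
            // Requires the Redshift JDBC driver on the classpath.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:redshift://example-cluster:5439/dev", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("COPY my_table FROM 's3://my-bucket/bulk/redshift_bulk.csv' "
                        + "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
                        + "DELIMITER ';' REMOVEQUOTES");
            }
        }
    }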

tRedshiftOutputBulk Standard properties

These properties are used to configure tRedshiftOutputBulk running in the Standard
Job framework.

The Standard tRedshiftOutputBulk component belongs to the Cloud and the Databases
families.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Data file path at local

Specify the local path to the file to be
generated.

Note that the file is generated on the same
machine where the Studio is installed or where the Job using this component is
deployed.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

 

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If the current schema is of the
Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Compress the data file

Select this check box and select a compression
type from the list displayed to compress the data file.

This check box disappears when the Append the local file check box
is selected.

Encrypt

Select this check box to generate and upload
the data file to Amazon S3 using client-side encryption. In the Encryption key field displayed,
enter the encryption key used to encrypt the file.

By default, this check box is cleared and the
data file will be uploaded to Amazon S3 using server-side encryption.

Note: This option is available when
Use an existing S3 connection is not
selected.

For more information about client-side and
server-side encryption, see Protecting Data Using Encryption.
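
As an illustration of the default behavior, the following sketch uploads a file with
SSE-S3 server-side encryption using the AWS SDK for Java; the bucket, key, and path are
hypothetical.

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    import java.io.File;

    public class ServerSideEncryptionSketch {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build();

            // Ask S3 to encrypt the object at rest (SSE-S3, AES-256).
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

            PutObjectRequest request = new PutObjectRequest(
                    "my-bucket", "bulk/data.csv", new File("/tmp/data.csv"))
                    .withMetadata(metadata);
            s3.putObject(request);

            // Client-side encryption, by contrast, encrypts the bytes before they leave
            // the machine, using a key you supply (as the Encrypt option does).
        }
    }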

Access Key

Specify the Access Key ID that uniquely
identifies an AWS account. For information about how to get your Access Key ID and
Secret Access Key, see Getting Your AWS Access Keys.

Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.

Secret Key

Specify the Secret Access Key, which, together with the Access Key ID,
makes up your AWS security credentials.

To enter the Secret Key, click the […] button next to
the Secret Key field, then enter the key between double quotes in the pop-up dialog
box and click OK to save the settings.

Note: This option is available when both Use an existing S3 connection and Inherit credentials from AWS role are cleared.

Inherit credentials from AWS
role

Select this check box to obtain AWS security credentials
from the Amazon EC2 instance metadata. To use this option, the Amazon EC2 instance
must be started and your Job must be running on Amazon EC2. For more information, see
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.

Note: This option is available when
Use an existing S3 connection is not
selected.
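
The two credential modes described above roughly correspond to the following AWS SDK
for Java sketch; the key values are placeholders, not working credentials.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.auth.InstanceProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class CredentialsSketch {
        public static void main(String[] args) {
            // Explicit Access Key / Secret Key (placeholders; never hard-code real keys).
            AmazonS3 withKeys = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withCredentials(new AWSStaticCredentialsProvider(
                            new BasicAWSCredentials("MY_ACCESS_KEY_ID", "MY_SECRET_ACCESS_KEY")))
                    .build();

            // "Inherit credentials from AWS role": temporary credentials are read from
            // the EC2 instance metadata service; this only works on a running EC2 instance.
            AmazonS3 fromInstanceRole = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                    .build();
        }
    }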

Assume role

If you temporarily need access permissions associated
with an AWS IAM role that is not granted to your user account, select this check box to
assume that role. Then specify the values for the following parameters to create a new
assumed role session.

Ensure that access to this role has been
granted to your user account by the trust policy associated with this role. If you are
not certain about this, ask the owner of this role or your AWS administrator.

Note: This option is available when
Use an existing S3 connection is not
selected.
  • Role ARN: the Amazon Resource Name (ARN) of the role to assume. You
    can find this ARN on the Summary page
    of the role to be used on your AWS portal; for example, this role ARN could read
    like arn:aws:iam::[aws_account_number]:role/[role_name].

  • Role session name: enter the name you want to use to uniquely
    identify your assumed role session. This name can contain upper- and lower-case
    alphanumeric characters with no spaces. You can also include underscores or any of
    the following characters: =,.@-.

  • Session duration (minutes): the duration (in minutes) for which you
    want the assumed role session to be active. This duration cannot exceed the
    maximum duration which your AWS administrator has set.

For an example about an IAM role and its related policy types, see Create and Manage AWS IAM Roles from the AWS
documentation.

Region

Specify the AWS region by selecting a region name from the
list or entering a region between double quotation marks (e.g. “us-east-1”) in the
list. For more information about AWS regions, see Regions and Endpoints.

Note: This option is available when
Use an existing S3 connection is not
selected.

STS Endpoint

Select this check box and, in the field displayed, specify the
AWS Security Token Service endpoint from which session credentials are retrieved, for
example sts.amazonaws.com.

This check box is available only when the Assume role check box is selected.
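
Together, the Assume role parameters and the STS Endpoint option map onto an STS
AssumeRole call such as the following AWS SDK for Java sketch; the role ARN and session
name are hypothetical. Note that the Studio option takes minutes while the SDK takes
seconds.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicSessionCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
    import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
    import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
    import com.amazonaws.services.securitytoken.model.Credentials;

    public class AssumeRoleSketch {
        public static void main(String[] args) {
            // Explicit STS endpoint, as set by the STS Endpoint option.
            AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                            "sts.amazonaws.com", "us-east-1"))
                    .build();

            // Role ARN, session name, and duration (seconds here, minutes in the Studio).
            Credentials session = sts.assumeRole(new AssumeRoleRequest()
                    .withRoleArn("arn:aws:iam::123456789012:role/MyRole")
                    .withRoleSessionName("talend_bulk_session")
                    .withDurationSeconds(15 * 60))
                    .getCredentials();

            // Use the temporary session credentials for the S3 upload.
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withCredentials(new AWSStaticCredentialsProvider(new BasicSessionCredentials(
                            session.getAccessKeyId(),
                            session.getSecretAccessKey(),
                            session.getSessionToken())))
                    .build();
        }
    }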

Bucket

Type in the name of the Amazon S3 bucket,
namely the top-level folder, to which the file is uploaded.

The bucket and the Redshift database to be used
must be in the same region on Amazon. This avoids the S3ServiceException errors known
to Amazon. For further information about these errors, see S3ServiceException Errors.

Key

Type in an object key to assign to the file
uploaded to Amazon S3.

Advanced settings

Field Separator

Enter the character used to separate
fields.

Text enclosure

Select the character used to enclose fields. This
character is placed in a pair, before and after each field value.
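
To see why the enclosure matters, consider a field value that itself contains the
separator; this small, self-contained Java example (values hypothetical) shows the line
the component would need to produce:

    public class EnclosureSketch {
        public static void main(String[] args) {
            String fieldSeparator = ";";   // Field Separator setting
            String textEnclosure = "\"";   // Text enclosure setting
            String name = "Smith; John";   // a value that contains the separator

            // Without the enclosure, a consumer would read three fields instead of two.
            String line = "42" + fieldSeparator + textEnclosure + name + textEnclosure;
            System.out.println(line);      // prints: 42;"Smith; John"
        }
    }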

Delete local file after putting it
to s3

Select this check box to delete the local file
after it is uploaded to Amazon S3. By default, this check box is selected.

Create directory if not
exists

Select this check box to create the directory
specified in the Data file path at
local
field if it does not exist. By default, this check box is
selected.

Encoding

Select an encoding type for the data in the
file to be generated.

Config client

Select this check box to configure client
parameters for Amazon S3. Click the [+] button below the table displayed to add as many rows as
needed, each row for a client parameter, and set the following attributes for
each parameter:

  • Client Parameter:
    Click the cell and select a parameter from the drop-down list.

  • Value: Enter the
    value for the corresponding client parameter.

For information about S3 client parameters, see Client Configuration.
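
These client parameters correspond to the AWS SDK client configuration; a sketch with a
few commonly used settings follows (the proxy host is hypothetical).

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class ClientConfigSketch {
        public static void main(String[] args) {
            // Client parameters comparable to the Config client table entries.
            ClientConfiguration config = new ClientConfiguration();
            config.setConnectionTimeout(30_000);      // milliseconds
            config.setSocketTimeout(60_000);          // milliseconds
            config.setMaxErrorRetry(5);               // retry count for failed requests
            config.setProxyHost("proxy.example.com"); // hypothetical proxy
            config.setProxyPort(8080);

            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion("us-east-1")
                    .withClientConfiguration(config)
                    .build();
        }
    }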

tStatCatcher Statistics

Select this check box to gather the Job
processing metadata at the Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio User Guide.
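
For example, in a tJava component connected after this component, the variables can be
read from the globalMap; the component name tRedshiftOutputBulk_1 below is the default
and depends on your Job:

    // Talend expression context: globalMap is available in tJava code.
    Integer processed = (Integer) globalMap.get("tRedshiftOutputBulk_1_NB_LINE");
    String error = (String) globalMap.get("tRedshiftOutputBulk_1_ERROR_MESSAGE");
    System.out.println("Rows written: " + processed);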

Usage

Usage rule

This component is more commonly used with the
tRedshiftBulkExec component to feed Amazon Redshift with a delimited/CSV file. Used
together, they offer performance gains when feeding Amazon Redshift.

Related scenario

For a related scenario, see Loading/unloading data to/from Amazon S3.


Source: Talend documentation, https://help.talend.com