July 30, 2023

tS3Output – Docs for ESB 7.x

tS3Output

Writes data into a given S3 filesystem.

The tS3Output component receives data processed by its preceding
component and writes the data into a given S3 filesystem. By default, the
format used to write the data is S3N (S3 Native Filesystem).

tS3Output MapReduce properties (deprecated)

These properties are used to configure tS3Output running in the MapReduce Job framework.

The MapReduce tS3Output component belongs to the MapReduce family.

The information in this section applies only to users who have subscribed to
Talend Data Fabric or to any Talend product with Big Data; it is not
applicable to Talend Open Studio for Big Data users.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The following fields are pre-filled using the fetched data.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Force S3 format instead of the recommended
S3N

Select this check box to use the S3 format to write the data.
Otherwise, the default S3N format is used.

Note that the S3N format allows you to write structured data, whereas
the S3 format stores the data flat.
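
For reference, Hadoop addresses the two filesystems through different URI
schemes; a minimal illustration, with hypothetical bucket and folder names:

    s3n://my-bucket/my-folder    (S3 Native Filesystem, the default)
    s3://my-bucket/my-folder     (S3 block filesystem, used when this check box is selected)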

Bucket and Folder

Enter the name of the bucket and of the folder in it that you need to use,
separating the bucket name and the folder name with a slash (/).
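
For example, to write into a folder named output inside a bucket named
my-bucket (both names hypothetical), you would enter the value as a Java
string literal between double quotes:

    "my-bucket/output"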

Access key and Secret
key

Enter the authentication information required to connect to
the Amazon S3 bucket to be used.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.
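
For example, with a hypothetical secret key, you would type the following
between double quotes in the dialog box:

    "myS3SecretKey123"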

Type

Select the type of the file to be processed. The type of the file may be:

  • Text file.

  • Sequence file: a Hadoop sequence file
    consists of binary key/value pairs and is suitable for the Map/Reduce framework.
    For further information, see http://wiki.apache.org/hadoop/SequenceFile.

    Once you select the Sequence file format, the
    Key column list and the Value column list appear to allow you to select the
    keys and the values of that Sequence file to be processed.
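
    For context only, the sketch below shows what writing such key/value
    pairs looks like with the plain Hadoop API; this is not the code the
    Studio generates, and the path, key, and value are hypothetical:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.SequenceFile;
        import org.apache.hadoop.io.Text;

        public class SequenceFileSketch {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // A sequence file stores binary key/value pairs; here both are Text.
                Path out = new Path("s3n://my-bucket/output/part-00000"); // hypothetical path
                try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                        SequenceFile.Writer.file(out),
                        SequenceFile.Writer.keyClass(Text.class),
                        SequenceFile.Writer.valueClass(Text.class))) {
                    writer.append(new Text("42"), new Text("Smith;2023-07-30"));
                }
            }
        }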

Action

Select an operation for writing data:

Create: Creates a file and writes
data to it.

Overwrite: Overwrites the existing file
in the directory specified in the Folder field.

Row separator

The separator used to identify the end of a row.

Field separator

Enter a character, a string, or a regular expression to separate the fields of the transferred
data.
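
For example, separator values are entered as Java string literals between
double quotes; the following illustrative settings produce a
semicolon-delimited file with one record per line:

    Row separator:   "\n"
    Field separator: ";"     (or "\t" for tab-separated output)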

Include Header

Select this check box to output the header of the data.

This option is not available for a Sequence file.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

This option is not available for a Sequence file.
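
For example, "UTF-8" covers most cases, and "ISO-8859-15" is a common choice
for Western European data; these values are illustrative, so pick the
encoding that matches how the data was stored.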

Compress the data

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.
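
For context only, the sketch below shows the roughly equivalent switches at
the plain Hadoop level; the Studio generates its own configuration, so this
is an illustration, not the generated code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;

    public class CompressionSketch {
        public static Configuration compressedOutputConf() {
            Configuration conf = new Configuration();
            // Ask MapReduce to compress the job's output files with gzip
            // (the codec choice here is illustrative).
            conf.setBoolean("mapreduce.output.fileoutputformat.compress", true);
            conf.setClass("mapreduce.output.fileoutputformat.compress.codec",
                    GzipCodec.class, CompressionCodec.class);
            return conf;
        }
    }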

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

This option is not available for a Sequence file.
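
For example, to write numbers in a common European format, you could set the
thousands separator to a period (.) and the decimal separator to a comma (,),
so that 1234567.89 is output as 1.234.567,89.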

Use local timezone for date

Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.
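
For example, a later subJob or a tJava component can read the variable
through Talend's globalMap; the instance name tS3Output_1 below is
hypothetical and must match the actual component name in your Job:

    // After variable: available once tS3Output_1 has finished executing.
    String err = (String) globalMap.get("tS3Output_1_ERROR_MESSAGE");
    if (err != null) {
        System.err.println("tS3Output_1 reported: " + err);
    }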

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

In a Talend Map/Reduce Job, it is used as an end component and requires
a transformation component as input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tS3Output as well as the MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional Talend data integration Jobs,
not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

When you configure this connection for tS3Output, you need to select the Use Datanode hostname check box.

This connection is effective on a per-Job basis.

Related scenario

This component is used in a similar way to the other output components that write data
into a given filesystem. Note, however, that when you configure the Hadoop connection in the
Hadoop configuration tab of the Run view, you need to select the Use
Datanode hostname
check box.


Document retrieved from Talend: https://help.talend.com