
tS3Input – Docs for ESB 5.x

tS3Input


tS3Input properties

Component family

MapReduce/Input

 

Function

The tS3Input component loads files from the S3N (S3 Native
Filesystem) into the MapReduce process you are designing.

This component, along with the MapReduce family it belongs to, appears only when you are
creating a Map/Reduce Job.

Purpose

tS3Input reads data from a given
S3N system (S3 Native Filesystem).

Basic settings

Property type

Either Built-in or Repository.

 

 

Built-in: No property data stored
centrally.

 

 

Repository: Select the Repository
file where the properties are stored. The fields that follow are
pre-filled with the fetched data.

 

Schema and Edit Schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository Content] window.

   

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

   

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

 

Bucket and Folder

Enter the bucket name and the folder from which you need to read data. You need to separate
the bucket name and the folder name using a slash (/).
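For example, a value such as mybucket/input/sales (the names here are hypothetical) points
to the input/sales folder in the mybucket bucket.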

 

Access key and Secret key

Enter the authentication information required to connect to the Amazon S3 bucket to be
used.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.
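The Studio passes these credentials to Hadoop for you, so no code is needed. Purely as a
point of reference, the sketch below shows the underlying Hadoop configuration properties
(fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey) that an S3N connection relies on;
the bucket, folder, and key values are hypothetical placeholders, not generated Job code.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3nConnectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical placeholders; in the Studio these come from the
            // Access key and Secret key fields.
            conf.set("fs.s3n.awsAccessKeyId", "MY_ACCESS_KEY");
            conf.set("fs.s3n.awsSecretAccessKey", "MY_SECRET_KEY");
            // A Bucket and Folder value of "mybucket/input/sales" maps to
            // the URI s3n://mybucket/input/sales.
            FileSystem fs = FileSystem.get(URI.create("s3n://mybucket/"), conf);
            for (FileStatus status : fs.listStatus(new Path("s3n://mybucket/input/sales"))) {
                System.out.println(status.getPath());
            }
        }
    }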

File type

Type

Select the type of the file to be processed. The type of the file may be:

  • Text file.

  • Sequence file: a Hadoop sequence file
    consists of binary key/value pairs and is suitable for the Map/Reduce framework.
    For further information, see http://wiki.apache.org/hadoop/SequenceFile.

    Once you select the Sequence file format, the
    Key column list and the Value column list appear to allow you to select the
    keys and the values of that Sequence file to be processed. A minimal reading
    sketch follows this list.
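As a rough illustration of the key/value structure the component exposes, the following
standalone sketch dumps a sequence file using the plain Hadoop API. The file path is a
hypothetical placeholder, and this is not the code the Studio generates.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class SequenceFileDump {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical path; any readable sequence file will do.
            Path path = new Path("s3n://mybucket/input/sales/part-00000");
            FileSystem fs = FileSystem.get(URI.create("s3n://mybucket/"), conf);
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
            try {
                // Instantiate the key and value types recorded in the file header.
                Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
                Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
                while (reader.next(key, value)) {
                    System.out.println(key + "\t" + value);
                }
            } finally {
                reader.close();
            }
        }
    }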

 

Row separator

Enter the separator used to identify the end of a row.

 

Field separator

Enter a character, a string, or a regular expression to separate fields in the transferred
data.
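For example, a field separator of ";" splits the row id;name;amount into three fields,
while a regular expression such as \t+ treats any run of tab characters as a single
separator.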

 

Header

Enter the number of rows to be skipped at the beginning of the file.
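For example, enter 1 to skip a single title row at the top of the file.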

 

Custom encoding

You may encounter encoding issues when you process the stored data. In that situation, select
this check box to display the Encoding list.

Select the encoding from the list or select Custom and
define it manually. This field is compulsory for database data handling.

This option is not available for a Sequence file.

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By
default, the thousands separator is a comma (,) and the decimal separator is a period (.).

This option is not available for a Sequence file.
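For example, with the separators swapped (thousands separator set to a period and decimal
separator set to a comma), an input value of 1.234,56 is interpreted as the number 1234.56.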

 

Trim all column

Select this check box to remove the leading and trailing
whitespace from all columns. When this check box is cleared, the
Check column to trim table is
displayed, which lets you select particular columns to trim.

This option is not available for a Sequence file.

 

Check column to trim

This table is filled automatically with the schema being used. Select the check box(es)
corresponding to the column(s) to be trimmed.

This option is not available for a Sequence file.

 

Enable parallel execution

Select this check box to perform high-speed data processing by treating multiple data flows
simultaneously. Note that this feature depends on the ability of the database or the
application to handle multiple inserts in parallel, as well as on the number of CPUs
available. In the Number of parallel executions field, either:

  • Enter the number of parallel executions desired.

  • Press Ctrl + Space and select the appropriate
    context variable from the list. For further information, see Talend Studio
    User Guide.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.
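For instance, in a component such as tJava placed downstream, the variable can be read from
the Job's globalMap. The instance name tS3Input_1 below is a hypothetical example and must
match the component's actual label in your Job.

    // Hypothetical component instance name; adjust to match your Job.
    String error = (String) globalMap.get("tS3Input_1_ERROR_MESSAGE");
    if (error != null) {
        System.err.println("tS3Input reported: " + error);
    }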

Usage

In a Talend Map/Reduce Job, it is used as a start component and requires
a transformation component as its output link. The other components used along with it must
be Map/Reduce components too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tS3Input as well as the MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data
integration Jobs rather than Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenario

This component is used in a similar way to the other input components that read data
from a given filesystem. Note, however, that when you configure the Hadoop connection in
the Hadoop configuration tab of the Run view, you need to select the
Use Datanode hostname check box.

