August 17, 2023

tFileOutputPositional – Docs for ESB 5.x

tFileOutputPositional


tFileOutputPositional Properties

Component Family

File/Output

 

Function

tFileOutputPositional writes a
file row by row according to the length and the format of the fields
or columns in a row.

If you have subscribed to one of the Talend solutions with Big Data, you can
use this component in a Talend Map/Reduce Job to generate
Map/Reduce code. For further information, see tFileOutputPositional in Talend
Map/Reduce Jobs.

Purpose

It writes a file row by row, according to the data structure
(schema) coming from the input flow.

Basic settings

Property type

Either Built-in or Repository.

 

 

Built-in: No property data stored
centrally.

 

 

Repository: Select the repository
file where the properties are stored. The fields that follow are
completed automatically using the data retrieved.

Use existing dynamic

Select this check box to reuse an existing dynamic schema to
handle data from unknown columns.

When this check box is selected, a Component
list
appears allowing you to select the component
used to set the dynamic schema.

 

Use Output Stream

Select this check box to process the data flow of interest. Once you
have selected it, the Output Stream
field appears, where you can enter the variable holding the data flow of interest.

The data flow to be processed must be added to the flow so that
this component can fetch the data via the corresponding
variable.

This variable may already be pre-defined in your Studio or
provided by the context or by the components you are using along with
this component; otherwise, you can define it manually and use it
according to the design of your Job, for example, using tJava or tJavaFlex.

To avoid typing errors, you can select the variable of interest
from the auto-completion list (Ctrl+Space) to fill the
current field, provided that the variable has been properly
defined.

For further information about how to use a stream, see Scenario 2: Reading data from a remote file in streaming mode.
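As an illustrative sketch, the snippet below simulates how a stream variable might be prepared in a tJava component and then consumed through the Output Stream field. The key name out_stream and the in-memory stream are assumptions made for this example; in a real Job, the globalMap is provided by the generated code.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

public class OutputStreamDemo {
    // Simulates storing a stream in the Job's globalMap from a tJava
    // component; in a real Job, globalMap comes from the generated code.
    static String demo() {
        try {
            Map<String, Object> globalMap = new HashMap<>();
            // tJava step: create the stream under a key of your choice.
            globalMap.put("out_stream", new ByteArrayOutputStream());
            // The Output Stream field would then reference the variable:
            // (java.io.OutputStream) globalMap.get("out_stream")
            OutputStream out = (OutputStream) globalMap.get("out_stream");
            out.write("POSITIONAL ROW\n".getBytes("UTF-8"));
            out.close();
            return globalMap.get("out_stream").toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.print(demo());
    }
}
```

In a real Job, you would typically use a java.io.FileOutputStream or a stream returned by another component instead of the in-memory stream used here for demonstration.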

 

File Name

Name of, or path to, the file to be processed and/or the variable to be
used.

This field becomes unavailable once you have selected the
Use Output Stream check
box.

For further information about how to define and use a variable in
a Job, see Talend Studio
User Guide.

 

Schema
and
Edit Schema

A schema is a row description, that is to say, it defines the
number of fields to be processed and passed on to the next
component. The schema is either Built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

This component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic schemas,
see Talend Studio
User Guide.

This dynamic schema feature is designed for the purpose of retrieving unknown columns
of a table and should be used for that purpose only; it is not recommended
for creating tables.

 

 

Built-in: You create and store
the schema locally for this component only. Related topic: see
Talend Studio User Guide.

 

 

Repository: You have already
created the schema and stored it in the Repository. You can reuse it
in various projects and Job designs. Related topic: see
Talend Studio User Guide.

 

Row separator

Enter the separator used to identify the end of a row.

 

Append

Select this check box to add the new rows at the end of the
file.

 

Include header

Select this check box to include the column header to the
file.

 

Compress as zip file

Select this check box to compress the output file in zip
format.

 

Formats

Customize the positional file data format and fill in the columns
in the Formats table.

Column: Select the column you
want to customize.

Size: Enter the column
size.

Padding char: Type in, between
quotes, the padding character to be used. It is a space by default.

Alignment: Select the appropriate
alignment parameter.

Keep: If the data in the column or
in the field are too long, select the part you want to keep.
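The padding, alignment, and keep rules described above can be sketched in plain Java. This is an illustrative approximation of the behavior, not the component's actual implementation:

```java
public class PositionalField {
    // Format a value to a fixed width: pad with padChar on the chosen
    // side, or keep the left or right part when the value is too long.
    static String format(String value, int size, char padChar,
                         boolean alignLeft, boolean keepLeft) {
        if (value.length() > size) {
            // Keep option: which part of an oversized value survives.
            return keepLeft ? value.substring(0, size)
                            : value.substring(value.length() - size);
        }
        StringBuilder pad = new StringBuilder();
        for (int i = value.length(); i < size; i++) {
            pad.append(padChar);
        }
        // Alignment option: pad on the right (left-aligned) or on the
        // left (right-aligned).
        return alignLeft ? value + pad : pad + value;
    }

    public static void main(String[] args) {
        System.out.println("[" + format("42", 5, '0', false, true) + "]");
        System.out.println("[" + format("Paris", 3, ' ', true, true) + "]");
    }
}
```

For instance, a numeric column of size 5 padded with "0" and right-aligned would render 42 as 00042, while a size-3 column would truncate Paris to Par or ris depending on the Keep setting.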

Advanced settings

Advanced separator (for numbers)

Select this check box to modify the separators used for
numbers:

Thousands separator: define
separators for thousands.

Decimal separator: define
separators for decimals.
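These separator options behave like custom symbols in standard Java number formatting. The sketch below, using java.text.DecimalFormat, is an analogy to show the effect of the two settings, not the component's internal code:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

public class SeparatorDemo {
    // Format a number with user-chosen thousands and decimal separators,
    // mirroring the Thousands separator and Decimal separator options.
    static String format(double n, char thousands, char decimal) {
        DecimalFormatSymbols sym = new DecimalFormatSymbols();
        sym.setGroupingSeparator(thousands);
        sym.setDecimalSeparator(decimal);
        DecimalFormat df = new DecimalFormat("#,##0.00", sym);
        return df.format(n);
    }

    public static void main(String[] args) {
        // European-style output: dot for thousands, comma for decimals.
        System.out.println(format(1234567.89, '.', ','));
        // Space for thousands, dot for decimals.
        System.out.println(format(1234567.89, ' ', '.'));
    }
}
```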

 

Use byte length as the cardinality

Select this check box to add support for double-byte characters to
this component. JDK 1.6 is required for this feature.

 

Create directory if not exists

This check box is selected by default. It creates a directory to
hold the output table if it does not exist.

 

Custom the flush buffer size

Select this check box to define the number of lines to write
before emptying the buffer.

Row Number: set the number of
lines to write.
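The effect of flushing the buffer every N rows can be sketched as follows; the writer and row contents are stand-ins chosen for illustration, not the component's actual code:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;

public class FlushEveryN {
    // Write `count` rows, emptying the buffer every `flushEvery` rows,
    // as the Row Number setting does; StringWriter stands in for a file.
    static String writeRows(int count, int flushEvery) {
        try {
            StringWriter target = new StringWriter();
            BufferedWriter out = new BufferedWriter(target);
            for (int row = 1; row <= count; row++) {
                out.write("row " + row);
                out.write("\n");
                if (row % flushEvery == 0) {
                    out.flush(); // push the buffered rows to the target now
                }
            }
            out.close(); // a final flush happens on close
            return target.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(writeRows(250, 100).split("\n").length);
    }
}
```

A smaller flush interval limits how much data can be lost if the Job fails mid-write, at the cost of more frequent I/O.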

 

Output in row mode

Select this check box to write the data in row mode.

 

Encoding

Select the encoding from the list or select Custom and
define it manually. This field is compulsory for database data handling.

 

Don’t generate empty file

Select this check box if you do not want to generate empty
files.

 

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a
Job level as well as at each component level.

Dynamic settings

Click the [+] button to add a row in the table and fill the
Code field with a context variable to choose your HDFS
connection dynamically from multiple connections planned in your Job. This feature is useful
when you need to access files in different HDFS systems or different distributions,
especially when you are working in an environment where you cannot change your Job settings,
for example, when your Job has to be deployed and executed independent of Talend Studio.

The Dynamic settings table is available only when the
Use an existing connection check box is selected in the
Basic settings view. Once a dynamic parameter is
defined, the Component List box in the Basic settings view becomes unusable.

For more information on Dynamic settings and context
variables, see Talend Studio User Guide.

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.
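For example, an After variable is conventionally read from the Job's globalMap using a key of the form component-name_VARIABLE. The map is simulated below, since in a real Job it is provided by the generated code:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarsDemo {
    // Retrieve the NB_LINE After variable for a given component; the
    // key naming convention is <component name>_NB_LINE.
    static int nbLine(Map<String, Object> globalMap, String componentName) {
        return (Integer) globalMap.get(componentName + "_NB_LINE");
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        // After tFileOutputPositional_1 has run, the Job records its counter:
        globalMap.put("tFileOutputPositional_1_NB_LINE", 1250);
        // Typical retrieval in a downstream tJava component:
        System.out.println("Rows written: "
                + nbLine(globalMap, "tFileOutputPositional_1"));
    }
}
```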

Usage

Use this component to write a file row by row, with each field set
to a fixed length as defined in the Formats table.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User
Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

tFileOutputPositional in Talend
Map/Reduce Jobs

Warning

The information in this section is only for users that have subscribed to one of
the Talend solutions with Big Data and is not applicable to
Talend Open Studio for Big Data users.

In a Talend Map/Reduce Job, tFileOutputPositional, as well as the whole Map/Reduce Job using it,
generates native Map/Reduce code. This section presents the specific properties of
tFileOutputPositional when it is used in that
situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.

Component family

MapReduce/Output

 

Basic settings

Property type

Either Built-in or Repository.

 

 

Built-in: No property data stored
centrally.

 

 

Repository: reuse properties
stored centrally under the Hadoop
Cluster
node of the Repository tree.

The fields that follow are automatically filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

 

Click the save icon to open a database connection wizard and store the database connection
parameters you set in the component Basic settings
view.

For more information about setting up and storing database connection parameters, see
Talend Studio User Guide.

 

Schema and Edit
Schema

A schema is a row description; it defines the number of fields
that will be processed and passed on to the next component. The
schema is either Built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

 

 

Built-in: The schema will be
created and stored locally for this component only. Related topic:
see Talend Studio User Guide.

 

 

Repository: The schema already
exists and is stored in the Repository, and hence can be reused in
various projects and Job designs. Related topic: see
Talend Studio User Guide.

 

Folder

Browse to, or enter the directory in HDFS where the data you need to use is.

This path must point to a folder rather than a file, because a
Talend Map/Reduce
Job need to write in its target folder not only the final result but
also multiple part- files
generated in performing Map/Reduce computations.

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.

 

Action

Select an operation for writing data:

Create: Creates a file and writes
data in it.

Overwrite: Overwrites the file
existing in the directory specified in the Folder field.

 

Compress the data

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.

 

Formats

Customize the positional file data format and fill in the columns
in the Formats table.

Column: Select the column you
want to customize.

Size: Enter the column
size.

Padding char: Type in, between
quotes, the padding character to be used. It is a space by default.

Alignment: Select the appropriate
alignment parameter.

Keep: If the data in the column or
in the field are too long, select the part you want to keep.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.

Usage

In a Talend Map/Reduce Job, it is used as an end component and requires
a transformation component as its input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileOutputPositional as well as the
MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say, traditional Talend data
integration Jobs rather than Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenario

For a related scenario, see Scenario: Regex to Positional file.

For a scenario about the usage of the Use Output Stream
check box, see Scenario 2: Utilizing Output Stream to save filtered data to a local file.


Document retrieved from Talend: https://help.talend.com