July 31, 2023

tFileOutputDelimited – Docs for ESB File Delimited 7.x

tFileOutputDelimited

Outputs the input data to a delimited file according to the defined
schema.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tFileOutputDelimited Standard properties

These properties are used to configure tFileOutputDelimited running in the Standard Job framework.

The Standard
tFileOutputDelimited component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Use Output Stream

Select this check box to process the data flow of
interest. Once you select it, the Output
Stream field displays and you can type in the data flow of
interest.

The data flow to be processed must be added to the flow so that this component can fetch
the data via the corresponding representative variable.

This variable may already be
pre-defined in your Studio or provided by the context or by the components you
are using along with this component; otherwise, you can define it manually
and use it according to the design of your Job, for example, using tJava or tJavaFlex.

To avoid typing the variable by hand, you can select the variable of interest from the
auto-completion list (Ctrl+Space) to fill
in the current field, provided that the variable has been properly
defined.

For further information about how to use a stream,
see Reading data from a remote file in streaming mode.

File Name

Name or path to the output file and/or the variable
to be used.

This field becomes unavailable once you have
selected the Use Output Stream check
box.

For further information about how to define and use a
variable in a Job, see
Talend Studio

User Guide.

Warning: Use an absolute path (instead of a relative path) for
this field to avoid possible errors.

Row Separator

The separator used to identify the end of a row.

Field Separator

Enter a character, string, or regular expression to separate fields in the transferred
data.

Append

Select this check box to add the new rows at the end
of the file.

Include Header

Select this check box to include the column header in
the file.

Compress as zip file

Select this check box to compress the output file in
zip format.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

This
component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic schemas,
see
Talend Studio

User Guide.

This dynamic schema feature is designed for retrieving unknown columns of a
table and is recommended for that purpose only; it is not recommended for
creating tables.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Sync columns

Click to synchronize the output file schema with the
input file schema. The Sync function only displays once the Row connection is linked with the output
component.

Advanced settings

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define separators
for thousands.

Decimal separator: define separators for
decimals.
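The effect of swapping the two separators can be sketched in plain Java with the standard DecimalFormat class; the separator choice below (period for thousands, comma for decimals, common in continental Europe) is only an example, not what the component enforces:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class NumberSeparatorDemo {
    // Format a number with custom thousands and decimal separators,
    // mirroring what the "Advanced separator (for numbers)" option does.
    public static String format(double value) {
        DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.ROOT);
        symbols.setGroupingSeparator('.');  // thousands separator
        symbols.setDecimalSeparator(',');   // decimal separator
        DecimalFormat df = new DecimalFormat("#,##0.00", symbols);
        return df.format(value);
    }

    public static void main(String[] args) {
        System.out.println(format(1234567.89)); // 1.234.567,89
    }
}
```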

CSV options

Select this check box to specify the following CSV parameters:

  • Escape char: enter the escape character between double
    quotation marks.

  • Text enclosure: enter the enclosure character (only one
    character) between double quotation marks. For example, """
    needs to be entered when double quotation marks (“) are used as the enclosure
    character.

It is recommended to use the standard escape character, that is, "";
otherwise, set the same character for both Escape char and
Text enclosure. If the escape character is set to
"", the text enclosure can be set to any other character.
If the escape character is set to a character other than
"", the text enclosure can in principle still be set to any character;
however, the escape character is then changed to the same character as the text
enclosure. For instance, if the escape character is set to "#"
and the text enclosure is set to "@", the escape character actually
used is "@", not "#".
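The doubling behavior of the standard "" escape can be illustrated with a small Java helper; this is a sketch of standard CSV quoting, not the component's actual implementation:

```java
public class CsvEscapeDemo {
    // Standard CSV escaping: each occurrence of the text enclosure (")
    // inside a field is escaped by doubling it, then the whole field is
    // wrapped in the enclosure character.
    public static String enclose(String field) {
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        // A field containing both the enclosure and the field separator:
        System.out.println(enclose("He said \"hi\";bye")); // "He said ""hi"";bye"
    }
}
```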

Create directory if not exists

This check box is selected by default. It creates the directory
that holds the output delimited file, if it does not already exist.

Split output in several files

In case of very big output files, select this check box to
divide the output delimited file into several files.

Rows in each output file: set the number
of lines in each of the output files.

Custom the flush buffer size

Select this check box to define the number of lines to write
before emptying the buffer.

Row Number: set the number of lines to
write.
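The buffering behavior described above can be sketched in plain Java; this is a simplified model of the option, not the code the Studio generates, and the file path and row format are illustrative:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.List;

public class FlushBufferDemo {
    // Write rows and flush the buffer every flushEvery lines, mirroring
    // the "Custom the flush buffer size" / "Row Number" settings.
    public static void write(List<String> rows, String path, int flushEvery)
            throws IOException {
        try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
            int count = 0;
            for (String row : rows) {
                out.write(row);
                out.newLine();
                if (++count % flushEvery == 0) {
                    out.flush(); // empty the buffer after every flushEvery rows
                }
            }
        }
    }
}
```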

Output in row mode

Select this check box to ensure atomicity of the flush so that
each row of data can remain consistent as a set and incomplete rows of data are
never written to a file.

This check box is mostly useful when using this component in
multi-threaded situations.

Encoding

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.
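In plain Java terms, selecting an encoding corresponds to wrapping the file stream in an OutputStreamWriter with an explicit Charset; the encoding name used by the caller is just an example:

```java
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.Charset;

public class EncodingDemo {
    // Write delimited content with an explicit encoding, as the Encoding
    // list does; the caller picks the charset name (e.g. "ISO-8859-15").
    public static void write(String path, String content, String encoding)
            throws java.io.IOException {
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(path), Charset.forName(encoding))) {
            out.write(content);
        }
    }
}
```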

Don’t generate empty file

Select this check box if you do not want to generate empty
files.

Throw an error if the file already exists

Select this check box to throw an exception if the output file specified in the
File Name field on the Basic
settings
view already exists.

Clear this check box to overwrite the existing file.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at
a Job level as well as at each component level.

Global Variables

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

FILE_NAME: the name of the file being processed. This is
a Flow variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.
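For example, in a tJava component running after this one, NB_LINE and FILE_NAME can be read from the Job's globalMap. The component name tFileOutputDelimited_1 below is an assumption, and the map is simulated here so the lookup pattern can run standalone:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarDemo {
    // The generated Job code exposes variables as "<component>_<VARIABLE>"
    // entries in globalMap; this helper performs the typed lookup.
    public static Integer nbLine(Map<String, Object> globalMap, String component) {
        return (Integer) globalMap.get(component + "_NB_LINE");
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>(); // simulated here
        globalMap.put("tFileOutputDelimited_1_NB_LINE", 42);
        System.out.println(nbLine(globalMap, "tFileOutputDelimited_1") + " rows written");
    }
}
```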

Usage

Usage rule

Use this component to write
a delimited file and separate fields using a field separator value.

Limitation

Due to license incompatibility, one or more JARs required to use
this component are not provided. You can install the missing JARs for this particular
component by clicking the Install button
on the Component tab view. You can also
find out and add all missing JARs easily on the Modules tab in the
Integration
perspective of your studio. You can find more details about how to install external modules in
Talend Help Center (https://help.talend.com).

Writing data in a delimited file

This scenario describes a three-component Job that extracts certain data from a file
holding information about customers and then writes the
extracted data to a delimited file.

In the following example, we have already stored the input schema under the Metadata node in the Repository tree view. For more information about storing schema metadata
in the Repository, see
Talend Studio User Guide
.

tFileOutputDelimited_1.png

Dropping and linking components

  1. In the Repository tree view, expand
    Metadata and File
    delimited
    in succession and then browse to your input schema,
    customers, and drop it on the design workspace. A
    dialog box displays where you can select the component type you want to
    use.

    tFileOutputDelimited_2.png

  2. Click tFileInputDelimited and then
    OK to close the dialog box. A tFileInputDelimited component holding the name of
    your input schema appears on the design workspace.
  3. Drop a tMap component and a tFileOutputDelimited component from the Palette to the design workspace.
  4. Link the components together using Row >
    Main connections.

Configuring the components

Configuring the input component

  1. Double-click tFileInputDelimited to open
    its Basic settings view. All its property
    fields are automatically filled in because you defined your input file
    locally.

    tFileOutputDelimited_3.png

  2. If you do not define your input file locally in the Repository tree view, fill in the details manually after
    selecting Built-in in the Property type list.
  3. Click the […] button next to the
    File Name field and browse to the input
    file, customer.csv in this example.

    Warning:

    If the path of the file contains some accented characters, you
    will get an error message when executing your Job.

  4. In the Row Separator and Field Separator fields, enter
    "\n" and ";" respectively as line and field
    separators.
  5. If needed, set the number of lines used as header and the number of lines
    used as footer in the corresponding fields and then set a limit for the
    number of processed rows.

    In this example, Header is set to 6 while
    Footer and Limit are not set.
  6. The Schema field is
    automatically set to Repository
    and your schema is already defined, since you stored your input file locally
    for this example. Otherwise, select Built-in and click the […] button next to Edit
    Schema
    to open the Schema
    dialog box, define the input schema there, and then click OK to close the dialog box.

    tFileOutputDelimited_4.png

Configuring the mapping component

  1. In the design workspace, double-click tMap to open its editor.

    tFileOutputDelimited_5.png

  2. In the tMap editor, click

    tFileOutputDelimited_6.png

    on top of the panel to the right to open the Add a new output table dialog box.

    tFileOutputDelimited_7.png

  3. Enter a name for the table you want to create, row2
    in this example.
  4. Click OK to validate your changes and
    close the dialog box.
  5. In the table to the left, row1, select the first
    three lines (Id, CustomerName and
    CustomerAddress) and drop them to the table to the
    right.
  6. In the Schema editor view situated in the
    lower left corner of the tMap editor,
    change the type of RegisterTime to String in the table to the right.

    tFileOutputDelimited_8.png

  7. Click OK to save your changes and close
    the editor.

Configuring the output component

  1. In the design workspace, double-click tFileOutputDelimited to open its Basic
    settings
    view and define the component properties.

    tFileOutputDelimited_9.png

  2. In the Property Type field, set the type
    to Built-in and fill in the fields that
    follow manually.
  3. Click the […] button next to the
    File Name field and browse to the
    output file you want to write data in,
    customerselection.txt in this example.
  4. In the Row Separator and Field Separator fields, set
    "\n" and ";" respectively as
    row and field separators.
  5. Select the Include Header check box if
    you want to output the column headers as well.
  6. Click Edit schema to open the schema
    dialog box and verify that the retrieved schema corresponds to the input
    schema. If not, click Sync Columns to
    retrieve the schema from the preceding component.

Saving and executing the Job

  1. Press Ctrl+S to save your Job.
  2. Press F6 or click Run on the Run tab to
    execute the Job.

    tFileOutputDelimited_10.png

    The three specified columns Id,
    CustomerName and
    CustomerAddress are output in the defined output
    file.

    For an example of how to use dynamic schemas with tFileOutputDelimited, see Writing dynamic columns from a database to an output file.

Utilizing Output Stream to save filtered data to a local
file

Based on the preceding scenario, this scenario saves the filtered data to a
local file using an output stream.

tFileOutputDelimited_11.png

Dropping and linking components

  1. Drop tJava from the Palette to the design workspace.
  2. Connect tJava to tFileInputDelimited using a Trigger > On Subjob OK
    connection.

Configuring the components

  1. Double-click tJava to open its Basic settings view.

    tFileOutputDelimited_12.png

  2. In the Code area, type in the following
    command:

    Note:

    In this scenario, the command used in the Code area of tJava creates
    a new folder, C:/myFolder, where the output
    file customerselection.txt will be saved. You can
    customize the command according to your actual needs.

  3. Double-click tFileOutputDelimited to open
    its Basic settings view.

    tFileOutputDelimited_13.png

  4. Select the Use Output Stream check box to
    enable the Output Stream field, in which you
    can define the output stream using a command.

    Fill in the Output Stream field with
    the following command:
    Note:

    You can customize the command in the Output
    Stream
    field by pressing Ctrl+Space to
    select a built-in command from the list, or by typing the command into
    the field manually, according to your actual needs. In this
    scenario, the command used in the Output
    Stream
    field calls the
    java.io.OutputStream class to output the filtered
    data stream to the local file defined in the Code area of tJava.

  5. Click Sync columns to retrieve the schema
    defined in the preceding component.
  6. Leave the rest of the components as they were in the previous scenario.
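The tJava command (step 2) and the Output Stream expression (step 4) are not reproduced in this extract. Plausible forms, assumed from the notes above (the exact code in the original scenario may differ), are:

```java
// Step 2 – tJava Code area: create the folder that will hold the output
// file (path taken from the note above; exact command assumed):
new java.io.File("C:/myFolder").mkdirs();

// Step 4 – Output Stream field: an expression that returns a
// java.io.OutputStream pointing at the output file (opened and closed
// here only to make the sketch self-contained):
java.io.OutputStream out = new java.io.FileOutputStream("C:/myFolder/customerselection.txt");
out.close();
```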

Saving and executing the Job

  1. Press Ctrl+S to save
    your Job.
  2. Press F6 or click
    Run on the Run tab to execute the Job.

    The three specified columns Id, CustomerName and CustomerAddress
    are output in the defined output file.
    tFileOutputDelimited_10.png

    For an example of how to use dynamic schemas with
    tFileOutputDelimited, see Writing dynamic columns from a database to an output file.

tFileOutputDelimited MapReduce properties (deprecated)

These properties are used to configure tFileOutputDelimited running in the MapReduce Job framework.

The MapReduce
tFileOutputDelimited component belongs to the MapReduce family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that follow are pre-filled using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

tFileOutputDelimited_15.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings
view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder

Browse to, or enter the path pointing to the data to be used in the file system.

This path must point to a folder rather than a file, because a
Talend
Map/Reduce Job needs to write in its target folder not only the
final result but also multiple part- files generated while
performing the Map/Reduce computations.

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.

Action

Select an operation for writing data:

Create: Creates a file and writes data in
it.

Overwrite: Overwrites the file existing
in the directory specified in the Folder
field.

Row separator

The separator used to identify the end of a row.

Field separator

Enter a character, string, or regular expression to separate fields in the transferred
data.

Include Header

Select this check box to include the column header in the
file.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

Compress the data

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.

Merge result to single file

Select this check box to merge the final part files into a single file and put that file in a
specified directory.

Once you select it, enter the path to, or browse to, the
folder in which you want to store the merged file. This directory is
created automatically if it does not exist.

The following check boxes are used to manage the source and the
target files:

  • Remove source dir: select
    this check box to remove the source files after the merge.

  • Override target file:
    select this check box to override the file already existing in the target
    location. This option does not override the folder.

If this component is writing merged files with a Databricks cluster, add the following
parameter to the Spark configuration console of your
cluster:

This parameter prevents the merged file from including the log file automatically
generated by the DBIO service of Databricks.

This option is not available for a Sequence file.

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

This option is not available for a Sequence file.

CSV options

Select this check box to include CSV specific parameters such as Escape char and Text
enclosure
.

Use local timezone for date

Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

Usage

Usage rule

In a
Talend
Map/Reduce Job, it is used as an end component and requires
a transformation component as input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileOutputDelimited as well as the MapReduce
family appears in the Palette of the
Studio.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say, traditional
Talend
data integration Jobs rather than Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileOutputDelimited properties for Apache Spark Batch

These properties are used to configure tFileOutputDelimited running in the Spark Batch Job framework.

The Spark Batch
tFileOutputDelimited component belongs to the File family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that follow are pre-filled using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

tFileOutputDelimited_15.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings
view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder

Browse to, or enter the path pointing to the data to be used in the file system.

Note that this path must point to a folder rather than a file.

The button for browsing does not work in Spark
Local mode; if you are
using other Spark modes that the Studio supports with your distribution, such as the
Yarn modes, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component that corresponds to the file system to be used.

Action

Select an operation for writing data:

Create: Creates a file and writes data
in it.

Overwrite: Overwrites the file
existing in the directory specified in the Folder field.

Row separator

The separator used to identify the end of a row.

Field separator

Enter a character, string, or regular expression to separate fields in the transferred
data.

Include Header

Select this check box to include the column header in the file.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.

Compress the data

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.

Merge result to single file

Select this check box to merge the final part files into a single file and put that file in a
specified directory.

Once you select it, enter the path to, or browse to, the
folder in which you want to store the merged file. This directory is
created automatically if it does not exist.

The following check boxes are used to manage the source and the
target files:

  • Remove source dir: select
    this check box to remove the source files after the merge.

  • Override target file:
    select this check box to override the file already existing in the target
    location. This option does not override the folder.

If this component is writing merged files with a Databricks cluster, add the following
parameter to the Spark configuration console of your
cluster:

This parameter prevents the merged file from including the log file automatically
generated by the DBIO service of Databricks.

This option is not available for a Sequence file.

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

This option is not available for a Sequence file.

CSV options

Select this check box to include CSV specific parameters such as Escape char and Text
enclosure
.

Use local timezone for date

Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

tFileOutputDelimited properties for Apache Spark Streaming

These properties are used to configure tFileOutputDelimited running in the Spark Streaming Job framework.

The Spark Streaming
tFileOutputDelimited component belongs to the File family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster node of the Repository tree.

The fields that follow are pre-filled using the fetched
data.

For further information about the Hadoop
Cluster node, see the Getting Started Guide.

(icon: tFileOutputDelimited_15.png)

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component
Basic settings view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder

Browse to, or enter the path pointing to the data to be used in the file system.

Note that this path must point to a folder rather than a file.

The button for browsing does not work with the Spark
Local mode; if you are using the other Spark
Yarn modes that the Studio supports with your distribution,
ensure that you have properly configured the connection in a configuration
component in the same Job, such as tHDFSConfiguration. Use the
configuration component that corresponds to the file system to be used.

Action

Select an operation for writing data:

Create: Creates a file and writes data
to it.

Overwrite: Overwrites the existing file in the directory
specified in the Folder field.

Row separator

The separator used to identify the end of a row.

Field separator

Enter a character, a string, or a regular expression to separate fields in the
transferred data.

Include Header

Select this check box to include the column header in the file.
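As an illustration only (not Talend-generated code), the following Python sketch shows what these three options amount to when writing a delimited file: a custom field separator, an explicit row separator, and an included header line. The sample rows and field names are hypothetical.

```python
import csv
import io

# Hypothetical sample data; in a Job this would come from the input flow.
rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]

buf = io.StringIO()
writer = csv.DictWriter(
    buf,
    fieldnames=["id", "name"],
    delimiter=";",        # Field separator
    lineterminator="\n",  # Row separator
)
writer.writeheader()      # equivalent of selecting Include Header
writer.writerows(rows)

print(buf.getvalue())
# id;name
# 1;Alice
# 2;Bob
```

Note that the separator applies uniformly to every row, header included, which is why a separator character occurring inside a field value needs the CSV-specific enclosure or escape options described under Advanced settings.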

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.
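The kind of issue this option prevents can be sketched in Python: when the encoding used to read data back differs from the one used to write it, non-ASCII characters are garbled. The sample string is hypothetical.

```python
# Hypothetical sample containing non-ASCII characters.
text = "café;münchen"

utf8_bytes = text.encode("UTF-8")
latin1_bytes = text.encode("ISO-8859-1")

# The two encodings produce different byte sequences for the same text,
# so writer and reader must agree on the charset.
garbled = utf8_bytes.decode("ISO-8859-1")
print(garbled)  # cafÃ©;mÃ¼nchen — mojibake from a charset mismatch
```

Selecting the correct entry in the Encoding list (or defining it via Custom) ensures both sides of the exchange use the same charset.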

Compress the data

Select the Compress the data check box to compress the
output data.

Hadoop provides different compression formats that help reduce the space needed for
storing files and speed up data transfer. When reading a compressed file, the Studio needs
to uncompress it before being able to feed it to the input flow.
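The space/speed trade-off can be seen with gzip, one of the compression formats Hadoop supports (this Python sketch is illustrative; the sample payload is hypothetical):

```python
import gzip

# Hypothetical delimited payload: a header plus 1000 short rows.
data = ("id;name\n" + "\n".join(f"{i};row{i}" for i in range(1000))).encode("UTF-8")

compressed = gzip.compress(data)

assert len(compressed) < len(data)           # smaller on disk / on the wire
assert gzip.decompress(compressed) == data   # lossless round trip
```

The round trip is lossless, but the decompression step is exactly the extra work the Studio must do before it can feed a compressed file back into an input flow.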

Advanced settings

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

This option is not available for a Sequence file.
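The effect of changing the separators can be sketched in Python (illustrative only; the value is hypothetical): the default style uses a comma for thousands and a period for decimals, and the alternative simply swaps the two characters.

```python
value = 1234567.89

# Default separators: thousands ",", decimal "."
default_style = f"{value:,.2f}"

# Swapped (European-style) separators: thousands ".", decimal ","
european_style = default_style.translate(str.maketrans(",.", ".,"))

print(default_style)   # 1,234,567.89
print(european_style)  # 1.234.567,89
```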

CSV options

Select this check box to include CSV-specific parameters such as
Escape char and Text enclosure.
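As a hedged illustration of what these parameters control (Python's csv module, not Talend's writer): the text enclosure wraps any field that contains the field separator, and the enclosure character itself is protected inside such a field.

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(
    buf,
    delimiter=";",
    quotechar='"',               # Text enclosure
    quoting=csv.QUOTE_MINIMAL,   # enclose only fields that need it
    lineterminator="\n",
)
# First field contains the separator; second contains the enclosure char.
writer.writerow(["a;b", 'say "hi"'])

print(buf.getvalue())
# "a;b";"say ""hi"""
```

Here the enclosure character is protected by doubling it; an explicit Escape char serves the same purpose with a backslash-style convention instead.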

Write empty batches

Select this check box to allow your Spark Job to create an empty batch when the
incoming batch is empty.

For further information about when this is desirable
behavior, see this discussion.

Use local timezone for date

Select this check box to use the local date of the machine on which your Job is executed. If you leave this check box clear, UTC is automatically used to format the Date-type data.
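The difference this makes can be sketched in Python (illustrative only; the instant and the +02:00 offset are hypothetical): the same moment in time yields different formatted strings depending on whether it is rendered in UTC or in a local timezone.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical instant, stored timezone-aware in UTC.
instant = datetime(2023, 7, 31, 12, 0, tzinfo=timezone.utc)

# The same instant viewed from a machine at UTC+02:00 (e.g. CEST).
local = instant.astimezone(timezone(timedelta(hours=2)))

print(instant.strftime("%Y-%m-%d %H:%M"))  # 2023-07-31 12:00
print(local.strftime("%Y-%m-%d %H:%M"))    # 2023-07-31 14:00
```

Formatting in UTC (the default when the check box is clear) keeps the output independent of where the Job happens to run.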

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the
Run view, define the connection to a given Spark cluster for
the whole Job. In addition, since the Job expects its dependent jar files for
execution, you must specify the directory in the file system to which these jar
files are transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket field in the
      Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration area in the
      Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake
      Storage for Job deployment in the
      Spark configuration tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.


Document retrieved from Talend: https://help.talend.com