tWriteXMLFields – Docs for ESB 6.x

tWriteXMLFields

Converts records into strings or byte arrays.

tWriteXMLFields generates strings or byte arrays for use by output components, such as tKafkaOutput, which requires serialized data, or tJMSOutput, which requires strings. tWriteXMLFields embeds the incoming data into a single XML column.
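
As an illustration, here is a minimal Java sketch of that embedding. It is not the code the Studio generates; the column names (id, name) and the row tag are made up:

    // Minimal sketch (not Talend-generated code) of the idea behind
    // tWriteXMLFields: a multi-column record is folded into a single
    // XML string that downstream components send as-is.
    public class EmbedAsXmlSketch {
        public static void main(String[] args) {
            String id = "1";       // incoming column (hypothetical)
            String name = "Alice"; // incoming column (hypothetical)

            // The whole record becomes one XML column:
            String xmlColumn =
                    "<row><id>" + id + "</id><name>" + name + "</name></row>";
            System.out.println(xmlColumn);
            // -> <row><id>1</id><name>Alice</name></row>
        }
    }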

tWriteXMLFields properties for Apache Spark Streaming

These properties are used to configure tWriteXMLFields running in the Spark Streaming Job framework.

The Spark Streaming
tWriteXMLFields component belongs to the Processing family.

The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Output type

Select the type of the data to be output. The data is output as byte arrays if you select byte[], or as strings if you select String.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

The schema of this component is read-only. You can click Edit schema to view the schema.

When the output type is String, the read-only
single column is messageContent. This column is used to
provide strings to the output components such as tJMSOutput.

When the output type is byte[], the read-only
single column is serializedValue. This column is used to
provide byte arrays to the output components such as tKafkaOutput.

The output schema and its read-only column can be seen by clicking the Row > Output link to the component that follows in the same Job. The schema is displayed in the Basic settings tab of the Component view.
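
The following hedged Java sketch shows how the two read-only columns differ. The Kafka producer call is only an analogy for the kind of value tKafkaOutput needs; it assumes the Kafka Java client is on the classpath, and the topic name "events" is made up:

    // Sketch of the two read-only output columns and their consumers.
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.nio.charset.StandardCharsets;

    public class OutputColumnsSketch {
        public static void main(String[] args) {
            // Output type = String -> read-only column messageContent,
            // suitable for string-based outputs such as tJMSOutput.
            String messageContent = "<row><id>1</id></row>";

            // Output type = byte[] -> read-only column serializedValue,
            // suitable for serialized outputs such as tKafkaOutput.
            byte[] serializedValue =
                    messageContent.getBytes(StandardCharsets.UTF_8);

            // Conceptually, a Kafka producer is handed bytes like these:
            ProducerRecord<byte[], byte[]> record =
                    new ProducerRecord<>("events", serializedValue);
            System.out.println(record.topic() + ": "
                    + serializedValue.length + " bytes");
        }
    }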

Row tag

Specify the tag that wraps the data and structure of each output row. For example, with this tag set to customer, each record is wrapped as <customer>...</customer>.

Custom encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually. This field is compulsory for database
data handling.
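
A small Java illustration of why the Encoding setting matters; the XML content is hypothetical:

    // The same XML text produces different bytes under different charsets.
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EncodingSketch {
        public static void main(String[] args) {
            String xml = "<row><name>José</name></row>";
            byte[] utf8 = xml.getBytes(StandardCharsets.UTF_8);
            byte[] latin1 = xml.getBytes(Charset.forName("ISO-8859-1"));
            // "é" is two bytes in UTF-8 but one in ISO-8859-1, so a consumer
            // reading with the wrong charset would see garbled text.
            System.out.println(utf8.length + " vs " + latin1.length);
        }
    }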

Advanced settings

Root tags

Specify one or more root tags to wrap the structure of the XML output, for example, root.

Output format

Define the output format.

  • Column: The columns retrieved
    from the input schema.

  • As attribute: Select the check box
    for the column(s) you want to use as attribute(s) of the parent
    element in the XML output.

Note:

If the same column is selected in both the Output format table as an attribute and in the Use dynamic grouping setting as the criterion for dynamic grouping, only the dynamic grouping setting takes effect for that column.

Use schema column name: By default, this check box is selected for all columns, so that the column labels from the input schema are used as data wrapping tags. If you want a column to use a tag different from its input schema label, clear this check box for that column and specify a tag label between quotation marks in the Label field.
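
To make these choices concrete, here is a hypothetical Java sketch of the resulting XML shapes; the column names and the custom label are made up:

    // Hypothetical shapes produced by the Output format choices.
    public class OutputFormatSketch {
        public static void main(String[] args) {
            // Default: each column becomes a child element named after it.
            System.out.println("<row><id>1</id><name>Alice</name></row>");

            // "As attribute" checked for id: it moves onto the parent element.
            System.out.println("<row id=\"1\"><name>Alice</name></row>");

            // "Use schema column name" cleared for name, Label "customer_name":
            System.out.println(
                "<row><id>1</id><customer_name>Alice</customer_name></row>");
        }
    }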

Use dynamic grouping

Select this check box if you want to group the output columns dynamically. Click the plus button to add one or more grouping criteria in the Group by table.

Column: Select a column you want
to use as a wrapping element for the grouped output rows.

Attribute label: Enter an
attribute label for the group wrapping element, between quotation
marks.
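
A hedged sketch of the output shape this can produce, assuming a grouping column named category and the attribute label "type" (all names are made up):

    // Hypothetical dynamic-grouping result: rows sharing the same value in
    // the "category" column are wrapped in one <category> element whose
    // "type" attribute carries that shared value.
    public class DynamicGroupingSketch {
        public static void main(String[] args) {
            System.out.println(
                "<category type=\"fruit\">"
              + "<row><name>apple</name></row>"
              + "<row><name>pear</name></row>"
              + "</category>"
              + "<category type=\"veg\">"
              + "<row><name>leek</name></row>"
              + "</category>");
        }
    }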

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google
    Dataproc, specify a bucket in the Google Storage staging
    bucket
    field in the Spark
    configuration
    tab; when using other distributions, use a
    tHDFSConfiguration
    component to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.
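
As a rough illustration (not the code the Studio generates), the connection information such a configuration component supplies boils down to standard Spark/Hadoop properties; the HDFS URI below is hypothetical:

    // Sketch of the kind of settings a configuration component provides so
    // Spark can reach the file system holding the Job's dependency jars.
    // The property keys are standard Spark/Hadoop ones; the URI is made up.
    import org.apache.spark.SparkConf;

    public class SparkConnectionSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("tWriteXMLFieldsJob")
                    .setMaster("yarn")
                    // Roughly what tHDFSConfiguration contributes:
                    .set("spark.hadoop.fs.defaultFS", "hdfs://namenode:8020");
            System.out.println(conf.get("spark.hadoop.fs.defaultFS"));
        }
    }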

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

Source: Talend documentation, https://help.talend.com