
tWriteAvroFields

Transforms the incoming data into the Avro format.

tWriteAvroFields generates Avro binaries to be used by components that require serialized data as input, such as tKafkaOutput.
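
For illustration, the following is a minimal sketch of the kind of downstream consumer this serialized output is meant for: a plain Java Kafka producer publishing raw Avro bytes, which is roughly what tKafkaOutput does with the data it receives. The broker address, topic name, and empty placeholder payload are assumptions for the sketch, not Talend's generated code.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class AvroToKafkaSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
            props.put("key.serializer", ByteArraySerializer.class.getName());
            props.put("value.serializer", ByteArraySerializer.class.getName());

            // Placeholder standing in for the Avro binary produced upstream.
            byte[] avroBytes = new byte[0];

            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                // "users" is an assumed topic name for this sketch.
                producer.send(new ProducerRecord<>("users", avroBytes));
            }
        }
    }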

tWriteAvroFields properties for Apache Spark Streaming

These properties are used to configure tWriteAvroFields running in the Spark Streaming Job framework.

The Spark Streaming
tWriteAvroFields component belongs to the Processing
family.

The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

The schema of this component is read-only. You can click Edit schema to view the schema.

The read-only schema of tWriteAvroFields receives the data from its input component as a whole object, without regard to what that input schema looks like, and serializes the incoming object into Avro binaries.

That is to say, the input flow is not required to have an identical schema. For example, an input schema composed of a user column and an age column can be serialized directly. Note that the data types supported by this component are listed in its Basic settings view.
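
To make this concrete, here is a minimal sketch, in plain Avro Java code, of what serializing such a user/age record into Avro binaries amounts to. The record name Person and the sample values are illustrative assumptions; tWriteAvroFields performs the equivalent work on each incoming object.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class UserAgeAvroSketch {
        public static void main(String[] args) throws IOException {
            // Avro schema matching the example input flow: a user column and an age column.
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Person\",\"fields\":["
              + "{\"name\":\"user\",\"type\":\"string\"},"
              + "{\"name\":\"age\",\"type\":\"int\"}]}");

            GenericRecord record = new GenericData.Record(schema);
            record.put("user", "alice"); // sample values for the sketch
            record.put("age", 30);

            // Serialize the record into Avro binary form.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
            encoder.flush();

            byte[] avroBytes = out.toByteArray(); // ready for a component such as tKafkaOutput
            System.out.println(avroBytes.length + " bytes of Avro binary");
        }
    }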

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say, traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job needs its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: choose the configuration component that corresponds to the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
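
For context, the settings made in the Spark configuration tab ultimately resolve to standard Spark properties. The sketch below illustrates the idea for Yarn mode using the plain Spark Java API; the application name and staging path are hypothetical placeholders, and in the Studio these values are set through the tab rather than in code.

    import org.apache.spark.SparkConf;

    public class SparkConnectionSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                .setAppName("MySparkStreamingJob") // hypothetical Job name
                .setMaster("yarn")                 // Yarn mode
                // Directory to which dependent jar files are transferred so that
                // Spark can access them; the path is an illustrative placeholder.
                .set("spark.yarn.stagingDir", "hdfs:///user/talend/staging");
            System.out.println(conf.toDebugString());
        }
    }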

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

