tWriteDelimitedFields
Converts records into delimited strings or byte arrays.

tWriteDelimitedFields generates delimited strings or byte arrays to be used by output components, such as tKafkaOutput, which requires serialized data, or tJMSOutput, which requires strings. tWriteDelimitedFields embeds the incoming data into a single delimited column.
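Conceptually, the component joins the fields of each incoming record with the configured separator and, depending on the output type, emits the result as a string or as a byte array. The following sketch illustrates that transformation in plain Java; it is not the code the Studio generates, and the method names, the semicolon separator, and the assumption that the byte[] form is simply the charset-encoded delimited string are all illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.util.StringJoiner;

// Illustrative sketch of what tWriteDelimitedFields does conceptually;
// not the code generated by the Studio. Names and the UTF-8 assumption
// are hypothetical.
public class DelimitedFieldsSketch {

    // Output type String: embed all fields into a single delimited column.
    static String toDelimitedString(Object[] fields, String separator) {
        StringJoiner joiner = new StringJoiner(separator);
        for (Object field : fields) {
            joiner.add(String.valueOf(field));
        }
        return joiner.toString();
    }

    // Output type byte[]: the same delimited line, serialized to bytes
    // (assumed here to be a plain charset encoding of the string).
    static byte[] toDelimitedBytes(Object[] fields, String separator) {
        return toDelimitedString(fields, separator)
                .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Object[] record = {42, "Alice", "alice@example.com"};
        System.out.println(toDelimitedString(record, ";"));
        // 42;Alice;alice@example.com
        System.out.println(toDelimitedBytes(record, ";").length);
        // 26
    }
}
```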
tWriteDelimitedFields properties for Apache Spark Streaming
These properties are used to configure tWriteDelimitedFields running in the Spark Streaming Job framework.
The Spark Streaming tWriteDelimitedFields component belongs to the Processing family.
The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
Basic settings
Output type
Select the type of the data to be output into the target file. The data is byte arrays if you select byte[], or delimited strings if you select String.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema of this component is read-only. You can click Edit schema to view the schema. When the output type is String, the read-only column carries the generated delimited strings. When the output type is byte[], the read-only column carries the serialized byte arrays.
Include header
Select this check box to include the column header in the file.
Field separator
Enter a character, a string, or a regular expression to separate fields for the transferred data.
Custom encoding
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list. Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.
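To see why the encoding choice matters, the hypothetical snippet below shows the same delimited line producing byte arrays of different lengths under two encodings (this is illustrative Java, not Studio-generated code):

```java
import java.nio.charset.StandardCharsets;

public class EncodingSketch {
    public static void main(String[] args) {
        String line = "1;Alice;Müller"; // 14 characters
        byte[] utf8  = line.getBytes(StandardCharsets.UTF_8);
        byte[] latin = line.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(utf8.length);  // 15: 'ü' takes two bytes in UTF-8
        System.out.println(latin.length); // 14: 'ü' takes one byte in ISO-8859-1
    }
}
```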
Advanced settings
Advanced separator (for number)
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).
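For example, switching to the European convention (period for thousands, comma for decimals) changes how a number appears in the delimited output. The snippet below demonstrates the effect with java.text.DecimalFormat; it is an illustration of the formatting convention, not the component's internal code:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

public class SeparatorSketch {
    public static void main(String[] args) {
        DecimalFormatSymbols symbols = new DecimalFormatSymbols();
        symbols.setGroupingSeparator('.'); // thousands separator: period
        symbols.setDecimalSeparator(',');  // decimal separator: comma
        DecimalFormat fmt = new DecimalFormat("#,##0.00", symbols);
        System.out.println(fmt.format(1234567.89)); // 1.234.567,89
    }
}
```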
CSV options
Select this check box to include CSV-specific parameters such as Escape char and Text enclosure.
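These parameters matter when a field value contains the field separator or the enclosure character itself. The sketch below shows the usual CSV convention, assuming a double quote as the text enclosure and a doubled quote as the escape; it illustrates the general technique, not the component's implementation:

```java
public class CsvEnclosureSketch {
    // Enclose a field and double any embedded enclosure characters so an
    // embedded separator is not misread as a field break.
    static String enclose(String value, char enclosure) {
        String escaped = value.replace(
                String.valueOf(enclosure), "" + enclosure + enclosure);
        return enclosure + escaped + enclosure;
    }

    public static void main(String[] args) {
        System.out.println(enclose("say \"hi\"; bye", '"'));
        // prints: "say ""hi""; bye" -- the ';' stays inside one field
    }
}
```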
Usage
Usage rule
This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection
You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.