tLogRow properties for Apache Spark Batch
These properties are used to configure tLogRow running in the Spark Batch Job framework.
The Spark Batch tLogRow component belongs to the Misc family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Define a storage configuration component | Select the configuration component to be used to provide the configuration information for the connection to the target file system such as HDFS. If you leave this check box clear, the target file system is the local system. The configuration component to be used must be present in the same Job. For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to connect to a given HDFS system.
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. (An illustrative row class follows this table.)
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs.
Sync columns | Click to synchronize the output file schema with the input file schema. The Sync function is available only when the component is linked with the preceding component using a Row connection.
Basic | Displays the output flow in basic mode. (The Basic-mode options below are illustrated in the rendering sketch after this table.)
Table | Displays the output flow in table cells.
Vertical | Displays each row of the output flow as a key-value list. With this mode selected, you can choose to show either the unique name or the label of the component, or both, for each output row.
Separator (For Basic mode only) | Enter the separator that will delimit data on the Log display.
Print header (For Basic mode only) | Select this check box to include the header of the input flow in the output display.
Print component unique name in front of each output row (For Basic mode only) | Select this check box to show the unique name of the component in front of each output row, which helps differentiate outputs when several tLogRow components are used in the same Job.
Print schema column name in front of each value (For Basic mode only) | Select this check box to retrieve column labels from the output schema and display them in front of each value.
Use fixed length for values (For Basic mode only) | Select this check box to set a fixed width for the value display.
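To make the schema notion concrete: a schema describes the typed record that flows between components. The class below is a hypothetical, hand-written illustration of what a three-column schema amounts to; it is not code generated by the Studio, and the class and field names are invented.

```java
import java.io.Serializable;

// Hypothetical illustration only: a three-column schema (id, name, amount)
// expressed as a plain Java record class. Each schema column corresponds to
// one typed field of the rows passed from component to component.
public class CustomerRow implements Serializable {
    public int id;        // column "id", type Integer
    public String name;   // column "name", type String
    public double amount; // column "amount", type Double
}
```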
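The Basic-mode options combine in a predictable way. The following is a minimal sketch of the rendering logic they suggest, written from the option descriptions above rather than from Talend's actual generated code; the class, method, and parameter names are assumptions made for illustration.

```java
import java.util.StringJoiner;

public class BasicModeSketch {

    // Renders one row roughly the way the Basic-mode options describe:
    // values joined by the separator, optionally prefixed by the component's
    // unique name and by column names, optionally padded to a fixed width.
    static String render(String uniqueName, String[] columns, Object[] values,
                         String separator, boolean printUniqueName,
                         boolean printColumnNames, int fixedLength) {
        StringJoiner line = new StringJoiner(separator);
        for (int i = 0; i < values.length; i++) {
            String value = String.valueOf(values[i]);
            if (fixedLength > 0) {
                // "Use fixed length for values": pad each value to a fixed width
                value = String.format("%-" + fixedLength + "s", value);
            }
            line.add(printColumnNames ? columns[i] + "=" + value : value);
        }
        return (printUniqueName ? uniqueName + " " : "") + line;
    }

    public static void main(String[] args) {
        String[] columns = {"id", "name", "amount"};
        Object[] row = {42, "Alice", 19.9};
        // With all options enabled and "|" as the separator, prints:
        // tLogRow_1 id=42      |name=Alice   |amount=19.9
        System.out.println(render("tLogRow_1", columns, row, "|", true, true, 8));
    }
}
```

With Print header enabled, a first line listing the column names joined by the same separator would precede the data rows.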
Usage
Usage rule | This component is used as an intermediate or an end step. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection | You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. (A hand-written equivalent of this configuration is sketched after this table.)
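For context, the two pieces of information the Spark Configuration tab captures (which cluster to connect to and where the dependent jar files are staged) correspond to standard Spark settings. The snippet below is a hand-written sketch of that equivalence, assuming a YARN cluster and an HDFS staging directory; the master URL, HDFS path, and application name are placeholders, not values Talend requires you to write.

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkConnectionSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("tLogRow_sketch") // placeholder application name
                .setMaster("yarn")            // placeholder: the cluster defined for the whole Job
                // Placeholder staging directory: where dependent jar files are
                // transferred so that Spark executors can access them.
                .set("spark.yarn.jars", "hdfs://namenode:8020/user/talend/spark-jars/*");

        // One connection per Job: the SparkContext lives for the whole Job run.
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            sc.parallelize(Arrays.asList(1, 2, 3))
              .collect()
              .forEach(System.out::println);
        }
    }
}
```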