
|
Component family |
File/Input |
|
|
Function |
tFileInputPositional reads a file row by row and splits each row into fields based on a position pattern. If you have subscribed to one of the Talend solutions with Big Data, this component is also available in Map/Reduce Jobs. |
|
|
Purpose |
This component opens a file and reads it row by row to split the rows into fields based on a position pattern, and then sends the fields, as defined in the schema, to the next component via a Row link. |
|
|
Basic settings |
Property type |
Either Built-in or Repository. |
|
|
|
Built-in: No property data stored centrally. |
|
|
|
Repository: Select the repository file where the properties are stored. The fields that follow are completed automatically using the retrieved data. |
| Use existing dynamic |
Select this check box to reuse an existing dynamic schema to handle data from unknown columns. When this check box is selected, a Component List appears, allowing you to select the component used to set the dynamic schema. |
|
|
|
File Name/Stream |
File name: Name and path of the file to be processed. Stream: The data flow to be processed. The data must be added to the flow so that tFileInputPositional can fetch it via the corresponding variable. This variable may already be pre-defined in your Studio or provided by the context or the components you are using in the Job; otherwise, you can define it manually and use it according to the design of your Job, for example using tJava or tJavaFlex. To avoid typing the variable by hand, press Ctrl+Space to select it from the auto-completion list, provided it has been properly defined. Related topic on the available variables: see Talend Studio User Guide. Related scenario on the input stream: see Scenario 2: Reading data from a remote file in streaming mode. |
|
|
Row separator |
Enter the separator used to identify the end of a row. |
|
|
Use byte length as the cardinality |
Select this check box to enable support of double-byte characters in this file. |
|
|
Customize |
Select this check box to customize the data format of the positional file and define the table columns: Column: Select the column you want to customize. Size: Enter the column size. Padding char: Enter, between double quotation marks, the padding character used, so that it can be removed from the field; a space by default. Alignment: Select the appropriate alignment parameter (left, center or right). |
|
|
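As an illustration of how padding and alignment interact, here is a minimal Python sketch (outside Talend; the helper name and alignment values are invented for this example): a left-aligned field is padded on the right, so the padding character is stripped from the right, and vice versa.

```python
# Hypothetical helper (not part of Talend): strip a padding character
# from a fixed-width field according to its alignment.
def unpad(field, pad_char, alignment):
    if alignment == "left":       # left-aligned data is padded on the right
        return field.rstrip(pad_char)
    if alignment == "right":      # right-aligned data is padded on the left
        return field.lstrip(pad_char)
    return field.strip(pad_char)  # centered data is padded on both sides

print(unpad("Andrews-----", "-", "left"))   # -> Andrews
```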
Pattern |
Length values separated by commas, interpreted as a string between quotes. Make sure the values entered in this field are consistent with the schema defined. |
|
|
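To make the pattern semantics concrete, here is a small Python sketch (illustrative only, outside Talend; the function name is invented): each comma-separated length consumes that many characters of the row, in order.

```python
# Hypothetical helper (not part of Talend): split one row of a
# positional file using a comma-separated pattern of lengths.
def split_positional(row, pattern):
    lengths = [int(v) for v in pattern.split(",")]
    fields, start = [], 0
    for length in lengths:
        fields.append(row[start:start + length])
        start += length
    return fields

print(split_positional("00001   8200", "8,4"))
```

Trailing padding is kept in each field; removing it is the job of the Customize settings (or the schema types) rather than the pattern itself.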
Skip empty rows |
Select this check box to skip the empty rows. |
|
|
Uncompress as zip file |
Select this check box to uncompress the input file. |
|
|
Die on error |
Select this check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
|
|
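The difference between the two behaviours can be sketched as follows (illustrative Python, outside Talend; all names are invented): with die on error, the first bad row aborts the run; without it, bad rows are skipped and the error-free rows complete.

```python
# Hypothetical helper (not part of Talend): die_on_error stops at the
# first bad row; otherwise bad rows are skipped and collected.
def process_rows(rows, parse, die_on_error=True):
    good, skipped = [], []
    for row in rows:
        try:
            good.append(parse(row))
        except ValueError as exc:
            if die_on_error:
                raise               # stop the whole run immediately
            skipped.append((row, str(exc)))
    return good, skipped

good, skipped = process_rows(["1", "oops", "3"], int, die_on_error=False)
print(good, skipped)   # good rows kept, the bad one skipped
```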
Header |
Enter the number of rows to be skipped in the beginning of file. |
|
|
Footer |
Number of rows to be skipped at the end of the file. |
|
|
Limit |
Maximum number of rows to be processed. If Limit = 0, no row is read or processed. |
|
|
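The three fields combine as in this Python sketch (illustrative, outside Talend; the helper is invented): header rows are removed from the start, footer rows are dropped from the end, and Limit caps what remains, with 0 meaning that no row is processed.

```python
# Hypothetical helper (not part of Talend) combining Header, Footer
# and Limit. limit=None stands for an empty Limit field (no cap).
def select_rows(rows, header=0, footer=0, limit=None):
    body = rows[header:len(rows) - footer] if footer else rows[header:]
    if limit is None:
        return body
    return body[:limit] if limit > 0 else []  # Limit = 0: no row processed

rows = ["header", "r1", "r2", "r3", "footer"]
print(select_rows(rows, header=1, footer=1, limit=2))  # -> ['r1', 'r2']
```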
Schema and Edit |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. If you make changes, the schema automatically becomes built-in.
This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. The dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended for this purpose only; it is not recommended for creating tables. This component must work with tSetDynamicSchema to leverage the dynamic schema feature. |
|
|
|
Built-in: The schema will be created and stored locally for this component only. |
|
|
|
Repository: The schema already exists and is stored in the Repository; it can be reused in various projects and Job designs. |
|
Advanced settings |
Needed to process rows longer than 100 000 characters |
Select this check box if the rows to be processed in the input file are longer than 100 000 characters. |
|
|
Advanced separator (for numbers) |
Select this check box to modify the separators used for numbers: Thousands separator: define the separator used for thousands. Decimal separator: define the separator used for decimals. |
|
|
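For example, a value written with European-style separators has to be normalized before conversion, as in this Python sketch (illustrative, outside Talend; the helper is invented):

```python
# Hypothetical helper (not part of Talend): normalize a number that
# uses a custom thousands separator ('.') and decimal separator (',').
def parse_number(text, thousands_sep=".", decimal_sep=","):
    normalized = text.replace(thousands_sep, "").replace(decimal_sep, ".")
    return float(normalized)

print(parse_number("1.234.567,89"))   # -> 1234567.89
```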
Trim all column |
Select this check box to remove leading and trailing whitespaces from all columns. |
|
|
Validate date |
Select this check box to check the date format strictly against the input schema. |
|
|
Encoding |
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling. |
|
|
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at a Job level as well as at each component level. |
|
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer. ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
|
|
Usage |
Use this component to read a file and separate fields using a position-based pattern. |
|
|
Log4j |
The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide. For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html. |
|
Warning
The information in this section is only for users that have subscribed to one of
the Talend solutions with Big Data and is not applicable to
Talend Open Studio for Big Data users.
In a Talend Map/Reduce Job, tFileInputPositional, as well as the whole Map/Reduce Job using it,
generates native Map/Reduce code. This section presents the specific properties of
tFileInputPositional when it is used in that
situation. For further information about a Talend Map/Reduce Job, see the Talend Big Data Getting Started Guide.
|
Component family |
MapReduce / Input |
|
|
Basic settings |
Property type |
Either Built-in or Repository. |
|
Built-in: no property data stored centrally. |
||
|
Repository: reuse properties stored in the Repository. The fields that come after are pre-filled in using the fetched data. For further information about the Hadoop connection metadata, see Talend Studio User Guide. |
||
|
Schema and Edit |
A schema is a row description. It defines the number of fields to be processed and passed on to the next component. Click Edit Schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in. |
|
|
Built-In: You create and store the schema locally for this component only. |
||
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
||
|
|
Folder/File |
Browse to, or enter the directory in HDFS where the data you need to use is stored. If the path you set points to a folder, this component reads all of the files stored in that folder. If you want to specify more than one file or directory in this field, separate each path using a comma (,). If the file to be read is a compressed one, enter the file name with its extension; this component then automatically decompresses it at runtime.
Note that you need to ensure the connection to the Hadoop distribution to be used is properly configured in the Hadoop Configuration tab of the Run view. |
|
Die on error |
Select this check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
|
|
Row separator |
Enter the separator used to identify the end of a row. |
|
|
|
Customize |
Select this check box to customize the data format of the positional file and define the table columns: Column: Select the column you want to customize. Size: Enter the column size. Padding char: Enter, between double quotation marks, the padding character used, so that it can be removed from the field; a space by default. Alignment: Select the appropriate alignment parameter (left, center or right). |
|
|
Pattern |
Enter between double quotes the length values separated by commas, interpreted as a string. Make sure the values entered are consistent with the schema defined. |
|
Header |
Enter the number of rows to be skipped in the beginning of the file. For example, enter 0 to ignore no rows for data without a header, and 1 for data with a header at the first row. |
|
|
Skip empty rows |
Select this check box to skip the empty rows. |
|
|
Advanced settings |
Custom Encoding |
You may encounter encoding issues when you process the stored data. In that situation, select this check box to display the Encoding list. Then select the encoding to be used from the list or select Custom and define it manually. |
|
Advanced separator (for number) |
Select this check box to change the separator used for numbers. By default, the thousands separator is a comma and the decimal separator is a period. |
|
|
Trim columns |
Select this check box to remove the leading and trailing whitespaces from all columns. |
|
|
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
|
|
Usage |
In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components too; they generate native Map/Reduce code that can be executed directly in Hadoop. Once a Map/Reduce Job is opened in the workspace, tFileInputPositional, as well as the whole Map/Reduce component family, appears in the Palette of the Studio. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
|
|
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. |
|
The following scenario describes a two-component Job, which aims at reading data from
an input file that contains contract numbers, customer references, and insurance numbers
as shown below, and outputting the selected data (according to the data position) into
an XML
file.
|
Contract CustomerRef InsuranceNr
00001    8200        50330
00001    8201        50331
00002    8202        50332
00002    8203        50333 |

-
Drop a tFileInputPositional component
from the Palette to the design workspace. -
Drop a tFileOutputXML component as well.
This file is meant to receive the references in a structured way. -
Right-click the tFileInputPositional
component and select Row >
Main. Then drag it onto the
tFileOutputXML component and release
when the plug symbol shows up.
-
Double-click the tFileInputPositional
component to show its Basic settings view
and define its properties.
-
Define the Job Property type if needed. For this scenario, we use the
built-in Property type. As opposed to the Repository, this means that the Property type is set for
this component only. -
Fill in a path to the input file in the File
Name field. This field is mandatory. -
Define the Row separator identifying the
end of a row if needed; by default, it is a carriage return. -
If required, select the Use byte length as the
cardinality check box to enable the support of double-byte
characters. -
Define the Pattern to delimit fields in a
row. The pattern is a series of length values corresponding to the values of
your input files. The values should be entered between quotes, and separated
by a comma. Make sure the values you enter match the schema defined. -
Fill in the Header, Footer and Limit fields
according to your input file structure and your need. In this scenario, we
only need to skip the first row when reading the input file. To do this,
fill the Header field with 1 and leave the other fields as they are. -
Next to Schema, select Repository if the input schema is stored in the
Repository. In this use case, we use a Built-In input schema to define the data to pass on to the
tFileOutputXML component. -
You can load and/or edit the schema via the Edit
Schema function. For this schema, define three columns,
respectively Contract, CustomerRef
and InsuranceNr matching the structure of the input
file. Then, click OK to close the [Schema] dialog box and propagate the
changes.
-
Double-click tFileOutputXML to show its
Basic settings view.
-
Enter the XML output file path.
-
Define the row tag that will wrap each row of data, in this use case
ContractRef. -
Click the three-dot button next to Edit
schema to view the data structure, and click Sync columns to retrieve the data structure from
the input component if needed. -
Switch to the Advanced settings tab view
to define other settings for the XML output.
-
Click the plus button to add a line in the Root
tags table, and enter a root tag (or more) to wrap the XML
output structure, in this case ContractsList. -
Define parameters in the Output format
table if needed. For example, select the As
attribute check box for a column if you want to use its name
and value as an attribute for the parent XML element, clear the Use schema column name check box for a column to
reuse the column label from the input schema as the tag label. In this use
case, we keep all the default output format settings as they are. -
To group output rows according to the contract number, select the
Use dynamic grouping check box, add a
line in the Group by table, select
Contract from the Column list field, and enter an attribute for it in the
Attribute label field.
-
Leave all the other parameters as they are.
-
Press Ctrl+S to save your Job to ensure
that all the configured parameters take effect. -
Press F6 or click Run on the Run tab to
execute the Job.The file is read row by row based on the length values defined in the
Pattern field and output as an XML file
as defined in the output settings. You can open it using any standard XML
editor.
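Outside Talend, the transformation this Job performs can be sketched in a few lines of Python (illustrative only; the pattern "9,12,11", the tag names, and the helper are assumptions based on the sample data above, not generated Job code):

```python
import xml.etree.ElementTree as ET

# Hypothetical helper (not part of Talend): read positional rows with a
# comma-separated length pattern, skip the header, emit XML.
def positional_to_xml(lines, pattern, columns, root_tag, row_tag, header=1):
    lengths = [int(v) for v in pattern.split(",")]
    root = ET.Element(root_tag)
    for line in lines[header:]:
        row = ET.SubElement(root, row_tag)
        start = 0
        for name, length in zip(columns, lengths):
            ET.SubElement(row, name).text = line[start:start + length].strip()
            start += length
    return ET.tostring(root, encoding="unicode")

lines = [
    "Contract CustomerRef InsuranceNr",
    "00001    8200        50330      ",
]
print(positional_to_xml(lines, "9,12,11",
                        ["Contract", "CustomerRef", "InsuranceNr"],
                        "ContractsList", "ContractRef"))
```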
This scenario describes a four-component Job that reads data from a positional file,
writes the data to another positional file, and replaces the padding characters with
space. The schema column details are not defined in the positional file components;
instead, they leverage a reusable dynamic schema. The input file used in this scenario
is as follows:
|
id----name--------city--------
1-----Andrews-----Paris-------
2-----Mark--------London------
3-----Marie-------Paris-------
4-----Michael-----Washington-- |
-
Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tSetDynamicSchema, tFileInputPositional, and tFileOutputPositional.
-
Connect the tFixedFlowInput component to
the tSetDynamicSchema using a Row > Main
connection to form a subjob. This subjob will define a reusable dynamic
schema. -
Connect the tFileInputPositional
component to the tFileOutputPositional
component using a Row > Main connection to form another subjob. This
subjob will read data from the input positional file and write the data to
another positional file based on the dynamic schema set in the previous
subjob. -
Connect the tFixedFlowInput component to
the tFileInputPositional component using a
Trigger > On
Subjob Ok connection to link the two subjobs together.
-
Double-click the tFixedFlowInput
component to show its Basic settings view
and define its properties.
-
Click the […] button next to Edit schema to open the [Schema] dialog box.

-
Click the [+] button to add three
columns: ColumnName, ColumnType, and ColumnLength, and set their types to String, String, and
Integer respectively to define the
minimum properties required for a positional file schema. Then, click
OK to close the dialog box. -
Select the Use Inline Table option, click
the [+] button three times to add three
lines, give them a name in the ColumnName
field, according to the actual columns of the input file to read: ID, Name,
and City, set their types in the
corresponding ColumnType field: id_Integer for column ID, and id_String for
columns Name and City, and set the length values of the columns in the
corresponding ColumnLength field. Note that
the column names you give in this table will compose the header of the
output file. -
Double-click the tSetDynamicSchema
component to open its Basic settings view.
-
Click Sync columns to ensure that the
schema structure is properly retrieved from the preceding component. -
Under the Parameters table, click the
[+] button to add three lines in the
table. -
Click in the Property field for each
line, and select ColumnName, Type, and Length
respectively. -
Click in the Value field for each line,
and select ColumnName, ColumnType, and ColumnLength respectively. Now, with the values set in the inline table of the tFixedFlowInput component retrieved, the following data structure is defined in the dynamic schema:

Column Name  Type     Length
ID           Integer  6
Name         String   12
City         String   12
-
Double-click the tFileInputPositional
component to open its Basic settings
view.
Warning
The dynamic schema feature is only supported in Built-In mode and requires the input file
to have a header row. -
Select the Use existing dynamic check
box, and from the Component List that
appears, select the tSetDynamicSchema
component you use to create the dynamic schema. In this use case, only one
tSetDynamicSchema component is used, so
it is automatically selected. -
In the File name/Stream field, enter the
path to the input positional file, or browse to the file path by clicking
the […] button. -
Fill in the Header, Footer and Limit fields
according to your input file structure and your need. In this scenario, we
only need to skip the first row when reading the input file. To do this,
fill the Header field with 1 and leave the other fields as they are. -
Click the […] button next to Edit schema to open the [Schema] dialog box, define only one column, dyn in this example, and select Dynamic from the Type list. Then, click OK
to close the [Schema] dialog box and
propagate the changes.
-
Select the Customize check box, enter
'-' in the Padding char
field, and keep the other settings as they are. -
Double-click the tFileOutputPositional
component to open its Basic settings
view.
-
Select the Use existing dynamic check
box, specify the output file path, and select the Include header check box. -
In the Padding char field, enter
' ' so that the padding characters will be replaced with space in
the output file.
-
Press Ctrl+S to save your Job to ensure
that all the configured parameters take effect. -
Press F6 or click Run on the Run tab to
execute the Job.
The data is read from the input positional file and written into the
output positional file, with the padding characters replaced by
space.
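The padding swap at the heart of this Job can be sketched as follows (illustrative Python, outside Talend; the column lengths 6, 12, 12 come from the dynamic schema defined above, and the helper is invented):

```python
# Hypothetical helper (not part of Talend): strip the '-' padding from
# each fixed-width field and re-pad with spaces to the same widths.
def repad(line, lengths, in_pad="-", out_pad=" "):
    out, start = [], 0
    for length in lengths:
        value = line[start:start + length].rstrip(in_pad)
        out.append(value.ljust(length, out_pad))
        start += length
    return "".join(out)

print(repad("1-----Andrews-----Paris-------", [6, 12, 12]))
```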