
tFileInputPositional – Docs for ESB 7.x

tFileInputPositional

Reads a positional file row by row, splits each row into fields based on a given
pattern, and then sends the fields as defined in the schema to the next
component.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tFileInputPositional Standard properties

These properties are used to configure tFileInputPositional
running in the Standard Job framework.

The Standard
tFileInputPositional component belongs to the File family.

The component in this framework is available in all Talend products.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Use existing dynamic

Select this check box to reuse an existing dynamic schema to
handle data from unknown columns.

When this check box is selected, a Component list appears allowing you to select the component
used to set the dynamic schema.

File name/Stream

File name: Name and path of the file to
be processed.

Warning: Use an absolute path (instead of a relative path) in
this field to avoid possible errors.

Stream: The data flow to be processed.
The data must be added to the flow so that tFileInputPositional can fetch it via the
corresponding variable.

This variable may already be pre-defined in your Studio or
provided by the context or by the components you are using along with this
component, for example, the INPUT_STREAM variable of
tFileFetch; otherwise, you can
define it manually and use it according to the design of your Job, for example,
using tJava or tJavaFlex.

To avoid typing the variable by hand, you can select the variable of
interest from the auto-completion list (Ctrl+Space) to fill in the current field,
provided that this variable has been properly defined.

For related information about the available variables, see the Talend Studio User Guide.

For a related scenario about the input stream, see Reading data from a remote file in streaming mode.

Row separator

The separator used to identify the end of a row.

Use byte length as the cardinality

Select this check box to enable support for double-byte
characters in this component. JDK 1.6 is required for this feature.

Customize

Select this check box to customize the data format of the
positional file and define the table columns:

Column: Select the column you want to
customize.

Size: Enter the column size.

Padding char: Enter, between double
quotation marks, the padding character you need to remove from the field. It is
a space by default.

Alignment: Select the appropriate
alignment parameter.

Pattern

Length values separated by commas, interpreted as a string
between quotes. Make sure the values entered in this field are consistent with
the schema defined.
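
To illustrate what the Pattern and Padding char settings describe, the following standalone Java sketch splits one row into fixed-length fields and strips the padding character. The pattern values, sample row and column layout are hypothetical; this is not the code generated by the Studio.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.regex.Pattern;

  public class PositionalSplitSketch {
      // Hypothetical pattern "8,12,10": three fields of 8, 12 and 10 characters.
      static final int[] LENGTHS = {8, 12, 10};
      static final char PADDING = ' ';  // padding character to strip (space by default)

      static List<String> splitRow(String row) {
          List<String> fields = new ArrayList<>();
          String pad = Pattern.quote(String.valueOf(PADDING));
          int offset = 0;
          for (int length : LENGTHS) {
              int end = Math.min(offset + length, row.length());
              // Cut the field at its fixed position, then strip padding from both ends.
              fields.add(row.substring(offset, end).replaceAll("^" + pad + "+|" + pad + "+$", ""));
              offset = end;
          }
          return fields;
      }

      public static void main(String[] args) {
          // Prints [00000123, Smith, Paris]
          System.out.println(splitRow("00000123Smith       Paris     "));
      }
  }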

Pattern Units

The unit of the length values specified in the Pattern
field.

  • Bytes: With this option selected, the length
    values in the Pattern field should be the count of
    bytes that represent symbols in original encoding of the input file.

  • Symbols: With this option selected, the length
    values in the Pattern field should be the count of
    regular symbols, not including surrogate pairs.

  • Symbols (including rare): With this option
    selected, the length values in the Pattern field
    should be the count of symbols, including rare symbols such as surrogate
    pairs, and each surrogate pair counts as a single symbol. Considering the
    performance factor, it is not recommended to use this option when your
    input data consists of only regular symbols.
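
The difference between these three units can be checked with plain Java string methods, as in the sketch below; the sample text is a hypothetical example, only meant to show how the three counts diverge when the data contains multi-byte characters and surrogate pairs.

  import java.nio.charset.StandardCharsets;

  public class PatternUnitsSketch {
      public static void main(String[] args) {
          // Two CJK characters followed by one emoji (a surrogate pair in UTF-16).
          String field = "只是\uD83D\uDE00";

          // Bytes: size of the field in the file's original encoding (UTF-8 here).
          int bytes = field.getBytes(StandardCharsets.UTF_8).length;        // 10

          // Symbols: Java char count; a surrogate pair counts as two.
          int symbols = field.length();                                     // 4

          // Symbols (including rare): Unicode code points; a surrogate pair counts as one.
          int codePoints = field.codePointCount(0, field.length());         // 3

          System.out.printf("bytes=%d, symbols=%d, codePoints=%d%n", bytes, symbols, codePoints);
      }
  }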

Skip empty rows

Select this check box to skip the empty rows.

Uncompress as zip file

Select this check box to uncompress the input file.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Header

Enter the number of rows to be skipped at the beginning of the file.

Footer

Number of rows to be skipped at the end of the file.

Limit

Maximum number of rows to be processed. If Limit = 0, no row is
read or processed.
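
As a rough illustration of how Header, Footer and Limit interact, the sketch below skips rows at both ends of a file and caps the number of processed rows; the file path and the values are hypothetical, and this is not the component's actual implementation.

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.util.List;

  public class HeaderFooterLimitSketch {
      public static void main(String[] args) throws IOException {
          int header = 1;   // rows skipped at the beginning of the file
          int footer = 1;   // rows skipped at the end of the file
          int limit  = 100; // maximum number of rows processed (0 means none)

          List<String> rows = Files.readAllLines(Paths.get("in.txt"));  // hypothetical path
          int start = Math.min(header, rows.size());
          int end = Math.max(start, rows.size() - footer);

          rows.subList(start, end).stream()
              .limit(limit)
              .forEach(System.out::println);
      }
  }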

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

This
component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic schemas,
see the Talend Studio User Guide.

This
dynamic schema feature is designed for the purpose of retrieving unknown columns of a
table and is recommended to be used for this purpose only; it is not recommended for the
use of creating tables.

This component must work with tSetDynamicSchema to leverage the dynamic schema feature.

 

Built-in: The schema will be created and
stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: The schema already exists
and is stored in the Repository, hence can be reused in various projects and
Job flowcharts. Related topic: see the Talend Studio User Guide.

Advanced settings

Needed to process rows longer than 100 000
characters

Select this check box if the rows to be processed in the input
file are longer than 100 000 characters.

Advanced separator (for numbers)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Thousands separator: define separators
for thousands.

Decimal separator: define separators for
decimals.
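
For reference, the snippet below shows how such alternative separators behave when numbers are parsed in Java; the European-style separators chosen here are just an example, not defaults of this component.

  import java.text.DecimalFormat;
  import java.text.DecimalFormatSymbols;
  import java.text.ParseException;

  public class NumberSeparatorSketch {
      public static void main(String[] args) throws ParseException {
          // Hypothetical choice: '.' as thousands separator and ',' as decimal separator.
          DecimalFormatSymbols symbols = new DecimalFormatSymbols();
          symbols.setGroupingSeparator('.');
          symbols.setDecimalSeparator(',');

          DecimalFormat format = new DecimalFormat("#,##0.0#", symbols);
          Number value = format.parse("1.234.567,89");
          System.out.println(value.doubleValue());  // 1234567.89
      }
  }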

Trim all column

Select this check box to remove leading and trailing
whitespaces from defined columns.

Validate date

Select this check box to check the date format strictly against the input schema.
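
Strict date checking of this kind can be reproduced with a non-lenient date format, as in the sketch below; the date pattern is a hypothetical example, not something this snippet reads from the schema.

  import java.text.ParseException;
  import java.text.SimpleDateFormat;

  public class ValidateDateSketch {
      public static void main(String[] args) {
          SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");  // hypothetical date pattern
          format.setLenient(false);  // strict mode: impossible dates such as 2023-02-30 are rejected

          for (String candidate : new String[] {"2023-02-28", "2023-02-30"}) {
              try {
                  format.parse(candidate);
                  System.out.println(candidate + " is valid");
              } catch (ParseException e) {
                  System.out.println(candidate + " is rejected");
              }
          }
      }
  }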

Encoding

Select the encoding from the list or select Custom
and define it manually. This field is compulsory for database data handling. The
supported encodings depend on the JVM that you are using. For more information, see
https://docs.oracle.com.
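
Outside the Studio, reading a file with an explicit encoding looks like the sketch below; the file name and the ISO-8859-15 charset are assumptions for the example only.

  import java.io.BufferedReader;
  import java.nio.charset.Charset;
  import java.nio.file.Files;
  import java.nio.file.Paths;

  public class EncodingSketch {
      public static void main(String[] args) throws Exception {
          Charset encoding = Charset.forName("ISO-8859-15");  // hypothetical custom encoding
          try (BufferedReader reader = Files.newBufferedReader(Paths.get("in.txt"), encoding)) {
              reader.lines().forEach(System.out::println);
          }
      }
  }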

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at
a Job level as well as at each component level.

Global Variables

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

Use this component to read a file and separate fields based on the
defined position/length values. You can also create a rejection flow using a
Row > Reject link to filter out the
data which does not correspond to the type defined. For an example of how to
use these two links, see Procedure.

Reading a Positional file and saving filtered results to XML

The following scenario describes a two-component Job, which aims at reading data from
an input file that contains contract numbers, customer references, and insurance numbers
as shown below, and outputting the selected data (according to the data position) into
an XML
file.

Dropping and linking components

tFileInputPositional_1.png

  1. Drop a tFileInputPositional component
    from the Palette to the design workspace.
  2. Drop a tFileOutputXML component as well.
    This file is meant to receive the references in a structured way.
  3. Right-click the tFileInputPositional
    component and select Row > Main. Then drag it onto the
    tFileOutputXML component and release
    when the plug symbol shows up.

Configuring data input

  1. Double-click the tFileInputPositional
    component to show its Basic settings view
    and define its properties.

    tFileInputPositional_2.png

  2. Define the Job Property type if needed. For this scenario, we use the
    built-in Property type.

    As opposed to the Repository, this means that the property data is set
    locally for this Job only.
  3. Fill in a path to the input file in the File
    Name
    field. This field is mandatory.
  4. Define the Row separator identifying the
    end of a row if needed; by default, it is a carriage return.
  5. If required, select the Use byte length as the
    cardinality
    check box to enable support for double-byte
    characters.
  6. Define the Pattern to delimit fields in a
    row. The pattern is a series of length values corresponding to the values of
    your input files. The values should be entered between quotes, and separated
    by a comma. Make sure the values you enter match the schema defined.
  7. Fill in the Header, Footer and Limit fields
    according to your input file structure and your need. In this scenario, we
    only need to skip the first row when reading the input file. To do this,
    fill the Header field with 1 and leave the other fields as they are.
  8. Next to Schema, select Repository if the input schema is stored in the
    Repository. In this use case, we use a Built-In input schema to define the data to pass on to the
    tFileOutputXML component.
  9. You can load and/or edit the schema via the Edit
    Schema
    function. For this schema, define three columns,
    respectively Contract, CustomerRef
    and InsuranceNr matching the structure of the input
    file. Then, click OK to close the Schema dialog box and propagate the
    changes.

    tFileInputPositional_3.png

Configuring data output

  1. Double-click tFileOutputXML to show its
    Basic settings view.

    tFileInputPositional_4.png

  2. Enter the XML output file path.
  3. Define the row tag that will wrap each row of data, in this use case
    ContractRef.
  4. Click the three-dot button next to Edit
    schema
    to view the data structure, and click Sync columns to retrieve the data structure from
    the input component if needed.
  5. Switch to the Advanced settings tab view
    to define other settings for the XML output.

    tFileInputPositional_5.png

  6. Click the plus button to add a line in the Root
    tags
    table, and enter a root tag (or more) to wrap the XML
    output structure, in this case ContractsList.
  7. Define parameters in the Output format
    table if needed. For example, select the As
    attribute
    check box for a column if you want to use its name
    and value as an attribute for the parent XML element, clear the Use schema column name check box for a column to
    reuse the column label from the input schema as the tag label. In this use
    case, we keep all the default output format settings as they are.
  8. To group output rows according to the contract number, select the
    Use dynamic grouping check box, add a
    line in the Group by table, select
    Contract from the Column list field, and enter an attribute for it in the
    Attribute label field.

    tFileInputPositional_6.png

  9. Leave all the other parameters as they are.

Saving and executing the Job

  1. Press Ctrl+S to save your Job to ensure
    that all the configured parameters take effect.
  2. Press F6 or click Run on the Run tab to
    execute the Job.

    The file is read row by row based on the length values defined in the
    Pattern field and output as an XML file
    as defined in the output settings. You can open it using any standard XML
    editor.
    tFileInputPositional_7.png

Handling a positional file based on a dynamic schema

This scenario applies only to subscription-based Talend products.

This scenario describes a four-component Job that reads data from a positional
file, writes the data to another positional file, and replaces the padding characters with
spaces. The schema column details are not defined in the positional file components; instead,
they leverage a reusable dynamic schema. The input file used in this scenario is as
follows:

Dropping and linking components

  1. Drop the following components from the Palette onto the design workspace: tFixedFlowInput, tSetDynamicSchema, tFileInputPositional, and tFileOutputPositional.
  2. Connect the tFixedFlowInput component to
    the tSetDynamicSchema using a Row > Main
    connection to form a subJob. This subJob will define a reusable dynamic
    schema.
  3. Connect the tFileInputPositional
    component to the tFileOutputPositional
    component using a Row > Main connection to form another subJob. This
    subJob will read data from the input positional file and write the data to
    another positional file based on the dynamic schema set in the previous
    subJob.
  4. Connect the tFixedFlowInput component to
    the tFileInputPositional component using a
    Trigger > On
    Subjob Ok
    connection to link the two subJobs together.

    tFileInputPositional_8.png

Configuring the first subJob: creating a dynamic schema

  1. Double-click the tFixedFlowInput component to show its Basic settings view and define its
    properties.

    tFileInputPositional_9.png

  2. Click the […] button
    next to Edit schema to open the Schema dialog box.

    tFileInputPositional_10.png

  3. Click the [+] button to
    add three columns: ColumnName, ColumnType, and ColumnLength, and set their
    types to String, String, and Integer respectively to define the minimum properties required
    for a positional file schema. Then, click OK to close the dialog box.
  4. Select the Use Inline
    Table option and click the [+] button three times to add three lines. In the
    ColumnName field, name the lines after the
    actual columns of the input file to read: ID, Name, and City. In the
    corresponding ColumnType field, set their types: id_Integer for column ID, and id_String for columns Name and City. In the
    corresponding ColumnLength field, set the length values of the columns. Note
    that the column names you give in this table will compose the header of the
    output file.
  5. Double-click the tSetDynamicSchema component to open its Basic settings view.

    tFileInputPositional_11.png

  6. Click Sync columns to
    ensure that the schema structure is properly retrieved from the preceding
    component.
  7. Under the Parameters
    table, click the [+] button to add three
    lines in the table.
  8. Click in the Property
    field for each line, and select ColumnName, Type, and
    Length respectively.
  9. Click in the Value field
    for each line, and select ColumnName,
    ColumnType, and ColumnLength respectively.

    Now, with the values set in the inline table of the tFixedFlowInput component retrieved, the
    following data structure is defined in the dynamic schema:
    Column Name   Type      Length
    ID            Integer   6
    Name          String    12
    City          String    12

Configuring the second subJob: reading and writing positional data

  1. Double-click the tFileInputPositional
    component to open its Basic settings
    view.

    tFileInputPositional_12.png

    Warning:

    The dynamic schema feature is only supported in Built-In mode and requires the input file
    to have a header row.

  2. Select the Use existing dynamic check
    box, and from the Component List that
    appears, select the tSetDynamicSchema
    component you used to create the dynamic schema. In this use case, only one
    tSetDynamicSchema component is used, so
    it is automatically selected.
  3. In the File name/Stream field, enter the
    path to the input positional file, or browse to the file path by clicking
    the […] button.
  4. Fill in the Header, Footer and Limit fields
    according to your input file structure and your need. In this scenario, we
    only need to skip the first row when reading the input file. To do this,
    fill the Header field with 1 and leave the other fields as they are.
  5. Click the […] button next to Edit schema to open the Schema dialog box, define only one column, dyn in this example, and select Dynamic from the Type list. Then, click OK
    to close the Schema dialog box and
    propagate the changes.

    tFileInputPositional_13.png

  6. Select the Customize check box, enter
    '-' in the Padding char
    field, and keep the other settings as they are.
  7. Double-click the tFileOutputPositional
    component to open its Basic settings
    view.

    tFileInputPositional_14.png

  8. Select the Use existing dynamic check
    box, specify the output file path, and select the Include header check box.
  9. In the Padding char field, enter '
    '
    so that the padding characters will be replaced with space in
    the output file.
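
The effect of this configuration can be pictured with the small Java sketch below: each '-'-padded field is cut at the widths defined in the dynamic schema above, trimmed, and re-padded with spaces. The sample record is hypothetical and the sketch is only an illustration of the transformation, not the code of the components.

  public class RepadSketch {
      // Field widths from the dynamic schema above: ID(6), Name(12), City(12).
      static final int[] WIDTHS = {6, 12, 12};

      static String repad(String row) {
          StringBuilder out = new StringBuilder();
          int offset = 0;
          for (int width : WIDTHS) {
              // Cut the '-'-padded field, remove the padding, then re-pad with spaces.
              String value = row.substring(offset, offset + width).replace('-', ' ').trim();
              out.append(String.format("%-" + width + "s", value));
              offset += width;
          }
          return out.toString();
      }

      public static void main(String[] args) {
          System.out.println(repad("1-----Shong-------Paris-------"));  // hypothetical input record
      }
  }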

Saving and executing the Job

  1. Press Ctrl+S to save your Job to ensure
    that all the configured parameters take effect.
  2. Press F6 or click Run on the Run tab to
    execute the Job.

    tFileInputPositional_15.png

    The data is read from the input positional file and written into the
    output positional file, with the padding characters replaced by
    space.

tFileInputPositional MapReduce properties (deprecated)

These properties are used to configure tFileInputPositional running in the MapReduce Job framework.

The MapReduce
tFileInputPositional component belongs to the MapReduce family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema.

Note: If you
make changes, the schema automatically becomes built-in.
 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will read all of the files stored in that folder, for example, /user/talend/in; if sub-folders exist, the sub-folders are automatically ignored unless you define the property mapreduce.input.fileinputformat.input.dir.recursive to be true in the Hadoop properties table in the Hadoop configuration tab.
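
In a Talend Job this property is simply added to the Hadoop properties table; the sketch below only shows, for reference, what the same setting looks like in a plain MapReduce driver, with a hypothetical input folder.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

  public class RecursiveInputSketch {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Equivalent of adding this property in the Hadoop properties table.
          conf.set("mapreduce.input.fileinputformat.input.dir.recursive", "true");

          Job job = Job.getInstance(conf, "recursive-input");
          // Hypothetical input folder; with the property set, its sub-folders are read as well.
          FileInputFormat.addInputPath(job, new Path("/user/talend/in"));
      }
  }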

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; then tFileInputPositional automatically decompresses it
at runtime. The supported compression formats and their
corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.
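
Hadoop selects the decompression codec from the file extension; the sketch below illustrates this lookup with CompressionCodecFactory and a few hypothetical file names (LZO is omitted because its codec ships separately).

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.compress.CompressionCodec;
  import org.apache.hadoop.io.compress.CompressionCodecFactory;

  public class CodecByExtensionSketch {
      public static void main(String[] args) {
          CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
          for (String name : new String[] {"in.deflate", "in.gz", "in.bz2", "in.txt"}) {
              CompressionCodec codec = factory.getCodec(new Path(name));
              System.out.println(name + " -> "
                  + (codec == null ? "no codec, read as-is" : codec.getClass().getSimpleName()));
          }
      }
  }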

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Row separator

The separator used to identify the end of a row.

Customize

Select this check box to customize the data format of the
positional file and define the table columns:

Column: Select the column you
want to customize.

Size: Enter the column
size.

Padding char: Enter, between double quotation marks,
the padding character you need to remove from the field. It is a space by
default.

Alignment: Select the appropriate
alignment parameter.

Pattern

Enter between double quotes the length values separated by commas, interpreted as a
string. Make sure the values entered in this field are consistent
with the schema defined.

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 if the data has no
header, or 1 if the header occupies the first row.

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Custom Encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Then select the encoding to be used from the list or select
Custom and define it
manually.

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Trim columns

Select this check box to remove the leading and trailing whitespaces from all
columns. When this check box is cleared, the Check column to
trim
table is displayed, which lets you select particular columns to
trim.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

In a
Talend
Map/Reduce Job, it is used as a start component and requires
a transformation component as output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tFileInputPositional as well as the
MapReduce family appears in the Palette of the Studio.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFileInputPositional properties for Apache Spark Batch

These properties are used to configure tFileInputPositional running in the Spark Batch Job framework.

The Spark Batch
tFileInputPositional component belongs to the File family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box cleared, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema.

Note: If you
make changes, the schema automatically becomes built-in.
 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will
read all of the files stored in that folder, for example,
/user/talend/in; if sub-folders exist, the sub-folders are automatically
ignored unless you define the property
spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to be
true in the Advanced properties table in the
Spark configuration tab.

  • Depending on the filesystem to be used, properly configure the corresponding
    configuration component placed in your Job, for example, a
    tHDFSConfiguration component for HDFS, a
    tS3Configuration component for S3 and a
    tAzureFSConfiguration for Azure Storage and Azure Data Lake
    Storage.

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; then tFileInputPositional automatically decompresses it
at runtime. The supported compression formats and their
corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

The button for browsing does not work with the Spark
Local mode; if you are
using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component depending on the filesystem to be used.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Row separator

The separator used to identify the end of a row.

Customize

Select this check box to customize the data format of the
positional file and define the table columns:

Column: Select the column you
want to customize.

Size: Enter the column
size.

Padding char: Enter, between double quotation marks,
the padding character you need to remove from the field. It is a space by
default.

Alignment: Select the appropriate
alignment parameter.

Pattern

Enter between double quotes the length values separated by commas, interpreted as a
string. Make sure the values entered in this field are consistent
with the schema defined.

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 if the data has no
header, or 1 if the header occupies the first row.

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Set minimum partitions

Select this check box to control the number of partitions to be created from the input
data over the default partitioning behavior of Spark.

In the displayed field, enter, without quotation marks, the minimum number of partitions
you want to obtain.

When you want to control the partition number, you can generally set at least as many partitions as
the number of executors for parallelism, while bearing in mind the available memory and the
data transfer pressure on your network.
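
At the Spark API level, a minimum partition count corresponds to the minPartitions argument of textFile, as in the sketch below; the path and the value 8 are assumptions for illustration, and the Studio generates its own equivalent code.

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;

  public class MinPartitionsSketch {
      public static void main(String[] args) {
          SparkConf conf = new SparkConf().setAppName("min-partitions").setMaster("local[*]");
          try (JavaSparkContext sc = new JavaSparkContext(conf)) {
              // Ask Spark for at least 8 partitions when reading the (hypothetical) input path.
              JavaRDD<String> lines = sc.textFile("/user/talend/in", 8);
              System.out.println("partitions = " + lines.getNumPartitions());
          }
      }
  }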

Custom Encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Trim columns

Select this check box to remove the leading and trailing whitespaces from all
columns. When this check box is cleared, the Check column to
trim
table is displayed, which lets you select particular columns to
trim.

Usage

Usage rule

This component is used as a start component and requires an output
link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

tFileInputPositional properties for Apache Spark Streaming

These properties are used to configure tFileInputPositional running in the Spark Streaming Job framework.

The Spark Streaming
tFileInputPositional component belongs to the File family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box cleared, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

The fields that come after are pre-filled in using the fetched
data.

For further information about the Hadoop
Cluster
node, see the Getting Started Guide.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema.

Note: If you
make changes, the schema automatically becomes built-in.
 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Folder/File

Browse to, or enter the path pointing to the data to be used in the file system.

If the path you set points to a folder, this component will
read all of the files stored in that folder, for example,
/user/talend/in; if sub-folders exist, the sub-folders are automatically
ignored unless you define the property
spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive to be
true in the Advanced properties table in the
Spark configuration tab.

  • Depending on the filesystem to be used, properly configure the corresponding
    configuration component placed in your Job, for example, a
    tHDFSConfiguration component for HDFS, a
    tS3Configuration component for S3 and a
    tAzureFSConfiguration for Azure Storage and Azure Data Lake
    Storage.

If you want to specify more than one file or directory in this
field, separate the paths using a comma (,).

If the file to be read is a compressed one, enter the file name
with its extension; then tFileInputPositional automatically decompresses it
at runtime. The supported compression formats and their
corresponding extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

The button for browsing does not work with the Spark
Local mode; if you are
using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component depending on the filesystem to be used.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Row separator

The separator used to identify the end of a row.

Customize

Select this check box to customize the data format of the
positional file and define the table columns:

Column: Select the column you
want to customize.

Size: Enter the column
size.

Padding char: Enter, between double quotation marks,
the padding character you need to remove from the field. It is a space by
default.

Alignment: Select the appropriate
alignment parameter.

Pattern

Enter between double quotes the length values separated by commas, interpreted as a
string. Make sure the values entered in this field are consistent
with the schema defined.

Header

Enter the number of rows to be skipped at the beginning of the file.

For example, enter 0 if the data has no
header, or 1 if the header occupies the first row.

Skip empty rows

Select this check box to skip the empty rows.

Advanced settings

Set minimum partitions

Select this check box to control the number of partitions to be created from the input
data over the default partitioning behavior of Spark.

In the displayed field, enter, without quotation marks, the minimum number of partitions
you want to obtain.

When you want to control the partition number, you can generally set at least as many partitions as
the number of executors for parallelism, while bearing in mind the available memory and the
data transfer pressure on your network.

Custom Encoding

You may encounter encoding issues when you process the stored data. In that
situation, select this check box to display the Encoding list.

Select the encoding from the list or select Custom and define it manually.

Advanced separator (for number)

Select this check box to change the separator used for numbers. By default, the thousands separator is a comma (,) and the decimal separator is a period (.).

Trim columns

Select this check box to remove the leading and trailing whitespaces from all
columns. When this check box is cleared, the Check column to
trim
table is displayed, which lets you select particular columns to
trim.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component is only used to provide the lookup flow (the right side of a join
operation) to the main flow of a tMap component. In this
situation, the lookup model used by this tMap must be
Load once.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

