August 17, 2023

tFileInputRegex – Docs for ESB 5.x



tFileInputRegex properties

Component family

File/Input
A powerful feature that can replace a number of other components of
the File family. It requires some advanced knowledge of regular
expression syntax.


Opens a file and reads it row by row, splitting each row into fields
using regular expressions. The fields, as defined in the schema, are
then sent to the next Job component via a Row link.

Basic settings

Property type

Either Built-in or Repository.



Built-in: No property data is stored centrally.



Repository: Select the repository
file where the properties are stored. The fields that follow are
completed automatically using the data retrieved.


File Name/Stream

File name: Name of the file
and/or the variable to be processed.

Stream: Data flow to be
processed. The data must be added to the flow so that it can be
collected by tFileInputRegex
via the INPUT_STREAM variable in the
autocompletion list (Ctrl+Space).

For further information about how to define and use a variable in
a Job, see Talend Studio
User Guide.

Note that in the Map/Reduce version of
tFileInputRegex, the field used
to read files is Folder/File.



Browse to, or enter the path to, the directory in HDFS where the data you need to use is stored.

If the path you set points to a folder, this component reads
all of the files stored in that folder, for example /user/talend/in; if sub-folders exist,
they are automatically ignored unless you define their paths explicitly.

If you want to specify more than one file or directory in this
field, separate the paths with a comma (,).

If the file to be read is compressed, enter the file name
with its extension; tFileInputRegex then automatically decompresses it at
runtime. The supported compression formats and their corresponding
extensions are:

  • DEFLATE: *.deflate

  • gzip: *.gz

  • bzip2: *.bz2

  • LZO: *.lzo

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
tab in the Run view.
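The decompression-by-extension behavior can be sketched in Java; this is a minimal illustration, not Talend's actual code, using only standard-library codecs (bzip2 and LZO would require third-party libraries):

```java
import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import java.util.zip.InflaterInputStream;

public class CompressionDemo {
    // Pick a decompressing stream based on the file extension, the way
    // the component transparently decompresses input at runtime.
    static InputStream openByExtension(String name, InputStream raw) throws IOException {
        if (name.endsWith(".gz")) return new GZIPInputStream(raw);
        if (name.endsWith(".deflate")) return new InflaterInputStream(raw);
        return raw; // plain, uncompressed file
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a sample row through gzip to show the mechanism.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("id=1;name=Ashburton".getBytes("UTF-8"));
        }
        InputStream in = openByExtension("rows.gz", new ByteArrayInputStream(buf.toByteArray()));
        String row = new BufferedReader(new InputStreamReader(in, "UTF-8")).readLine();
        System.out.println(row); // -> id=1;name=Ashburton
    }
}
```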


Row separator

Enter the separator used to identify the end of a row.
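In Java terms, the row separator acts as a delimiter applied to the input stream before field parsing. A minimal sketch (the sample data and the "|" separator are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class RowSeparatorDemo {
    // Split raw input into rows on a configurable separator,
    // mirroring the Row separator setting ("\n" by default).
    // Scanner treats the separator as a regex, so "|" is escaped.
    static List<String> rows(String input, String separator) {
        List<String> out = new ArrayList<>();
        Scanner sc = new Scanner(input).useDelimiter(separator);
        while (sc.hasNext()) out.add(sc.next());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(rows("a;1|b;2|c;3", "\\|")); // -> [a;1, b;2, c;3]
    }
}
```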



Regex

This field can contain multiple lines. Type in your regular
expression, including the subpatterns matching the fields to be
extracted.

Note: Backslashes need to be
doubled in the regex.


Regex syntax requires double quotes.
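Because the Regex field holds a Java string, the expression sits in double quotes and every backslash is doubled. A minimal Java sketch (the sample row and pattern are hypothetical) of how each capturing group becomes one output field:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexFieldDemo {
    public static void main(String[] args) {
        // In the Regex field, "\\w" and "\\d" in the editor mean the
        // regex \w and \d. Each capturing group maps to a schema column.
        Pattern p = Pattern.compile("(\\w+);(\\d+)");
        Matcher m = p.matcher("Ashburton;5312");
        if (m.matches()) {
            System.out.println(m.group(1)); // first field  -> Ashburton
            System.out.println(m.group(2)); // second field -> 5312
        }
    }
}
```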



Header

Enter the number of rows to be skipped at the beginning of the file.



Footer

Number of rows to be skipped at the end of the file.



Limit

Maximum number of rows to be processed. If Limit = 0, no row is
read or processed.
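Taken together, Header, Footer, and Limit trim the row set before parsing. A minimal Java sketch of that logic (an illustration of the documented behavior, not Talend's actual implementation):

```java
import java.util.List;

public class SkipLimitDemo {
    // Apply Header, Footer and Limit to a list of rows: skip `header`
    // leading rows, drop `footer` trailing rows, then keep at most
    // `limit` rows (a limit of 0 reads nothing, per the docs).
    static List<String> apply(List<String> rows, int header, int footer, int limit) {
        int from = Math.min(header, rows.size());
        int to = Math.max(from, rows.size() - footer);
        List<String> kept = rows.subList(from, to);
        return kept.subList(0, Math.min(limit, kept.size()));
    }

    public static void main(String[] args) {
        List<String> rows = List.of("header", "r1", "r2", "r3", "footer");
        System.out.println(apply(rows, 1, 1, 2)); // -> [r1, r2]
    }
}
```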


Ignore error message for the unmatched record

Select this check box to avoid outputting error messages for records that do not match
the specified regex. This check box is cleared by default.


Schema and Edit

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:
  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.
  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content] window.


Built-in: The schema will be
created and stored locally for this component only. Related topic:
see Talend Studio
User Guide.



Repository: The schema already
exists and is stored in the Repository, hence can be reused in
various projects and Job flowcharts. Related topic: see
Talend Studio
User Guide.


Skip empty rows

Select this check box to skip the empty rows.


Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows.
When errors are skipped, you can collect the rows on error using a Row
> Reject link.

Advanced settings


Encoding

Select the encoding from the list or select Custom and
define it manually. This field is compulsory for database data handling.

In the Map/Reduce version of tFileInputRegex, you need to select the
Custom encoding check box to
display this list.


tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a
Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer. This
variable is not available in the Map/Reduce version.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.


Usage

Use this component to read a file and separate the fields it contains
according to the defined regex. You can also create a rejection flow
using a Row > Reject
link to filter out the data that does not
correspond to the defined type. For an example of how to use these
two links, see Scenario 2: Extracting correct and erroneous data from an XML field in a delimited file.

Usage in Map/Reduce Jobs

In a Talend Map/Reduce Job, it is used as a start component and requires
a transformation component as its output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

For further information about a Talend Map/Reduce Job, see the sections
describing how to create, convert and configure a Talend Map/Reduce Job of the
Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data
integration Jobs, not Map/Reduce Jobs.


The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User
Guide.

For more information on the log4j logging levels, see the Apache Log4j documentation.


Scenario: Regex to Positional file

The following scenario creates a two-component Job, reading data from an input file
using a regular expression and outputting the delimited data into a positional file.


Dropping and linking the components

  1. Drop a tFileInputRegex component from the
    Palette to the design workspace.

  2. Drop a tFileOutputPositional component the
    same way.

  3. Right-click on the tFileInputRegex component
    and select Row >
    Main. Drag this main row link
    onto the tFileOutputPositional component and
    release when the plug symbol displays.

Configuring the components

  1. Select the tFileInputRegex again so that the
    Component view shows up, and define the
    component properties.
  2. The Property type is Built-In for this scenario, hence the properties are set for this
    Job only.

  3. Fill in a path to the file in File Name
    field. This field is mandatory.

  4. Define the Row separator identifying the end
    of a row.

  5. Then define the Regular expression in order
    to delimit the fields of a row, which are to be passed on to the next component. You
    can type in a regular expression using Java code, on multiple lines if
    required.

    Regex syntax requires double quotes.

  6. In this expression, make sure you include all subpatterns matching the fields
    to be extracted.

  7. In this scenario, ignore the header, footer and limit fields.

  8. Select a local (Built-in) Schema to define the data to pass on to the tFileOutputPositional component.

  9. You can load or create the schema through the Edit
    schema function.
  10. Then define the second component properties:

  11. Enter the Positional file output path.

  12. Enter the Encoding standard the output file
    is encoded in. Note that, for the time being, the encoding consistency
    verification is not supported.

  13. Select the Schema type. Click on Sync columns to automatically synchronize the schema
    with the Input file schema.
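The configuration steps above amount to matching each row against the regex and writing the captured fields at fixed positions. A minimal Java sketch under those assumptions (the sample rows, pattern, and column widths are hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexToPositionalDemo {
    public static void main(String[] args) {
        // Hypothetical input rows and regex; the two subpatterns map to
        // the two schema columns passed to the positional output.
        String[] rows = { "Paris;2145", "Lyon;472" };
        Pattern regex = Pattern.compile("(\\w+);(\\d+)");
        for (String row : rows) {
            Matcher m = regex.matcher(row);
            if (!m.matches()) continue; // unmatched rows could feed a reject flow
            // Fixed-width layout: 10 chars for the name, 6 for the count,
            // both left-aligned and space-padded.
            System.out.printf("%-10s%-6s%n", m.group(1), m.group(2));
        }
    }
}
```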

Saving and executing the Job

  1. Press Ctrl+S to save your Job.

  2. Now go to the Run tab, and click on Run to execute the Job.

    The file is read row by row and split into fields based on the regular expression definition. You can open
    the output file using any standard file editor.

