
tFilterRow

Filters input rows by setting one or more conditions on the selected
columns.

Depending on the Talend product you are using, this component can be used in one, some, or all of the following Job frameworks: Standard, MapReduce (deprecated), Spark Batch, Spark Streaming, and Storm (deprecated).

tFilterRow Standard properties

These properties are used to configure tFilterRow running in the Standard Job framework.

The Standard
tFilterRow component belongs to the Processing family.

The component in this framework is available in all Talend products.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

The schema of this component is built-in only.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see the Talend Studio User Guide.

The dynamic schema feature is designed for retrieving unknown columns of a table and should be used for that purpose only; it is not recommended for creating tables.

Logical operator used to combine conditions

Select a logical operator to combine simple conditions and to
combine the filter results of both modes if any advanced conditions
are defined.

And: returns the boolean value of
true if all conditions are
true; otherwise false. For each
two conditions combined using a logical AND, the second condition is
evaluated only if the first condition is evaluated to be
true.

Or: returns the boolean value of
true if any condition is
true; otherwise false. For each
two conditions combined using a logical OR, the second condition is
evaluated only if the first condition is evaluated to be
false.
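This short-circuit behavior matches the && and || operators of Java, the language of the code Talend generates. The following minimal, self-contained sketch (illustrative only, not code produced by the component) shows it:

    public class ShortCircuitDemo {
        static boolean first()  { System.out.println("first evaluated");  return false; }
        static boolean second() { System.out.println("second evaluated"); return true;  }

        public static void main(String[] args) {
            // AND: second() is skipped because first() already returned false.
            boolean and = first() && second();
            // OR: second() is evaluated because first() returned false.
            boolean or = first() || second();
            System.out.println("AND=" + and + ", OR=" + or);
        }
    }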

Conditions

Click the plus button to add as many simple conditions as needed.
Based on the logical operator selected, the conditions are evaluated
one after the other in sequential order for each row. When
evaluated, each condition returns the boolean value of true or false.

Input column: Select the column of the schema on which the function is to be operated.

Function: Select the function from the list.

Operator: Select the operator to bind the input column with the value.

Value: Type in the filtered value, between quotes if needed.

Use advanced mode

Select this check box when the operations you want to perform
cannot be carried out through the standard functions offered, for
example, different logical operations in the same component. In the
text field, type in the regular expression as required.

If multiple advanced conditions are defined, use a logical
operator between two conditions:

&& (logical AND): returns the boolean value
of true if both conditions are
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be true.

|| (logical OR): returns the boolean value of
true if either condition is
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be false.
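For example, assuming an input schema with an Age column (Integer) and a City column (String), both hypothetical here, an advanced condition mixing the two operators could read:

    input_row.Age > 10 && (input_row.City.equals("Paris") || input_row.City.equals("London"))

Rows are accepted when the expression evaluates to true.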

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the
Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

NB_LINE_OK: the number of rows matching the filter. This
is an After variable and it returns an integer.

NB_LINE_REJECTED: the number of rows rejected. This is an
After variable and it returns an integer.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use.
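For example, a downstream tJava component could read the After variables from the globalMap once tFilterRow has finished. This is a sketch; the component name tFilterRow_1 is an assumption for the example:

    // Retrieve tFilterRow's After variables in a following subJob (e.g. in tJava).
    int ok = ((Integer) globalMap.get("tFilterRow_1_NB_LINE_OK")).intValue();
    int rejected = ((Integer) globalMap.get("tFilterRow_1_NB_LINE_REJECTED")).intValue();
    System.out.println("Accepted: " + ok + ", rejected: " + rejected);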

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is not startable (it does not have a green background) and it requires an output component.

Filtering a list of names using simple conditions

The following scenario shows a Job that uses simple conditions to filter a list of records. The Job outputs two tables: the first lists all male persons whose last name is shorter than nine characters and who are aged between 10 and 80 years; the second lists all rejected records, each accompanied by an error message explaining why it was rejected.

tFilterRow_1.png

Dropping and linking components

  1. Drop tFixedFlowInput, tFilterRow and tLogRow from the Palette
    onto the design workspace.
  2. Connect tFixedFlowInput to tFilterRow using a Row > Main
    link. Then, connect tFilterRow to tLogRow using a Row > Filter
    link.
  3. Drop a second tLogRow from the Palette onto the design workspace and
    rename it reject. Then, connect tFilterRow to reject using a
    Row > Reject link.
  4. Label the components to better identify their roles in the Job.

Configuring the components

  1. Double-click tFixedFlowInput to display
    its Basic settings view and define its
    properties.
  2. Click the […] button next to Edit schema to define the schema for the input
    data. In this example, the schema is made of the following four columns:
    LastName (type String), Gender
    (type String), Age (type Integer) and
    City (type String).

    tFilterRow_2.png

    When done, click OK to validate the
    schema setting and close the dialog box. A new dialog box opens and asks you
    if you want to propagate the schema. Click Yes.
  3. Set the row and field separators in the corresponding fields if needed. In
    this example, use the default settings for both, namely the row separator is
    a carriage return and the field separator is a semi-colon.
  4. Select the Use Inline Content (delimited file) option in the
    Mode area and type in the input data in the Content field.

    tFilterRow_3.png

    The input data used in this example is shown
    below:
  5. Double-click tFilterRow to display its
    Basic settings view and define its
    properties.

    tFilterRow_4.png

  6. In the Conditions table, add four
    conditions and fill in the filtering parameters.

    • From the InputColumn list field
      of the first row, select LastName, from the
      Function list field, select
      Length, from the Operator list field, select Lower than, and in the Value column, type in
      9 to limit the length of last names to nine
      characters.

    • From the InputColumn list field
      of the second row, select Gender, from the
      Operator list field, select
      Equals, and in the Value column, type in
      M in double quotes to filter records of
      male persons.

      Warning:

      In the Value field, you must
      type in your values between double quotes for all types of
      values, except for integer values, which do not need
      quotes.

    • From the InputColumn list field
      of the third row, select Age, from the
      Operator list field, select
      Greater than, and in the
      Value column, type in
      10 to set the lower limit to 10
      years.

    • From the InputColumn list field of the fourth row, select
      Age, from the Operator list field, select
      Lower than, and in the Value column, type in
      80 to set the upper limit to 80 years.

  7. To combine the conditions, select And so that only those records
    that meet all the defined conditions are accepted (see the equivalent
    Java sketch after this list).
  8. In the Basic settings view of each tLogRow component, select Table (print values in cells of a table) in the Mode area.
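Combined with And, the four conditions amount to the following Java predicate, evaluated for each row. This is an illustrative sketch, not the exact code the component generates; input_row stands for the current row, as in the advanced-mode syntax:

    boolean accepted = input_row.LastName.length() < 9
            && input_row.Gender.equals("M")
            && input_row.Age > 10
            && input_row.Age < 80;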

Executing the Job

Save your Job and press F6 to execute
it.

tFilterRow_5.png

As shown above, the first table lists the records of male persons aged
between 10 and 80 years whose last names are made up of fewer than nine
characters, and the second table lists all the records that do not match the
filter conditions. Each rejected record has a corresponding error message
that explains the reason for rejection.

Filtering a list of names through different logical operations

Based on the previous scenario, this scenario further filters the input data so that
only records of people from New York or Chicago are accepted. Without changing
the filter settings defined in the previous scenario, advanced conditions are added in
this scenario to enable both logical AND and logical OR operations in the same tFilterRow component.

Procedure

  1. Double-click the tFilterRow component to show
    its Basic settings view.

    tFilterRow_6.png

  2. Select the Use advanced mode check box, and
    type in the following expression in the text field:
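    For example, assuming the input_row syntax that tFilterRow's advanced
    mode uses to reference the current row's columns, the expression could
    read:

    input_row.City.equals("Chicago") || input_row.City.equals("New York")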

    This defines two conditions on the City column of the input data,
    one for Chicago and one for New York, and combines them with a logical OR
    so that records satisfying either condition are accepted.
  3. Press Ctrl+S to save the Job and press
    F6 to execute it.

    tFilterRow_7.png

    As shown above, the result list of the previous scenario has been further
    filtered, and only the records whose City is New York or Chicago are
    accepted.

tFilterRow MapReduce properties (deprecated)

These properties are used to configure tFilterRow running in the MapReduce Job framework.

The MapReduce
tFilterRow component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Logical operator used to combine conditions

Select a logical operator to combine simple conditions and to
combine the filter results of both modes if any advanced conditions
are defined.

And: returns the boolean value of
true if all conditions are
true; otherwise false. For each
two conditions combined using a logical AND, the second condition is
evaluated only if the first condition is evaluated to be
true.

Or: returns the boolean value of
true if any condition is
true; otherwise false. For each
two conditions combined using a logical OR, the second condition is
evaluated only if the first condition is evaluated to be
false.

Conditions

Click the plus button to add as many simple conditions as needed.
Based on the logical operator selected, the conditions are evaluated
one after the other in sequential order for each row. When
evaluated, each condition returns the boolean value of true or false.

Input column: Select the column of the schema on which the function is to be operated.

Function: Select the function from the list.

Operator: Select the operator to bind the input column with the value.

Value: Type in the filtered value, between quotes if needed.

Use advanced mode

Select this check box when the operations you want to perform
cannot be carried out through the standard functions offered, for
example, different logical operations in the same component. In the
text field, type in the regular expression as required.

If multiple advanced conditions are defined, use a logical
operator between two conditions:

&& (logical AND): returns the boolean value
of true if both conditions are
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be true.

|| (logical OR): returns the boolean value of
true if either condition is
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be false.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as an intermediate step, and the other components used along with it must be Map/Reduce components too. They generate native Map/Reduce code that can be executed directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job in the Talend Open Studio for Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFilterRow properties for Apache Spark Batch

These properties are used to configure tFilterRow running in the Spark Batch Job framework.

The Spark Batch
tFilterRow component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Logical operator used to combine conditions

Select a logical operator to combine simple conditions and to
combine the filter results of both modes if any advanced conditions
are defined.

And: returns the boolean value of
true if all conditions are
true; otherwise false. For each
two conditions combined using a logical AND, the second condition is
evaluated only if the first condition is evaluated to be
true.

Or: returns the boolean value of
true if any condition is
true; otherwise false. For each
two conditions combined using a logical OR, the second condition is
evaluated only if the first condition is evaluated to be
false.

Conditions

Click the plus button to add as many simple conditions as needed.
Based on the logical operator selected, the conditions are evaluated
one after the other in sequential order for each row. When
evaluated, each condition returns the boolean value of true or false.

Input column: Select the column of the schema on which the function is to be operated.

Function: Select the function from the list.

Operator: Select the operator to bind the input column with the value.

Value: Type in the filtered value, between quotes if needed.

Use advanced mode

Select this check box when the operations you want to perform
cannot be carried out through the standard functions offered, for
example, different logical operations in the same component. In the
text field, type in the regular expression as required.

If multiple advanced conditions are defined, use a logical
operator between two conditions:

&& (logical AND): returns the boolean value
of true if both conditions are
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be true.

|| (logical OR): returns the boolean value of
true if either condition is
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be false.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket field in the
      Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage configuration
      area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the
      Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to
      write your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole HDFS
      system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically, this
      system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding
    to the file system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

tFilterRow properties for Apache Spark Streaming

These properties are used to configure tFilterRow running in the Spark Streaming Job framework.

The Spark Streaming
tFilterRow component belongs to the Processing family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Logical operator used to combine conditions

Select a logical operator to combine simple conditions and to
combine the filter results of both modes if any advanced conditions
are defined.

And: returns the boolean value of
true if all conditions are
true; otherwise false. For each
two conditions combined using a logical AND, the second condition is
evaluated only if the first condition is evaluated to be
true.

Or: returns the boolean value of
true if any condition is
true; otherwise false. For each
two conditions combined using a logical OR, the second condition is
evaluated only if the first condition is evaluated to be
false.

Conditions

Click the plus button to add as many simple conditions as needed.
Based on the logical operator selected, the conditions are evaluated
one after the other in sequential order for each row. When
evaluated, each condition returns the boolean value of true or false.

Input column: Select the column of the schema on which the function is to be operated.

Function: Select the function from the list.

Operator: Select the operator to bind the input column with the value.

Value: Type in the filtered value, between quotes if needed.

Use advanced mode

Select this check box when the operations you want to perform
cannot be carried out through the standard functions offered, for
example, different logical operations in the same component. In the
text field, type in the regular expression as required.

If multiple advanced conditions are defined, use a logical
operator between two conditions:

&& (logical AND): returns the boolean value
of true if both conditions are
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be true.

|| (logical OR): returns the boolean value of
true if either condition is
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be false.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket field in the
      Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage configuration
      area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the
      Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to
      write your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole HDFS
      system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically, this
      system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding
    to the file system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

tFilterRow Storm properties (deprecated)

These properties are used to configure tFilterRow running in the Storm Job framework.

The Storm
tFilterRow component belongs to the Processing family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

The Storm framework is deprecated from Talend 7.1 onwards. Use Talend Jobs for Apache Spark Streaming to accomplish your Streaming related tasks.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Logical operator used to combine conditions

Select a logical operator to combine simple conditions and to
combine the filter results of both modes if any advanced conditions
are defined.

And: returns the boolean value of
true if all conditions are
true; otherwise false. For each
two conditions combined using a logical AND, the second condition is
evaluated only if the first condition is evaluated to be
true.

Or: returns the boolean value of
true if any condition is
true; otherwise false. For each
two conditions combined using a logical OR, the second condition is
evaluated only if the first condition is evaluated to be
false.

Conditions

Click the plus button to add as many simple conditions as needed.
Based on the logical operator selected, the conditions are evaluated
one after the other in sequential order for each row. When
evaluated, each condition returns the boolean value of true or false.

Input column: Select the column of the schema on which the function is to be operated.

Function: Select the function from the list.

Operator: Select the operator to bind the input column with the value.

Value: Type in the filtered value, between quotes if needed.

Use advanced mode

Select this check box when the operations you want to perform
cannot be carried out through the standard functions offered, for
example, different logical operations in the same component. In the
text field, type in the regular expression as required.

If multiple advanced conditions are defined, use a logical
operator between two conditions:

&& (logical AND): returns the boolean value
of true if both conditions are
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be true.

|| (logical OR): returns the boolean value of
true if either condition is
true; otherwise false. The second
condition is evaluated only if the first condition is evaluated to
be false.

Usage

Usage rule

If you have subscribed to one of the Talend solutions with Big Data, you can also use this component as a Storm component. In a Talend Storm Job, this component is used as an intermediate step, and the other components used along with it must be Storm components too. They generate native Storm code that can be executed directly in a Storm system.

The Storm version does not support the use of global variables.

You need to use the Storm Configuration tab in the
Run view to define the connection to a given Storm
system for the whole Job.

This connection is effective on a per-Job basis.

For further information about a Talend Storm Job, see the sections describing how to create and configure a Talend Storm Job in the Talend Open Studio for Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Related scenarios

No scenario is available for the Storm version of this component
yet.

