
tCompareColumns properties – Docs for ESB 6.x

tCompareColumns properties

Compares two columns to design useful features for generating a classification
model.

tCompareColumns outputs comparison results to manually added
columns.

This component can run only with Spark 1.6 and 2.0.

tCompareColumns properties for Apache Spark Batch

These properties are used to configure tCompareColumns running
in the Spark Batch Job framework.

The Spark Batch
tCompareColumns component belongs to the Natural Language Processing family.

The component in this framework is available when you have subscribed to any Talend Platform product with Big Data or Talend Data
Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Add as many columns as necessary to the output schema according to the
algorithms defined in the Comparison options table:

  • two columns for the Most similar in list (2 outputs)
    algorithm,

  • one column for the First letter corresponds (1 output)
    algorithm,

  • one column for the Is substring (1 output)
    algorithm.

 

Built-In: You create and store the schema locally for this component only.
Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.
Related topic: see the Talend Studio User Guide.

Comparison options

In this table, set the rules for comparing tokens in two columns.

The column specified in Main column contains the tokens to be compared with
the reference tokens in Reference column.

In the Algorithms column, select the algorithm to
be used for each comparison:

  • Most similar in list (2 outputs): The column specified in
    Main column contains one token per row. Each row in the column
    specified in Reference column can contain one token or one string
    with multiple tokens separated by a tab. The first output is the
    highest Jaro-Winkler distance between the token in Main column and
    all the tokens in Reference column. The second output is the most
    similar token in Reference column.

  • First letter corresponds (1 output): The columns specified in
    Main column and Reference column contain one token per row. In the
    output, T is returned if the first letters of the two tokens are
    the same. F is returned if they are different.

  • Is substring (1 output): The columns
    specified in Main column and
    Reference column contain one token
    per row. In the output, T is returned if the
    token in Main column is a substring of
    the token from Reference column. If not,
    F is returned.

Output column(s): Specify the columns that contain
the comparison results in the output schema.
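
The three algorithms above boil down to simple per-row computations. The
sketch below is not the code that Talend Studio generates: the class,
method, and value names are assumptions for illustration only, and the
Jaro-Winkler scoring is taken from the JaroWinklerSimilarity class of
Apache Commons Text, which may differ in scale or tie-breaking from the
component's own implementation.

  import org.apache.commons.text.similarity.JaroWinklerSimilarity;

  public class CompareColumnsSketch {

      private static final JaroWinklerSimilarity JARO_WINKLER = new JaroWinklerSimilarity();

      // Most similar in list (2 outputs): best Jaro-Winkler score and the closest reference token.
      static Object[] mostSimilarInList(String mainToken, String referenceCell) {
          double bestScore = -1.0;
          String bestToken = null;
          // The reference cell may hold several tokens separated by a tab.
          for (String candidate : referenceCell.split("\t")) {
              double score = JARO_WINKLER.apply(mainToken, candidate);
              if (score > bestScore) {
                  bestScore = score;
                  bestToken = candidate;
              }
          }
          return new Object[] { bestScore, bestToken };
      }

      // First letter corresponds (1 output): "T" if the first letters match, otherwise "F".
      static String firstLetterCorresponds(String mainToken, String referenceToken) {
          return mainToken.charAt(0) == referenceToken.charAt(0) ? "T" : "F";
      }

      // Is substring (1 output): "T" if the main token is contained in the reference token, otherwise "F".
      static String isSubstring(String mainToken, String referenceToken) {
          return referenceToken.contains(mainToken) ? "T" : "F";
      }

      public static void main(String[] args) {
          Object[] best = mostSimilarInList("Jon", "John\tJane\tJoan");
          System.out.println(best[0] + " / " + best[1]);              // highest score and closest token
          System.out.println(firstLetterCorresponds("Jon", "John"));  // T
          System.out.println(isSubstring("oh", "John"));              // T
      }
  }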

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Spark Batch Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
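
For context only: the staging location supplied through these fields and
configuration components plays the role of a Spark staging directory, that
is, a distributed file system path to which the Job's dependency jar files
are uploaded so that the cluster can read them. The snippet below is not
code generated by the Studio; the property name, paths, and class name are
illustrative assumptions for Spark on YARN.

  import org.apache.spark.SparkConf;

  public class StagingDirSketch {
      public static void main(String[] args) {
          // Illustrative only: Talend Studio builds and submits the Spark configuration for you.
          SparkConf conf = new SparkConf()
                  .setAppName("tCompareColumnsJob")
                  .setMaster("yarn")
                  // Directory to which dependency jars are staged before execution (assumed example path).
                  .set("spark.yarn.stagingDir", "hdfs://namenode:8020/user/talend/staging");
          System.out.println(conf.toDebugString());
      }
  }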

Related scenarios

No scenario is available for the Spark Batch version of this component yet.


Document retrieved from Talend: https://help.talend.com