tCompareColumns – Docs for ESB 7.x

tCompareColumns

Compares two columns to design useful features for generating a classification
model.

tCompareColumns outputs comparison results to manually added
columns.

In local mode, Apache Spark 2.0.0 and 2.4.0 are supported.

tCompareColumns properties for Apache Spark Batch

These properties are used to configure tCompareColumns running
in the Spark Batch Job framework.

The Spark Batch
tCompareColumns component belongs to the Natural Language Processing family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync columns to retrieve the schema from the previous component
connected in the Job.

Click Edit schema to make changes to the schema. If the current schema is
of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Add as many columns as necessary to the output schema according to the
algorithms defined in the Comparison options table:

  • two columns for the Most similar in list (2 outputs) algorithm,

  • one column for the First letter corresponds (1 output) algorithm,

  • one column for the Is substring (1 output) algorithm.


Built-In: You create and store the schema locally for this component
only.


Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Comparison options

In this table, set the rules for comparing tokens in two columns.

The column specified in Main column contains the tokens to be compared
with the reference tokens in Reference column.

In the Algorithms column, select the algorithm to be used for each
comparison (a sketch after this table illustrates all three):

  • Most similar in list (2 outputs): The column specified in
    Main column contains one token per row. Each row in the column
    specified in Reference column can contain one token or one string
    with multiple tokens separated by a tab. The first output is the
    highest Jaro-Winkler distance between the token in Main column and
    all the tokens in Reference column. The second output is the most
    similar token in Reference column.

  • First letter corresponds (1 output): The columns specified in
    Main column and Reference column contain one token per row. In the
    output, T is returned if the first letters of the two tokens are
    the same, and F is returned if they are different.

  • Is substring (1 output): The columns
    specified in Main column and
    Reference column contain one token
    per row. In the output, T is returned if the
    token in Main column is a substring of
    the token from Reference column. If not,
    F is returned.

Output column(s): Specify the columns that contain
the comparison results in the output schema.
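
The component's internal implementation is not part of this documentation,
but the three algorithms are simple enough to illustrate. The following is
a minimal Java sketch of what each algorithm computes, under stated
assumptions: Apache Commons Text's JaroWinklerSimilarity stands in for the
component's own Jaro-Winkler scoring (the docs call this score a distance,
but a higher value means a closer match), the class and method names are
hypothetical, and the first-letter comparison is shown as case-sensitive
because the documentation does not specify case handling.

    import org.apache.commons.text.similarity.JaroWinklerSimilarity;

    // Hypothetical stand-ins for the three tCompareColumns algorithms.
    public class CompareColumnsSketch {

        private static final JaroWinklerSimilarity JW = new JaroWinklerSimilarity();

        // Most similar in list (2 outputs): score the main token against every
        // tab-separated token in the reference cell and return the best score
        // together with the best-matching token.
        static Object[] mostSimilarInList(String mainToken, String referenceCell) {
            double bestScore = -1.0;
            String bestToken = null;
            for (String ref : referenceCell.split("\t")) {
                double score = JW.apply(mainToken, ref);
                if (score > bestScore) {
                    bestScore = score;
                    bestToken = ref;
                }
            }
            return new Object[] { bestScore, bestToken };
        }

        // First letter corresponds (1 output): "T" if the first letters match
        // (case-sensitive here, an assumption), otherwise "F".
        static String firstLetterCorresponds(String mainToken, String refToken) {
            return mainToken.charAt(0) == refToken.charAt(0) ? "T" : "F";
        }

        // Is substring (1 output): "T" if the main token occurs inside the
        // reference token, otherwise "F".
        static String isSubstring(String mainToken, String refToken) {
            return refToken.contains(mainToken) ? "T" : "F";
        }

        public static void main(String[] args) {
            Object[] best = mostSimilarInList("Pari", "Paris\tLondon\tBerlin");
            System.out.println(best[0] + " -> " + best[1]);                // score -> Paris
            System.out.println(firstLetterCorresponds("Paris", "Prague")); // T
            System.out.println(isSubstring("Pari", "Paris"));              // T
        }
    }

For one row processed with all three algorithms, the four values above
correspond to the four manually added output columns: two for Most similar
in list and one each for First letter corresponds and Is substring.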

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Spark Batch Connection

In the Spark Configuration tab in the Run view, define the connection to a
given Spark cluster for the whole Job. In addition, because the Job needs
its dependent jar files at execution time, you must specify the directory
in the file system to which these jar files are transferred so that Spark
can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google
      Storage staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage configuration area in
      the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake
      Storage for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole
      HDFS system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration
      component corresponding to the file system your cluster is
      using. Typically, this system is HDFS, so use
      tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Standard version of this component yet.

