August 16, 2023

tMatchPairing – Docs for ESB 6.x

tMatchPairing

Enables you to compute pairs of suspect duplicates from any source data including
large volumes in the context of machine learning on Spark.

This component reads a data set row by row, writes unique rows and exact
duplicates to separate files, computes pairs of suspect records based on a
blocking key definition and creates a sample of suspect records
representative of the data set.

You can label suspect pairs manually or load them into a Grouping
campaign which is already defined on the Talend Data Stewardship
server.

For further information about Grouping campaigns, see
documentation on Talend Help Center (https://help.talend.com).

This component can run only with the following Hadoop distributions with
Spark 1.6 and Spark 2.0:

  • Spark 1.6: CDH5.7, CDH5.8, HDP2.4.0, HDP2.5.0, MapR5.2.0, EMR4.5.0,
    EMR4.6.0.

  • Spark 2.0: EMR5.0.0.

tMatchPairing properties for Apache Spark Batch

These properties are used to configure tMatchPairing running in the Spark Batch Job framework.

The Spark Batch
tMatchPairing component belongs to the Data Quality family.

The component in this framework is available when you have subscribed to any Talend Platform product with Big Data or Talend Data
Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

The output schema of this component has read-only columns
in its output links:

PAIR_ID and
SCORE: used only with the Pairs and Pairs sample output links. The first column holds the
identifiers of the suspect pairs, and the second holds the similarities
between the records in each pair.

LABEL: used only with the Pairs sample output link. You must fill in this
column manually before using the sample in the Job with the tMatchModel component.

COUNT: used only with the Exact duplicates output link. This column gives
the occurrences of the records which exactly match.
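As an illustration of what the COUNT column represents, here is a minimal
Python sketch of counting exact duplicates. Which columns are compared
(content fields only, ignoring the identifier) is an assumption of this
sketch, not the component's actual implementation:

```python
from collections import Counter

# Toy records: (id, source, site name, address).
records = [
    ("A1", "src1", "Happy Kids", "12 Main St"),
    ("A2", "src2", "Happy Kids", "12 Main St"),
    ("A3", "src3", "Sunny Days", "9 Oak Ave"),
]

# Count occurrences of each record's content (ignoring the id column,
# an assumption made for this illustration).
counts = Counter(rec[2:] for rec in records)

# Content that appears more than once is an exact duplicate; the count
# plays the role of the COUNT column.
exact_duplicates = {rec: n for rec, n in counts.items() if n > 1}
print(exact_duplicates)  # {('Happy Kids', '12 Main St'): 2}
```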

 

Built-In: You create and store the schema locally for this component only.
Related topic: see the Talend Studio User Guide.

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs. Related
topic: see the Talend Studio User Guide.

Blocking key

Select the columns with which you want to construct the
blocking key.

This blocking key is used to generate suffixes which are
used to group records.

Suffix array blocking parameters

Min suffix length: Set the
minimum suffix length you want to reach or stop at in each group.

Max block size: Set the maximum
number of records you want to have in each block. This helps filter out
large blocks in which the suffix is too common, as with tion and ing, for
example.
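The suffix-array blocking idea behind these parameters can be sketched in a
few lines of Python. This is a simplified illustration under stated
assumptions, not the component's actual algorithm: it generates every suffix
of the blocking key that is at least Min suffix length characters long,
groups record identifiers by shared suffix, and drops blocks larger than Max
block size:

```python
from collections import defaultdict

def suffix_blocks(keys, min_suffix_length=3, max_block_size=5):
    """Group record ids by every suffix of their blocking key that is at
    least min_suffix_length characters long; drop blocks that exceed
    max_block_size (suffixes too common to be discriminating) and
    singleton blocks (they generate no pairs)."""
    blocks = defaultdict(set)
    for rec_id, key in keys:
        key = key.lower()
        for start in range(0, len(key) - min_suffix_length + 1):
            blocks[key[start:]].add(rec_id)
    return {s: ids for s, ids in blocks.items() if 1 < len(ids) <= max_block_size}

keys = [(1, "Foundation"), (2, "Education"), (3, "Formation")]
blocks = suffix_blocks(keys)
# Records sharing a suffix such as "tion" land in the same block;
# suspect pairs are drawn only from records that co-occur in a block.
print(sorted(blocks["tion"]))  # [1, 2, 3]
```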

Pairing model location

Folder: Set the path to the local
folder where you want to generate the model files.

If you want to store the model in a specific file system,
for example S3 or HDFS, you must use the corresponding component in the
Job and select the Define a storage
configuration component
check box in the component basic
settings.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode,
ensure that you have properly configured the connection in a configuration component in
the same Job, such as tHDFSConfiguration.

Integration with Data Stewardship

Select this check box to set the connection parameters to the Talend Data Stewardship
server.

If you select this check box, tMatchPairing loads the
suspect pairs into a Grouping campaign, which means this component is
used as an end component.

Data Stewardship Configuration

  • URL:

    Enter the address to access the Talend Data Stewardship server suffixed
    with /data-stewardship/, for example
    http://<server_address>:19999/data-stewardship/.

  • Username and
    Password:

    Enter the authentication information to the Talend Data Stewardship server.

  • Campaign Name:

    A read-only field which shows the campaign name once the campaign is
    selected.

    Click Find a Campaign to open a dialog box
    which lists the Grouping campaigns on the server for which you
    are the campaign owner or have access rights.

    Click the refresh button to retrieve the campaign details from the
    Talend Data Stewardship server.

  • Campaign Label:

    A read-only field which shows the campaign label
    once the campaign is selected.

  • Assignee:

    Specify the campaign participant whose tasks you want to
    create.

Advanced settings

Filtering threshold

Enter a value between 0.2 and 0.85 to filter the pairs of
suspect records based on the calculated scores. This value helps to
exclude the pairs which are not very similar.

0.3 is the default value. The higher the value, the more similar the
retained records are.
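A minimal sketch of this filtering step: pairs whose score falls below the
threshold are dropped. Whether the component's comparison is strict or
inclusive is an assumption made here:

```python
# Toy suspect pairs with similarity scores in [0, 1].
pairs = [
    ("P1", 0.95),
    ("P2", 0.42),
    ("P3", 0.18),
]

filtering_threshold = 0.3  # the component's default value

# Keep only pairs whose score reaches the threshold; raising the
# threshold leaves only the most similar pairs.
kept = [(pid, score) for pid, score in pairs if score >= filtering_threshold]
print(kept)  # [('P1', 0.95), ('P2', 0.42)]
```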

Pairs sample

Number of pairs: Enter a size for
the sample of the suspect pairs you want to generate. The default value
is set to 10000.

Set a random seed: Select this
check box and, in the Seed field
that is displayed, enter a number if you want to get the same pairs sample
across different executions of the Job. Repeating the execution with a
different seed value produces a different pairs sample, and the scores of
the pairs can also differ, depending on whether the total number of suspect
pairs is greater than the sample size (10,000 by default).
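The effect of the seed can be sketched as follows; this illustrates the
reproducibility behavior, not the component's actual sampling code:

```python
import random

def sample_pairs(pair_ids, number_of_pairs=10000, seed=None):
    """Draw a sample of suspect pairs; a fixed seed makes the sample
    reproducible across runs, as the Set a random seed option does."""
    rng = random.Random(seed)
    if len(pair_ids) <= number_of_pairs:
        return list(pair_ids)  # fewer pairs than the sample size: keep all
    return rng.sample(pair_ids, number_of_pairs)

all_pairs = [f"PAIR_{i}" for i in range(100)]
s1 = sample_pairs(all_pairs, number_of_pairs=10, seed=42)
s2 = sample_pairs(all_pairs, number_of_pairs=10, seed=42)
assert s1 == s2  # same seed, same sample across executions
```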

Data Stewardship Configuration

Max tasks per commit: Set the number of lines you
want to have in each commit.

Do not change the default value unless you are facing performance issues.
Increasing the commit size can improve performance, but setting too high a
value could cause Job failures.
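The batching behavior controlled by Max tasks per commit can be sketched as
follows (an illustration only, not the component's implementation):

```python
def commit_in_batches(tasks, max_tasks_per_commit, commit):
    """Send tasks to the server in chunks of at most max_tasks_per_commit.
    Larger chunks mean fewer round trips but bigger individual requests."""
    for start in range(0, len(tasks), max_tasks_per_commit):
        commit(tasks[start:start + max_tasks_per_commit])

sent = []
commit_in_batches(list(range(7)), 3, sent.append)
print([len(batch) for batch in sent])  # [3, 3, 1]
```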

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Spark Batch Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Matching on Spark

Matching on Spark applies only to a subscription-based Talend Platform solution with Big
Data or Talend Data Fabric.

Using Talend Studio, you can match very high volumes of data using machine
learning on Spark. This feature helps you match a very large number of
records with minimal human intervention.

Machine learning with Spark usually involves two phases: the first phase
computes a model (that is, trains the machine) based on historical data and
mathematical heuristics, and the second phase applies the model to new data.
In the Studio, the first phase is implemented by two Jobs, one with the
tMatchPairing component and one with the tMatchModel component, while the
second phase is implemented by a third Job with the tMatchPredict
component.

Two workflows are possible when matching on Spark with the Studio.

In the first workflow, tMatchPairing:

  • computes pairs of suspect records based on a blocking key
    definition,

  • creates a sample of suspect records representative of the data
    set,

  • can optionally write this sample of suspect records into a Grouping campaign
    defined on the Talend Data Stewardship server,

  • separates unique records from exact match records,

  • generates a pairing model to be used with tMatchPredict.
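To illustrate how blocking reduces the comparison work in the first step
above, here is a toy Python sketch (the block contents are made up):
suspect pairs are generated only within blocks, rather than by comparing
every record with every other record:

```python
from itertools import combinations

# Blocks map a shared blocking-key suffix to the ids of the records in
# the block (as a suffix-array blocking step might produce them).
blocks = {
    "tion": {1, 2, 3},
    "kids": {4, 5},
}

# Suspect pairs are drawn only from records that share a block, which
# avoids the quadratic cost of comparing every record with every other.
suspect_pairs = set()
for ids in blocks.values():
    suspect_pairs.update(combinations(sorted(ids), 2))

print(sorted(suspect_pairs))  # [(1, 2), (1, 3), (2, 3), (4, 5)]
```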

match_on_spark_workflow.png

You can then manually label the sample suspect records by resolving tasks
in a Grouping campaign defined on the Talend Data Stewardship server, which is the
recommended method, or by editing the files manually.

Next, you can use the sample suspect records you labeled with tMatchModel in the second Job where tMatchModel:

  • computes similarities between the records in each suspect
    pair,

  • trains a classification model based on the Random Forest
    algorithm.

tMatchPredict labels suspect records automatically
and groups suspect records which match the label(s) set in the component properties.

In the second workflow, tMatchPredict applies the pairing model generated by
tMatchPairing and the matching model generated by tMatchModel directly to
the new data set, and:

  • labels suspect records automatically,

  • groups suspect records which match the label(s) set in the
    component properties,

  • separates the exact duplicates from unique records.

match_on_spark_workflow2.png

Scenario 1: Computing suspect pairs and writing a sample in Talend Data Stewardship

This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric.

Finding duplicate records is hard and time-consuming, especially when you are
dealing with huge volumes of data. In this example, tMatchPairing uses a
blocking key to compute the pairs of suspect duplicates in a long list of early
childhood education centers in Chicago coming from ten different sources.

It also computes a sample of the suspect duplicates and writes it in the form
of tasks into a Grouping campaign on the Talend Data Stewardship server.
Authorized data stewards can then review the data sample and decide whether
suspect pairs are duplicates.

You can then use the labeled sample to compute a matching model and apply it on all
suspect duplicates in the context of machine learning on Spark.

Before setting up the Job, make sure:

  • You have been assigned in Talend Administration Center the Campaign Owner role which grants you
    access to the campaigns on the server.

  • You have created the Grouping campaign in Talend Data Stewardship and defined
    the schema which corresponds to the structure of the education centers file.

    For further information, see the online publication about
    Grouping campaigns on Talend Help Center (https://help.talend.com).

Creating the Job

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited,
    tMatchPairing, tLogRow and
    tFileOutputDelimited (x2).

    stewardship_job_load_grouping_tasks.png

  2. Connect tFileInputDelimited to
    tMatchPairing using the Main link.

    tFileInputDelimited reads the source file and sends
    data to the next component.
  3. Connect tMatchPairing to the output file components
    using the Pairs and Unique rows
    links, and to tLogRow using the Exact
    duplicates
    link.

    tMatchPairing pre-analyzes the data, computes pairs
    of suspect duplicates, unique rows and exact duplicates, and generates a
    pairing model to be used with tMatchPredict.

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.
  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment on the
    fly in order to run the Job in it. Each processor of the local
    machine is used as a Spark worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate
    the configuration components, such as tS3Configuration or
    tHDFSConfiguration, that provide connection information to a remote
    file system, if you have placed such components in your Job.

    You can launch your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    If you cannot find the distribution corresponding to yours in this
    drop-down list, this means the distribution you want to connect to is
    not officially supported by Talend. In this situation, you can select
    Custom, then select the Spark version of the cluster to be connected
    and click the [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing version to import an officially
      supported distribution as base and then add other required jar
      files which the base distribution does not provide.

    2. Select Import from zip to import the configuration zip for the
      custom distribution to be used. This zip file should contain the
      libraries of the different Hadoop/Spark elements and the index
      file of these libraries.

      In Talend Exchange, members of the Talend community have shared
      ready-for-use configuration zip files which you can download from
      this Hadoop configuration list and use directly in your connection.
      However, because of the ongoing evolution of the different
      Hadoop-related projects, you might not be able to find the
      configuration zip corresponding to your distribution in this list;
      in that case, it is recommended to use the Import from existing
      version option to take an existing distribution as base and add
      the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to
      connect to custom versions from the Studio but cannot guarantee
      that the configuration of whichever version you choose will be
      easy. As such, you should only attempt to set up such a connection
      if you have sufficient Hadoop and Spark experience to handle any
      issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Connecting to a custom Hadoop distribution.

Reading data and sending the fields to the next component

  1. Double-click tFileInputDelimited to open its
    Basic settings view.

    The input data must have duplicate records; otherwise, the generated
    model will not give reliable results when applied to the whole set of
    suspect pairs.

  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. Click OK in the dialog box and accept to propagate
    the changes when prompted.
  4. In the Folder/File field, set the path to the input
    file.
  5. Set the row and field separators in the corresponding fields and the header
    and footer, if any.

Computing suspect pairs and writing a sample in a Grouping campaign

  1. Double-click tMatchPairing to
    display the Basic settings view and define the
    component properties.

    stewardship_job_load_grouping_tasks2.png

  2. Click Sync columns to retrieve the schema defined in
    the input component.
  3. In the Blocking Key table, click the
    [+] button to add a row. Select the column you want
    to use as a blocking key, Site_name in this
    example.

    The blocking key is constructed from the agency name and is used to
    generate the suffixes used to group pairs of records.
  4. In the Suffix array blocking parameters section:

    1. In the Min suffix length field, set the
      minimum suffix length you want to reach or stop at in each
      group.
    2. In the Max block size field, set the maximum
      number of records you want to have in each block. This helps
      filter out large blocks in which the suffix is too common.
  5. In the Folder field, set the path to the local
    folder where you want to generate the pairing model file.

    If you want to store the model in a specific file system, for example S3
    or HDFS, you must use the corresponding component in the Job and select
    the Define a storage configuration component
    check box in the component basic settings.

  6. Select the Integration with Data Stewardship check box
    and set the connection parameters to the Talend Data Stewardship
    server.


    1. In the URL field, enter the address of
      the server suffixed with /data-stewardship/, for example http://localhost:19999/data-stewardship/.

    2. Enter your login information to the server in the
      Username and Password
      fields.

      To enter your password, click the […] button next to the Password field, enter your password between double
      quotes in the dialog box that opens and click OK.

    3. Click Find a campaign to open a dialog
      box which lists the campaigns defined on the server for which you are
      the owner or have access rights.

    4. Select the Sites deduplication campaign in which
      to write the grouping tasks and click OK.
  7. Click Advanced settings and set the below
    parameters:

    1. In the Filtering threshold field, enter a
      value between 0.2 and 0.85 to filter the pairs of suspect records
      based on the calculated scores.

      This value helps to exclude the pairs which are not very similar.
      The higher the value is, the more similar the records are.

    2. Leave the Set a random seed check box clear
      as you want to generate a different sample at each execution of the
      Job.
    3. In the Number of pairs field, enter the size
      of the suspect pairs sample you want to generate.
    4. When configured with Talend Data Stewardship, enter the maximum number of tasks to load per
      commit in the Max tasks per commit
      field.

Configuring the output components to write suspect pairs, unique rows and
exact duplicates

  1. Double-click the first tFileOutputDelimited
    component to display the Basic settings view and
    define the component properties.

    You have already accepted to propagate the schema to the output
    components when you defined the input component.
  2. Clear the Define a storage configuration component
    check box to use the local system as your target file system.
  3. In the Folder field, set the path to the folder
    which will hold the output data.
  4. From the Action list, select the operation for
    writing data:

    • Select Create when you run the Job for the
      first time.

    • Select Overwrite to replace the file every
      time you run the Job.

  5. Set the row and field separators in the corresponding fields.
  6. Select the Merge results to single file check box,
    and in the Merge file path field set the path where
    to output the file of the suspect record pairs.
  7. Double-click the second tFileOutputDelimited
    component and define the component properties in the Basic
    settings
    view, as you do with the first component.

    This component creates the file which holds the unique rows generated
    from the input data.

  8. Double-click the tLogRow component and define the
    component properties in the Basic settings
    view.

    This component writes the exact duplicates generated from the input
    data on the Studio console.

Executing the Job to write tasks into the Grouping campaign


Press F6 to save and execute the
Job.

A sample of suspect pairs is computed and written in the form of tasks into
the Sites deduplication campaign and the record names
are automatically set to Record 1 and
Record 2.

The component also computes suspect pairs and unique rows and writes them in
the output files. It writes exact duplicates on the Studio console.

stewardship_job_load_grouping_tasks_results.png

You can now assign the tasks to authorized data stewards who need to decide
if the records in each group are duplicates.

For further information about handling grouping
tasks, see the documentation on Talend Help Center (https://help.talend.com).

Scenario 2: Computing suspect pairs and suspect sample from source
data

This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric.

In this example, tMatchPairing uses a blocking key to compute the
pairs of suspect duplicates in a list of early childhood education centers in
Chicago.

The use case described here uses:

  • a tFileInputDelimited component to read the source file,
    which contains a list of early childhood education centers in Chicago coming
    from ten different sources;

  • a tMatchPairing component to pre-analyze the data, compute
    pairs of suspect duplicates and generate a pairing model which is used by the
    tMatchPredict component;

  • three tFileOutputDelimited
    components to output the suspect duplicates, a sample of suspect pairs and the
    unique records; and

  • a tLogRow component to
    output the exact duplicates.

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace:
    tFileInputDelimited, tMatchPairing, three tFileOutputDelimited and tLogRow.
  2. Connect tFileInputDelimited to
    tMatchPairing using the Main link.
  3. Connect tMatchPairing to
    the tFileOutputDelimited components using the Pairs, Pairs
    sample
    and Unique rows links, and to the
    tLogRow component using the
    Exact duplicates link.
use_case-tmatchpairing.png

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.
  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment on the
    fly in order to run the Job in it. Each processor of the local
    machine is used as a Spark worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate
    the configuration components, such as tS3Configuration or
    tHDFSConfiguration, that provide connection information to a remote
    file system, if you have placed such components in your Job.

    You can launch your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    If you cannot find the distribution corresponding to yours in this
    drop-down list, this means the distribution you want to connect to is
    not officially supported by Talend. In this situation, you can select
    Custom, then select the Spark version of the cluster to be connected
    and click the [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing version to import an officially
      supported distribution as base and then add other required jar
      files which the base distribution does not provide.

    2. Select Import from zip to import the configuration zip for the
      custom distribution to be used. This zip file should contain the
      libraries of the different Hadoop/Spark elements and the index
      file of these libraries.

      In Talend Exchange, members of the Talend community have shared
      ready-for-use configuration zip files which you can download from
      this Hadoop configuration list and use directly in your connection.
      However, because of the ongoing evolution of the different
      Hadoop-related projects, you might not be able to find the
      configuration zip corresponding to your distribution in this list;
      in that case, it is recommended to use the Import from existing
      version option to take an existing distribution as base and add
      the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to
      connect to custom versions from the Studio but cannot guarantee
      that the configuration of whichever version you choose will be
      easy. As such, you should only attempt to set up such a connection
      if you have sufficient Hadoop and Spark experience to handle any
      issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Connecting to a custom Hadoop distribution.

Configuring the input component

  1. Double-click tFileInputDelimited to open its
    Basic settings view.

    The input data must have duplicate records; otherwise, the generated
    model will not give reliable results when applied to the whole set of
    suspect pairs.

  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. Click OK in the dialog box and accept to propagate
    the changes when prompted.
  4. In the Folder/File field, set the path to the input
    file.
  5. Set the row and field separators in the corresponding fields and the header
    and footer, if any.

Computing suspect duplicates, exact duplicates and unique rows

  1. Double-click tMatchPairing to
    display the Basic settings view and define the
    component properties.

    use_case-tmatchpairing3.png

  2. Click Sync columns to retrieve the schema defined in
    the input component.
  3. In the Blocking Key table, click the
    [+] button to add a row. Select the column you want
    to use as a blocking key, Site_name in this
    example.

    The blocking key is constructed from the agency name and is used to
    generate the suffixes used to group pairs of records.
  4. In the Suffix array blocking parameters section:

    1. In the Min suffix length field, set the
      minimum suffix length you want to reach or stop at in each
      group.
    2. In the Max block size field, set the maximum
      number of records you want to have in each block. This helps
      filter out large blocks in which the suffix is too common.
  5. In the Folder field, set the path to the local
    folder where you want to generate the pairing model file.

    If you want to store the model in a specific file system, for example S3
    or HDFS, you must use the corresponding component in the Job and select
    the Define a storage configuration component
    check box in the component basic settings.

  6. Click Advanced settings and set the below
    parameters:

    1. In the Filtering threshold field, enter a
      value between 0.2 and 0.85 to filter the pairs of suspect records
      based on the calculated scores.

      This value helps to exclude the pairs which are not very similar.
      The higher the value is, the more similar the records are.

    2. Leave the Set a random seed check box clear
      as you want to generate a different sample at each execution of the
      Job.
    3. In the Number of pairs field, enter the size
      of the suspect pairs sample you want to generate.
    4. When configured with Talend Data Stewardship, enter the maximum number of tasks to load per
      commit in the Max tasks per commit
      field.

Configuring the output components to write suspect pairs, suspect sample
and unique rows

  1. Double-click the first tFileOutputDelimited
    component to display the Basic settings view and
    define the component properties.

    You have already accepted to propagate the schema to the output
    components when you defined the input component.
  2. Clear the Define a storage configuration component
    check box to use the local system as your target file system.
  3. In the Folder field, set the path to the folder
    which will hold the output data.
  4. From the Action list, select the operation for
    writing data:

    • Select Create when you run the Job for the
      first time.

    • Select Overwrite to replace the file every
      time you run the Job.

  5. Set the row and field separators in the corresponding fields.
  6. Select the Merge results to single file check box,
    and in the Merge file path field set the path where
    to output the file of the suspect record pairs.
  7. Double-click the other tFileOutputDelimited components to display the Basic
    settings
    view and define the component properties.

    For example, for the pairs sample, set the path where to output the
    data to C:/tmp/tmp/pairsSample and the merge file path to
    C:/tmp/pairing/SampleToLabel.csv.

    For the unique rows, set the path where to output the data to
    C:/tmp/tmp/uniqueRows and the merge file path to
    C:/tmp/pairing/uniqueRows.csv.

Configuring the log component to write exact duplicates

  1. Double-click the tLogRow component and define the
    component properties in the Basic settings
    view.

    This component writes the exact duplicates generated from the input
    data on the Studio console.
  2. Click Sync columns to retrieve the schema from the
    preceding component.
  3. In the Mode area, select Table (print values
    in cells of a table)
    for better readability of the result.

Executing the Job to compute suspect pairs and suspect sample

Press F6 to execute the
Job.

tMatchPairing computes the pairs of suspect records and the
pairs sample, based on the blocking key definition, and writes the results to the
output files.

tMatchPairing excludes unique rows and writes them in the
output file:

use_case-tmatchpairing9.png

tMatchPairing excludes exact duplicates and writes them in the
Run view:

use_case-tmatchpairing8.png

The component has added an extra read-only column, LABEL, for
the Pairs sample link.

use_case-tmatchpairing7.png

You can use the LABEL column to label suspect records manually
before using them with the tMatchModel component.

You can find an example of how to generate a matching model
using tMatchModel on Talend Help Center (https://help.talend.com).


Document retrieved from Talend Help Center (https://help.talend.com).