
tMatchPairing

Enables you to compute pairs of suspect duplicates from any source data including
large volumes in the context of machine learning on Spark.

This component reads a data set row by row, writes unique rows and exact
duplicates to separate files, computes pairs of suspect records based on a blocking key
definition and creates a sample of suspect records representative of the data set.

You can label suspect pairs manually or load them into a Grouping campaign which is already defined in Talend Data Stewardship.

This component runs with Apache Spark 1.6.0 and later
versions.

tMatchPairing properties for Apache Spark Batch

These properties are used to configure tMatchPairing running in the Spark Batch Job framework.

The Spark Batch
tMatchPairing component belongs to the Data Quality family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync columns to retrieve the schema from the previous component
connected in the Job.

Click Edit schema to make changes to the schema. If the current schema is
of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

The output schema of this component has read-only columns
in its output links:

PAIR_ID and
SCORE: used only with the Pairs and Pairs sample output links. The first column holds the
identifiers of the suspect pairs, and the second holds the similarities
between the records in each pair.

LABEL: used only with the Pairs sample output link. You must fill in this
column manually before using the sample with the tMatchModel component.

COUNT: used only with the Exact duplicates output link. This column gives
the occurrences of the records which exactly match.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Blocking key

Select the columns with which you want to construct the
blocking key.

This blocking key is used to generate suffixes which are
used to group records.

Suffix array blocking parameters

Min suffix length: Set the
minimum suffix length you want to reach or stop at in each group.

Max block size: Set the maximum
number of records you want to have in each block. This helps filter out
large blocks where the suffix is too common, as with tion and ing for
example.
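
To make these two parameters concrete, here is a minimal sketch of suffix blocking in Scala, assuming a simplified in-memory model; the names suffixes, blocks, minSuffixLength and maxBlockSize are illustrative only, not the component's actual code.

    // Each record's blocking key contributes all of its suffixes of at least
    // minSuffixLength characters; records sharing a suffix form a block, and
    // blocks larger than maxBlockSize are dropped because their suffix is too
    // common (for example tion or ing).
    def suffixes(key: String, minSuffixLength: Int): Seq[String] =
      (0 to key.length - minSuffixLength).map(i => key.substring(i))

    def blocks(records: Seq[(Long, String)], // (record id, blocking key)
               minSuffixLength: Int,
               maxBlockSize: Int): Map[String, Seq[Long]] =
      records
        .flatMap { case (id, key) => suffixes(key, minSuffixLength).map(s => (s, id)) }
        .groupBy { case (suffix, _) => suffix }
        .map { case (suffix, entries) => (suffix, entries.map(_._2)) }
        .filter { case (_, ids) => ids.size > 1 && ids.size <= maxBlockSize }

Records that fall into the same block become the candidate suspect pairs.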

Pairing model location

Folder: Set the path to the local
folder where you want to generate the model files.

If you want to store the model in a specific file system,
for example S3 or HDFS, you must use the corresponding component in the
Job and select the Define a storage
configuration component
check box in the component basic
settings.

The button for browsing does not work with the Spark Local mode; if you are
using the other Spark Yarn modes that the Studio supports with your
distribution, ensure that you have properly configured the connection in a
configuration component in the same Job, such as tHDFSConfiguration. Use the
configuration component that corresponds to the file system to be used.

Integration with Data Stewardship

Select this check box to set the connection parameters to the Talend Data Stewardship
server.

If you select this check box, tMatchPairing loads the
suspect pairs into a Grouping campaign, which means this component is
used as an end component.

Data Stewardship Configuration

  • URL:

    Enter the address to access the Talend Data Stewardship server suffixed with
    /data-stewardship/, for example
    http://<server_address>:19999/data-stewardship/.

  • If you are working with Talend Cloud Data Stewardship, use one of the following
    addresses to access the application:

    • https://tds.us.cloud.talend.com/data-stewardship for the US
      data center.
    • https://tds.eu.cloud.talend.com/data-stewardship for the EU
      data center.
    • https://tds.ap.cloud.talend.com/data-stewardship for the
      Asia-Pacific data center.
  • Username and
    Password:

    Enter the authentication information to
    log in to
    Talend Data Stewardship.

    If you are working with Talend Cloud Data Stewardship and if:

    • SSO is enabled, enter an access
      token in the field.
    • SSO is not enabled, enter either
      an access token or your password in the field.
  • Campaign:

    Displays the technical name of the campaign once it is selected
    in the basic settings. However, you can modify the field value, for example to
    replace it with a context parameter and pass context variables to the Job at
    runtime. This technical name is always used to identify a campaign when the Job
    communicates with Talend Data Stewardship, whatever the value in the
    Campaign field is.

    Click Find a Campaign to open a dialog box
    which lists the grouping campaigns on the server for which you
    are the campaign owner or to which you have access rights.

    Click the refresh button to retrieve the campaign details from
    the Talend Data Stewardship server.

  • Assignee:

    Specify the campaign participant to whom the created tasks are
    assigned.

Advanced settings

Filtering threshold

Enter a value between 0.2 and 0.85 to filter the pairs of
suspect records based on the calculated scores. This value helps to
exclude the pairs which are not very similar.

0.3 is the default value. The higher the value, the more similar the
records in the remaining pairs are.
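
As a sketch of what the threshold does (illustrative names only, not the component's code): pairs whose computed similarity score falls below the threshold are discarded, so raising the threshold keeps only the more similar pairs.

    // Hypothetical model of the pair-filtering step.
    case class SuspectPair(pairId: String, score: Double)

    val filteringThreshold = 0.3 // default; valid values range from 0.2 to 0.85
    def keepSimilarPairs(pairs: Seq[SuspectPair]): Seq[SuspectPair] =
      pairs.filter(_.score >= filteringThreshold)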

Pairs sample

Number of pairs: Enter a size for
the sample of the suspect pairs you want to generate. The default value
is set to 10000.

Set a random seed: Select this
check box and, in the Seed field
that is displayed, enter a random number if you want to have the same
pairs sample in different executions of the Job. Repeating the execution
with a different seed value will result in different pairs samples, and
the scores of the pairs could differ as well, depending on whether the
total number of suspect pairs is greater than 10,000 or not.
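
The effect of the seed is the same as in Spark's own sampling API, sketched below; suspectPairs is an assumed RDD of scored pairs, not a handle actually exposed by the component.

    import org.apache.spark.rdd.RDD

    // With a fixed seed, repeated executions draw the same sample;
    // a different seed yields a different sample each run.
    def samplePairs(suspectPairs: RDD[(String, Double)], seed: Long): RDD[(String, Double)] =
      suspectPairs.sample(withReplacement = false, fraction = 0.05, seed = seed)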

Data Stewardship Configuration

Campaign ID:

Displays the technical name of the campaign once it is selected
in the basic settings. However, you can modify the field value, for example to
replace it with a context parameter and pass context variables to the Job at
runtime. This technical name is always used to identify a campaign when the Job
communicates with Talend Data Stewardship, whatever the value in the
Campaign field is.

Max tasks per commit: Set the number of lines you
want to have in each commit.

Do not change the default value unless you are facing performance issues.
Increasing the commit size can improve performance, but setting too high a
value could cause Job failures.
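
As a hypothetical illustration of the trade-off (loadTasks, commitTasks and maxTasksPerCommit are made-up names, not a Talend API): tasks are sent to the campaign in batches of the configured size, so a larger size means fewer round trips but heavier commits.

    // Sketch of commit batching; keep maxTasksPerCommit at 200 or below for
    // Talend Cloud Data Stewardship, which rejects larger commits.
    def loadTasks(tasks: Seq[String], maxTasksPerCommit: Int)(commitTasks: Seq[String] => Unit): Unit =
      tasks.grouped(maxTasksPerCommit).foreach(commitTasks)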

Use Timestamp format for Date type

Select this check box to output the dates, hours, minutes and seconds contained
in your Date-type data. If you clear this check box, only years, months and
days are output.

The format used by Delta Lake is yyyy-MM-dd HH:mm:ss.
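
For reference, a small sketch of the two renderings; the patterns are inferred from the description above, with the timestamp pattern quoted from it.

    import java.text.SimpleDateFormat
    import java.util.Date

    // Check box selected: the time part of Date values is kept
    // (yyyy-MM-dd HH:mm:ss, the format Delta Lake uses).
    val timestampFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
    // Check box cleared: only years, months and days are written.
    val dateOnlyFormat = new SimpleDateFormat("yyyy-MM-dd")

    val now = new Date()
    println(timestampFormat.format(now)) // e.g. 2023-07-30 14:05:12
    println(dateOnlyFormat.format(now))  // e.g. 2023-07-30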

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Spark Batch Connection

In the Spark Configuration tab in the Run view, define the connection to a
given Spark cluster for the whole Job. In addition, since the Job expects its
dependent jar files for execution, you must specify the directory in the file
system to which these jar files are transferred so that Spark can access these
files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Matching on Spark

Matching on Spark applies only to a subscription-based Talend Platform solution with Big
Data or Talend Data Fabric.

Using Talend Studio, you can match very high volumes of data using machine
learning on Spark. This feature helps you match a very large number of records
with minimal human intervention.

Machine learning with Spark usually has two phases: the first phase
computes a model (that is, teaches the machine) based on historical data and mathematical
heuristics, and the second phase applies the model to new data. In the Studio, the first
phase is implemented by two Jobs, one with the tMatchPairing component and the second with the tMatchModel component, while the second phase is
implemented by a third Job with the tMatchPredict
component.

Two workflows are possible when matching on Spark with the Studio.

In the first workflow, tMatchPairing:

  • computes pairs of suspect records based on a blocking key
    definition,

  • creates a sample of suspect records representative of the data
    set,

  • can optionally write this sample of suspect records into a Grouping campaign
    defined on the Talend Data Stewardship server,

  • separates unique records from exact match records,

  • generates a pairing model to be used with tMatchPredict.

tMatchPairing_1.png

You can then manually label the sample suspect records by resolving tasks
in a Grouping campaign defined on the Talend Data Stewardship server, which is the
recommended method, or by editing the files manually.

Next, you can use the sample suspect records you labeled in the second Job, where tMatchModel:

  • computes similarities between the records in each suspect
    pair,

  • trains a classification model based on the Random Forest
    algorithm.

tMatchPredict labels suspect records automatically
and groups suspect records which match the label(s) set in the component properties.

In the second workflow, tMatchPredict applies the pairing model
generated by tMatchPairing and the matching model
generated by tMatchModel directly to the new data set, and:

  • labels suspect records automatically,

  • groups suspect records which match the label(s) set in the
    component properties,

  • separates the exact duplicates from unique records.

tMatchPairing_2.png

Computing suspect pairs and writing a sample in Talend Data Stewardship

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

Finding duplicate records is hard and time-consuming, especially when you are dealing with
huge volumes of data. In this example, tMatchPairing uses a
blocking key to compute the pairs of suspect duplicates in a long list of early
childhood education centers in Chicago coming from ten different sources.

It also computes a sample of the suspect duplicates and writes it in the
form of tasks into a Grouping campaign in Talend Data Stewardship. Authorized data stewards can then
intervene on the data sample and decide if suspect pairs are duplicates.

You can then use the labeled sample to compute a matching model and apply it on all
suspect duplicates in the context of machine learning on Spark.

To replicate the example described below, retrieve the
tmatchpairing_load_suspect_pairs_in_tds.zip file from the
Downloads tab of the online version of this page at https://help.talend.com.

Before setting up the Job, make sure:

  • You have been assigned in Talend Administration Center the Campaign
    Owner role, which grants you access to the campaigns on the
    server.

  • You have created the Grouping campaign in Talend Data Stewardship and defined the schema which corresponds to the structure
    of the education centers file.

Creating the Job

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited,
    tMatchPairing, tLogRow and two
    tFileOutputDelimited.

    tMatchPairing_3.png

  2. Connect tFileInputDelimited to
    tMatchPairing using the Main link.

    tFileInputDelimited reads the source file and sends
    data to the next component.
  3. Connect tMatchPairing to the output file components
    using the Pairs and Unique rows
    links, and to tLogRow using the Exact
    duplicates
    link.

    tMatchPairing pre-analyzes the data, computes pairs
    of suspect duplicates, unique rows and exact duplicates, and generates a pairing
    model to be used with tMatchPredict.

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly define in the
Spark Configuration tab or in the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment within itself on the fly to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HD
      Insight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    a known Avro issue but also brings inherent performance gains. The Spark
    property to be set in your cluster is spark.serializer, with the value
    org.apache.spark.serializer.KryoSerializer.
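
    If you set this on the Job side rather than in the cluster configuration files, the
    equivalent call through the standard Spark API can be sketched as follows.

        import org.apache.spark.SparkConf

        // Enable Kryo serialization so that Avro types are handled efficiently;
        // the same key/value pair can go into spark-defaults.conf on the cluster.
        val conf = new SparkConf()
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")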

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select Custom, then select
    the Spark version of the cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing version to import an
      officially supported distribution as base and then add the other required jar
      files which the base distribution does not provide.

    2. Select Import from zip to import the configuration zip
      for the custom distribution to be used. This zip file should contain the
      libraries of the different Hadoop/Spark elements and the index file of these
      libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files which you can download from the Hadoop
      configuration list and use directly in your connection. However, because of
      the ongoing evolution of the different Hadoop-related projects, you might not
      be able to find the configuration zip corresponding to your distribution in
      this list; in that case, it is recommended to use the Import from
      existing version option to take an existing distribution as base and add the
      jars required by your distribution.

      Note that custom versions are not officially supported by Talend. Talend
      and its community provide you with the opportunity to connect to custom
      versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt
      to set up such a connection if you have sufficient Hadoop and Spark
      experience to handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions this connection is configured in the Spark configuration tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as the one you entered in the Spark configuration tab.

Reading data and sending the fields to the next component

  1. Double-click tFileInputDelimited to open its
    Basic settings view.

    The input data must contain duplicate records; otherwise the generated model will
    not give reliable results when applied to the whole set of suspect pairs.
  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. Click OK in the dialog box and accept to propagate the
    changes when prompted.
  4. In the Folder/File field, set the path to the input
    file.
  5. Set the row and field separators in the corresponding fields and the header and
    footer, if any.

Computing suspect pairs and writing a sample in a Grouping campaign

  1. Double-click tMatchPairing to display
    the Basic settings view and define the component
    properties.

    tMatchPairing_4.png

  2. Click Sync columns to retrieve the schema defined in the
    input component.
  3. In the Blocking Key table, click the
    [+] button to add a row. Select the column you want
    to use as a blocking key, Site_name in this
    example.

    The blocking key is constructed from the center name and is used to generate
    the suffixes used to group pairs of records.
  4. In the Suffix array blocking parameters section:

    1. In the Min suffix length field, set the minimum
      suffix length you want to reach or stop at in each group.
    2. In the Max block size field, set the maximum
      number of records you want to have in each block. This helps
      filter out large blocks where the suffix is too common.
  5. In the Folder field, set the path to the local folder
    where you want to generate the pairing model file.

    If you want to store the model in a specific file system, for example S3 or
    HDFS, you must use the corresponding component in the Job and select the
    Define a storage configuration component check box in
    the component basic settings.
  6. Select the Integration with Data Stewardship check box
    and set the connection parameters to the Talend Data Stewardship
    server.

    1. In the URL field, enter the address of the application suffixed with /data-stewardship/, for example http://localhost:19999/data-stewardship/.

      If you are working with Talend Cloud Data Stewardship, use one of the following
      addresses to access the application:

      • https://tds.us.cloud.talend.com/data-stewardship for the US
        data center.
      • https://tds.eu.cloud.talend.com/data-stewardship for the EU
        data center.
      • https://tds.ap.cloud.talend.com/data-stewardship for the
        Asia-Pacific data center.

    2. Enter your login information
      in
      the Username and Password
      fields.

      To enter your password, click the [...] button next to the field, enter your
      password between double quotes in the dialog box that opens and click
      OK.

      If you
      are working with Talend Cloud Data Stewardship and if:

      • SSO is enabled, enter an access
        token in the field.
      • SSO is not enabled, enter either
        an access token or your password in the field.

    3. Click Find a
      Campaign
      to open a dialog box which lists the campaigns defined in
      Talend Data Stewardship for which you
      are the owner or to which you have access rights.

    4. Select the Sites deduplication campaign in which
      to write the grouping tasks and click OK.
  7. Click Advanced settings and set the following
    parameters:

    1. In the Filtering threshold field, enter a value
      between 0.2 and 0.85 to filter the pairs of suspect records based on the
      calculated scores.

      This value helps to exclude the pairs which are not very similar. The
      higher the value is, the more similar the records are.
    2. Leave the Set a random seed check box clear, as
      you want to generate a different sample at each execution of the
      Job.
    3. In the Number of pairs field, enter the size of
      the suspect pairs sample you want to generate.
    4. When configured with Talend Data Stewardship,
      enter the maximum number of tasks to load per commit in the
      Max tasks per commit field.

      There are no limits for the batch size in Talend Data Stewardship (on
      premises). However, do not exceed 200 tasks per commit in Talend Cloud Data Stewardship,
      otherwise the Job fails.

Configuring the output components to write suspect pairs, unique rows and
exact duplicates

  1. Double-click the first tFileOutputDelimited component to
    display the Basic settings view and define the component
    properties.

    You have already accepted to propagate the schema to the output components
    when you defined the input component.
  2. Clear the Define a storage configuration component check
    box to use the local system as your target file system.
  3. In the Folder field, set the path to the folder which
    will hold the output data.
  4. From the Action list, select the operation for writing
    data:

    • Select Create when you run the Job for the first
      time.
    • Select Overwrite to replace the file every time
      you run the Job.
  5. Set the row and field separators in the corresponding fields.
  6. Select the Merge results to single file check box, and
    in the Merge file path field set the path where to output
    the file of the suspect record pairs.
  7. Double-click the second tFileOutputDelimited component
    and define the component properties in the Basic settings
    view, as you do with the first component.

    This component creates the file which holds the unique rows generated from the
    input data.
  8. Double-click the tLogRow component and define the component
    properties in the Basic settings view.

    This component writes the exact duplicates generated from the input data on
    the Studio console.

Executing the Job to write tasks into the Grouping campaign


Press F6 to save and execute the Job.

A sample of suspect pairs is computed and written in the form of tasks into
the Sites deduplication campaign and the record names
are automatically set to Record 1 and
Record 2.

The component also computes suspect pairs and unique rows and writes them to
the output files. It writes exact duplicates to the Studio console.

tMatchPairing_5.png

You can now assign the tasks to authorized data stewards who need to decide
if the records in each group are duplicates.

Computing suspect pairs and a suspect sample from source data

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

In this example, tMatchPairing uses a blocking key to compute the
pairs of suspect duplicates in a list of early childhood education centers in
Chicago.

The use case described here uses:

  • a tFileInputDelimited component to read the source file,
    which contains a list of early childhood education centers in Chicago coming
    from ten different sources;

  • a tMatchPairing component to pre-analyze the data, compute
    pairs of suspect duplicates and generate a pairing model which is used by the
    tMatchPredict component;

  • three tFileOutputDelimited
    components to output the suspect duplicates, a sample of suspect pairs and the
    unique records; and

  • a tLogRow component to
    output the exact duplicates.

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace:
    tFileInputDelimited, tMatchPairing, three tFileOutputDelimited and tLogRow.
  2. Connect tFileInputDelimited to
    tMatchPairing using the Main link.
  3. Connect tMatchPairing to
    the tFileOutputDelimited components using the Pairs, Pairs
    sample
    and Unique rows links, and to the
    tLogRow component using the
    Exact duplicates link.
tMatchPairing_6.png

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly define in the
Spark Configuration tab or in the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment within itself on the fly to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HD
      Insight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    a known Avro issue but also brings inherent performance gains. The Spark
    property to be set in your cluster is spark.serializer, with the value
    org.apache.spark.serializer.KryoSerializer.

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select Custom, then select
    the Spark version of the cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing version to import an
      officially supported distribution as base and then add the other required jar
      files which the base distribution does not provide.

    2. Select Import from zip to import the configuration zip
      for the custom distribution to be used. This zip file should contain the
      libraries of the different Hadoop/Spark elements and the index file of these
      libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files which you can download from the Hadoop
      configuration list and use directly in your connection. However, because of
      the ongoing evolution of the different Hadoop-related projects, you might not
      be able to find the configuration zip corresponding to your distribution in
      this list; in that case, it is recommended to use the Import from
      existing version option to take an existing distribution as base and add the
      jars required by your distribution.

      Note that custom versions are not officially supported by Talend. Talend
      and its community provide you with the opportunity to connect to custom
      versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt
      to set up such a connection if you have sufficient Hadoop and Spark
      experience to handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the input component

  1. Double-click tFileInputDelimited to open its
    Basic settings view.

    The input data must contain duplicate records; otherwise the generated model will
    not give reliable results when applied to the whole set of suspect pairs.
  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. Click OK in the dialog box and accept to propagate the
    changes when prompted.
  4. In the Folder/File field, set the path to the input
    file.
  5. Set the row and field separators in the corresponding fields and the header and
    footer, if any.

Computing suspect duplicates, exact duplicates and unique rows

  1. Double-click tMatchPairing to display
    the Basic settings view and define the component
    properties.

    tMatchPairing_7.png

  2. Click Sync columns to retrieve the schema defined in the
    input component.
  3. In the Blocking Key table, click the
    [+] button to add a row. Select the column you want
    to use as a blocking key, Site_name in this
    example.

    The blocking key is constructed from the center name and is used to generate
    the suffixes used to group pairs of records.
  4. In the Suffix array blocking parameters section:

    1. In the Min suffix length field, set the minimum
      suffix length you want to reach or stop at in each group.
    2. In the Max block size field, set the maximum
      number of records you want to have in each block. This helps
      filter out large blocks where the suffix is too common.
  5. In the Folder field, set the path to the local folder
    where you want to generate the pairing model file.

    If you want to store the model in a specific file system, for example S3 or
    HDFS, you must use the corresponding component in the Job and select the
    Define a storage configuration component check box in
    the component basic settings.
  6. Click Advanced settings and set the following
    parameters:

    1. In the Filtering threshold field, enter a value
      between 0.2 and 0.85 to filter the pairs of suspect records based on the
      calculated scores.

      This value helps to exclude the pairs which are not very similar. The
      higher the value is, the more similar the records are.
    2. Leave the Set a random seed check box clear, as
      you want to generate a different sample at each execution of the
      Job.
    3. In the Number of pairs field, enter the size of
      the suspect pairs sample you want to generate.
    4. When configured with Talend Data Stewardship,
      enter the maximum number of tasks to load per commit in the
      Max tasks per commit field.

      There are no limits for the batch size in Talend Data Stewardship (on
      premises). However, do not exceed 200 tasks per commit in Talend Cloud Data Stewardship,
      otherwise the Job fails.

Configuring the output components to write suspect pairs, suspect sample
and unique rows

  1. Double-click the first tFileOutputDelimited component to
    display the Basic settings view and define the component
    properties.

    You have already accepted to propagate the schema to the output components
    when you defined the input component.
  2. Clear the Define a storage configuration component check
    box to use the local system as your target file system.
  3. In the Folder field, set the path to the folder which
    will hold the output data.
  4. From the Action list, select the operation for writing
    data:

    • Select Create when you run the Job for the first
      time.
    • Select Overwrite to replace the file every time
      you run the Job.
  5. Set the row and field separators in the corresponding fields.
  6. Select the Merge results to single file check box, and
    in the Merge file path field set the path where to output
    the file of the suspect record pairs.
  7. Double-click the other tFileOutputDelimited components to display the Basic
    settings
    view and define the component properties.

    For example, for the suspect pairs sample, set the path where to output the
    data to C:/tmp/tmp/pairsSample and set the path where to
    output the merged file to
    C:/tmp/pairing/SampleToLabel.csv.

    For example, for the unique rows, set the path where to output the
    data to C:/tmp/tmp/uniqueRows and set the path where to
    output the merged file to
    C:/tmp/pairing/uniqueRows.csv.

Configuring the log component to write exact duplicates

  1. Double-click the tLogRow component and define the component
    properties in the Basic settings view.

    This component writes the exact duplicates generated from the input data on
    the Studio console.
  2. Click Sync columns to retrieve the schema from the
    preceding component.
  3. In the Mode area, select Table (print values
    in cells of a table)
    for better readability of the result.

Executing the Job to compute suspect pairs and suspect sample

Press F6 to execute the
Job.

tMatchPairing computes the pairs of suspect records and the
pairs sample, based on the blocking key definition, and writes the results to the
output files.

tMatchPairing excludes unique rows and writes them in the
output file:

tMatchPairing_8.png

tMatchPairing excludes exact duplicates and writes them in the
Run view:

tMatchPairing_9.png

The component has added an extra read-only column, LABEL, for
the Pairs sample link.

tMatchPairing_10.png

You can use the LABEL column to label suspect records manually
before using them with the tMatchModel component.

You can find an example of how to generate a matching model
using tMatchModel on Talend Help Center (https://help.talend.com).

