August 16, 2023

tMatchPredict – Docs for ESB 6.x

tMatchPredict

Labels suspect records automatically and groups suspect records which match the
label(s) set in the component properties.

tMatchPredict labels suspect pairs based on the pairing and matching
models generated by the tMatchPairing and
tMatchModel components.

If the input data is new and has not been paired previously, you can define the input
data as “unpaired” and set the path to the pairing model folder to separate the exact
duplicates from unique records.

tMatchPredict can also output unique records, exact duplicates and
suspect duplicates from a new data set.
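Conceptually, separating exact duplicates from unique records amounts to partitioning the data set by the values the pairing model compares. The following is a minimal Python sketch of that idea only, not Talend's implementation; the field names are hypothetical:

```python
from collections import Counter

def partition_records(records, key_fields):
    """Split records into unique rows and exact duplicates, based on the
    values of the given key fields (a rough analogue of what happens
    before suspect pairs are scored)."""
    keys = [tuple(r[f] for f in key_fields) for r in records]
    counts = Counter(keys)
    uniques = [r for r, k in zip(records, keys) if counts[k] == 1]
    exact_duplicates = [r for r, k in zip(records, keys) if counts[k] > 1]
    return uniques, exact_duplicates

records = [
    {"Site_name": "ACME Corp", "Address": "1 Main St"},
    {"Site_name": "ACME Corp", "Address": "1 Main St"},
    {"Site_name": "Globex",    "Address": "9 Oak Ave"},
]
uniques, exact = partition_records(records, ["Site_name", "Address"])
# one unique record (Globex), two exact duplicates (ACME Corp)
```

Records that are neither unique nor exact duplicates are the candidates that the matching model then labels as suspect duplicates.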

This component can run only with the following Hadoop distributions, with Spark 1.6 or 2.0:

  • Spark 1.6: CDH5.7, CDH5.8, HDP2.4.0, HDP2.5.0, MapR5.2.0, EMR4.5.0, EMR4.6.0.

  • Spark 2.0: EMR5.0.0.

tMatchPredict properties for Apache Spark Batch

These properties are used to configure tMatchPredict running in the Spark Batch Job framework.

The Spark Batch
tMatchPredict component belongs to the Data Quality family.

This component is available in the Palette of the Studio only if you have subscribed to any Talend Platform product with Big Data or Talend Data Fabric.

Basic settings

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

The output schema of this component has read-only columns in its
output links:

LABEL: used only with the Suspect duplicates output link. It holds
the prediction labels.

COUNT: used only with the Exact duplicates output link. It holds the
number of exact duplicates.

GROUPID: used only with the Suspect duplicates output link. It holds
the group identifiers.
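For illustration only, rows carried on the three output links could look like the following; all column values here are invented:

```python
# Hypothetical example rows for each output link of tMatchPredict.
suspect_duplicates_row = {
    "Site_name": "ACME Corp", "Address": "1 Main Street",
    "LABEL": "YES",       # prediction label (Suspect duplicates link only)
    "GROUPID": "grp-42",  # group identifier (Suspect duplicates link only)
}
exact_duplicates_row = {
    "Site_name": "Globex", "Address": "9 Oak Ave",
    "COUNT": 3,  # number of exact duplicates (Exact duplicates link only)
}
unique_row = {"Site_name": "Initech", "Address": "5 Elm Rd"}
```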

 

Built-In: You create and store the
schema locally for this component only. Related topic: see
Talend Studio

User Guide.

 

Repository: You have already created
the schema and stored it in the Repository. You can reuse it in various projects and
Job designs. Related topic: see
Talend Studio

User Guide.

Pairing

From the Input type list,
select:

paired: to use as input the suspect
duplicates generated by the tMatchPairing component.

unpaired: to use as input a new data
set which has not been paired by tMatchPairing.

Pairing model folder: (available only
with the unpaired input type) Set the
path to the folder which has the model files generated by the tMatchPairing component.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode,
ensure that you have properly configured the connection in a configuration component in
the same Job, such as tHDFSConfiguration.

For further information, see tMatchPairing.

Matching

Matching model location: Select from
the list where to get the model file generated by the classification Job
with the tMatchModel component:

from file system: Set the path to
the folder where the model file is generated by the classification
component. For further information, see tMatchModel.

from current Job: Set the name of
the model file generated by the classification component. You can use
this option only if the classification Job with the tMatchModel component is integrated in the
Job with the tMatchPredict
component.

Matching model folder: Set the path
to the folder which has the model files generated by the tMatchModel component.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode,
ensure that you have properly configured the connection in a configuration component in
the same Job, such as tHDFSConfiguration.

For further information, see tMatchModel.

Clustering classes

Add to the table one or more of the labels you used on the sample
suspects generated by tMatchPairing.
Make sure to use exactly the same text.

The component then groups suspect records which match the label(s) set
in the table.
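The grouping can be thought of as connected components over the suspect pairs whose predicted label is in the configured clustering classes. Below is a hedged Python sketch of that idea (a simple union-find; the pair structure and names are illustrative, not Talend's internals):

```python
def group_suspects(pairs, clustering_classes):
    """pairs: list of (record_id_a, record_id_b, predicted_label).
    Records linked by a pair whose label is in clustering_classes end up
    in the same group (connected components via union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b, label in pairs:
        if label in clustering_classes:
            union(a, b)

    groups = {}
    for x in list(parent):
        groups.setdefault(find(x), []).append(x)
    return list(groups.values())

pairs = [(1, 2, "YES"), (2, 3, "YES"), (4, 5, "NO")]
groups = group_suspects(pairs, {"YES"})
# records 1, 2 and 3 form one group; the "NO" pair creates no group
```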

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional
Talend
data integration Jobs.

Spark Batch Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google
    Dataproc, specify a bucket in the Google Storage staging
    bucket
    field in the Spark
    configuration
    tab; when using other distributions, use a
    tHDFSConfiguration
    component to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Scenario: Labeling suspect pairs with assigned labels

This scenario applies only to a subscription-based Talend Platform solution with Big data or Talend Data Fabric.

For further information about the two workflows used when
matching with Spark, see the documentation on Talend Help Center (https://help.talend.com).

The use case described here uses:

  • a tFileInputDelimited
    component to read the input suspect pairs generated by tMatchPairing;

  • a tMatchPredict component to
    label suspect records automatically and group together the suspect records which
    match the label set in the component properties; and

  • a tFileOutputDelimited component to output the
    labeled duplicate records and the groups created from the suspect records which
    match the label set in the tMatchPredict properties.

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace: tFileInputDelimited, tMatchPredict and
    tFileOutputDelimited.
  2. Connect tFileInputDelimited to tMatchPredict using the Main link.
  3. Connect tMatchPredict to
    tFileOutputDelimited using the Suspect duplicates link.
  4. Check that you have defined the connection to the Spark cluster and activated
    checkpointing in the Run > Spark Configuration view. For more information about selecting the Spark mode, see
    the documentation on Talend Help Center (https://help.talend.com).
use_case-tmatchpredict.png

Configuring the input component

  1. Double-click tFileInputDelimited to open its Basic settings view in the Component tab.

    use_case-tmatchpredict1.png

    The input data to be used with
    tMatchPredict is the suspect data pairs
    generated by tMatchPairing. You can find examples
    of how to compute suspect pairs and a suspect sample from source data on
    Talend Help Center (https://help.talend.com).

  2. Click the […] button next to Edit
    schema
    to open a dialog box and add columns to the input schema:
    Original_Id, Source,
    Site_name, Address,
    PAIR_ID and SCORE.

    SCORE is a Double-typed column. The other ones are
    String-typed columns.

  3. In the Folder/File
    field, set the path to the input file.
  4. Set the row and field separators in the corresponding fields,
    and limit the header to 1.
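To make the input format concrete, here is a hypothetical sample of the suspect-pair file produced by tMatchPairing, parsed with the settings above (`;` as field separator, one header row, which is why the header is limited to 1). The data values are invented:

```python
import csv
import io

sample = """Original_Id;Source;Site_name;Address;PAIR_ID;SCORE
1;crm;ACME Corp;1 Main Street;p-001;0.93
2;web;ACME Corporation;1 Main St;p-001;0.93
"""

# DictReader consumes the single header row and maps fields by name.
reader = csv.DictReader(io.StringIO(sample), delimiter=";")
rows = list(reader)
scores = [float(r["SCORE"]) for r in rows]  # SCORE is the only Double column
```

Both rows share the same `PAIR_ID`, which is what marks them as one suspect pair.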

Applying the matching model on the data set

  1. Double-click tMatchPredict to display the Basic
    settings
    view and define the component properties.

    use_case-tmatchpredict2.png

  2. Click Sync columns to
    retrieve the schema defined in the input component.
  3. From the Input type
    list, select paired as the input data is
    already paired with tMatchPairing.
  4. From the Matching model
    location
    list, select from file
    system
    and then set the path to the matching model in the
    folder field.
  5. In the Clustering
    classes
    table, add one or more of the labels you used on the
    sample suspects generated by tMatchPairing, YES in this
    example.

    The labels were set manually or through Talend Data Stewardship.

    The tMatchPredict component will group suspect records
    which match the YES label.

Configuring the output component to write the labeled suspect
pairs

  1. Double-click the tFileOutputDelimited
    component to display the Basic settings view and
    define the component properties.

    You already agreed to propagate the schema to the output
    component when you defined the input component.
  2. Clear the Define a storage configuration component check
    box to use the local system as your target file system.
  3. In the Folder field, set the path to the folder
    which will hold the output data.
  4. From the Action list, select the operation for
    writing data:

    • Select Create when you run the Job for the
      first time.

    • Select Overwrite to replace the file every
      time you run the Job.

  5. Set the row and field separators in the corresponding fields.
  6. Select the Merge results to single file check box, and
    in the Merge file path field set the path where to output
    the file of the labeled suspect pairs.
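Spark normally writes its result as several part-* files inside the output folder; the Merge results to single file option concatenates them for you. If you ever need to do the same merge by hand, a sketch of the idea follows (paths and file contents are hypothetical):

```python
import glob
import os
import shutil
import tempfile

def merge_part_files(folder, merged_path):
    """Concatenate Spark part-* files from `folder` into a single file,
    in lexicographic order (part-00000, part-00001, ...)."""
    with open(merged_path, "wb") as out:
        for part in sorted(glob.glob(os.path.join(folder, "part-*"))):
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)

# Demo with temporary files standing in for a Spark output folder.
tmp = tempfile.mkdtemp()
for i, line in enumerate([b"a;b\n", b"c;d\n"]):
    with open(os.path.join(tmp, f"part-0000{i}"), "wb") as f:
        f.write(line)
merged = os.path.join(tmp, "merged.csv")
merge_part_files(tmp, merged)
```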

Executing the Job to label suspect pairs with assigned labels

Press F6 to execute the Job.

tMatchPredict labels the suspect pairs, groups the suspect
records which match the YES label and writes all the suspect
pairs in the output file.

The suspect records which match the YES label belong to groups
because tMatchPredict was configured to group records which
match this clustering class.

use_case-tmatchpredict4.png

The records labeled with the NO label do not belong to any
group.

You can now create a single representation of each group of duplicates and merge these
representations with the unique rows computed by
tMatchPairing.

You can find an example of how to create a clean and
deduplicated dataset on Talend Help Center (https://help.talend.com).


Document retrieved from Talend: https://help.talend.com