tMatchIndexPredict – Docs for ESB 7.x

tMatchIndexPredict

Compares a new data set with a lookup data set indexed in Elasticsearch using
tMatchIndex. tMatchIndexPredict outputs unique
records and suspect duplicates in separate files.

In the potential duplicates output, each record contains the fields from the source
records and the fields from the potentially matching lookup records.

The tMatchIndexPredict component
supports Elasticsearch versions up to 6.4.2 and Apache Spark versions 2.0.0 and later.

tMatchIndexPredict properties for Apache Spark Batch

These properties are used to configure tMatchIndexPredict
running in the Spark Batch Job framework.

The Spark Batch
tMatchIndexPredict component belongs to the Data Quality family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync columns to retrieve the schema from the previous component connected in the
Job.

Select the Schema type:

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

You need to manually edit the output schema to add the necessary columns
that hold the fields from the lookup data.

The output schema of this component contains read-only columns:

LABEL: used only with the Possible matches output link.

CONFIDENCE_SCORE: indicates the
confidence score of a prediction for a pair.
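
To make this concrete, here is a hypothetical sketch of the edited output schema
for the Possible matches link, using the column names from the scenario later on
this page; the _ref columns are the ones you add manually to hold the lookup
fields:

    # hypothetical output schema for the Possible matches link (Python notation)
    possible_matches_schema = [
        "Original_Id", "Source", "Site_name", "Address",                  # input record fields
        "Original_Id_ref", "Source_ref", "Site_name_ref", "Address_ref",  # lookup record fields, added manually
        "LABEL",             # read-only: the label predicted for the pair
        "CONFIDENCE_SCORE",  # read-only: the confidence score of the prediction
    ]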

ElasticSearch configuration

Nodes: Enter the location
of the cluster hosting the Elasticsearch system to be used.

Index: Enter the name of the Elasticsearch index
where the lookup data is stored.

Note that the Talend components for Spark Jobs support
Elasticsearch versions up to 6.4.2.
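
To check that the cluster is reachable and that its version is supported before
designing the Job, you can query the Elasticsearch root endpoint. A minimal
sketch in Python, assuming a node listening on localhost:9200 (adjust the host
and port to your environment):

    # verify that the Elasticsearch node responds and report its version
    import requests

    info = requests.get("http://localhost:9200").json()
    print("cluster:", info["cluster_name"])
    print("version:", info["version"]["number"])  # should be 6.4.2 or lower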

Models

Pairing model folder: Set the path to
the folder containing the model files generated by the tMatchPairing component.

Matching model location: Select from the list where to get the model file
generated by the classification Job with the tMatchModel component:

  • from file system: In the
    Matching model folder,
    set the path to the folder where the model file is generated by the
    tMatchModel component.
  • from current Job: From the
    Model name list, select
    the name of the model file generated by the classification
    component. You can use this option only if the classification Job
    with the tMatchModel component is
    integrated in the Job with the tMatchIndexPredict component.

Matching model folder: Set the path to the folder containing the model files
generated by the tMatchModel component.

No-match label: Enter the label used
for the unique records in the output.

If you want to store the model in a specific file system, for
example S3 or HDFS, you must use the corresponding component in the Job and
select the Define a storage configuration component
check box in the basic settings of the component.

The button for browsing does not work with the Spark
Local mode; if you are using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the configuration component that corresponds to the file
system to be used.

Advanced settings

Maximum ElasticSearch bulk size

Maximum number of records for bulk
processing.

tMatchIndexPredict uses bulk mode to process data so that big
batches of data can be quickly compared with the lookup data indexed in
Elasticsearch.

It is recommended to keep the default value. If the Job
execution ends with an error, reduce the value of this parameter.
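
For intuition, bulk processing simply splits the record stream into batches of
at most the configured size before each round trip to Elasticsearch. A minimal
sketch of the idea in Python; this illustrates the parameter, not the
component's actual implementation:

    def chunked(records, bulk_size):
        # group records into batches of at most bulk_size
        batch = []
        for record in records:
            batch.append(record)
            if len(batch) == bulk_size:
                yield batch
                batch = []
        if batch:
            yield batch  # the last, possibly smaller, batch

    # a smaller bulk size means more round trips but smaller requests, which
    # is why reducing it can help when the Job fails on large batches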

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Spark Connection

In the Spark Configuration tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Doing continuous matching using tMatchIndexPredict

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

After indexing lookup data in Elasticsearch using tMatchIndex, you do
not need to restart the matching process from scratch. The
tMatchIndexPredict component compares new data records with the
lookup data stored in Elasticsearch.

In this example, a list of early childhood education centers in Chicago coming from ten
different sources has been cleaned, deduplicated, and indexed in Elasticsearch. You want
to match new records which contain information about early childhood education centers
in Chicago against the reference data set stored in Elasticsearch.

tMatchIndexPredict uses pairing and matching models to group together
records from the input data and the matching records from the reference data set indexed
in Elasticsearch and label the suspect pairs.

tMatchIndexPredict outputs potential duplicates and unique records in
separate files.

Before you begin:

  • You generated a pairing model.

    You can find an example of how to generate a pairing
    model on Talend Help Center (https://help.talend.com).

  • You generated a matching model.

    You can find an example of how to generate a
    matching model on Talend Help Center (https://help.talend.com).

  • Clean and deduplicated data has been indexed in Elasticsearch, so that new
    data records can be matched against it to determine whether they are unique
    records or suspect duplicates.

    You can find an example of how to index clean and
    deduplicated data in Elasticsearch on Talend Help Center (https://help.talend.com).

  • The Elasticsearch cluster must be running Elasticsearch 5 or later.

Setting up the Job

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited,
    tMatchIndexPredict and two
    tFileOutputDelimited components.
  2. Connect tFileInputDelimited to the
    tMatchIndexPredict using a Row > Main connection.
  3. Connect tMatchIndexPredict to the first
    tFileOutputDelimited using a Row > Possible
    matches
    connection.
  4. Connect tMatchIndexPredict to the second
    tFileOutputDelimited using a Row > No
    match
    connection.
tMatchIndexPredict_1.png

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly defined in the
Spark Configuration tab or the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In local mode, the Studio builds the Spark environment on the fly in order to
    run the Job in it. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HD
      Insight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    a known Avro issue but also
    brings inherent performance gains. The Spark property to be set in your
    cluster is:

        spark.serializer=org.apache.spark.serializer.KryoSerializer

    If you cannot find the distribution corresponding to yours in this
    drop-down list, the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select Custom, then select
    the Spark version of the cluster to be connected to and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing version to import an officially supported
      distribution as a base and then add the other required jar files which
      the base distribution does not provide.

    2. Select Import from zip to import the configuration zip for the custom
      distribution to be used. This zip file should contain the libraries of
      the different Hadoop/Spark elements and the index file of these
      libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files which you can download from the
      Hadoop configuration list and use directly in your connection. However,
      because of the ongoing evolution of the different Hadoop-related
      projects, you might not be able to find the configuration zip
      corresponding to your distribution in this list; in that case, it is
      recommended to use the Import from existing version option to take an
      existing distribution as a base and add the jars required by your
      distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt to
      set up such a connection if you have sufficient Hadoop and Spark experience to
      handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, because for these two
distributions, this connection is configured in the Spark configuration tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster node in Repository, select
    Repository from the Property type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as you have put in the Spark configuration tab.
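
If you use WebHDFS, you can sanity-check the NameNode location outside the
Studio with a plain REST call before running the Job. A minimal sketch in
Python, assuming a hypothetical host masternode and the default WebHDFS port
50070 (replace both with your own values):

    # list the HDFS root directory through the WebHDFS REST API
    import requests

    namenode = "http://masternode:50070"  # hypothetical host and port
    user = "hdfs"                         # same user name as in the Spark configuration tab
    resp = requests.get(namenode + "/webhdfs/v1/?op=LISTSTATUS&user.name=" + user)
    print(resp.json())  # a FileStatuses payload confirms the connection works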

Configuring the input component

  1. Double-click the tFileInputDelimited component to open
    its Basic settings view.

    tMatchIndexPredict_2.png

  2. Click the […] button next to Edit schema and use the
    [+] button in the dialog box to add String type columns:
    Original_Id, Source,
    Site_name and Address.
  3. Click OK in the dialog box and accept to propagate the
    changes when prompted.
  4. In the Folder/File field, set the path to the input
    file.
  5. Set the row and field separators in the corresponding fields and the header and
    footer, if any.
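
For reference, reading the same delimited file directly with the Spark API
looks roughly as follows; a sketch assuming a semicolon field separator, a
one-line header and a hypothetical file path:

    # rough equivalent of tFileInputDelimited, using the plain Spark API
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("read_input").getOrCreate()
    df = (spark.read
          .option("delimiter", ";")    # field separator set in the component
          .option("header", "true")    # skip the header row, if any
          .csv("/tmp/education_centers.csv"))  # hypothetical input path
    df.select("Original_Id", "Source", "Site_name", "Address").show(5)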

Configuring the tMatchIndexPredict component

  1. Double-click the tMatchIndexPredict component to open
    its Basic settings view.

    tMatchIndexPredict_3.png

  2. In the ElasticSearch configuration area, enter the
    location of the cluster hosting the Elasticsearch system to be used in the
    Nodes field, for example:

    "localhost:9200"

  3. In the ElasticSearch configuration area, enter the name
    of the Elasticsearch index where the reference data is stored in the
    Index field, for example:

    "education-agencies-chicago"

  4. In the Models area, set the information about the
    pairing and matching models:

    1. Set the path to the folder containing the model files generated by the
      tMatchPairing component in the Pairing
      model folder
      field.
    2. Select from the Matching model location list
      where to get the model file generated by the
      tMatchModel component.

      In this example, select from file system
      because the classification Job using the
      tMatchModel component is not integrated to
      the current Job.

    3. Set the path to the folder containing the model file generated by the
      tMatchModel component in the
      Matching model folder field.
    4. Set the label used for the unique records output in the
      No-match label field.
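
Before running the Job, you can verify that the reference index exists and
holds data. A minimal sketch using the official Elasticsearch Python client,
assuming the node and index names used in this scenario:

    # confirm that the lookup index built with tMatchIndex is available
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])
    print(es.indices.exists(index="education-agencies-chicago"))   # True if indexed
    print(es.count(index="education-agencies-chicago")["count"])   # number of reference records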

Computing suspect pairs and unique rows

  1. Double-click the first tFileOutputDelimited component to
    display the Basic settings view and define the component
    properties.

    You have already accepted to propagate the schema to the output components
    when you defined the input component.
  2. Clear the Define a storage configuration component check
    box to use the local system as your target file system.
  3. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add the columns from the reference data set to the schema.

    You must add _ref at the end of the column names
    to be added to the suspect duplicates output. In this example:
    Original_id_ref,
    Source_ref,
    Site_name_ref and
    Address_ref.

    tMatchIndexPredict_4.png

  4. In the Folder field, set the path to the folder which
    will hold the output data.
  5. From the Action list, select the operation for writing
    data:

    • Select Create when you run the Job for the first
      time.
    • Select Overwrite to replace the file every time
      you run the Job.
  6. Set the row and field separators in the corresponding fields.
  7. Select the Merge results to single file check box, and
    in the Merge file path field set the path where to output
    the file of the suspect record pairs.
  8. Double-click the second tFileOutputDelimited component
    and define the component properties in the Basic settings
    view, as you do with the first component.

    This component creates the file which holds the unique rows generated from the
    input data.

  9. Press F6 to save and execute the
    Job.

tMatchIndexPredict groups together
records from the input data and the matching records from the reference data set
indexed in Elasticsearch and labels the suspect pairs. These appear in the same
row.

tMatchIndexPredict_5.png
tMatchIndexPredict excludes the unique records from this output and writes them
to a separate file.

tMatchIndexPredict_6.png

You can now clean and deduplicate the unique rows and use
tMatchIndex to add them to the reference data set stored in
Elasticsearch.

