August 16, 2023

tMatchIndex – Docs for ESB 6.x

tMatchIndex

Indexes a clean and deduplicated data set in ElasticSearch for continuous
matching purposes.

Before indexing a data set in ElasticSearch using the tMatchIndex
component, you must have performed all the matching and deduplicating tasks on this data set:

  • You generated a pairing model and computed pairs of suspect duplicates using
    tMatchPairing.

  • You labeled a sample of the suspect pairs manually or using Talend Data Stewardship to generate a
    matching model with tMatchModel.

  • You predicted labels on suspect pairs based on the pairing and matching
    models using tMatchPredict.

  • You cleaned and deduplicated the data set using
    tRuleSurvivorship.

Then, you do not need to restart the matching process from scratch when you get new data
records having the same schema. You can index the clean data set in ElasticSearch using
tMatchIndex for continuous matching purposes.

For more information about continuous matching with
tMatchIndexPredict, see the tMatchIndexPredict
documentation on Talend Help Center (https://help.talend.com).

This component can run only with Spark 2.0+ and ElasticSearch 5+.

tMatchIndex properties for Apache Spark Batch

These properties are used to configure tMatchIndex running in
the Spark Batch Job framework.

The Spark Batch
tMatchIndex component belongs to the Data Quality
family.

The component in this framework is available when you have subscribed to any Talend Platform product with Big Data or Talend Data
Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Read-only columns are added to the output schema:

  • INTERNAL_ID: This column holds the
    identifier generated for each record in the input data.

  • BKV: This column holds the blocking key
    value generated for each record in the input data.

  • BEST_SUFFIX: This column holds the
    smallest suffix generated from the blocking key value. The
    suffixes are used to group records.
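The effect of these read-only columns can be pictured on a record shaped like the scenario later in this page. A minimal sketch; the added values below are invented for illustration (the real INTERNAL_ID, BKV and BEST_SUFFIX values are computed by the component, not by this code):

```python
# An input record using the four-column schema from the scenario
# later in this page (Original_Id, Source, Site_name, Address).
input_record = {
    "Original_Id": "610185",
    "Source": "CPS",
    "Site_name": "ABC Preschool",
    "Address": "1159 N Larrabee St",
}

# Illustrative shape of the same record once tMatchIndex has added
# its read-only columns. The values are made up for the example.
indexed_record = {
    **input_record,
    "INTERNAL_ID": "1",       # generated identifier (illustrative)
    "BKV": "ABCPRS",          # blocking key value (illustrative)
    "BEST_SUFFIX": "PRS",     # smallest suffix of the BKV (illustrative)
}

# The three columns added on top of the input schema:
print(sorted(set(indexed_record) - set(input_record)))
# → ['BEST_SUFFIX', 'BKV', 'INTERNAL_ID']
```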

 

Built-In: You create and store the
schema locally for this component only. Related topic: see the
Talend Studio User Guide.

 

Repository: You have already created
the schema and stored it in the Repository. You can reuse it in various projects and
Job designs. Related topic: see the
Talend Studio User Guide.

ElasticSearch configuration

Nodes: Enter the location
of the cluster hosting the ElasticSearch system to be used.

Index: Enter the name of the index to be created
in ElasticSearch.

Select the Reset index check box to clean the
ElasticSearch index specified in the Index
field.
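At the Elasticsearch REST level, cleaning an index can be pictured as deleting it and recreating it empty. A minimal sketch that only builds the corresponding requests without sending them (the helper name is hypothetical; the host and index name are the values used in the scenario later in this page):

```python
def reset_index_requests(nodes, index):
    """Return the (method, url) pairs that drop and recreate an index.

    This only illustrates what resetting an index amounts to in the
    Elasticsearch REST API; tMatchIndex handles this internally when
    the Reset index check box is selected.
    """
    base = "http://%s/%s" % (nodes, index)
    return [
        ("DELETE", base),  # remove the existing index, if any
        ("PUT", base),     # recreate it empty
    ]

for method, url in reset_index_requests("localhost:9200",
                                        "education-agencies-chicago"):
    print(method, url)
```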

Pairing

Pairing model folder: Set the path to the folder
which has the model files generated by the
tMatchPairing component.

If you want to store the model in a specific file system, for example S3
or HDFS, you must use the corresponding component in the Job and select
the Define a storage configuration component
check box in the component basic settings.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode,
ensure that you have properly configured the connection in a configuration component in
the same Job, such as tHDFSConfiguration.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Spark Batch Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google
    Dataproc, specify a bucket in the Google Storage staging
    bucket
    field in the Spark
    configuration
    tab; when using other distributions, use a
    tHDFSConfiguration
    component to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Scenario: Indexing a reference data set in Elasticsearch

This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric.

In this Job, the tMatchIndex component creates an index in
Elasticsearch and populates it with a clean and deduplicated data set which contains a
list of education centers in Chicago.

After performing all the matching actions on the data set which contains a list of
education centers in Chicago, you do not need to restart the matching process from
scratch when you get new data records having the same schema. You can index the clean
data set in Elasticsearch using tMatchIndex for continuous
matching purposes.

Before indexing a reference data set in Elasticsearch:

  • You generated a pairing model using tMatchPairing.

    You can find examples of how to generate a pairing
    model on Talend Help Center (https://help.talend.com).

  • Make sure the input data you want to index is clean and deduplicated.

    You can find an example of how to clean and
    deduplicate a data set on Talend Help Center (https://help.talend.com).

  • The Elasticsearch cluster must be running Elasticsearch 5+.

Setting up the Job

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited and
    tMatchIndex.
  2. Connect the components using a Row > Main connection.
use_case_tmatchindex.png

Configuring the input component

  1. Double-click the tFileInputDelimited component to open
    its Basic settings view and define its properties.

    use_case_tmatchindex2.png

  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. In the Folder/File field, set the path to the input
    file.
  4. Set the row and field separators in the corresponding fields and the header and
    footer, if any.
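For reference, a row of such a delimited file could look as follows, given the four String columns defined in the schema. The separator and the data values are illustrative; the scenario's actual input file is not shown in this page:

```python
import csv
import io

# One illustrative line of the delimited input file, using ";" as
# the field separator and the schema columns defined above.
sample = "610185;CPS;ABC Preschool;1159 N Larrabee St\n"

reader = csv.reader(io.StringIO(sample), delimiter=";")
row = dict(zip(["Original_Id", "Source", "Site_name", "Address"],
               next(reader)))

print(row["Site_name"])
# → ABC Preschool
```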

Indexing clean and deduplicated data in Elasticsearch

  • Make sure the Elasticsearch cluster and Elasticsearch-head are started before
    executing the Job.

    For more information about Elasticsearch-head, which is a plugin for browsing
    an Elasticsearch cluster, see https://mobz.github.io/elasticsearch-head/.

  1. Double-click the tMatchIndex component to open its
    Basic settings view and define its properties.

    use_case_tmatchindex3.png

  2. In the Elasticsearch configuration area, enter the
    location of the cluster hosting the Elasticsearch system to be used in the
    Nodes field, for example:

    "localhost:9200"

  3. Enter the index to be created in Elasticsearch in the
    Index field, for example:

    education-agencies-chicago

  4. If you need to clean the Elasticsearch index specified in the
    Index field, select the Reset
    index
    check box.
  5. Enter the path to the local folder from where you want to retrieve the pairing
    model files in the Pairing model folder field.

  6. Press F6 to save and execute the
    Job.

tMatchIndex created the
education-agencies-chicago index in Elasticsearch,
populated it with the clean data and computed the best suffixes based on the
blocking key values.

You can browse the index created by tMatchIndex using the
plugin Elasticsearch-head.

use_case_tmatchindex4.png
use_case_tmatchindex5.png

You can now use the indexed data as a reference data set for the
tMatchIndexPredict component.

You can find an example of how to do continuous matching
using tMatchIndexPredict on Talend Help Center (https://help.talend.com).
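To picture why the blocking columns are worth indexing, consider how a lookup against the reference set could be expressed: a standard Elasticsearch term query on the BKV column narrows the reference records a new record needs to be compared with. Whether tMatchIndexPredict queries BKV or BEST_SUFFIX internally is not described in this page, so the sketch below is only an illustration; it builds the query body without sending it:

```python
import json

def bkv_query(bkv):
    """Build a standard Elasticsearch term query on the read-only
    BKV column, restricting results to one block of records."""
    return {"query": {"term": {"BKV": bkv}}}

# The blocking key value here is invented for the example.
print(json.dumps(bkv_query("ABCPRS")))
```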

