
tMatchIndex

Indexes a clean and deduplicated data set in Elasticsearch for continuous
matching purposes.

Before indexing a data set in Elasticsearch using the tMatchIndex
component, you must have performed all the matching and deduplication tasks on this data set:

  • You generated a pairing model and computed pairs of suspect
    duplicates using tMatchPairing.
  • You labeled a sample of the suspect pairs manually or using
    Talend Data Stewardship to generate a
    matching model with tMatchModel.
  • You predicted labels on suspect pairs based on the pairing and
    matching models using tMatchPredict.
  • You cleaned and deduplicated the data set using tRuleSurvivorship.

Once these tasks are complete, you do not need to restart the matching process from
scratch when you get new data records having the same schema. You can index the clean
data set in Elasticsearch using tMatchIndex for continuous matching purposes.

The tMatchIndex component supports
Elasticsearch versions up to 6.4.2 and Apache Spark versions 2.0.0 and later.

tMatchIndex properties for Apache Spark Batch

These properties are used to configure tMatchIndex running in
the Spark Batch Job framework.

The Spark Batch
tMatchIndex component belongs to the Data Quality
family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Read-only columns are added to the output schema:

  • INTERNAL_ID: This column holds the
    identifier generated for each record in the input data.

  • BKV: This column holds the blocking key
    value generated for each record in the input data.

  • BEST_SUFFIX: This column holds the
    smallest suffix generated from the blocking key value. The
    suffixes are used to group records.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Elasticsearch configuration

Nodes: Enter the location
of the cluster hosting the Elasticsearch system to be used.

Index: Enter the name of the index to be created
in Elasticsearch.

Select the Reset index check box to clean the
Elasticsearch index specified in the Index
field.

Note that the Talend components for Spark Jobs support
Elasticsearch versions up to 6.4.2.
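
As a quick sanity check outside the Studio, you can verify that the value you plan to
put in the Nodes field points to a reachable Elasticsearch cluster. The following
minimal Java sketch uses the Elasticsearch low-level REST client (the Request-based
API, available as of the 6.4 client); the host and port are assumptions matching a
typical local setup:

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class EsNodeCheck {
        public static void main(String[] args) throws Exception {
            // Assumed node location; use the same host:port as in the Nodes field.
            RestClient client = RestClient.builder(
                    new HttpHost("localhost", 9200, "http")).build();
            // GET / returns basic cluster information, including the version,
            // which lets you confirm it is within the supported range.
            Response response = client.performRequest(new Request("GET", "/"));
            System.out.println(EntityUtils.toString(response.getEntity()));
            client.close();
        }
    }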

Pairing

Pairing model folder: Set the path to the folder
that contains the model files generated by the
tMatchPairing component.

If you want to store the model in a specific file system, for example S3
or HDFS, you must use the corresponding component in the Job and select
the Define a storage configuration component
check box in the component basic settings.

The button for browsing does not work with the Spark
Local mode; if you are using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the configuration component depending on the file system to
be used.

Advanced settings

Maximum ElasticSearch bulk size

Maximum number of records for bulk indexing.

tMatchIndex uses the bulk mode to index data so that big
batches of data can be quickly indexed in Elasticsearch.

It is recommended to keep the default value. If the Job execution ends
with an error, reduce the value of this parameter.
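
To illustrate what the bulk mode does, here is a minimal, hypothetical Java sketch
that sends documents to the Elasticsearch _bulk endpoint in batches. The batchSize
parameter plays the same role as Maximum ElasticSearch bulk size: smaller batches
mean smaller requests, which is why reducing the value can help when the Job
execution ends with an error.

    import java.util.List;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.RestClient;

    public class BulkIndexSketch {
        // Hypothetical helper: index JSON documents in batches of batchSize.
        // Each element of docs must be a single-line JSON object.
        static void bulkIndex(RestClient client, String index,
                              List<String> docs, int batchSize) throws Exception {
            for (int from = 0; from < docs.size(); from += batchSize) {
                int to = Math.min(from + batchSize, docs.size());
                StringBuilder body = new StringBuilder();
                for (String doc : docs.subList(from, to)) {
                    // The _bulk API expects an action line, then the document source.
                    body.append("{\"index\":{}}\n").append(doc).append("\n");
                }
                Request request = new Request("POST", "/" + index + "/_doc/_bulk");
                request.setJsonEntity(body.toString());
                client.performRequest(request);
            }
        }
    }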

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Spark Batch Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Indexing a reference data set in Elasticsearch

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

In this Job, the tMatchIndex component creates an index in
Elasticsearch and populates it with a clean and deduplicated data set which contains a
list of education centers in Chicago.

After performing all the matching actions on the data set which contains a list of
education centers in Chicago, you do not need to restart the matching process from
scratch when you get new data records having the same schema. You can index the clean
data set in Elasticsearch using tMatchIndex for continuous
matching purposes.

Before indexing a reference data set in Elasticsearch:

  • You generated a pairing model using tMatchPairing.

    You can find examples of how to generate a pairing
    model on Talend Help Center (https://help.talend.com).

  • Make sure the input data you want to index is clean and deduplicated.

    You can find an example of how to clean and
    deduplicate a data set on Talend Help Center (https://help.talend.com).

  • The Elasticsearch cluster must be running Elasticsearch 5+.

Setting up the Job

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited and
    tMatchIndex.
  2. Connect the components using a Row > Main connection.
[Image: tMatchIndex_1.png]

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly defined in the
Spark Configuration tab or the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment in itself on the fly in order to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HDInsight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    this Avro known issue but also
    brings inherent performance gains. The Spark property to be set in your
    cluster is:

    spark.serializer org.apache.spark.serializer.KryoSerializer

    (A per-application equivalent of this setting is sketched at the end of this
    section.)

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select Custom, then select the Spark
    version of the cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files which you can download from this
      Hadoop configuration list and directly use in your connection. However,
      because of the ongoing evolution of the different Hadoop-related projects,
      you might not be able to find the configuration zip corresponding to your
      distribution in this list; it is then recommended to use the
      Import from existing version option to take an existing distribution as
      base and add the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt to
      set up such a connection if you have sufficient Hadoop and Spark experience to
      handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.
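
For reference, the serializer setting mentioned above can also be applied per
application through the standard Spark API. This is a minimal sketch of plain
Spark code, not a Studio option; the application name is a placeholder:

    import org.apache.spark.SparkConf;

    public class KryoConfSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("tMatchIndexJob") // placeholder name
                    // Same property as in the cluster-level configuration:
                    .set("spark.serializer",
                         "org.apache.spark.serializer.KryoSerializer");
            // Optionally, register the classes serialized with Kryo:
            // conf.registerKryoClasses(new Class<?>[] { MyRecord.class });
        }
    }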

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as the one you have entered in the
    Spark configuration tab. (A connectivity sketch follows this procedure.)
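
To see how the NameNode URI and user name are used together, here is a minimal
Java sketch based on the standard Hadoop FileSystem API; the URI, user name, and
listed path are assumptions, not values required by the Studio:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NameNodeUriCheck {
        public static void main(String[] args) throws Exception {
            // Assumed NameNode location; mirror the NameNode URI field.
            URI nameNodeUri = URI.create("hdfs://masternode:8020");
            // The user must match the one set in the Spark configuration tab.
            FileSystem fs = FileSystem.get(nameNodeUri, new Configuration(), "hdfsuser");
            // Listing the root directory confirms that the connection works.
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }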

Configuring the input component

  1. Double-click the tFileInputDelimited component to open
    its Basic settings view and define its properties.

    [Image: tMatchIndex_2.png]

  2. Click the […] button next to Edit
    schema
    and use the [+] button in the
    dialog box to add String type columns: Original_Id,
    Source, Site_name and
    Address.
  3. In the Folder/File field, set the path to the input
    file.
  4. Set the row and field separators in the corresponding fields and the header and
    footer, if any. (A hypothetical sample of the expected input format is shown
    below.)
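
For reference, a delimited input file matching this schema could look like the
following; the rows and the semicolon separator are hypothetical and only
illustrate the expected column order:

    Original_Id;Source;Site_name;Address
    1;CPS;Example Learning Center;123 W Example St, Chicago, IL
    2;CPS;Sample Education Center;45 N Sample Ave, Chicago, IL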

Indexing clean and deduplicated data in Elasticsearch

  • Make sure the Elasticsearch cluster and Elasticsearch-head are started before
    executing the Job.

    For more information about Elasticsearch-head, which is a plugin for browsing
    an Elasticsearch cluster, see https://mobz.github.io/elasticsearch-head/.

  1. Double-click the tMatchIndex component to open its
    Basic settings view and define its properties.

    [Image: tMatchIndex_3.png]

  2. In the Elasticsearch configuration area, enter the
    location of the cluster hosting the Elasticsearch system to be used in the
    Nodes field, for example:

    "localhost:9200"

  3. Enter the index to be created in Elasticsearch in the
    Index field, for example:

    education-agencies-chicago

  4. If you need to clean the Elasticsearch index specified in the
    Index field, select the Reset
    index
    check box.
  5. In the Pairing model folder field, enter the path to the local folder
    that holds the pairing model files.

  6. Press F6 to save and execute the
    Job.

tMatchIndex created the
education-agencies-chicago index in Elasticsearch,
populated it with the clean data and computed the best suffixes based on the
blocking key values.

You can browse the index created by tMatchIndex using the
Elasticsearch-head plugin.

[Image: tMatchIndex_4.png]
[Image: tMatchIndex_5.png]
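
Besides Elasticsearch-head, you can query the index directly to confirm that it
has been populated. A minimal Java sketch, assuming the node and index name used
above:

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class IndexCountCheck {
        public static void main(String[] args) throws Exception {
            RestClient client = RestClient.builder(
                    new HttpHost("localhost", 9200, "http")).build();
            // _count returns the number of documents in the index.
            Response response = client.performRequest(
                    new Request("GET", "/education-agencies-chicago/_count"));
            System.out.println(EntityUtils.toString(response.getEntity()));
            client.close();
        }
    }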

You can now use the indexed data as a reference data set for the
tMatchIndexPredict component.

You can find an example of how to do continuous matching
using tMatchIndexPredict on Talend Help Center (https://help.talend.com).

