tNLPPreprocessing – Docs for ESB 7.x

tNLPPreprocessing

Prepares a text sample and divides it into tokens, which can be words, numbers or
punctuation marks.

tNLPPreprocessing outputs a column containing all the tokens for the
input text, separated by tabs. You can convert the output to the CoNLL format and
manually annotate the text. Then, you can use the annotated text to design features
and train a model with the tNLPModel component.
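As an illustration, the sketch below shows what the read-only tokens output column holds for one input row: all tokens joined by tabs. It uses a deliberately naive regex tokenizer standing in for ScalaNLP or Stanford CoreNLP and is only a sketch of the output format, not the component's actual implementation.

  // Naive sketch of the tokens column: all tokens of a row's text joined by tabs.
  object TokensColumnSketch {
    // Simplistic regex tokenizer standing in for ScalaNLP or Stanford CoreNLP.
    def tokenize(text: String): Seq[String] =
      "\\w+|[^\\w\\s]".r.findAllIn(text).toSeq

    def main(args: Array[String]): Unit = {
      val message = "Mr. Smith arrived in Paris on Monday."
      val tokens  = tokenize(message).mkString("\t") // tab-separated, as in the tokens column
      println(tokens)
    }
  }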

This component can run only with Spark 1.6 and 2.0.

tNLPPreprocessing properties for Apache Spark Batch

These properties are used to configure tNLPPreprocessing
running in the Spark Batch Job framework.

The Spark Batch
tNLPPreprocessing component belongs to the Natural Language Processing family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

The output schema of this component contains a read-only column:

tokens: This column holds the tokens for each row
of the input data.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

NLP Library

From this list, select the library to be used for text preprocessing: either
ScalaNLP or Stanford CoreNLP.

Clean all HTML tags

Select this check box to remove all the tags from the text.

Column to preprocess

Select the column from the input schema containing the text to be divided
into tokens.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Spark Batch Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Natural Language Processing using Talend Studio

Using Talend Studio and machine learning on Spark, you can teach computers to
understand how humans learn and use natural language.

What is natural language
processing?

Natural language processing tasks include:

  • text tokenization, which divides a text into basic units such as words or
    punctuation marks;

  • sentence splitting, which divides the input into sentences, based on
    ending characters, such as periods or question marks; and

  • named entity recognition, which finds and classifies person names, dates,
    locations and organizations in a text.
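
As a minimal illustration of the second task, the naive splitter below divides a text on sentence-ending characters. Real libraries such as ScalaNLP or Stanford CoreNLP handle abbreviations, quotations and many other cases that this sketch ignores, and the sample text is invented.

  // Naive sentence splitting on ending characters (periods, question marks, exclamation marks).
  object SentenceSplitSketch {
    def splitSentences(text: String): Seq[String] =
      text.split("(?<=[.!?])\\s+").toSeq

    def main(args: Array[String]): Unit = {
      val text = "Where is the report? It was sent to Anna Lee yesterday."
      splitSentences(text).foreach(println)
    }
  }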

Natural language processing is useful to:

  • extract person names or company names from textual resources;

  • group forum discussions together by topics;

  • find discussions where people are mentioned but do not participate in the
    discussion; or

  • link entities.

Natural language processing can help you create links between user profiles
and mentions in the text, between persons and organizations, or between persons and any
other information that may be used for re-identification.

Workflow

Machine learning with Spark usually involves two phases: the first phase computes a model
based on historical data and mathematical heuristics, and the second phase applies
the model to text data. In Talend Studio, the first phase is
implemented by two Jobs:

  • the first one with the tNLPPreprocessing and the
    tNormalize components; and

  • the second one with the tNLPModel component.

The second phase is implemented by a third Job with the
tNLPPredict component.

tNLPPreprocessing_1.png
In this workflow, tNLPPreprocessing:

  • divides a text sample into tokens; and

  • cleans the text sample by removing all HTML tags.

Then, tNormalize converts tokens to the CoNLL format.

You can then manually label the tokens and add optional features by editing the
files. For example, you can label person names with PER:

tNLPPreprocessing_2.png
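
In plain text, such an annotated CoNLL-style file typically contains one token per line followed by its label. The tokens and labels below are only an illustration, and the exact column layout may differ depending on the optional features you add:

  Anna     PER
  Lee      PER
  sent     O
  the      O
  report   O
  .        O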

Next, you can use the tokenized and labeled sample text with
tNLPModel in the second Job, where
tNLPModel:

  • generates features for each token; and

  • trains a classification model.

tNLPPredict labels text data automatically using the
classification model generated by tNLPModel.

For example, you can extract named entities with <PER>
labels:

tNLPPreprocessing_3.png

Preparing a text sample to be used for learning a model

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

This Job uses tNLPPreprocessing to divide the input text into tokens.
Then, the tokens are converted to the CoNLL format using tNormalize.
You will be able to use this CoNLL file to learn a classification model for extracting
named entities in text data.

Extracting named entities from text data is a three-phase operation:

  1. Preparing a text sample by dividing it into tokens. The tokens will be used
    for training a classification model.

  2. Learning a classification model, designing the features and evaluating the
    model.

    You can find an example of how to generate a named
    entity recognition model on Talend Help Center (https://help.talend.com).

  3. Applying the model on the full text to extract named entities using
    tNLPPredict.

    You can find an example of how to extract named
    entities using a classification model on Talend Help Center (https://help.talend.com).

You can find more information about natural language processing on
Talend Help Center (https://help.talend.com).

tHDFSConfiguration is used in this scenario by Spark to connect
to the HDFS system where the jar files dependent on the Job are transferred.

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

Creating a Job to divide the input text into tokens in CoNLL format

This Job uses tNLPPreprocessing to divide a text sample in XML
format into tokens. Then, tokens are converted to the CoNLL format using
tNormalize.

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputXML,
    tNLPPreprocessing,
    tFilterColumns, tNormalize and
    tFileOutputDelimited.
  2. Connect the components using Row > Main connections.
tNLPPreprocessing_4.png

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly defined in the
Spark Configuration tab or the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In local mode, the Studio builds the Spark environment within itself on the fly in order to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations, as sketched at the end of this step.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch
    your Job without any further configuration.
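
    To relate this to plain Spark terms, local mode corresponds to Spark's local master. The
    snippet below, meant for a spark-shell or Scala worksheet session, is only an illustration
    of that mode, not the code the Studio generates, and the application name is a placeholder.

      // Spark local mode: local[*] starts one worker thread per available core,
      // so every processor of the machine takes part in the computation.
      import org.apache.spark.{SparkConf, SparkContext}

      val conf = new SparkConf()
        .setAppName("LocalModeSketch") // placeholder name
        .setMaster("local[*]")
      val sc = new SparkContext(conf)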

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HDInsight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    this known Avro issue but also
    brings inherent performance gains. The Spark property to be set in your
    cluster is:
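
    In standard Spark, Kryo serialization is enabled through the spark.serializer property. For
    illustration, the snippet below expresses the setting on a SparkConf in a spark-shell session;
    on a cluster, the equivalent is usually a single line in spark-defaults.conf. This is a generic
    Spark illustration, not a Studio-specific configuration.

      // Enabling Kryo serialization in Spark.
      // spark-defaults.conf equivalent:
      //   spark.serializer  org.apache.spark.serializer.KryoSerializer
      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")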

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select
    Custom, then select the Spark version of the
    cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared
      ready-for-use configuration zip files which you can download from this
      Hadoop configuration list and use directly in your connection.
      However, because of the ongoing evolution of the different
      Hadoop-related projects, you might not be able to find the configuration
      zip corresponding to your distribution in this list; in that case, it is
      recommended to use the Import from existing version
      option to take an existing distribution as base and add the jars
      required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt to
      set up such a connection if you have sufficient Hadoop and Spark experience to
      handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as the one you entered in the Spark configuration tab.

Configuring the input component

The tFileInputXML component is used to load the text to be
processed.

  1. Double-click the tFileInputXML component to open its
    Basic settings view and define its properties.

    tNLPPreprocessing_5.png

    1. Click the […] button next to Edit
      schema
      to add the necessary columns to hold the input
      data.
    2. In the File name field, specify the path to the
      file to be processed.
    3. In the Element to extract field, enter
      "row". A sample of the kind of input file these settings assume is
      sketched after these steps.
    4. In the Loop XPath query field, enter the XPath
      query expression between double quotation marks to specify the node on
      which the loop is based.
    5. In the XPath query column of the
      Mapping table, specify the fields to be
      queried between double quotation marks.

  2. In the Advanced settings view of the
    component, select the Custom encoding check box if you
    encounter issues when processing the data.


  3. From the Encoding list, select the encoding
    to be used, UTF-8 in this example.
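
For reference, these settings assume an input file in which each record is a
row element wrapping the fields declared in the schema, such as the
message column used later in this scenario. The file below is only a
hypothetical sketch of such an input; the element nesting and text content are
invented.

  <rows>
    <row>
      <message>Anna Lee sent the report to the Paris office.</message>
    </row>
    <row>
      <message>The meeting with John Smith takes place on Monday.</message>
    </row>
  </rows>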

Converting the tokenized text to the CoNLL format

To be able to learn a classification model from a text, you must divide this text
into tokens and convert it to the CoNLL format using
tNormalize.

  1. Double-click the tNLPPreprocessing component to open its
    Basic settings view and define its properties.

    tNLPPreprocessing_6.png


    1. Click Sync columns to retrieve the
      schema from the previous component connected in the Job.

    2. From the NLP Library list, select the library to
      be used for tokenization. In this example,
      ScalaNLP is used.
  2. From the Column to preprocess list, select the column
    that holds the text to be divided into tokens, which is
    message in this example.
  3. Double-click the tFilterColumns component to open its
    Basic settings view and define its properties.
  4. Click Edit schema to add the
    tokens column in the output schema because this is
    the column to be normalized, and click OK to
    validate.

    tNLPPreprocessing_7.png

  5. Double-click the tNormalize component to open its Basic settings
    view and define its properties.

    tNLPPreprocessing_8.png


    1. Click Sync columns to retrieve the
      schema from the previous component connected in the Job.

    2. From the Column to normalize list, select
      tokens.
    3. In the Item separator field, enter
      "\t", the tab character that separates the tokens, so that each token
      is written on its own row in the output file.
  6. Double-click the tFileOutputDelimited component to open
    its Basic settings view and define its properties.

    tNLPPreprocessing_9.png


    1. Click Sync columns to retrieve the
      schema from the previous component connected in the Job.

    2. In the Folder field, specify the path to the
      folder where the CoNLL files will be stored.
    3. In the Row Separator field, enter
      "\n" to write one token per row.
    4. In the Field Separator field, enter
      "\t" to separate fields with a tab.

  7. Press F6 to save and execute the
    Job.

The output files are created in the specified folder. The files contain a single
column with one token per row.
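
Conceptually, the normalization step of this Job can be sketched in plain Scala as
follows. This is a rough, hand-written illustration of how the output is laid out, not
the code the Studio generates, and the token values are invented.

  // The tokens column produced by tNLPPreprocessing holds tab-separated tokens;
  // normalizing it on the tab separator yields one token per output row.
  object NormalizeSketch {
    def main(args: Array[String]): Unit = {
      val tokensColumn = Seq("Anna\tLee\tsent\tthe\treport", "to\tthe\tParis\toffice\t.")
      val oneTokenPerRow = tokensColumn.flatMap(_.split("\t"))
      oneTokenPerRow.foreach(println) // a single column with one token per row
    }
  }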

tNLPPreprocessing_10.png

You can then manually label person names with PER and the
other tokens with O before learning a classification
model from this text data:

tNLPPreprocessing_11.png

