tNLPModel

Uses an input in CoNLL format and automatically generates token-level features to
create a model for classification tasks like Named Entity Recognition (NER).

This component can run only with Spark 1.6.

tNLPModel properties for Apache Spark Batch

These properties are used to configure tNLPModel running in
the Spark Batch Job framework.

The Spark Batch
tNLPModel component belongs to the Natural Language Processing family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job.
For example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

The first column in the input schema must be token
and the last column must be label.

You can insert columns for features in between.
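
For illustration, a minimal input of this kind might look like the following,
where the first column maps to the token schema column, the middle
column to an optional feature, and the last to the label (the values are only an
example, not output produced by this component):

    Angela     NNP    B-PER
    Merkel     NNP    I-PER
    visited    VBD    O
    Paris      NNP    B-LOC
    .          .      O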

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Feature template

Features: Select from the list the token-level
features to be generated; an illustrative example follows the list.

  • POS tag: Part-of-speech tags are labels that
    are assigned to words according to their role in a sentence, for
    example, verb, noun or adjective.
  • NER tag: Named Entity Recognition tags are
    labels assigned to tokens which are the names of things. For
    example, “PER” for a person name.
  • token: The original word segment.
  • lemma: Produces the lemma form of the word,
    for example, “take” for “takes”, “took” or “taken”.
  • stem: Produces the root form of the word, for
    example, “fish” for “fishing”, “fished” or “fishes”.
  • lowertoken: Produces the original token in
    lowercase.
  • tokenisnumeric: The token is a number.
  • tokenispunct: The token is a punctuation mark
    or multiple punctuation marks.
  • tokeninwordnet: The token can be found in
    WordNet.
  • tokeninstopwordlist: The token is a stop word,
    for example “the”, “and”, “then” or “where”.
  • tokeninfirstnamelist: The token appears in
    the list of first names.
  • tokeninlastnamelist: The token appears in the
    list of last names.
  • tokensuffixprefix: The token prefix or
    suffix.
  • tokenismostfrequent: The token is among the
    top five percent most frequent tokens in the text.
  • tokenpositionrelative: In a line, the number
    of tokens before this token divided by the total number of tokens
    in the line.
  • tokeniscapitalized: The first letter of this
    word is capitalized.
  • tokenisupper: The word is upper-cased.
  • tokenmostfrequentpredecessor: The token is
    among the top five percent most frequent predecessors before a named
    entity.
  • tokeninacronymlist: The token is an acronym,
    for example, EU, UN, PS, etc.
  • tokeningeonames: This token appears in the
    list of geographic names.
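
As a purely illustrative example, for the token “Paris” in the sentence
“They visited Paris in May .”, some of these features would evaluate
roughly as follows (the exact value encoding used by the component may
differ):

    token                 = Paris
    lowertoken            = paris
    tokeniscapitalized    = true
    tokenisnumeric        = false
    tokeningeonames       = true
    tokenpositionrelative = 2/6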

Relative position: This is the relative positional
composition of the feature. It must be a string of numbers separated by
commas:

  • 0 is for the current token,
  • 1 is for the next token, and so on.

For example -2,-1,0,1,2 means that you use the
current token, the preceding two and the following two context tokens as
features.
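
As a rough sketch (illustrative only, not the component's actual
implementation, written here in Scala), composing one base feature at the
relative positions -2,-1,0,1,2 amounts to looking that feature up on the
surrounding tokens:

    // Illustrative only: collect a base feature (here the lowercased token)
    // from the tokens at the given relative offsets around position i.
    def contextFeatures(tokens: Seq[String],
                        i: Int,
                        offsets: Seq[Int] = Seq(-2, -1, 0, 1, 2)): Seq[String] =
      offsets.map { off =>
        val j = i + off
        val value =
          if (j >= 0 && j < tokens.length) tokens(j).toLowerCase
          else "<PAD>" // outside the sentence boundary
        s"lowertoken[$off]=$value"
      }

    // contextFeatures(Seq("They", "visited", "Paris", "in", "May"), 2) returns
    // lowertoken[-2]=they, lowertoken[-1]=visited, lowertoken[0]=paris,
    // lowertoken[1]=in, lowertoken[2]=may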

Additional Features

Select this check box to add additional features in the
Additional feature template.

NLP Library

From this list, select the library to be used between
ScalaNLP and Stanford CoreNLP.

If the input is a text preprocessed using the
tNLPPreprocessing component, select the same
NLP Library that was used for the
preprocessing.

Model location

Select the Save the model on file system check box
and either:

  • set the path to the folder where you want to generate the model
    files in the Folder field, for example:
    “/opt/model/”; or
  • select the Store model in a single file
    check box to generate the model file in the folder set in the
    Folder field, for example:
    “/opt/model/<model_name>”.

If you want to store the model in a specific file system, for example S3
or HDFS, you must use the corresponding component in the Job and select
the Define a storage configuration component
check box in the component basic settings.

The button for browsing does not work with the Spark
Local mode; if you are
using the other Spark Yarn
modes that the Studio supports with your distribution, ensure that you have properly
configured the connection in a configuration component in the same Job, such as
tHDFSConfiguration. Use the
configuration component corresponding to the file system to be used.

Run cross validation evaluation

If you select this check box, the tNLPModel component runs a
K-fold cross-validation to evaluate the performance of the model and
generates the model.

By default, the Fold parameter is set to
3.

  • The dataset is partitioned into K equal-size
    subsets.
  • One of the K subsets is used as the validation data
    for testing the model and the remaining subsets are used as the
    training data.
  • The cross-validation process is repeated K times
    according to the Fold parameter, with each of
    the K subsets used once as the validation data (see the sketch
    below).
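
The following sketch (illustrative only, not the component's code) shows
the K-fold partitioning logic described above, with the default of 3 folds:

    // Illustrative K-fold split: each of the K subsets serves once as the
    // validation set while the remaining K-1 subsets form the training set.
    def kFoldSplits[A](data: Vector[A], k: Int = 3): Seq[(Vector[A], Vector[A])] = {
      val folds: Vector[Vector[A]] =
        data.zipWithIndex.groupBy(_._2 % k).values.map(_.map(_._1)).toVector
      (0 until k).map { i =>
        val validation = folds(i)
        val training   = folds.patch(i, Nil, 1).flatten
        (training, validation)
      }
    }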

To output the best weighted F1-score resulting from the
cross-validation evaluation in the
Run view for each improvement of the model, set the
log4jLevel to Info in
the Advanced settings tab of the
Run view.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Cross validation evaluation

The following items are output to the console of the
Run view:

  • For each class:
    • The class name
    • True Positive is the number of elements that were predicted
      correctly as elements of this class.
    • Predicted True is the number of elements that were predicted
      as elements of this class.
    • Labeled True is the number of elements belonging to this
      class.
    • The Precision score, varying from 0 to 1, indicates how
      relevant the elements selected by the classification are to
      a given class.
    • The Recall score, varying from 0 to 1, indicates how many
      relevant elements are selected.
    • The F1-score is the harmonic mean of the Precision score and
      the Recall score (a computation sketch follows this list).
  • For the best model: the global weighted F1-score.
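
These scores follow the usual definitions. As a rough sketch (not the
component's code), the three per-class scores can be derived from the
counts above as follows:

    // Per-class scores computed from the counts printed in the Run view.
    def classScores(truePositive: Double,
                    predictedTrue: Double,
                    labeledTrue: Double): (Double, Double, Double) = {
      val precision = if (predictedTrue == 0) 0.0 else truePositive / predictedTrue
      val recall    = if (labeledTrue == 0) 0.0 else truePositive / labeledTrue
      val f1 =
        if (precision + recall == 0) 0.0
        else 2 * precision * recall / (precision + recall)
      (precision, recall, f1)
    }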

For each improvement of the model, the best weighted F1-score is output
to the console of the Run view. This score is
output along with the other Log4j INFO-level information.

For more information on the log4j logging levels, see the Apache
documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Spark Batch Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Generating a classification model

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

This Job uses text data divided into tokens in CoNLL format to train a classification
model, design features and evaluate the model.

You can find more information about natural language processing on
Talend Help Center (https://help.talend.com).

Creating a Job to generate a classification model

This Job uses tNLPModel to learn a model to extract named
entities from manually annotated tokens in CoNLL format.

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited and
    tNLPModel.
  2. Connect the components using Row > Main connections.
tNLPModel_1.png

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly define in the
Spark Configuration tab or in the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment in itself on the fly in order to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HDInsight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    this known Avro issue but also
    brings inherent performance gains. The Spark property to be set in your
    cluster is:
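
    Presumably this is the standard Spark serializer setting:

        spark.serializer=org.apache.spark.serializer.KryoSerializer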

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by Talend. In this situation, you can select Custom, then select the Spark
    version of the cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend
      community have shared ready-for-use configuration zip files
      which you can download from this Hadoop configuration
      list and use directly in your connection. However, because of
      the ongoing evolution of the different Hadoop-related projects, you might not be
      able to find the configuration zip corresponding to your distribution in this
      list; in that case, it is recommended to use the Import from
      existing version option to take an existing distribution as a base
      and add the jar files required by your distribution.

      Note that custom versions are not officially supported by
      Talend. Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt to
      set up such a connection if you have sufficient Hadoop and Spark experience to
      handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as the one you have put in the Spark configuration tab.

Configuring the input component

  • You annotated the named entities in the CoNLL files to be used for training
    the model.

    tNLPModel_2.png
  1. Double-click the tFileInputDelimited component to open
    its Basic settings view and define its properties.

    tNLPModel_3.png

    1. Set the Schema as Built-in and click
      Edit schema to define the desired
      schema.

      The first column in the output schema must be
      tokens and the last one must be
      labels. In between, you can have columns
      for features you added manually.

    2. In the Folder/file field, specify the path to
      the training data.
    3. Leave the Die on error check box selected.

  2. In the Advanced settings view of the
    component, select the Custom encoding check box if you
    encounter issues when processing the data.


  3. From the Encoding list, select the encoding
    to be used, UTF-8 in this example.

Evaluating and generating a classification model

The tNLPModel component reads training data in CoNLL format to
evaluate and generate a classification model.

  1. Double-click the tNLPModel component to open its
    Basic settings view and define its properties.

    tNLPModel_4.png

    1. Click the [+] button under the
      Feature template table to add rows to the
      table.
    2. Click in the Features column to select the
      features to be generated.
    3. For each feature, specify the relative position.

      For example -2,-1,0,1,2 means that you use the
      current token, the preceding two and the following two context
      tokens as features.

    4. From the NLP Library list, select the same
      library you used for preprocessing the training text data.
  2. To evaluate the model, select the Run cross validation
    evaluation
    check box.
  3. Select the Save the model on file system and the
    Store model in a single file check boxes to save the
    model locally in the folder specified in the Folder
    field.
  4. Optional:
    Change the logging output level for the execution of the Job to output the best
    weighted F1-score for each improvement of the model in the
    Run view:

    1. In the Run view, click the Advanced
      settings
      tab.
    2. Select the log4jLevel check box, and select
      Info from the list.

  5. Press F6 to save and execute the
    Job.

If you set the log4jLevel value to
Info, the best weighted F1-score is output to the console of
the Run view for each improvement of the model.

The following
items are also output to the console of the Run
view:

  • For each class:
    • The class name
    • True Positive: the number of elements that were predicted
      correctly as elements of this class.
    • Predicted True: the number of elements that were predicted as
      elements of this class.
    • Labeled True: the number of elements belonging to this
      class.
    • Precision score: this score varies from 0 to 1 and indicates how
      relevant the elements selected by the classification are to a given
      class.
    • Recall score: this score varies from 0 to 1 and indicates how
      many relevant elements are selected.
    • F1-score: the harmonic mean of the Precision score and the Recall
      score.
  • For the best model: the global weighted F1-score.

The model file is stored in the specified folder. You can now use the
generated model with the tNLPPredict component to predict named
entities and label text data automatically.

