
tNLPModel

Uses an input in CoNLL format and automatically generates token-level features to
create a model for classification tasks like Named Entity Recognition (NER).

This component can run only with Spark 1.6.

tNLPModel properties for Apache Spark Batch

These properties are used to configure tNLPModel running in
the Spark Batch Job framework.

The Spark Batch
tNLPModel component belongs to the Natural Language Processing family.

The component in this framework is available when you have subscribed to any Talend Platform product with Big Data or Talend Data
Fabric.

Basic settings

Define a storage configuration component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

The first column in the input schema must be token
and the last column must be label.

You can insert columns for features in between.
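
For example, a training input in CoNLL format with one manually added
feature column in between might look like this (illustrative values
only):

    Emma     NNP    B-PER
    lives    VBZ    O
    in       IN     O
    Paris    NNP    B-LOC
    .        .      O

Each line holds one token; the first column is the token, the middle
column is a manually added feature (here a part-of-speech tag), the last
column is the label, and sentences are separated by blank lines.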

 

Built-In: You create and store the schema locally for this component
only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.
Related topic: see the Talend Studio User Guide.

Feature template

Features: Select from the list the token-level
features to be generated.

  • POS tag: Part-of-speech tags are labels
    that are assigned to words according to their role in a
    sentence, for example, verb, noun or adjective.

  • NER tag: Named Entity Recognition tags are
    labels assigned to tokens which are the names of things. For
    example, “PER” for a person name.

  • token: The original word segment.

  • lemma: Produces the lemma form of the
    word, for example, “take” for “takes”, “took” or “taken”.

  • stem: Produces the root form of the word,
    for example, “fish” for “fishing”, “fished” or “fishes”.

  • lowertoken: Produces the original token in
    lowercase.

  • tokenisnumeric: The token is a number.

  • tokenispunct: The token is a punctuation
    mark or multiple punctuation marks.

  • tokeninwordnet: The token can be found in
    WordNet.

  • tokeninstopwordlist: The token is a stop
    word, for example, “the”, “and”, “then” or “where”.

  • tokeninfirstnamelist: The token appears in
    the list of first names.

  • tokeninlastnamelist: The token appears in
    the list of last names.

  • tokensuffixprefix: The token prefix or
    suffix.

  • tokenismostfrequent: The token is among
    the top five percent most frequent tokens in the text.

  • tokenpositionrelative: The relative position of the token in its
    line: the number of tokens before it divided by the total number of
    tokens in the line.

  • tokeniscapitalized: The first letter of
    this word is capitalized.

  • tokenisupper: The word is upper-cased.

  • tokenmostfrequentpredecessor: The token is
    among the top five percent most frequent predecessors before a
    named entity.

  • tokeninacronymlist: The token is an
    acronym, for example, EU, UN, PS, etc.

  • tokeningeonames: This token appears in the
    list of geographic names.
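
As an illustration only, and not the component's internal code, the
following Java sketch shows the kind of surface checks that features
such as lowertoken, tokenisnumeric, tokenispunct, tokeniscapitalized and
tokenisupper correspond to for a single token:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TokenFeatureSketch {

        // Computes a few of the surface-level features listed above
        // for one token.
        static Map<String, Object> featurize(String token) {
            Map<String, Object> f = new LinkedHashMap<>();
            f.put("token", token);
            f.put("lowertoken", token.toLowerCase());
            f.put("tokenisnumeric", !token.isEmpty()
                    && token.chars().allMatch(Character::isDigit));
            f.put("tokenispunct", !token.isEmpty()
                    && token.chars().noneMatch(Character::isLetterOrDigit));
            f.put("tokeniscapitalized", !token.isEmpty()
                    && Character.isUpperCase(token.charAt(0)));
            f.put("tokenisupper", token.chars().anyMatch(Character::isLetter)
                    && token.equals(token.toUpperCase()));
            return f;
        }

        public static void main(String[] args) {
            System.out.println(featurize("Paris"));
            // {token=Paris, lowertoken=paris, tokenisnumeric=false,
            //  tokenispunct=false, tokeniscapitalized=true,
            //  tokenisupper=false}
        }
    }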

Relative position: The relative positions of the tokens whose features
are used as context. This must be a string of numbers separated by
commas:

  • 0 refers to the current token,

  • 1 refers to the next token, -1 to the previous one, and so on.

For example, -2,-1,0,1,2 means that you use the
current token, the preceding two and the following two context tokens as
features.
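
As a rough sketch of what this means, and again not the component's
actual code, the following Java snippet collects the tokens at relative
positions -2,-1,0,1,2 around a given token:

    import java.util.ArrayList;
    import java.util.List;

    public class ContextWindowSketch {

        // Collects the tokens at the given relative offsets around
        // position i; offsets outside the sentence yield a padding marker.
        static List<String> window(List<String> tokens, int i, int[] offsets) {
            List<String> context = new ArrayList<>();
            for (int off : offsets) {
                int j = i + off;
                context.add(j >= 0 && j < tokens.size() ? tokens.get(j) : "<PAD>");
            }
            return context;
        }

        public static void main(String[] args) {
            List<String> sentence = List.of("Emma", "lives", "in", "Paris", ".");
            // Window -2,-1,0,1,2 around "in" (index 2):
            System.out.println(window(sentence, 2, new int[]{-2, -1, 0, 1, 2}));
            // [Emma, lives, in, Paris, .]
        }
    }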

Additional Features

Select this check box to add additional features in the
Additional feature template.

NLP Library

From this list, select the library to be used, either
ScalaNLP or Stanford CoreNLP.

If the input is a text preprocessed using the
tNLPPreprocessing component, select the same
NLP Library that was used for the
preprocessing.

Model location

Select the Save the model on file system check box
and in the Folder field, set the path to the
local folder where you want to generate the model file.

If you want to store the model in a specific file system, for example S3
or HDFS, you must use the corresponding component in the Job and select
the Define a storage configuration component
check box in the component basic settings.

The button for browsing does not work with the Spark Local mode; if you are using the Spark Yarn or the Spark Standalone mode,
ensure that you have properly configured the connection in a configuration component in
the same Job, such as tHDFSConfiguration.

Run cross validation evaluation

If you select this check box, the tNLPModel component
will not generate a model file. Instead, it will run a K-fold
cross-validation to evaluate the performance of the model.

The cross-validation process is repeated K times,
where K is the value of the Fold parameter.

After evaluating the model, clear this check box and rerun the Job to
generate the model file.
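
Conceptually, the K-fold procedure partitions the training data into K
parts and, in each of K rounds, holds one part out as the test set while
training on the rest. The following minimal Java sketch, which is not
the component's actual code, shows the splitting logic:

    import java.util.ArrayList;
    import java.util.List;

    public class KFoldSketch {

        public static void main(String[] args) {
            // Toy records standing in for annotated CoNLL sentences.
            List<String> data = List.of("s1", "s2", "s3", "s4", "s5", "s6");
            int k = 3; // the Fold parameter

            for (int fold = 0; fold < k; fold++) {
                List<String> test = new ArrayList<>();
                List<String> train = new ArrayList<>();
                for (int i = 0; i < data.size(); i++) {
                    (i % k == fold ? test : train).add(data.get(i));
                }
                // In each round, a model would be trained on 'train' and
                // its precision, recall and F1 score measured on 'test'.
                System.out.printf("round %d: train=%s test=%s%n",
                        fold, train, test);
            }
        }
    }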

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Spark Batch Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark
    configuration tab; when using other distributions, use a
    tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Scenario: Generating a classification model

This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric.

This Job uses text data divided into tokens in CoNLL format to train a classification
model, design features and evaluate the model.

You can find more information about natural language processing on
Talend Help Center (https://help.talend.com).

Creating a Job to generate a classification model

This Job uses tNLPModel to learn a model to extract named
entities from manually annotated tokens in CoNLL format.

  1. Drop the following components from the Palette onto the
    design workspace: tFileInputDelimited and
    tNLPModel.
  2. Connect the components using Row > Main connections.
[Image: use_case_tnlpmodel.png]

Configuring the input component

Before you begin: you have annotated the named entities in the CoNLL
files to be used for training the model.

    [Image: use_case_tnlppreprocessing8.png]
  1. Double-click the tFileInputDelimited component to open
    its Basic settings view and define its properties.

    [Image: use_case_tnlpmodel2.png]

    1. Set the Schema as Built-in and click
      Edit schema to define the desired
      schema.

      The first column in the output schema must be
      tokens and the last one must be
      labels. In between, you can have columns
      for features you added manually.

    2. In the Folder/file field, specify the path to
      the training data.
    3. Leave the Die on error check box selected.

  2. In the Advanced settings view of the
    component, select the Custom encoding check box if you
    encounter issues when processing the data.


  3. From the Encoding list, select the encoding
    to be used, UTF-8 in this example.

Evaluating and generating a classification model

The tNLPModel component reads training data in CoNLL format to
evaluate and generate a classification model.

  1. Double-click the tNLPModel component to open its
    Basic settings view and define its properties.

    [Image: use_case_tnlpmodel3.png]

    1. Click the [+] button under the
      Feature template table to add rows to the
      table.
    2. Click in the Features column to select the
      features to be generated.
    3. For each feature, specify the relative position.

      For example, -2,-1,0,1,2 means that you use the
      current token, the preceding two and the following two context
      tokens as features.

    4. From the NLP Library list, select the same
      library you used for preprocessing the training text data.
  2. To evaluate the model, select the Run cross
    validation evaluation check box and enter 2 in the
    Fold field.

    This means the training data is partitioned into two pieces: the training
    data set and the test data set. The validation process is repeated
    twice.


  3. Press F6 to save and execute the
    Job.

    The results from the K-fold cross-validation process are displayed on the
    Run view:

    • Precision is the ratio of correctly predicted named
      entities to the total number of predicted named entities.
    • Recall is the ratio of correctly predicted named
      entities to the total number of named entities.
    • F1 score is the harmonic mean of
      recall and precision (a worked example follows these steps).

  4. Clear the Run cross validation evaluation check
    box.
  5. Select the Save the model on file system check box to
    save the model locally in the folder specified in the
    Folder field.

  6. Press F6 to save and execute the
    Job.
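
To make the three evaluation metrics concrete, here is a small worked
example with made-up counts:

    public class MetricsExample {

        public static void main(String[] args) {
            // Made-up counts: the model predicted 8 named entities, 6 of
            // them correctly, while the test data contains 10 entities.
            double correct = 6, predicted = 8, actual = 10;

            double precision = correct / predicted;                    // 0.750
            double recall = correct / actual;                          // 0.600
            double f1 = 2 * precision * recall / (precision + recall); // 0.667

            System.out.printf("precision=%.3f recall=%.3f F1=%.3f%n",
                    precision, recall, f1);
        }
    }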

The model files are stored in the specified folder. You can now use the generated
model with the tNLPPredict component to predict named entities
and label text data automatically.

