
tPredict – Docs for ESB 6.x

tPredict

Predicts the class, cluster or value an element could fall into.

Based on the models generated by the classification, clustering or regression
components, tPredict predicts the class, cluster or value an element falls into.
tPredict uses a given classification, clustering or regression model to analyze
the datasets incoming from its preceding component.
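
tPredict itself is configured graphically, but the prediction it performs
corresponds to applying a saved Spark MLlib model to incoming records. The
following Java sketch illustrates that underlying operation for a clustering
model; it is an assumption-laden illustration, not Talend's generated code, and
the model path and feature values are hypothetical:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.mllib.clustering.KMeansModel;
    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;

    public class PredictSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("PredictSketch").setMaster("local[2]");
            JavaSparkContext jsc = new JavaSparkContext(conf);

            // Load a clustering model previously saved by a model training component
            // (the path is a hypothetical example)
            KMeansModel model = KMeansModel.load(jsc.sc(),
                    "hdfs://namenode:8020/user/talend/models/kmeans");

            // Predict the cluster an incoming record falls into; this value is what
            // tPredict writes to the read-only prediction column
            Vector record = Vectors.dense(42.0, 7.5);
            System.out.println("Predicted cluster: " + model.predict(record));

            jsc.stop();
        }
    }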

Depending on the Talend solution you
are using, this component can be used in one, some or all of the following Job
frameworks:

  • Spark Batch: see tPredict properties for Apache Spark Batch.

  • Spark Streaming: see tPredict properties for Apache Spark Streaming.

tPredict properties for Apache Spark Batch

These properties are used to configure tPredict running in the Spark Batch Job framework.

The Spark Batch
tPredict component belongs to the Machine Learning family.

The component in this framework is available when you have subscribed to any Talend Platform product with Big Data or Talend Data
Fabric.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Depending on the model you select to use, a corresponding read-only column is
automatically added to the schema and is used to carry the result records of the
prediction.

Model type

Select the type of the model you want tPredict to use.
This automatically adds a read-only column to the schema of tPredict to carry the result records of the prediction.

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Model on filesystem

Select this radio box if the model to be used is stored on a file system. The button for
browsing does not work with the Spark Local mode; if you
are using the Spark Yarn or the Spark Standalone mode, ensure that you have properly configured the connection in
a configuration component in the same Job, such as tHDFSConfiguration.

In the HDFS folder field that is displayed, enter the HDFS URI in which this
model is stored.
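
For example, the URI could take the following form (host, port and path are
hypothetical):

    hdfs://namenode-host:8020/user/talend/models/kmeans_model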

The Define a storage configuration component check box is
displayed when you select this radio box. Select it to connect to the filesystem to be
used.

Model computed in the current Job

Select this radio box and then select the model training component that is used in the
same Job to create the model to be used.
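
Conceptually, this option chains model training and prediction within a single
program instead of persisting the model in between. A minimal Spark MLlib
sketch in Java of that pattern, using k-means as an illustrative model type
(all data and parameters are made up):

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.mllib.clustering.KMeans;
    import org.apache.spark.mllib.clustering.KMeansModel;
    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;

    public class TrainAndPredictSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("TrainAndPredictSketch").setMaster("local[2]");
            JavaSparkContext jsc = new JavaSparkContext(conf);

            // Training data computed earlier in the same job
            JavaRDD<Vector> trainingData = jsc.parallelize(Arrays.asList(
                    Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
                    Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)));

            // Train the model in the current job (k = 2 clusters, 20 iterations)...
            KMeansModel model = KMeans.train(trainingData.rdd(), 2, 20);

            // ...and use it for prediction right away, without saving it to a file system
            System.out.println("Cluster: " + model.predict(Vectors.dense(8.9, 9.2)));

            jsc.stop();
        }
    }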

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google
    Storage staging bucket field in the Spark configuration tab; when using
    other distributions, use a tHDFSConfiguration component to specify the
    directory.

  • Standalone mode: you need to choose the configuration component depending
    on the file system you are using, such as tHDFSConfiguration or
    tS3Configuration.

This connection is effective on a per-Job basis.
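
Staging the dependency jars conceptually amounts to pointing Spark at jar
locations that every executor can reach. The sketch below shows the plain
SparkConf mechanism for this; it is only an analogy for what the Studio
configures for you, and the path is hypothetical:

    import org.apache.spark.SparkConf;

    public class JarStagingSketch {
        public static void main(String[] args) {
            // Register a pre-staged dependency jar so executors can fetch it
            SparkConf conf = new SparkConf()
                    .setAppName("JarStagingSketch")
                    .setJars(new String[]{
                            "hdfs://namenode:8020/user/talend/lib/job-deps.jar"});

            // setJars populates the spark.jars property read at job submission
            System.out.println(conf.get("spark.jars"));
        }
    }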

Related scenario

For a scenario in which tPredict is used, see Modeling the accident-prone areas in a city.

tPredict properties for Apache Spark Streaming

These properties are used to configure tPredict running in the Spark Streaming Job framework.

The Spark Streaming
tPredict component belongs to the Machine Learning family.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Depending on the model you select to use, a corresponding read-only column is
automatically added to the schema and is used to carry the result records of the
prediction.

Model type

Select the type of the model you want tPredict to use.
This automatically adds a read-only column to the schema of tPredict to carry the result records of the prediction.

Define a storage configuration
component

Select the configuration component to be used to provide the configuration
information for the connection to the target file system such as HDFS.

If you leave this check box clear, the target file system is the local
system.

The configuration component to be used must be present in the same Job. For
example, if you have dropped a tHDFSConfiguration component in the Job, you can select it to write
the result in a given HDFS system.

Model on filesystem

Select this radio box if the model to be used is stored on a file system. The button for
browsing does not work with the Spark Local mode; if you
are using the Spark Yarn or the Spark Standalone mode, ensure that you have properly configured the connection in
a configuration component in the same Job, such as tHDFSConfiguration.

In the HDFS folder field that is displayed, enter the HDFS URI in which this
model is stored.

The Define a storage configuration component check box is
displayed when you select this radio box. Select it to connect to the filesystem to be
used.
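
In a Spark Streaming Job, the same prediction is applied to every micro-batch
of the incoming stream. The following Java sketch shows that pattern with the
plain Spark MLlib API, loading a clustering model from a file system and
scoring records read from a socket; everything here (host, port, path, data
format) is a hypothetical illustration rather than Talend's generated code:

    import org.apache.spark.SparkConf;
    import org.apache.spark.mllib.clustering.KMeansModel;
    import org.apache.spark.mllib.linalg.Vector;
    import org.apache.spark.mllib.linalg.Vectors;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class StreamingPredictSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("StreamingPredictSketch").setMaster("local[2]");
            JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(5));

            // Load the model once, before the streaming computation starts
            KMeansModel model = KMeansModel.load(ssc.sparkContext().sc(),
                    "hdfs://namenode:8020/user/talend/models/kmeans");

            // Parse comma-separated feature lines such as "1.0,2.0" into vectors
            JavaDStream<Vector> features = ssc.socketTextStream("localhost", 9999)
                    .map(line -> {
                        String[] parts = line.split(",");
                        double[] values = new double[parts.length];
                        for (int i = 0; i < parts.length; i++) {
                            values[i] = Double.parseDouble(parts[i]);
                        }
                        return Vectors.dense(values);
                    });

            // Predict a cluster for each record in every micro-batch
            JavaDStream<Integer> predictions = features.map(model::predict);
            predictions.print();

            ssc.start();
            ssc.awaitTermination();
        }
    }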

Model computed in the current Job

Select this radio box and then select the model training component that is used in the
same Job to create the model to be used.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google
    Storage staging bucket field in the Spark configuration tab; when using
    other distributions, use a tHDFSConfiguration component to specify the
    directory.

  • Standalone mode: you need to choose the configuration component depending
    on the file system you are using, such as tHDFSConfiguration or
    tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.


Source: Talend documentation, https://help.talend.com