July 30, 2023

tJava – Docs for ESB 7.x

tJava

Extends the functionalities of a Talend Job using custom Java
commands.

tJava enables you to enter personalized code and integrate it into a Talend program. This code is executed only once.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tJava Standard properties

These properties are used to configure tJava running in the Standard Job framework.

The Standard
tJava component belongs to the Custom Code family.

The component in this framework is available in all Talend products.

Basic settings

Code

Type in the Java code you want to execute according to the task you need to perform. For further information about the Java function syntax specific to Talend, see the Talend Studio Help Contents (Help > Developer Guide > API Reference).

For a complete Java reference, see http://docs.oracle.com/javaee/6/api/.
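
For illustration, the Code field accepts plain Java statements; a minimal sketch:

    // Print a message to the Run console (illustrative only).
    System.out.println("Hello from tJava!");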

Note: This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see the Talend Studio User Guide.

Note: If your custom Java code references
org.talend.transform.runtime.api.ExecutionStatus, change it
to
org.talend.transform.runtime.common.MapExecutionStatus.

Advanced settings

Import

Enter the Java code needed to import, if necessary, the external libraries used in the Code field of the Basic settings view.
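
For example, the Import field might contain lines such as the following (illustrative only; use whatever libraries your own code needs):

    // Hypothetical imports, shown only for illustration.
    import java.util.List;
    import java.math.BigDecimal;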

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job
level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, provided the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
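
For illustration, an After variable can also be read through the globalMap in code (a minimal sketch; tJava_1 is an assumed component label):

    // The key is built from the assumed component label tJava_1.
    String error = (String) globalMap.get("tJava_1_ERROR_MESSAGE");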

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is generally used as a one-component subJob.

Limitation

Knowledge of the Java language is required.

Printing out a variable content

The following scenario is a simple demonstration of an extended use of the tJava component. The Job prints out the number of lines processed, using a Java command and the global variable provided in Talend Studio.

tJava_1.png

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace: tFileInputDelimited, tFileOutputExcel, and tJava.
  2. Connect tFileInputDelimited to tFileOutputExcel using a Row > Main connection. The content of a delimited .txt file will be passed through the connection to an .xls file without further transformation.
  3. Then connect the tFileInputDelimited component to the tJava component using a Trigger > On Subjob Ok link. This link sets a sequence so that tJava is executed at the end of the main process.

Configuring the input component

  1. Set the Basic settings of the tFileInputDelimited component.

    tJava_2.png

  2. Define the path to the input file in the File
    name
    field.

    The input file used in this example is a simple text file made of two
    columns: Names and their respective
    Emails.
  3. Click the Edit Schema button, and set the
    two-column schema. Then click OK to close
    the dialog box.

    tJava_3.png

  4. When prompted, click OK to accept the
    propagation, so that the tFileOutputExcel
    component gets automatically set with the input schema.

Configuring the output component

Set the output file to receive the input content without changes. If the file does not exist already, it will be created.

tJava_4.png

In this example, the Sheet name is Email and the Include Header check box is selected.

Configuring the tJava component

  1. Select the tJava component to set the Java command to execute.

    tJava_5.png

  2. In the Code area, type in the command shown below.

    In this use case, we use the NB_Line variable. To access the global variable list, press Ctrl + Space on your keyboard and select the relevant global parameter.
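
A minimal sketch of such a command, assuming the input component is labeled tFileInputDelimited_1 (the NB_LINE global variable key is derived from the component label):

    // tFileInputDelimited_1 is the assumed component label; adjust it to your Job.
    System.out.println("Nb of lines processed: " + globalMap.get("tFileInputDelimited_1_NB_LINE"));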

Executing the Job

  1. Press Ctrl+S to save
    your Job.
  2. Press F6 to execute
    it.
tJava_6.png

The content is passed on to the defined Excel file, and the number of lines processed is displayed in the Run console.

tJava properties for Apache Spark Batch

These properties are used to configure tJava running in the Spark Batch Job framework.

The Spark Batch
tJava component belongs to the Custom Code family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Note that if the input value of any non-nullable primitive field is
null, the row of data including that field will be rejected.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Code

Type in the Java code you want to execute to process the incoming RDD
from the input link or even create new RDDs out of this input one.

You need to leverage the schema, the link and the component name to
write the custom code. For example, if this component is labeled
tJava_1 and the connection to it
is labeled row1, then the class of
the input RDD is row1Struct and the
input RDD itself is available with the rdd_tJava_1 variable.

For more detailed instructions, see the default comment provided in
the Code field of this
component.

For further information about the Spark Java API, see the Apache Spark documentation at https://spark.apache.org/docs/latest/api/java/index.html.

Advanced settings

Classes

Define the classes that you need to use in the code written in the Code field in the Basic settings
view.

It is recommended to define new classes in this field, instead of in the Code field, to avoid possible serialization exceptions.

Import

Enter the Java code needed to import, if necessary, the external libraries used in the Code field of the Basic settings view.

Usage

Usage rule

This component is used as an end component and requires an input link.

Code example: In the Code field of the Basic settings view, enter the following code to create an output RDD by using custom transformations on the input RDD. mapInToOut is a class to be defined in the Classes field in the Advanced settings view.
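
A minimal sketch of what this code might look like, assuming the component is labeled tJava_1 and its input connection row1, so that the input RDD is rdd_tJava_1 and the output variable is outputrdd_tJava_1 (all names are assumptions based on the convention described in Basic settings):

    // Apply the custom transformation class to every record of the input RDD.
    // rdd_tJava_1 and the job variable are provided by the generated code.
    outputrdd_tJava_1 = rdd_tJava_1.map(new mapInToOut(job));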

In the Classes field of the Advanced settings view, enter the following code to define the mapInToOut class:
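
A minimal sketch of the class, assuming the input rows are of type row1Struct and the output rows of type RecordOut_tJava_1 (both names are assumptions following the naming convention above):

    public static class mapInToOut implements
            org.apache.spark.api.java.function.Function<row1Struct, RecordOut_tJava_1> {

        private java.util.List<org.apache.avro.Schema.Field> fieldsList;

        public mapInToOut(org.apache.hadoop.mapred.JobConf job) {
            // The JobConf argument gives access to the Job configuration if needed.
        }

        @Override
        public RecordOut_tJava_1 call(row1Struct origStruct) {
            // Cache the field list of the input schema on first use.
            if (fieldsList == null) {
                fieldsList = (new row1Struct()).getSchema().getFields();
            }
            RecordOut_tJava_1 value = new RecordOut_tJava_1();
            // Copy each field from the input record to the output record by position.
            for (org.apache.avro.Schema.Field field : fieldsList) {
                value.put(field.pos(), origStruct.get(field.pos()));
            }
            return value;
        }
    }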

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Limitation

Knowledge of Spark and the Java language is required.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

tJava properties for Apache Spark Streaming

These properties are used to configure tJava running in the Spark Streaming Job framework.

The Spark Streaming
tJava component belongs to the Custom Code family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Note that if the input value of any non-nullable primitive field is
null, the row of data including that field will be rejected.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Code

Type in the Java code you want to execute to process the incoming RDD
from the input link or even create new RDDs out of this input one.

You need to leverage the schema, the link and the component name to
write the custom code. For example, if this component is labeled
tJava_1 and the connection to it
is labeled row1, then the class of
the input RDD is row1Struct and the
input RDD itself is available with the rdd_tJava_1 variable.

For more detailed instructions, see the default comment provided in
the Code field of this
component.

For further information about the Spark Java API, see the Apache Spark documentation at https://spark.apache.org/docs/latest/api/java/index.html.

Advanced settings

Classes

Define the classes that you need to use in the code written in the Code field in the Basic settings
view.

It is recommended to define new classes in this field, instead of in the Code field, to avoid possible serialization exceptions.

Import

Enter the Java code needed to import, if necessary, the external libraries used in the Code field of the Basic settings view.

Usage

Usage rule

This component is used as an end component and requires an input link.

Code example: In the Code field of the Basic settings view, enter the following code to create an output RDD by using custom transformations on the input RDD. mapInToOut is a class to be defined in the Classes field in the Advanced settings view.
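
As in the Spark Batch version, a minimal sketch under the same assumptions (component labeled tJava_1, input connection row1, all names assumed):

    // Same convention as the Spark Batch sketch above.
    outputrdd_tJava_1 = rdd_tJava_1.map(new mapInToOut(job));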

In the Classes field of the Advanced settings view, enter the following code to define the mapInToOut class:
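
The class can be defined just as in the Spark Batch sketch above; a compact version under the same assumed names:

    public static class mapInToOut implements
            org.apache.spark.api.java.function.Function<row1Struct, RecordOut_tJava_1> {
        @Override
        public RecordOut_tJava_1 call(row1Struct origStruct) {
            RecordOut_tJava_1 value = new RecordOut_tJava_1();
            // Copy every field across by position, as in the Spark Batch sketch.
            for (org.apache.avro.Schema.Field field : origStruct.getSchema().getFields()) {
                value.put(field.pos(), origStruct.get(field.pos()));
            }
            return value;
        }
    }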

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

