tJavaMR

Provides an editor that enables you to enter custom MapReduce code in order to integrate it into a Talend program.

tJavaMR makes it possible to extend the functionality of a Talend Job by writing custom map and reduce methods. This code is executed only once.

This component appears only when you are
creating a Map/Reduce Job.

tJavaMR MapReduce properties (deprecated)

These properties are used to configure tJavaMR running in the MapReduce Job framework.

The MapReduce
tJavaMR component belongs to the Custom Code family.

The component in this framework is available in all Talend Platform products with Big Data and in Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Map/Reduce Job, avoid the reserved word line when naming the fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Map only

Select this check box to edit and use a custom mapper only. In that situation, only the Map code editing field and the Map advanced code area are available.

Map code

Enter the body of the map method you want to execute.

This component automatically defines the other parts of the map method and, if you are not using Map only, tJavaMR also automatically uses the column names you specify in the mrKeyStruct and mrValueStruct tables to instantiate the intermediate key/value pairs exchanged between the map and the reduce phases.

For example, if you put word as a column name in the mrKeyStruct table, then when you write the code, you need to write mrKey.word to represent the corresponding key instance; at runtime, this instance is automatically constructed as mrKey_component_ID.word, such as mrKey_tJavaMR_1.word.
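
For instance, the name you type at design time and the name actually generated differ only in the component suffix. A small illustrative sketch (someWord is a hypothetical local variable):

    // What you write in the Map code editor:
    mrKey.word = someWord;
    // What is generated at runtime for a component named tJavaMR_1:
    mrKey_tJavaMR_1.word = someWord;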

Note that the text displayed above the Map code editing field indicates the parameters you can directly use when writing the code, such as mrKey, mrValue or outputRow.

For further information about a map method and the intermediate
key/value pairs it outputs, see https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapred/Mapper.html.

For further information about the Java function syntax specific to Talend, see the Talend Studio Help Contents (Help > Developer Guide > API Reference).

For a complete Java reference, check http://docs.oracle.com/javaee/6/api/.

mrKeyStruct and mrValueStruct

In these two tables, add the columns you want to use to compose
the key/value pairs required by MapReduce computations.

Reduce code

Enter the body of the reduce method you want to execute according
to the task you need to perform.

This component automatically defines the shuffle and sort phases
and the other parts of the reduce method and uses the column names
you specify in the mrKeyStruct and
the mrValueStruct tables to
instantiate the key/value pairs that have been shuffled and
sorted.

For example, if you put word as a column name in the mrKeyStruct table, then when you write the code, you need to write key.word to represent the corresponding key instance; at runtime, this instance is automatically constructed as key_component_ID.word, such as key_tJavaMR_1.word. Likewise, if you put count in the mrValueStruct table, you have to write values.count to define the corresponding value instance; at runtime, this instance is constructed as values_component_ID.count, such as values_tJavaMR_1.count.
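
The same design-time-to-runtime renaming applies on the reduce side. A small illustrative fragment (surrounding declarations omitted):

    // What you write in the Reduce code editor:
    //   key.word and values.count
    // What is generated for a component named tJavaMR_1:
    //   key_tJavaMR_1.word and values_tJavaMR_1.count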

Note that the text displayed above the Reduce code editing field indicates the parameters
you can directly use in writing the code.

For further information about a reduce method and its related
phases and key/value pairs, see https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/mapred/Reducer.html.

For further information about the Java function syntax specific to Talend, see the Talend Studio Help Contents (Help > Developer Guide > API Reference).

For a complete Java reference, check http://docs.oracle.com/javaee/6/api/.

Advanced settings

Map advanced code

This area allows you to define the classes, variables and methods
that you want to use along with the map method defined in the
Basic settings view. Note that
the advanced code is not required for using tJavaMR.

Three fields are available for this purpose:

Implement the prepare code:
select this check box and in the displayed field, define variables,
methods and inner classes to be nested in the body of the public
class of this component’s mapper.

Implement the configure method:
select this check box and in the displayed field, define the body of
the configure method of this component’s mapper.

Implement the close method:
select this check box and in the displayed field, define the body of
the close method of this component’s mapper.
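
These three fields mirror the lifecycle of the classic Hadoop Mapper interface that the MapReduce framework is built on. For orientation, here is a standalone sketch of that lifecycle using the org.apache.hadoop.mapred API; it is not the code tJavaMR generates, only an illustration of where each field's content ends up:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class LifecycleSketchMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        // "Implement the prepare code": fields, helper methods and inner
        // classes nested in the body of the mapper class go here.
        private JobConf conf;

        @Override
        public void configure(JobConf job) {
            // "Implement the configure method": runs once per mapper,
            // before any call to map().
            conf = job;
        }

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // The Map code field of the Basic settings supplies this body.
        }

        @Override
        public void close() throws IOException {
            // "Implement the close method": runs once per mapper, after
            // the last call to map(); release resources opened above.
        }
    }

The Reduce advanced code fields below play the same roles for the reducer class.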

Reduce advanced code

This area allows you to define the classes, variables and methods
that you want to use along with the reduce method defined in the
Basic settings view. Note that
the advanced code is not required for using tJavaMR.

Three fields are available for this purpose:

Implement the prepare code:
select this check box and in the displayed field, define variables,
methods and inner classes to be nested in the body of the public
class of this component’s reducer.

Implement the configure method:
select this check box and in the displayed field, define the body of
the configure method of this component’s reducer.

Implement the close method:
select this check box and in the displayed field, define the body of
the close method of this component’s reducer.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl+Space to access the variable list and choose the variable to use from it.
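
For example, a later component can read this After variable from the global map once tJavaMR has executed. A minimal sketch, assuming the component is named tJavaMR_1:

    // e.g. inside another component's code field:
    String err = (String) globalMap.get("tJavaMR_1_ERROR_MESSAGE");
    if (err != null) {
        System.err.println("tJavaMR failed: " + err);
    }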

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

Once a Map/Reduce Job is opened in the workspace, tJavaMR appears in the Palette of the Studio. It is used as an
intermediate step in a Map/Reduce Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Counting words using custom map and reduce code (deprecated)

This scenario applies only to subscription-based Talend products with Big Data.

Inspired by the MapReduce example explained in Apache’s documentation on http://wiki.apache.org/hadoop/WordCount, this scenario demonstrates how to
use tJavaMR to create a MapReduce program to count
words.

The sample data to be used in this scenario reads as follows:

[Figure tJavaMR_1.png: the sample input text]

Before starting to replicate this scenario, ensure that you have appropriate rights
and permissions to access the Hadoop distribution to be used. Then proceed as
follows:

Linking components

  1. In the Integration perspective of the Studio, create an empty Map/Reduce Job from the Job Designs node in the Repository tree view.

    For further information about how to create a Map/Reduce Job, see the Talend Open Studio for Big Data Getting Started Guide.
  2. Drop a tHDFSInput component, a tJavaMR component, and a tHDFSOutput component in the workspace.

    The tHDFSInput component reads data from the Hadoop distribution to be used and the tHDFSOutput component writes processed data into that distribution.
  3. Connect these components using the Row > Main link.

Setting up Hadoop connection

  1. Click Run to open its view and then click the
    Hadoop Configuration tab to display its
    view for configuring the Hadoop connection for this Job.
  2. From the Property type list, select Built-in. If you have created the connection to be used in the Repository, select Repository instead; the Studio will then reuse that set of connection information for this Job.
  3. In the Version area, select the
    Hadoop distribution to be used and its version.

    • If you use Google Cloud Dataproc, see Google Cloud Dataproc.

    • If you cannot
      find the Cloudera version to be used from this drop-down list, you can add your distribution
      via some dynamic distribution settings in the Studio.

    • If you cannot find the distribution corresponding to yours in the list, select Custom to connect to a Hadoop distribution not officially supported in the Studio. For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

  4. In the Name node field, enter the location of
    the master node, the NameNode, of the distribution to be used. For example,
    hdfs://tal-qa113.talend.lan:8020.

    • If you are using a MapR distribution, you can simply leave maprfs:/// as it is in this field; the MapR client will then create the connection on the fly. The MapR client must be properly installed. For further information about how to set up a MapR client, see the following link in MapR's documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

    • If you are using WebHDFS, the location should be
      webhdfs://masternode:portnumber; WebHDFS with SSL is not
      supported yet.

  5. In the Resource Manager field,
    enter the location of the ResourceManager of your distribution. For example,
    tal-qa114.talend.lan:8050.

    • Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime the configuration of this parameter in the Hadoop cluster is ignored):

      • Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.

      • Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.

      • Select the Set staging directory check box and enter this directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

      • Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to an S3N filesystem, you must select this check box.


  6. If you are accessing a Hadoop cluster running with Kerberos security, select the Use Kerberos authentication check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.

    • If this cluster is a MapR cluster of version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.

      Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job in each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear; MapR should then be able to automatically find that ticket on the fly.

    In addition, since this component performs Map/Reduce computations, you
    also need to authenticate the related services such as the Job history server and
    the Resource manager or Jobtracker depending on your distribution in the
    corresponding field. These principals can be found in the configuration files of
    your distribution. For example, in a CDH4 distribution, the Resource manager
    principal is set in the yarn-site.xml file and the Job history
    principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored on the machine in which your Job actually runs, for example, on a Talend JobServer.

    Note that the user that executes a keytab-enabled Job is not necessarily
    the one a principal designates but must have the right to read the keytab file being
    used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
    situation, ensure that user1 has the right to read the keytab
    file to be used.
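
    Under the hood, this kind of keytab login boils down to Hadoop's UserGroupInformation API. The following standalone sketch shows the mechanism with placeholder principal and keytab values; it is not the code the Studio generates:

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.UserGroupInformation;

        public class KeytabLoginSketch {
            public static void main(String[] args) throws IOException {
                Configuration conf = new Configuration();
                conf.set("hadoop.security.authentication", "kerberos");
                UserGroupInformation.setConfiguration(conf);
                // Placeholder principal and keytab path:
                UserGroupInformation.loginUserFromKeytab(
                        "guest@EXAMPLE.COM", "/path/to/guest.keytab");
                System.out.println("Logged in as: "
                        + UserGroupInformation.getLoginUser());
            }
        }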

  7. In the User name field, enter the login user
    name for your distribution. If you leave it empty, the user name of the machine
    hosting the Studio will be used.
  8. In the Temp folder field, enter the path in
    HDFS to the folder where you store the temporary files generated during
    Map/Reduce computations.

  9. Leave the default value of the Path separator in server as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable; in other words, unless that separator is not a colon (:). In that situation, you must change this value to the one you are using on that host.

  10. Leave the Clear temporary folder check box
    selected, unless you want to keep those temporary files.
  11. Leave the Compress intermediate map output to reduce network traffic check box selected, so as to reduce the time needed to transfer the mapper task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended to clear this check box to deactivate the compression step, because this compression consumes extra CPU resources.
  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or
    properties to be customized. Then at runtime, these changes will override the
    corresponding default properties used by the Studio for its Hadoop
    engine.

    For further information about the properties required by Hadoop, see Apache’s
    Hadoop documentation on http://hadoop.apache.org, or
    the documentation of the Hadoop distribution you need to use.
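
    For example, an illustrative entry in this table could set the property dfs.replication to the value "1" to override the default HDFS replication factor for the files this Job writes; substitute whatever properties your cluster actually requires.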

  13. If the HDFS transparent encryption has been enabled in your cluster, select
    the Setup HDFS encryption configurations check
    box and in the HDFS encryption key provider field
    that is displayed, enter the location of the KMS proxy.

    For further information about the HDFS transparent encryption and its KMS proxy, see Transparent Encryption in HDFS.

  14. You can tune the map and reduce computations by
    selecting the Set memory check box to set proper memory allocations
    for the computations to be performed by the Hadoop system.

    The memory parameters to be set are Map (in Mb),
    Reduce (in Mb) and ApplicationMaster (in Mb). These fields allow you to dynamically allocate
    memory to the map and the reduce computations and the ApplicationMaster of YARN.

    For further information about the Resource Manager, its scheduler and the
    ApplicationMaster, see YARN’s documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

    For further information about how to determine YARN and MapReduce memory configuration
    settings, see the documentation of the distribution you are using, such as the following
    link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.


  15. If you are using Cloudera V5.5+, you can select the Use Cloudera Navigator check box to enable the Cloudera Navigator
    of your distribution to trace your Job lineage to the component level, including the
    schema changes between components.

    With this option activated, you need to set the following parameters:

    • Username and Password: these are the credentials you use to connect to your Cloudera Navigator.

    • Cloudera Navigator URL: enter the location of the Cloudera Navigator to be connected to.

    • Cloudera Navigator Metadata URL: enter the location
      of the Navigator Metadata.

    • Activate the autocommit option: select this check box
      to make Cloudera Navigator generate the lineage of the current Job at the end of the
      execution of this Job.

      Since this option actually forces Cloudera Navigator to generate lineages of all its available entities such as HDFS files and directories, Hive queries or Pig scripts, it is not recommended for the production environment because it will slow down the Job.

    • Kill the job if Cloudera Navigator fails: select this check
      box to stop the execution of the Job when the connection to your Cloudera Navigator fails.

      Otherwise, leave it clear to allow your Job to continue to run.

    • Disable SSL validation: select this check box to make your Job connect to Cloudera Navigator without the SSL validation process.

      This feature is meant to facilitate testing of your Job but is not recommended for use in a production cluster.


  16. If you are using Hortonworks Data Platform V2.4.0 onwards and you have
    installed Atlas in your cluster, you can select the Use
    Atlas
    check box to enable Job lineage to the component level, including the
    schema changes between components.

    With this option activated, you need to set the following parameters:

    • Atlas URL: enter the location of the Atlas to be
      connected to. It is often http://name_of_your_atlas_node:port

    • Die on error: select this check box to stop the Job
      execution when Atlas-related issues occur, such as connection issues to Atlas.

      Otherwise, leave it clear to allow your Job to continue to run.

    In the Username and Password fields, enter the authentication information for access to
    Atlas.

Configuring tHDFSInput

  1. Double-click tHDFSInput to open its Component view.

    [Figure tJavaMR_2.png: the Component view of tHDFSInput]

  2. Click the [...] button next to Edit schema to open the schema editor.

  3. Click the [+] button once to add one row and, in the Column column, rename it, for example, to record.

    [Figure tJavaMR_5.png: the schema editor with the record column]

  4. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.
  5. In the Folder/File field, enter the path,
    or browse to the source file you need the Job to read.

    If this file is not in the HDFS system to be used, you have to place it in
    that HDFS, for example, using tFileInputDelimited and tHDFSOutput in a Standard
    Job. For further information about these components, see tFileInputDelimited and tHDFSOutput.

Creating the MapReduce program

  1. Double-click tJavaMR to open its Component view.

    [Figure tJavaMR_6.png: the Component view of tJavaMR]

  2. Under the mrKeyStruct table, click the [+] button once to add one row.
  3. Rename that row to word_mr. This is the key part of the key/value pair to be used by the Map/Reduce program being created. In the map method, you need to write mrKey.word_mr to represent the keys to be output to a reducer.
  4. Under the mrValueStruct table, click the [+] button once to add one row.
  5. Rename that row to count_mr. This is the value part of the above-mentioned key/value pair. In the map method, you need to write mrValue.count_mr to represent the values to be output to a reducer.
  6. Click the [...] button next to Edit schema to open the schema editor.
  7. On the side of the schema of tJavaMR, click the [+] button to add two columns and name them word_output and count_output, respectively. This defines the structure of the data to be output.

    [Figure tJavaMR_11.png: the schema editor with the word_output and count_output columns]

  8. In the Type column, select Integer for count_output.
  9. In the Map code editing field, edit the body of the map method. In this example, the body can read as in the following sketch (the input column is assumed to be named record, as defined in tHDFSInput, and the call used to emit the pairs is an assumption; the parameters actually available are indicated above the editor):
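
        // Split the input column "record" into words, switch each word to
        // upper case and emit one (word, 1) pair per word.
        String[] listOfWords = record.split(" ");
        for (String word : listOfWords) {
            mrKey.word_mr = word.toUpperCase();  // key column from mrKeyStruct
            mrValue.count_mr = 1;                // value column from mrValueStruct
            collector.collect(mrKey, mrValue);   // assumed emit call
        }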

    This method is used to split the input data into words, change each word to upper case, and create and output key/value pairs such as (HELLO, 1) and (WORLD, 1) to the reducer. Note that at runtime, these pairs are automatically shuffled and sorted to take the form of (key, list of values) before being processed by the reduce method.
  10. In the Reduce code editing field, edit the body of the reduce method. In this example, the body can read as in the following sketch (the iteration over the grouped values is an assumption; the parameters actually available are indicated above the editor):
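
        // Sum the occurrences gathered for the current key.
        int count = 0;
        while (values.hasNext()) {               // assumed iteration construct
            count += values.next().count_mr;
        }
        // Map the result to the output schema columns defined in Edit schema.
        outputRow.word_output = key.word_mr;
        outputRow.count_output = count;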

    This reduce method is used to sum the values of the list in each (key, list of values) pair and map the results to the columns of the output schema.

Writing results in HDFS

  1. Double-click tHDFSOutput to open its Component view.

    [Figure tJavaMR_12.png: the Component view of tHDFSOutput]

  2. In the Folder field, enter the path, or
    browse to the folder you want to write the results in.
  3. From the Type list, select the data format for the results to be written. In this example, select Text file.
  4. From the Action list, select the
    operation you need to perform on the file in question. If the file already
    exists, select Overwrite; otherwise, select
    Create.
  5. Select the Merge result to single file
    check box and enter the path, or browse to the file you need to write the
    merged output data in.
  6. If you need to remove the source data of the merge, select Remove source dir. In this scenario, select
    it.
  7. If the file for the merged data exists, select the Override target file check box to overwrite that
    file.

Executing the Job

Then you can press F6 to run this Job.

Once done, view the merged result in the web console of the HDFS system being used.

[Figure tJavaMR_13.png: the merged word-count result displayed in the HDFS web console]

If you need to obtain more execution information about this Job, see the web console of the JobTracker of that HDFS system.


Document retrieved from Talend: https://help.talend.com