
tGlobalVarLoad

Sets variables using the incoming data so that the data can be dynamically reused
by other subJobs.

tGlobalVarLoad MapReduce properties (deprecated)

These properties are used to configure tGlobalVarLoad running in the MapReduce Job framework.

The MapReduce
tGlobalVarLoad component belongs to the MapReduce family.

The information in this section is only for users who have subscribed to
Talend Data Fabric or to any Talend product with Big Data but it is not
applicable to Talend Open Studio for Big Data users.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

The columns
of the schema are set to be variable keys and the data in these columns are
the variable values.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.
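For example, once this component has finished executing, a later component such as tJava can
read this After variable with globalMap.get(). A minimal sketch, assuming a hypothetical
component instance named tGlobalVarLoad_1 (the actual key depends on the instance name in your
Job):

    // Read the After variable of the (hypothetical) tGlobalVarLoad_1 instance
    // after the component has finished executing, for example in a tJava component.
    String err = (String) globalMap.get("tGlobalVarLoad_1_ERROR_MESSAGE");
    if (err != null) {
        System.err.println("tGlobalVarLoad reported: " + err);
    }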

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is placed at
the end of a process. It generates variables that the other subJobs within
the same Job can reuse by calling the globalMap.get() method.
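For example, if the incoming schema contains a column named avg, a downstream subJob can read
the value back through an expression such as the one sketched below (the avg column is an
assumption here; the key always matches the column name, and the conversion depends on the
type you expect):

    // In a component of a later subJob (tJava, a tMap expression, etc.),
    // retrieve the value that tGlobalVarLoad stored under the key "avg".
    // globalMap returns an Object, so convert it to the expected type yourself.
    Double avg = Double.valueOf(String.valueOf(globalMap.get("avg")));
    System.out.println("average salary: " + avg);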

Selecting the salary records above the average using a Map/Reduce
Job

This scenario applies only to subscription-based Talend products with Big
Data
.

In this scenario, a six-component Job is created to calculate the average salary of a
set of sample data and select the salaries above the average.

tGlobalVarLoad_1.png
The sample data to be used is already stored in the HDFS system and reads as
follows:

You can see that the field separator is a tab character (\t)
and that the three columns of the sample data are id,
name and salary.

You can use the tHDFSOutput component to write the
sample data in the HDFS system to be used. For further information, see tHDFSOutput.

Linking the components

  1. In the
    Integration
    perspective
    of the Studio, create an empty Map/Reduce Job from the
    Job Designs
    node in the Repository tree view.

    For further information about how to create a Map/Reduce Job, see

    Talend Open Studio for Big Data Getting Started Guide
    .
  2. In the workspace, enter the name of the component to be used and select this component
    from the list that appears. In this scenario, the components are tAggregateRow, tGlobalVarLoad, tMap,
    tLogRow and two tHDFSInput (labelled customer in this scenario) components.
  3. Connect one of the tHDFSInput components
    to tAggregateRow using the Row > Main link and then do the same to link
    tAggregateRow to tGlobalVarLoad.

    This subJob is used to calculate the average salary and set this average into a reusable
    variable.
  4. Connect the same tHDFSInput component to
    the other tHDFSInput component using the
    Trigger > On Subjob Ok link.
  5. Connect this second tHDFSInput component
    to tMap using the Row
    > Main
    link, then do the same to connect tMap to tLogRow,
    and in the dialog box that opens, give this link a name of your choice.

    This subJob is used to select the salaries above the average.

Setting up Hadoop connection

  1. Click Run to open its view and then click the
    Hadoop Configuration tab to display its
    view for configuring the Hadoop connection for this Job.
  2. From the Property type list,
    select Built-in. If you have created the
    connection to be used in Repository,
    select Repository so that the Studio
    reuses that set of connection information for this Job.
  3. In the Version area, select the
    Hadoop distribution to be used and its version.

    • If you use Google Cloud Dataproc, see Google Cloud Dataproc.

    • If you cannot
      find the Cloudera version to be used from this drop-down list, you can add your distribution
      via some dynamic distribution settings in the Studio.

    • If the distribution you need is not in the list, select
      Custom to connect to a
      Hadoop distribution not officially supported in the Studio. For a
      step-by-step example about how to use this
      Custom option, see Connecting to a custom Hadoop distribution.

  4. In the Name node field, enter the location of
    the master node, the NameNode, of the distribution to be used. For example,
    hdfs://tal-qa113.talend.lan:8020.

    • If you are using a MapR distribution, you can simply leave maprfs:/// as it is in this field; then the MapR
      client will take care of the rest on the fly for creating the connection. The
      MapR client must be properly installed. For further information about how to set
      up a MapR client, see the following link in MapR’s documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

    • If you are using WebHDFS, the location should be
      webhdfs://masternode:portnumber; WebHDFS with SSL is not
      supported yet.

  5. In the Resource Manager field,
    enter the location of the ResourceManager of your distribution. For example,
    tal-qa114.talend.lan:8050.

    • Then you can continue to set the following parameters depending on the
      configuration of the Hadoop cluster to be used (if you leave a
      parameter's check box clear, then at runtime the configuration for this
      parameter in the Hadoop cluster to be used is ignored):

      • Select the Set resourcemanager
        scheduler address
        check box and enter the Scheduler address in
        the field that appears.

      • Select the Set jobhistory
        address
        check box and enter the location of the JobHistory
        server of the Hadoop cluster to be used. This allows the metrics information of
        the current Job to be stored in that JobHistory server.

      • Select the Set staging
        directory
        check box and enter this directory defined in your
        Hadoop cluster for temporary files created by running programs. Typically, this
        directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files
        such as yarn-site.xml or mapred-site.xml of your distribution.

      • Select the Use datanode hostname check box to allow the
        Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname
        property to true. When connecting to an
        S3N filesystem, you must select this check box.


  6. If you are accessing a Hadoop cluster secured with Kerberos, select the
    Use Kerberos authentication check box, then enter the Kerberos
    principal name for the NameNode in the field displayed. This enables you to use
    your user name to authenticate against the credentials stored in Kerberos.

    • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
      MapR ticket authentication configuration in addition or as an alternative by following
      the explanation in Connecting to a security-enabled MapR.

      Keep in mind that this configuration generates a new MapR security ticket for the username
      defined in the Job in each execution. If you need to reuse an existing ticket issued for the
      same username, leave both the Force MapR ticket
      authentication
      check box and the Use Kerberos
      authentication
      check box clear, and then MapR should be able to automatically
      find that ticket on the fly.

    In addition, since this component performs Map/Reduce computations, you
    also need to authenticate the related services such as the Job history server and
    the Resource manager or Jobtracker depending on your distribution in the
    corresponding field. These principals can be found in the configuration files of
    your distribution. For example, in a CDH4 distribution, the Resource manager
    principal is set in the yarn-site.xml file and the Job history
    principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains
    pairs of Kerberos principals and encrypted keys. You need to enter the principal to
    be used in the Principal field and the access
    path to the keytab file itself in the Keytab
    field. This keytab file must be stored in the machine in which your Job actually
    runs, for example, on a Talend
    Jobserver.

    Note that the user that executes a keytab-enabled Job is not necessarily
    the one a principal designates but must have the right to read the keytab file being
    used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
    situation, ensure that user1 has the right to read the keytab
    file to be used.

  7. In the User name field, enter the login user
    name for your distribution. If you leave it empty, the user name of the machine
    hosting the Studio will be used.
  8. In the Temp folder field, enter the path in
    HDFS to the folder where you store the temporary files generated during
    Map/Reduce computations.

  9. Leave the default value of the Path separator in
    server
    field as it is, unless the host machine of your Hadoop
    distribution uses a separator other than the colon (:) for its PATH variable.
    In that case, change this value to the separator used on that host.

  10. Leave the Clear temporary folder check box
    selected, unless you want to keep those temporary files.
  11. Leave the Compress intermediate map output to reduce
    network traffic
    check box selected, so that less time is spent transferring
    the mapper task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended to
    clear this check box to deactivate the compression step, because this
    compression consumes extra CPU resources.
  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or
    properties to be customized. Then at runtime, these changes will override the
    corresponding default properties used by the Studio for its Hadoop
    engine (a sketch of the kind of properties involved follows this procedure).

    For further information about the properties required by Hadoop, see Apache’s
    Hadoop documentation on http://hadoop.apache.org, or
    the documentation of the Hadoop distribution you need to use.

  13. If the HDFS transparent encryption has been enabled in your cluster, select
    the Setup HDFS encryption configurations check
    box and in the HDFS encryption key provider field
    that is displayed, enter the location of the KMS proxy.

    For further information about the HDFS transparent encryption and its KMS proxy, see Transparent Encryption in HDFS.

  14. You can tune the map and reduce computations by
    selecting the Set memory check box to set proper memory allocations
    for the computations to be performed by the Hadoop system.

    The memory parameters to be set are Map (in Mb),
    Reduce (in Mb) and ApplicationMaster (in Mb). These fields allow you to dynamically allocate
    memory to the map and the reduce computations and the ApplicationMaster of YARN.

    For further information about the Resource Manager, its scheduler and the
    ApplicationMaster, see YARN’s documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

    For further information about how to determine YARN and MapReduce memory configuration
    settings, see the documentation of the distribution you are using, such as the following
    link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.


  15. If you are using Cloudera V5.5+, you can select the Use Cloudera Navigator check box to enable the Cloudera Navigator
    of your distribution to trace your Job lineage to the component level, including the
    schema changes between components.

    With this option activated, you need to set the following parameters:

    • Username and Password: these are the credentials you use to connect to your Cloudera
      Navigator.

    • Cloudera Navigator URL: enter the location of the
      Cloudera Navigator to be connected to.

    • Cloudera Navigator Metadata URL: enter the location
      of the Navigator Metadata.

    • Activate the autocommit option: select this check box
      to make Cloudera Navigator generate the lineage of the current Job at the end of the
      execution of this Job.

      Since this option actually forces Cloudera Navigator to generate lineages of
      all its available entities such as HDFS files and directories, Hive queries or Pig
      scripts, it is not recommended for the production environment because it slows
      down the Job.

    • Kill the job if Cloudera Navigator fails: select this check
      box to stop the execution of the Job when the connection to your Cloudera Navigator fails.

      Otherwise, leave it clear to allow your Job to continue to run.

    • Disable SSL validation: select this check box to
      make your Job connect to Cloudera Navigator without going through the SSL validation
      process.

      This feature is meant to facilitate testing of your Job but is not
      recommended for use in a production cluster.


  16. If you are using Hortonworks Data Platform V2.4.0 onwards and you have
    installed Atlas in your cluster, you can select the Use
    Atlas
    check box to enable Job lineage to the component level, including the
    schema changes between components.

    With this option activated, you need to set the following parameters:

    • Atlas URL: enter the location of the Atlas to be
      connected to. It is often http://name_of_your_atlas_node:port.

    • Die on error: select this check box to stop the Job
      execution when Atlas-related issues occur, such as connection issues to Atlas.

      Otherwise, leave it clear to allow your Job to continue to run.

    In the Username and Password fields, enter the authentication information for access to
    Atlas.
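Broadly speaking, the connection fields described in this procedure correspond to standard
Hadoop client properties. The sketch below is only a rough orientation, not the code the
Studio generates: the property names are the usual Hadoop ones, and the host names, ports and
staging directory are illustrative, taken from the examples above where available.

    import org.apache.hadoop.conf.Configuration;

    public class HadoopConnectionSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Name node field: the default filesystem URI
            conf.set("fs.defaultFS", "hdfs://tal-qa113.talend.lan:8020");

            // Resource Manager field and the optional scheduler address
            conf.set("yarn.resourcemanager.address", "tal-qa114.talend.lan:8050");
            conf.set("yarn.resourcemanager.scheduler.address", "tal-qa114.talend.lan:8030");

            // Set jobhistory address check box
            conf.set("mapreduce.jobhistory.address", "tal-qa114.talend.lan:10020");

            // Set staging directory check box (value taken from yarn-site.xml or mapred-site.xml)
            conf.set("yarn.app.mapreduce.am.staging-dir", "/user");

            // Use datanode hostname check box
            conf.setBoolean("dfs.client.use.datanode.hostname", true);

            System.out.println(conf.get("fs.defaultFS"));
        }
    }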

Reading the sample data into the Job

  1. Double-click either of the two tHDFSInput
    components to display its Basic settings
    view.

    These two tHDFSInput components read
    the same source data and are configured in the same way, so you need to configure
    both of them using the procedure explained in this section.
    tGlobalVarLoad_2.png

  2. Click the […] button next to Edit schema to open the schema editor.
  3. Click the [+] button three times to add
    three rows and in the Column column, rename
    them to id, name and salary,
    respectively.

    tGlobalVarLoad_3.png

  4. In the Type column of the salary row, select Double.
  5. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.
  6. In the Folder/File field, browse to the
    sample data to be processed in the HDFS system.
  7. In the File type area, select Text file from the Type list.
  8. In the Field separator field, enter
    "\t", the tab separator used by the sample data.

Calculating the average

  1. Double-click tAggregateRow to open its
    Component view.

    tGlobalVarLoad_4.png

  2. Click the […] button next to Edit schema to open the schema editor.
  3. In the table of the tAggregateRow schema, click the [+] button once to add one row and in the Column column, rename it to
    avg.
  4. In the Type column of
    this avg row, select Double.
  5. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.
  6. Under the Operations table,
    click the [+] button to add one row and
    configure the following columns of this row to define the calculation of the
    average salary.

    • Output column: select the column of the output schema in which the average salary is
      stored. In this scenario, it is avg.

    • Function: select the avg function to calculate the average.

    • Input column position: select the column of the input schema used to provide the source
      data of the calculation.
Setting the avg variable

  1. Double-click tGlobalVarLoad to open its
    Component view.

    tGlobalVarLoad_5.png

  2. Click the Sync columns button to ensure
    that this component retrieves the avg
    column of the tAggregateRow component’s
    schema. This way the tGlobalVarLoad
    component defines the avg variable using
    the calculated average salary.

Filtering the salary records

  1. Double-click tMap to open the map
    editor.

    Note that the tHDFSInput component linked
    to this tMap has been configured along with
    the other tHDFSInput component linked to
    tAggregateRow.
    tGlobalVarLoad_6.png

  2. From the table representing the input flow (on the left side), select all
    three columns and drop them onto the table representing the output flow
    (on the right side).
  3. On the table of the input flow, click the filter button
    (tGlobalVarLoad_7.png) to display the filter
    expression panel.
  4. In this filter expression panel, enter
    row5.salary > Double.valueOf(String.valueOf(globalMap.get("avg")))

    This expression allows the tMap component
    to select only the salaries above the average calculated by tAggregateRow
    (a standalone illustration of this conversion follows this procedure).
    Note that row5 in this expression
    is the ID of the input row to the tMap
    component and may therefore be a different value in your scenario.
  5. Click Apply and then OK to validate these changes.
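The conversion chain used in that filter expression can be replayed outside the Studio. Below
is a self-contained sketch: globalMap is simulated with a plain map holding the 2950 average
from this scenario, and the salary value is made up for illustration.

    import java.util.HashMap;
    import java.util.Map;

    public class AvgFilterSketch {
        public static void main(String[] args) {
            // Simulated globalMap: tGlobalVarLoad stores the avg value under the key "avg".
            Map<String, Object> globalMap = new HashMap<>();
            globalMap.put("avg", 2950.0);

            double salary = 3500.0; // stands in for row5.salary in the tMap expression

            // Same conversion chain as the filter expression: String.valueOf() tolerates
            // whatever runtime type the variable carries, Double.valueOf() turns it back
            // into a number for the comparison.
            boolean keep = salary > Double.valueOf(String.valueOf(globalMap.get("avg")));

            System.out.println(keep); // true: this record would pass the filter
        }
    }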

Executing the Job

Then you can run this Job.

The tLogRow component is used to present the
execution result of the Job.

  1. If you want to configure the presentation mode, double-click the tLogRow component to open its Component view and then, in the Mode area, select the Table (print
    values in cells of a table)
    option.
  2. Press F6 to run this Job.

Once done, the Run view is opened automatically,
where you can check the execution result.

tGlobalVarLoad_8.png

As presented at the beginning of this scenario, the average salary of the sample data is
2950, and you can see that the salary
records above the average have been selected from the sample data.

