
tAvroInput – Docs for ESB 5.x

tAvroInput


Warning

This component is available in the Palette of Talend Studio only if you have
subscribed to one of the Talend solutions with Big Data.

tAvroInput properties

Component family

MapReduce

 

Function

tAvroInput parses Avro format files in a given distributed file system and
loads the records into a data flow, passing them to the transformation
component that follows.

This component, along with the MapReduce family it belongs to, appears only when you are
creating a Map/Reduce Job.

Purpose

tAvroInput extracts records from any given Avro format files so that other
components can process the records.
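
As a point of reference only, the following minimal Java sketch (not code the
Studio generates) shows what reading an Avro container file from HDFS amounts
to with the Avro and Hadoop client libraries. The NameNode URI and the file
path reuse the illustrative values given later in this page.

    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.mapred.FsInput;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class AvroReadSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode location; illustrative, as in the scenario below.
            conf.set("fs.defaultFS", "hdfs://tal-qa113.talend.lan:8020");
            // Open one Avro container file in HDFS; the path is illustrative.
            FsInput input = new FsInput(new Path("/user/talend/in/part-00000.avro"), conf);
            try (DataFileReader<GenericRecord> reader =
                    new DataFileReader<>(input, new GenericDatumReader<GenericRecord>())) {
                // The writer schema travels inside the Avro container file itself.
                System.out.println("Schema: " + reader.getSchema());
                while (reader.hasNext()) {
                    GenericRecord record = reader.next();
                    System.out.println(record); // each record becomes one row of the data flow
                }
            }
        }
    }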

Basic settings

Property type

Either Built-in or Repository.

   

Built-in: no property data stored
centrally.

   

Repository: reuse properties
stored centrally under the Hadoop
Cluster
node of the Repository tree.

The fields that follow are pre-filled using the fetched data.

For further information about the Hadoop
Cluster
node, see Talend Big Data Getting Started Guide.

 

Schema and Edit
Schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

   

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

   

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

 

Folder/File

Browse to, or enter the path to, the directory in HDFS where the data you need to use is stored.

If the path you set points to a folder, this component reads all of the files
stored in that folder, for example, /user/talend/in; if sub-folders exist,
they are automatically ignored unless you define the path like
/user/talend/in/*.

If you want to specify more than one file or directory in this field,
separate each path using a comma (,).

Note that you need
to ensure you have properly configured the connection to the Hadoop
distribution to be used in the Hadoop
configuration
tab in the Run view.
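
For example, with illustrative paths:

    /user/talend/in                        the files directly stored in this folder
    /user/talend/in/*                      also covers the sub-folders of this folder
    /user/talend/in1,/user/talend/in2      two folders read at once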

 

Die on error

Select this check box to stop the execution of the Job when an error occurs.

Clear the check box to skip any rows on error and complete the process for error-free rows.
When errors are skipped, you can collect the rows on error using a Row
> Reject
link.

Global Variables

ERROR_MESSAGE: the error message generated by the component when an error
occurs. This is an After variable and it returns a string. This variable
functions only if the Die on error check box is cleared, if the component has
such a check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access
the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.
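
In the Java code generated for a Job, an After variable is read from the
globalMap, for example inside a tJava component. A minimal sketch, assuming
the component instance is labeled tAvroInput_1 (the label depends on your
Job):

    // Retrieve the last error message produced by tAvroInput_1, if any.
    String errorMessage = (String) globalMap.get("tAvroInput_1_ERROR_MESSAGE");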

Usage

In a Talend Map/Reduce Job, it is used as a start component and requires
a transformation component as its output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Once a Map/Reduce Job is opened in the workspace, tAvroInput, as well as the rest of the MapReduce family, appears in the Palette of the Studio.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional, non Map/Reduce Talend data
integration Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Scenario: Filtering Avro format employee data

This scenario illustrates how to create a Talend Map/Reduce Job to
read, transform and write Avro format data by using Map/Reduce components. This Job
generates Map/Reduce code and directly runs in Hadoop. In addition, the Map bar in the workspace indicates that only a mapper will be
used in this Job and at runtime, it shows the progress of the Map computation.

use_case-mr_tavroinput1.png

Note that the Talend Map/Reduce components are available to
subscription-based Big Data users only and this scenario can be replicated only with
Map/Reduce components.

The sample data to be used in this scenario is employee information of a company;
each record holds an employee's Id, FirstName, LastName and Reg_date, but the
data itself is only visible as Avro format files.

Before starting to replicate this scenario, ensure that you have appropriate rights
and permissions to access the Hadoop distribution to be used. Then proceed as
follows:

Linking the components

  1. In the Integration perspective
    of the Studio, create an empty Map/Reduce Job from the
    Job Designs
    node in the Repository tree view.

    For further information about how to create a Map/Reduce Job, see
    Talend Big Data Getting Started Guide.

  2. Drop tAvroInput, tMap, tHDFSOutput and
    tAvroOutput onto the workspace.

  3. Connect tAvroInput to tMap using the Row >
    Main
    link.

  4. Do the same to connect tMap to tHDFSOutput and tAvroOutput, respectively.
    When doing so, you are prompted to name each link. In this example, name
    the link to tHDFSOutput out1 and the link to tAvroOutput reject.

Setting up Hadoop connection

  1. Click Run to open its view and then click the
    Hadoop Configuration tab to display its
    view for configuring the Hadoop connection for this Job.

    This view looks like the image below:

    use_case-hadoop_config-common.png
  2. From the Property type list, select Built-in. If you have created the
    connection to be used in the Repository, select Repository instead so that
    the Studio reuses that set of connection information for this Job.

    For further information about how to create a Hadoop connection in the
    Repository, see the chapter describing the Hadoop cluster node of the
    Talend Big Data Getting Started Guide.

  3. In the Version area, select the Hadoop distribution to be used and its
    version. If you cannot find the distribution corresponding to yours in the
    list, select Custom to connect to a Hadoop distribution not officially
    supported in the Studio.

    For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

    As Hadoop evolves, note the following changes:

    • If you use Hortonworks Data Platform V2.2, the configuration files of
      your cluster might be using environment variables such as ${hdp.version}.
      If this is your situation, you need to set the
      mapreduce.application.framework.path property in the Hadoop properties
      table with the path value explicitly pointing to the MapReduce framework
      archive of your cluster. For example:
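
          mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework

      (The 2.2.0.0-2041 segment above is only an illustration; use the version
      actually deployed on your cluster.)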

    • If you use Hortonworks Data Platform V2.0.0, the type of the operating
      system for running the distribution and a Talend Job must be the same,
      such as Windows or Linux. Otherwise, you have to use Talend JobServer to
      execute the Job in the same type of operating system in which the
      Hortonworks Data Platform V2.0.0 distribution you are using is run. For
      further information about Talend JobServer, see the Talend Installation
      and Upgrade Guide.

  4. In the Name node field, enter the location of
    the master node, the NameNode, of the distribution to be used. For example,
    hdfs://tal-qa113.talend.lan:8020.

    If you are using a MapR distribution, you can simply leave maprfs:/// as
    it is in this field; the MapR client then takes care of creating the
    connection on the fly. The MapR client must be properly installed. For
    further information about how to set up a MapR client, see the following
    link in MapR’s documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

  5. In the Job tracker field, enter the location of the JobTracker of your
    distribution. For example, tal-qa114.talend.lan:8050.

    Note that the notion Job in the term JobTracker designates the MR or
    MapReduce jobs described in Apache’s documentation on http://hadoop.apache.org/.

    If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform
    V2.0.0 or Cloudera CDH4.3 + (YARN mode), you need to specify the location
    of the Resource Manager instead of the JobTracker. Then you can continue
    to set the following parameters depending on the configuration of the
    Hadoop cluster to be used (if you leave the check box of a parameter
    clear, then at runtime, the configuration about this parameter in the
    Hadoop cluster to be used will be ignored). A code-level sketch of the
    corresponding raw Hadoop properties is given after this list:

    • Select the Set resourcemanager scheduler address check box and enter the
      Scheduler address in the field that appears.

    • Select the Set jobhistory address check box and enter the location of
      the JobHistory server of the Hadoop cluster to be used. This allows the
      metrics information of the current Job to be stored in that JobHistory
      server.

    • Select the Set staging directory check box and enter the directory
      defined in your Hadoop cluster for temporary files created by running
      programs. Typically, this directory can be found under the
      yarn.app.mapreduce.am.staging-dir property in configuration files such
      as yarn-site.xml or mapred-site.xml of your distribution.

    • Select the Use datanode hostname check box to allow the Job to access
      datanodes via their hostnames. This actually sets the
      dfs.client.use.datanode.hostname property to true. When connecting to an
      S3N filesystem, you must select this check box.

  6. If you are accessing a Hadoop cluster running with Kerberos security,
    select this check box, then enter the Kerberos principal name for the
    NameNode in the field displayed. This enables you to use your user name to
    authenticate against the credentials stored in Kerberos.

    In addition, since this component performs Map/Reduce computations, you also need to
    authenticate the related services such as the Job history server and the Resource manager or
    Jobtracker depending on your distribution in the corresponding field. These principals can
    be found in the configuration files of your distribution. For example, in a CDH4
    distribution, the Resource manager principal is set in the yarn-site.xml file and the Job history principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a keytab
    to authenticate. A keytab file contains pairs of Kerberos principals and
    encrypted keys. You need to enter the principal to be used in the
    Principal field and the access path to the keytab file itself in the
    Keytab field.

    Note that the user that executes a keytab-enabled Job is not necessarily the one a
    principal designates but must have the right to read the keytab file being used. For
    example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

  7. In the User name field, enter the login user
    name for your distribution. If you leave it empty, the user name of the machine
    hosting the Studio will be used.

  8. In the Temp folder field, enter the path in
    HDFS to the folder where you store the temporary files generated during
    Map/Reduce computations.

  9. Leave the default value of the Path separator in server as it is, unless
    you have changed the separator used by your Hadoop distribution’s host
    machine for its PATH variable; in other words, unless that separator is
    not a colon (:). In that situation, you must change this value to the one
    you are using in that host.

  10. Leave the Clear temporary folder check box
    selected, unless you want to keep those temporary files.

  11. Leave the Compress intermediate map output to reduce network traffic
    check box selected, so as to reduce the time needed to transfer the mapper
    task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended
    to clear this check box to deactivate the compression step, because this
    compression consumes extra CPU resources.

  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or
    properties to be customized. Then at runtime, these changes will override the
    corresponding default properties used by the Studio for its Hadoop
    engine.

    For further information about the properties required by Hadoop, see Apache’s
    Hadoop documentation on http://hadoop.apache.org, or
    the documentation of the Hadoop distribution you need to use.

  13. If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks
    Data Platform V1.3, you need to set proper memory allocations for the map and reduce
    computations to be performed by the Hadoop system.

    In that situation, you need to enter the values you need in the Mapred job
    map memory mb and the Mapred job reduce memory mb fields, respectively. By
    default, both values are 1000, which is normally appropriate for running
    the computations.

    If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and
    ApplicationMaster (in Mb), accordingly. These fields
    allow you to dynamically allocate memory to the map and the reduce computations and the
    ApplicationMaster of YARN.
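
As a point of reference only, the following Java sketch shows the raw Hadoop
properties that roughly correspond to the fields described in the steps above.
The Studio manages these settings for you through the Hadoop Configuration
tab; every host, port, principal and path below is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HadoopConnectionSketch {
        public static Configuration buildConf() throws Exception {
            Configuration conf = new Configuration();
            // Name node (step 4) and, on YARN, the Resource Manager in place
            // of the JobTracker (step 5).
            conf.set("fs.defaultFS", "hdfs://tal-qa113.talend.lan:8020");
            conf.set("yarn.resourcemanager.address", "tal-qa114.talend.lan:8050");
            // Optional YARN settings from step 5.
            conf.set("yarn.resourcemanager.scheduler.address", "tal-qa114.talend.lan:8030");
            conf.set("mapreduce.jobhistory.address", "tal-qa114.talend.lan:10020");
            conf.set("yarn.app.mapreduce.am.staging-dir", "/user");
            conf.set("dfs.client.use.datanode.hostname", "true"); // Use datanode hostname (step 5)
            // Compress intermediate map output to reduce network traffic (step 11).
            conf.set("mapreduce.map.output.compress", "true");
            // Kerberos authentication with a keytab (step 6).
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM", "/path/to/guest.keytab");
            return conf;
        }
    }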

For further information about this Hadoop
Configuration
tab, see the section describing how to configure the Hadoop
connection for a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

For further information about the Resource Manager, its scheduler and the
ApplicationMaster, see YARN’s documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

For further information about how to determine YARN and MapReduce memory configuration
settings, see the documentation of the distribution you are using, such as the following
link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.

Reading Avro data

Configuring tAvroInput

  1. Double-click tAvroInput to open its
    Component view.

    use_case-mr_tavroinput2.png
  2. Click the [...] button next to Edit schema to open the schema
    editor.

  3. Click the [+] button four times to add four rows and, in the Column
    column, rename them to Id, FirstName, LastName and Reg_date,
    respectively.

    use_case-mr_tavroinput3.png
  4. In the Type column, select Integer for Id
    and Date for Reg_date. The date pattern used in this scenario is
    dd-MM-yyyy.

  5. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.

  6. In the Folder/File field, enter the path,
    or browse to the source file you need the Job to read.

Transforming the data

Configuring tMap

  1. Double-click tMap to open the Map Editor.

    use_case-mr_tavroinput4.png
  2. Drop the four columns of the input schema from the input side (left) into
    each of the output flows of the output side (right), that is to say,
    out1 and reject. This way the input flow and the output flows are
    mapped.

  3. In the table representing the out1 flow, click the filter button to
    display the filter editing area and enter the following expression to
    select the employee records that were registered before January 1st, 2008
    (01-01-2008).
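
    A hedged example, assuming the input flow in the Map Editor is named row1
    (the actual name depends on your input link); it uses the TalendDate
    routine provided with the Studio:

        row1.Reg_date.before(TalendDate.parseDate("dd-MM-yyyy","01-01-2008"))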

  4. In the table representing the reject flow, click the tMap settings
    button to display the property settings panel.

  5. In the Value field of the Catch output reject row, click the [...]
    button and select true in the pop-up dialog box. This allows you to
    output the records rejected by the out1 flow.

  6. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.

Writing data in HDFS

Configuring the selected employee data

  1. Double-click tHDFSOutput to open its
    Component view.

    use_case-mr_tavroinput5.png
  2. In the Folder field, enter the path, or browse to the folder to which
    you want to write the employee records registered before January 1st,
    2008.

  3. From the Type list, select the data
    format for the records to be written. In this example, select Text file.

  4. From the Action list, select the
    operation you need to perform on the file in question. If the file already
    exists, select Overwrite; otherwise, select
    Create.

  5. Select the Merge result to single file
    check box and enter the path, or browse to the file you need to write the
    merged output data in.

  6. If the file for the merged data exists, select the Override target file check box to overwrite that
    file.

Configuring the rejected employee data

  1. Double-click tAvroOutput to open its
    Component view.

    use_case-mr_tavroinput6.png
  2. In the Folder field, enter the path, or browse to the folder to which
    you want to write the employee records registered after January 1st,
    2008.

  3. From the Action list, select the
    operation you need to perform on the folder in question. If the folder
    already exists, select Overwrite;
    otherwise, select Create.

Executing the Job

Then you can press F6 to run this Job; the Map bar in the workspace shows the
progress of the Map computation.

Once done, you can check the results in the web console of the Hadoop distribution
being used.

The records in the out1 flow are output and merged into one text file.

use_case-mr_tavroinput7.png

The records in the reject flow are output as Avro format files.

use_case-mr_tavroinput8.png

If you need to obtain more execution information about this Job, you can check the
web console of the Jobtracker of the Hadoop distribution being used.

