tPigReplicate – Docs for ESB 5.x

tPigReplicate

Warning

This component is available in the Palette of Talend Studio only if you have
subscribed to one of the Talend solutions with Big Data.

tPigReplicate Properties

Component family

Big Data / Pig

 

Function

tPigReplicate is used after an input Pig component. It duplicates the incoming
schema into as many identical output flows as needed.

Purpose

This component allows you to perform different operations on the
same schema.
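
In Pig Latin terms, replicating a flow simply makes the same data available to several
independent downstream operations. The sketch below is only an illustration of this idea;
the aliases, field names, and file path are hypothetical and are not taken from this
documentation.

  -- hypothetical sketch: one input relation feeding two independent branches
  input_data = LOAD 'input.csv' USING PigStorage(';') AS (Name:chararray, State:chararray);
  branch_a = ORDER input_data BY Name ASC;   -- first replicated flow
  branch_b = ORDER input_data BY State ASC;  -- second replicated flow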

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

Click Sync columns to retrieve the schema from the
previous component connected in the Job.

 

 

Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.

 

 

Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the
Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has such a check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.

Usage

This component is not startable (green background); it requires
tPigLoad as the input component
and expects other Pig components to handle its output
flow(s).

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee its interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client on the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR's documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is \lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the [Preferences] dialog box; an example is shown after
    this list. This argument provides the Studio with the path to the native library of
    that MapR client. This allows the subscription-based users to make full use of the
    Data viewer to view locally in the Studio the data stored in MapR. For further
    information about how to set this argument, see the section describing how to view
    data in the Talend Big Data Getting Started Guide.
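
    As an illustration only, and assuming the MapR client native libraries are installed
    under /opt/mapr/lib (this path is an assumption, not a value taken from this
    documentation), the VM argument could look like the following; adapt the path to
    your own MapR client installation:

      -Djava.library.path=/opt/mapr/lib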

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Connections

Outgoing links (from this component to another):

Row: Pig combine. This link joins
all data processes designed in the Job and executes them
simultaneously.

Incoming links (from one component to this one):

Row: Pig combine.

For further information regarding connections, see
Talend Studio User Guide.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see the Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

Scenario: Replicating a flow and sorting two identical flows respectively

The Job in this scenario uses Pig components to handle names and states loaded from a
given HDFS system. It reads and replicates the input flow, then sorts the two identical
flows based on name and state respectively, and writes the results back into that
HDFS.

use_case-tpigreplicate1.png

Before starting to replicate this Job, ensure that you have the appropriate rights to
read and write data in the Hadoop distribution to be used and that Pig is properly
installed in that distribution.

Linking the components

  1. In the Integration perspective
    of Talend Studio,
    create an empty Job, named Replicate for
    example, from the Job Designs node in the
    Repository tree view.

    For further information about how to create a Job, see the Talend Studio User Guide.

  2. Drop tPigLoad, tPigReplicate, two tPigSort components
    and two tPigStoreResult components onto the
    workspace.

    The tPigLoad component reads data from
    the given HDFS system. The sample data used in this scenario consists of name and
    state records separated by semicolons.

    The location of the data in this scenario is /user/ychen/raw/NameState.csv.

  3. Connect them using the Row > Pig combine
    links.

Configuring tPigLoad

  1. Double-click tPigLoad to open its
    Component view.

    use_case-tpigreplicate2.png
  2. Click the [...] button next to Edit schema to open the schema editor.

    use_case-tpigreplicate3.png
  3. Click the [+] button twice to add two rows and name them Name and State, respectively.

  4. Click OK to validate these changes and
    accept the propagation prompted by the pop-up dialog box.

  5. In the Mode area, select Map/Reduce because the Hadoop distribution to be used in
    this scenario is installed on a remote machine. Once it is selected, the parameters
    to be set appear.

  6. In the Distribution and the Version lists, select the Hadoop distribution to
    be used.

  7. In the Load function list, select
    PigStorage.

  8. In the NameNode URI field and the
    JobTracker host field, enter the
    locations of the NameNode and the JobTracker to be used for Map/Reduce,
    respectively.

  9. In the Input file URI field, enter the
    location of the data to be read from HDFS. In this example, the location is
    /user/ychen/raw/NameState.csv.

  10. In the Field separator field, enter the
    semicolon ;. The Pig Latin LOAD statement that these settings roughly correspond to
    is sketched after this list.
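
As a rough guide only, the tPigLoad settings above correspond to a Pig Latin LOAD
statement along the following lines; the alias raw_data is hypothetical and the exact
statement generated by the Studio may differ:

  raw_data = LOAD '/user/ychen/raw/NameState.csv' USING PigStorage(';')
             AS (Name:chararray, State:chararray);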

Configuring tPigReplicate

  1. Double-click tPigReplicate to open its
    Component view.

    use_case-tpigreplicate4.png
  2. Click the [...] button next to Edit schema to open the schema editor and verify
    that its schema is identical to that of the preceding component.

    use_case-tpigreplicate5.png

    Note

    If this component does not have the same schema as the preceding
    component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema
    from the preceding component; once done, the warning icon disappears.

Configuring tPigSort

Two tPigSort components are used to sort the two
identical output flows: one based on the Name
column and the other on the State column.

  1. Double-click the first tPigSort component
    to open its Component view to define the
    sorting by name.

    use_case-tpigreplicate6.png
  2. In the Sort key table, add one row by
    clicking the [+] button under this table.

  3. In the Column column, select Name from the drop-down list and select
    ASC in the Order column.

  4. Double-click the other tPigSort to open
    its Component view to define the sorting by
    state.

    use_case-tpigreplicate7.png
  5. In the Sort key table, add one row, then
    select State from the drop-down list in
    the Column column and select ASC in the Order
    column. The Pig Latin equivalents of these two sort operations are sketched after
    this list.
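
As a rough guide only, the two sort operations correspond to Pig Latin ORDER BY
statements along the following lines; the aliases replicated_flow, by_name and by_state
are hypothetical:

  by_name  = ORDER replicated_flow BY Name ASC;   -- first tPigSort
  by_state = ORDER replicated_flow BY State ASC;  -- second tPigSort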

Configuring tPigStoreResult

Two tPigStoreResult components are used to write
each set of sorted data into HDFS.

  1. Double-click the first tPigStoreResult component to open its Component view and configure it to write the data sorted by name.

    use_case-tpigreplicate8.png
  2. In the Result file field, enter the
    directory where the data will be written. This directory will be created if
    it does not exist. In this scenario, we put /user/ychen/sort/tPigreplicate/byName.csv.

  3. Select Remove result directory if exists.

  4. In the Store function list, select
    PigStorage.

  5. In the Field separator field, enter the
    semicolon ;.

  6. Do the same for the other tPigStoreResult
    component but set another directory for the data sorted by state. In this
    scenario, it is /user/ychen/sort/tPigreplicate/byState.csv. The Pig Latin STORE
    statements these settings roughly correspond to are sketched after this list.
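
As a rough guide only, the two components perform the equivalent of Pig Latin STORE
statements along the following lines; the aliases by_name and by_state are hypothetical
and reuse those from the earlier sketch:

  STORE by_name  INTO '/user/ychen/sort/tPigreplicate/byName.csv'  USING PigStorage(';');
  STORE by_state INTO '/user/ychen/sort/tPigreplicate/byState.csv' USING PigStorage(';');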

Executing the Job

Then you can run this Job.

  • Press F6 to run this Job.

Once done, browse to the locations where the results were written in HDFS.

The following image presents the results sorted by name:

use_case-tpigreplicate9.png

The following image presents the results sorted by state:

use_case-tpigreplicate10.png

If you need to obtain more details about the Job, it is recommended to use the web console
of the JobTracker provided by the Hadoop distribution you are using.

In the JobTracker console, you can easily find the execution status of your Pig Job because the name
of the Job is automatically created by concatenating the name of the project that contains
the Job, the name and version of the Job itself, and the label of the first tPigLoad component used in it. The naming convention of a Pig Job
in the JobTracker is ProjectName_JobNameVersion_FirstComponentName.
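
For example, a Job named Replicate in version 0.1, created in a project named
LOCALPROJECT and starting with a component labeled tPigLoad_1, might appear under a name
such as LOCALPROJECT_Replicate_0.1_tPigLoad_1. This example name is hypothetical; check
the JobTracker console for the exact form used by your Studio version and distribution.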


Source: Talend Help Center, https://help.talend.com