August 16, 2023

tPigSort – Docs for ESB 6.x

tPigSort

Sorts relation based on one or more defined sort keys.

tPigSort Standard properties

These properties are used to configure tPigSort running in the Standard Job framework.

The Standard tPigSort component belongs to the Big Data and the Processing families.

The component in this framework is available when you are using one of the Talend solutions with Big Data.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

 

Built-In: The schema is created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

Repository: The schema already exists and is stored in the Repository, and can therefore be reused across various projects and Job designs. Related topic: see the Talend Studio User Guide.

Sort key

Click the Add button beneath the Sort key table to add one or more lines, each specifying the column and sorting order for a sort key.
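Each line in the Sort key table contributes a column/order pair to the ORDER ... BY statement in the Pig script that the Job generates. A rough sketch of such a statement (the relation name customers and the second key are hypothetical; the Studio generates its own aliases):

```pig
-- sort by Age ascending, then by Name descending (illustrative keys)
sorted = ORDER customers BY Age ASC, Name DESC;
```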

Advanced settings

Increase parallelism

Select this check box to set the number of reduce tasks for the MapReduce Jobs.
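In Pig Latin, the number of reduce tasks for an operation is set with the PARALLEL clause, so enabling this option corresponds roughly to a statement like the following (relation names hypothetical):

```pig
-- run the sort with 4 reduce tasks (the number configured in this option)
sorted = ORDER customers BY Age ASC PARALLEL 4;
```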

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the
Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is commonly used as an intermediate step, together with an input component and an output component.

Prerequisites

The Hadoop distribution must be properly installed to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR's documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is \lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area
    of the Run/Debug view in the [Preferences] dialog box in the Window menu. This argument provides the Studio with the path to the
    native library of that MapR client. This allows subscription-based users to make
    full use of the Data viewer to view, locally in the Studio, the data stored in MapR.
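For example, assuming a MapR client installed under /opt/mapr (a hypothetical path and Hadoop version), the VM argument might look like:

```
-Djava.library.path=/opt/mapr/hadoop/hadoop-2.7.0/lib/native
```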

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Limitation

Knowledge of Pig scripts is required.

Scenario: Sorting data in ascending order

This scenario applies only to a Talend solution with Big Data.

This scenario describes a three-component Job that sorts rows of data based on one or
more sorting conditions and stores the result into a local file.

Use_Case_tPigSort1.png

Setting up the Job

  1. Drop the following components from the Palette to the design workspace: tPigLoad, tPigSort and
    tPigStoreResult.
  2. Connect tPigLoad to tPigSort using a Row > Pig Combine connection.
  3. Connect tPigSort to tPigStoreResult using a Row > Pig Combine connection.

Loading the data

  1. Double-click tPigLoad to open its
    Basic settings view.

    Use_Case_tPigSort2.png

  2. Click the […] button next to Edit schema to add columns for tPigLoad.

    Use_Case_tPigSort3.png

  3. Click the [+] button to add the Name, Country and
    Age columns, and click OK to save the settings.
  4. Select Local from the Mode area.
  5. Fill in the Input filename field with the
    full path to the input file.

    In this scenario, the input file is CustomerList,
    which contains rows of names, country names and ages.
  6. Select PigStorage from the Load function list.
  7. Leave the rest of the settings as they are.

Setting the sorting condition

  1. Double-click tPigSort to open its
    Basic settings view.

    use_case-tpigsort4.png

  2. Click Sync columns to retrieve the schema
    structure from the preceding component.
  3. Click the [+] button beneath the
    Sort key table to add a new sort key.
    Select Age from the Column list and select ASC
    from the Order list.

    This sort key will sort the data in CustomerList in
    ascending order based on Age.
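Taken together, the three components of this Job behave roughly like the following Pig Latin script. The file paths are hypothetical, and the exact code the Studio generates may differ:

```pig
-- tPigLoad: load the customer list (default tab delimiter assumed)
customers = LOAD '/tmp/CustomerList'
            USING PigStorage()
            AS (Name:chararray, Country:chararray, Age:int);

-- tPigSort: order the rows by Age, ascending
sorted = ORDER customers BY Age ASC;

-- tPigStoreResult: write the sorted rows to the result directory
STORE sorted INTO '/tmp/Lucky_Customer' USING PigStorage();
```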

Saving the data to a local file

  1. Double-click tPigStoreResult to open its
    Basic settings view.

    Use_Case_tPigSort5.png

  2. Click Sync columns to retrieve the schema
    structure from the preceding component.
  3. Select Remove result directory if exists.
  4. Fill in the Result file field with the
    full path to the result file.

    In this scenario, the result of the sort is saved in the
    Lucky_Customer file.
  5. Select PigStorage from the Store function list.
  6. Leave the rest of the settings as they are.

Executing the Job

Save your Job and press F6 to run it.

Use_Case_tPigSort6.png

The Lucky_Customer file is generated containing the data in
ascending order based on Age.


Document retrieved from Talend: https://help.talend.com