
tPigStoreResult

Stores the result of your Pig Job into a defined data storage space.

tPigStoreResult Standard properties

These properties are used to configure tPigStoreResult running in the Standard Job framework.

The Standard tPigStoreResult component belongs to the Big Data and the Processing families.

The component in this framework is available when you are using one of the Talend solutions with Big Data.

Basic settings

Property type

Either Repository or Built-in.

The Repository option allows you to reuse connection properties centrally stored under the Hadoop cluster node of the Repository tree. Once you select it, the [...] button appears; click it to display the list of stored properties and, from that list, select the properties you need to use. The appropriate parameters are then set automatically.

Otherwise, if you select Built-in, you need to set each of the parameters manually.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

 

Built-In: You create and store the schema locally for this component only. Related topic: see the Talend Studio User Guide.

Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Use S3 endpoint

Select this check box to write data into a given Amazon S3 bucket
folder.

Once this Use S3 endpoint check box is
selected, you need to enter the following parameters in the fields that appear:

  • S3 bucket name and folder:
    enter the bucket name and its folder in which you need to write data.
    You need to separate the bucket name and the folder name using a slash
    (/).

  • Access key and Secret key: enter the authentication
    information required to connect to the Amazon S3 bucket to be used.

    To enter the password, click the […] button next to the
    password field, and then in the pop-up dialog box enter the password between double quotes
    and click OK to save the settings.

Note that the format of the S3 file is S3N (S3 Native Filesystem).
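For illustration, writing to a hypothetical bucket mybucket and folder pig_results corresponds to a Pig Latin store step over an S3N URI of the following form (a minimal sketch, not the exact code the component generates; the result alias and all names are placeholders):

    -- Credentials come from the Access key and Secret key parameters above
    STORE result INTO 's3n://mybucket/pig_results' USING PigStorage();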

Result folder URI

Select the path to the result file in which data is stored.

Remove result directory if exists

Select this check box to remove an existing result directory.

Note:

This check box is disabled when you select HCatStorer from the Store function list.

Store function

Select a store function for data to be stored (typical Pig Latin STORE statements are sketched after this list):

  • PigStorage: Stores data
    in UTF-8 format.

  • BinStorage: Stores data
    in machine-readable format.

  • PigDump: Stores data
    as tuples in human-readable UTF-8 format.

  • HCatStorer: Stores
    data in HCatalog managed tables using Pig
    scripts.

  • HBaseStorage: Stores data in HBase. Then you need to complete the HBase configuration in the HBase configuration area displayed.

  • SequenceFileStorage: Stores data in the SequenceFile format. Then you need to complete the configuration of the file to be stored in the Sequence Storage Configuration area that appears.

  • RCFilePigStorage: Stores data in the RCFile format.

  • AvroStorage: Stores
    Avro files. For further information about AvroStorage,
    see Apache’s documentation on https://cwiki.apache.org/confluence/display/PIG/AvroStorage.

  • ParquetStorer: Stores
    Parquet files. Then from the Associate tPigLoad component list, you
    need to select the tPigLoad component in which the
    connection to the MapReduce cluster to be used is
    defined.

    Then from the Compression list that appears, select the
    compression mode you need to use to handle the Parquet file. The default mode is Uncompressed.

  • Custom: Stores data using any user-defined store function. To do this, you need to register, in the Advanced settings tab view, the jar file containing the function to be used, and then, in the field displayed next to this Store function field, specify that function.
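As a point of reference, the first two functions above correspond to plain Pig Latin STORE statements such as the following (a minimal sketch; the result alias, output paths, and the semicolon delimiter are placeholders):

    -- PigStorage writes UTF-8 text; its optional argument is the field delimiter
    STORE result INTO '/user/talend/out_text' USING PigStorage(';');

    -- BinStorage writes a machine-readable binary format
    STORE result INTO '/user/talend/out_bin' USING BinStorage();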

Note that when the file format to be used is PARQUET, you
might be prompted to find the specific Parquet jar file and install it into the Studio.

  • When the connection mode to Hive is Embedded,
    the Job is run in your local machine and calls this jar installed in the
    Studio.

  • When the connection mode to Hive is Standalone, the Job is run in the server hosting Hive and this
    jar file is sent to the HDFS system of the cluster you are connecting to.
    Therefore, ensure that you have properly defined the NameNode URI in the
    corresponding field of the Basic settings
    view.

This jar file can be downloaded from Apache’s site. You can find more details about how to install external modules in Talend Help Center (https://help.talend.com).
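For reference, in plain Pig the Parquet storer is typically invoked as below (a sketch under two assumptions: the package name parquet.pig, which varies across Parquet releases, and the parquet.compression job property used by parquet-mr to select the compression mode):

    -- Select the compression mode through a job property
    SET parquet.compression gzip;
    STORE result INTO '/user/talend/out_parquet' USING parquet.pig.ParquetStorer();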

HCatalog Configuration

Fill in the following fields to configure HCatalog-managed tables on HDFS (Hadoop Distributed File System):

Distribution and Version:

Select the Hadoop distribution to which you have defined the connection in the tPigLoad component used in the same Pig process as the active tPigStoreResult.

If that tPigLoad component connects to a custom Hadoop distribution, you must select Custom for this tPigStoreResult component too. The Custom jar table then appears, in which you need to add only the jar files required by the selected Store function.

HCat metastore: Enter the
location of the HCatalog’s metastore, which is actually Hive’s
metastore.

Database: The database in which
tables are placed.

Table: The table in which data is
stored.

Partition filter: Fill this field with the partition keys used to list partitions by filter.

Note:

The HCatalog Configuration area is enabled only when you select HCatStorer from the Store function list. For further information about the usage of HCatalog, see https://cwiki.apache.org/confluence/display/Hive/HCatalog. For further information about the usage of Partition filter, see https://cwiki.apache.org/confluence/display/HCATALOG/Design+Document+-+Java+APIs+for+HCatalog+DDL+Commands.
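For reference, in plain Pig HCatStorer addresses the target as database.table and accepts an optional partition specification (a sketch; the package name org.apache.hive.hcatalog.pig applies to recent HCatalog releases and may differ in older ones, and the database, table, and partition values are placeholders):

    -- Store into a partitioned, HCatalog-managed table
    STORE result INTO 'mydb.mytable'
        USING org.apache.hive.hcatalog.pig.HCatStorer('ds=20230816');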

HBase configuration

This area is available to the HBaseStorage function. The
parameters to be set are:

Distribution and Version:

Select the Hadoop distribution to which you have defined the connection in the tPigLoad component used in the same Pig process as the active tPigStoreResult.

If that tPigLoad component connects to a custom Hadoop distribution, you must select Custom for this tPigStoreResult component too. The Custom jar table then appears, in which you need to add only the jar files required by the selected Store function.

Zookeeper quorum:

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between your Studio and your database. Note that when you configure Zookeeper, you might need to explicitly set the zookeeper.znode.parent property to define the path to the root znode that contains all the znodes created and used by your database; in that case, select the Set Zookeeper znode parent check box to define this property.
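When the same values are needed in a hand-written Pig script, they map to standard HBase client properties that can be set at the top of the script (a sketch assuming your cluster reads them from the job configuration; the host names and znode path are placeholders):

    SET hbase.zookeeper.quorum 'zkhost1,zkhost2,zkhost3';
    SET hbase.zookeeper.property.clientPort '2181';
    -- Some distributions use a non-default root znode, for example /hbase-unsecure
    SET zookeeper.znode.parent '/hbase-unsecure';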

Zookeeper client port:

Type in the number of the client listening port of the Zookeeper service you are
using.

Table name:

Enter the name of the HBase table you need to store data in. The
table must exist in the target HBase.

Row key column:

Select the column used as the row key column of the HBase
table.

Store row key column to HBase column:

Select this check box to make the row key column an HBase column belonging to a specific column family.

Mapping:

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

The Column column of this table is automatically filled once you have defined the schema; in the Family name column, enter the column families you want to create or use to group the columns listed in the Column column. For further information about column families, see the Apache HBase documentation on column families.
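For reference, this family/column mapping corresponds to HBaseStorage's column descriptor string in plain Pig; when storing, the first field of the relation serves as the row key and is not listed among the columns (a sketch; the table, family, and column names are placeholders):

    -- The first field of 'result' becomes the row key; the remaining fields
    -- map, in order, to the listed family:qualifier columns
    STORE result INTO 'hbase://mytable'
        USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf1:name cf1:age');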

Field separator

Enter a character, string, or regular expression to separate fields in the transferred data.

Note:

This field is enabled only when you select PigStorage from the Store function list.

Sequence Storage configuration

This area is available only to the SequenceFileStorage function. Since a SequenceFile
record consists of binary key/value pairs, the parameters to be set
are:

Key column:

Select the Key column of a key/value record.

Value column:

Select the Value column of a key/value record.

Advanced settings

Register jar

Click the [+] button to add rows to the table and, from these rows, browse to the jar files to be added. For example, to register a jar file called piggybank.jar, click the [+] button once to add one row, then click this row to display the […] browse button, and click this button to browse to the piggybank.jar file using the [Select Module] wizard.
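Registering a jar in this table corresponds to Pig Latin's REGISTER statement (a sketch; the path is a placeholder):

    -- Make the storers and UDFs packaged in piggybank.jar available to the script
    REGISTER /path/to/piggybank.jar;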

HBaseStorage configuration

Add and set more HBaseStorage storer options in this table. The options are:

  • loadKey: enter true to store the row key as the first column of the result schema; otherwise, enter false;

  • gt: the minimum key value;

  • lt: the maximum key value;

  • gte: the minimum key value (included);

  • lte: the maximum key value (included);

  • limit: the maximum number of rows to retrieve per region;

  • caching: the number of rows to cache;

  • caster: the converter to use for writing values to HBase, for example, Utf8StorageConverter.
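In plain Pig, these options are passed as the second argument of HBaseStorage (a sketch; the table, columns, and option values are placeholders):

    STORE result INTO 'hbase://mytable'
        USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
            'cf1:name cf1:age', '-caster HBaseBinaryConverter');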

Define the jars to register

This check box appears when you are using HCatStorer. By default, you can leave it clear, as the Studio registers the required jar files automatically. In case any jar file is missing, you can select this check box to display the Register jar for HCatalog table and set the correct path to the missing jar.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the
Job level as well as at each component level.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
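For example, assuming the component instance is named tPigStoreResult_1, its error message would be read after execution with an expression such as (String)globalMap.get("tPigStoreResult_1_ERROR_MESSAGE").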

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is always used to end a Pig process and needs a tPigLoad component at the beginning of that chain to provide data.

This component automatically reuses the connection created by the tPigLoad component in that Pig process.

Note that if you use Hortonworks Data Platform V2.0.0, the type of the operating system for running the distribution and a Talend Job must be the same, such as Windows or Linux. Otherwise, you have to use Talend JobServer to execute the Job in the same type of operating system in which the Hortonworks Data Platform V2.0.0 distribution you are using is run.

Prerequisites

The Hadoop distribution must be properly installed, so as to guarantee interaction with Talend Studio. The following list presents MapR-related information as an example.

  • Ensure that you have installed the MapR client on the machine where the Studio is, and added the MapR client library to the PATH variable of that machine. According to MapR's documentation, the library or libraries of a MapR client corresponding to each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example, in the Job Run VM arguments area of the Run/Debug view in the [Preferences] dialog box in the Window menu. This argument provides the Studio with the path to the native library of that MapR client. This allows subscription-based users to make full use of the Data viewer to view, locally in the Studio, the data stored in MapR.
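For example, on a Linux machine the argument might look like -Djava.library.path=/opt/mapr/lib (a placeholder path; point it to the actual location of your MapR client's native library).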

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Limitation

Knowledge of Pig scripts is required. If you select HCatStorer as the store function, knowledge of HCatalog DDL (HCatalog Data Definition Language, a subset of Hive Data Definition Language) is required. For further information about HCatalog DDL, see https://cwiki.apache.org/confluence/display/Hive/HCatalog.

Related Scenario

