Warning
This component is available in the Palette of Talend Studio only if you have subscribed to one of the Talend solutions with Big Data.
Component family | Big Data / Pig
Function | tPigReplicate is used after other Pig components to replicate the incoming data flow into identical output flows.
Purpose | This component allows you to perform different operations on the same flow of data.
Basic settings
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Since version 5.6, both the Built-In mode and the Repository mode are available in any of the Talend solutions. Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection. Click Sync columns to retrieve the schema from the preceding component connected in the Job.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Advanced settings
tStatCatcher Statistics | Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Global Variables | ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide.
Usage | This component is not startable (green background); it requires an input flow from a preceding Pig component.
Prerequisites | The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio. For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using.
Connections |
Outgoing links (from this component to another): Row: Pig combine. This link joins Pig components together so that they compose a single Pig process.
Incoming links (from one component to this one): Row: Pig combine.
For further information regarding connections, see Talend Studio User Guide.
Log4j | The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide. For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
The Job in this scenario uses Pig components to handle names and states loaded from a given HDFS system. It reads and replicates the input flow, then sorts the two identical flows by name and by state respectively, and writes the results back into that HDFS system.
Before starting to replicate this Job, ensure that you have the appropriate rights to read and write data in the Hadoop distribution to be used and that Pig is properly installed in that distribution.
-
In the Integration perspective of Talend Studio, create an empty Job, named Replicate for example, from the Job Designs node in the Repository tree view. For further information about how to create a Job, see the Talend Studio User Guide.
-
Drop tPigLoad, tPigReplicate, two tPigSort and two tPigStoreResult onto the workspace.
The tPigLoad component reads data from the given HDFS system. The sample data used in this scenario reads as follows:
Andrew Kennedy;Mississippi
Benjamin Carter;Louisiana
Benjamin Monroe;West Virginia
Bill Harrison;Tennessee
Calvin Grant;Virginia
Chester Harrison;Rhode Island
Chester Hoover;Kansas
Chester Kennedy;Maryland
Chester Polk;Indiana
Dwight Nixon;Nevada
Dwight Roosevelt;Mississippi
Franklin Grant;Nebraska
The location of the data in this scenario is /user/ychen/raw/Name&State.csv.
-
Connect them using the Row > Pig combine
links.
-
Double-click tPigLoad to open its Component view.
-
Click the button next to Edit schema to open the schema editor.
-
Click the button twice to add two rows and name them Name and State, respectively.
-
Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.
-
In the Mode area, select Map/Reduce because the Hadoop distribution to be used in this scenario is installed on a remote machine. Once it is selected, the parameters to be set appear.
-
In the Distribution and the Version lists, select the Hadoop distribution to be used.
-
In the Load function list, select PigStorage.
-
In the NameNode URI field and the JobTracker host field, enter the locations of the NameNode and the JobTracker to be used for Map/Reduce, respectively.
-
In the Input file URI field, enter the location of the data to be read from HDFS. In this example, the location is /user/ychen/raw/NameState.csv.
-
In the Field separator field, enter the semicolon ;.
-
Double-click tPigReplicate to open its Component view.
-
Click the button next to Edit schema to open the schema editor and verify whether its schema is identical with that of the preceding component.
Note
If this component does not have the same schema as the preceding component, a warning icon appears. In this case, click the Sync columns button to retrieve the schema from the preceding component; once done, the warning icon disappears.
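To make the Pig logic behind these steps easier to picture, the tPigLoad and tPigReplicate settings above roughly correspond to loading the file with PigStorage and letting the same data feed two branches. The following Pig Latin sketch is only illustrative; the alias names are assumptions, not the ones the Studio generates:

    -- Load the semicolon-separated file, using the path entered in the Input file URI field.
    raw = LOAD '/user/ychen/raw/NameState.csv' USING PigStorage(';') AS (Name:chararray, State:chararray);
    -- Replicating the flow simply means that the raw relation is consumed by two downstream branches
    -- (see the ORDER and STORE sketches after the next steps).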
Two tPigSort components are used to sort the two
identical output flows: one based on the Name
column and the other on the State column.
-
Double-click the first tPigSort component to open its Component view and define the sorting by name.
-
In the Sort key table, add one row by clicking the button under this table.
-
In the Column column, select Name from the drop-down list and select ASC in the Order column.
-
Double-click the other tPigSort to open its Component view and define the sorting by state.
-
In the Sort key table, add one row, then select State from the drop-down list in the Column column and select ASC in the Order column.
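In Pig Latin terms, each tPigSort corresponds to an ORDER BY statement applied to one of the replicated flows. A minimal sketch, continuing the illustrative raw alias introduced above:

    byName = ORDER raw BY Name ASC;   -- first tPigSort: sorting by name
    byState = ORDER raw BY State ASC; -- second tPigSort: sorting by state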
Two tPigStoreResult components are used to write
each of the sorted data into HDFS.
-
Double-click the first tPigStoreResult component to open its Component view and write the data sorted by name.
-
In the Result file field, enter the directory where the data will be written. This directory will be created if it does not exist. In this scenario, it is /user/ychen/sort/tPigreplicate/byName.csv.
-
Select Remove result directory if exists.
-
In the Store function list, select PigStorage.
-
In the Field separator field, enter the semicolon ;.
-
Do the same for the other tPigStoreResult component but set another directory for the data sorted by state. In this scenario, it is /user/ychen/sort/tPigreplicate/byState.csv.
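Each tPigStoreResult corresponds to a STORE statement that writes one of the sorted relations back to HDFS with PigStorage. A sketch under the same assumptions as the previous ones; the rmf commands roughly stand in for the Remove result directory if exists option:

    rmf /user/ychen/sort/tPigreplicate/byName.csv
    rmf /user/ychen/sort/tPigreplicate/byState.csv
    STORE byName INTO '/user/ychen/sort/tPigreplicate/byName.csv' USING PigStorage(';');
    STORE byState INTO '/user/ychen/sort/tPigreplicate/byState.csv' USING PigStorage(';');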
Then you can run this Job.
-
Press F6 to run this Job.
Once done, browse to the locations where the results were written in HDFS.
The following image presents the results sorted by name:
The following image presents the results sorted by state:
If you need to obtain more details about the Job, it is recommended to use the web console of the Jobtracker provided by the Hadoop distribution you are using.
In the Jobtracker console, you can easily find the execution status of your Pig Job because the name of the Job is automatically created by concatenating the name of the project that contains the Job, the name and version of the Job itself and the label of the first tPigLoad component used in it. The naming convention of a Pig Job in Jobtracker is ProjectName_JobNameVersion_FirstComponentName. For the Job built in this scenario, the name would therefore look something like MYPROJECT_Replicate_0.1_tPigLoad_1, where MYPROJECT stands for your own project name.