Scenario: Normalizing data using Map/Reduce components
This scenario applies only to a subscription-based Talend solution with Big Data.
You can produce the Map/Reduce version of the Job described earlier using Map/Reduce components. This Talend Map/Reduce Job generates Map/Reduce code and is run natively in Hadoop.
Note that the Talend Map/Reduce components are available to subscription-based Big Data users only, and this scenario can be replicated only with Map/Reduce components.
The sample data used in this scenario is the same as in the scenario explained earlier.
ldap, db2, jdbc driver, grid computing, talend architecture ,
content, environment,,
tmap,,
eclipse, database,java,
postgresql, tmap,
database,java,sybase,
deployment,,
repository,
database,informix,java
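Normalizing this data means splitting each comma-separated row into one keyword per output row, trimming surrounding whitespace, and discarding empty entries. Purely for illustration, a hand-written Hadoop mapper performing this row-splitting could look like the sketch below; the class and names are hypothetical, since the Job generates equivalent code for you when it runs. It is not the code the Studio produces.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch: emits one trimmed keyword per output row
// for every comma-separated value found on an input line.
public class NormalizeMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line holds comma-separated keywords.
        for (String token : value.toString().split(",")) {
            String keyword = token.trim();
            // Skip the empty entries produced by trailing or
            // doubled commas in the sample data.
            if (!keyword.isEmpty()) {
                context.write(new Text(keyword), NullWritable.get());
            }
        }
    }
}

Run against the first sample line, such a mapper would emit the rows "ldap", "db2", "jdbc driver", "grid computing" and "talend architecture".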
Since Talend Studio allows you to convert a Job between its Map/Reduce and Standard (Non Map/Reduce) versions, you can convert the scenario explained earlier to create this Map/Reduce Job. This way, many of the components can keep their original settings, which reduces the work needed to design this Job.
Before starting to replicate this scenario, ensure that you have the appropriate rights and permissions to access the Hadoop distribution to be used. Then proceed as follows: