tHMapFile
Runs a
Talend Data Mapper
map where input and output structures may differ, as a Spark batch
execution.
tHMapFile transforms data from a
single source in a Spark environment.
tHMapFile properties for Apache Spark Batch
These properties are used to configure tHMapFile running in the Spark Batch Job framework.
The Spark Batch
tHMapFile component belongs to the Processing family.
This component is available in Talend Platform products with Big Data and
in Talend Data Fabric.
Basic settings
Storage |
To connect to an HDFS installation, select the Define a storage configuration component check box and then select the configuration component to use from the drop-down list. This option requires you to have previously configured the connection to the HDFS system to be used. If you leave the Define a storage configuration component check box unselected, you can only transform files locally. |
Configure Component |
To configure the component, click the […] button and, in the Component Configuration window, select the map to use and define how records are delimited. |
Input |
Click the […] button to define the path to the input file. |
Output |
Click the […] button to define the path to where the output file is to be stored. |
Action |
From the drop-down list, select the action to be performed on the output file. |
Open Map Editor |
Click the […] button to open the Map Editor and create or edit the map to be used. For more information, see the Talend Data Mapper documentation. |
Die on error |
This check box is selected by default. Clear the check box to skip any rows on error and complete the processing of the error-free rows. If you clear the check box, you can choose how rows in error are handled, for example by storing them as rejects.
Note: Any errors that occur while trying to store a reject are logged and the
processing continues. |
Merge result to single file |
By default, tHMapFile creates several part files. Select this check box to merge these files into a single output file. Additional options let you manage the source and
the target files.
Warning: Using this option with an Avro output creates an
invalid Avro file. Because each part starts with an Avro schema header, the merged file would contain more than one Avro schema, which is invalid. |
Usage
Usage rule |
This component is used with a tHDFSConfiguration component, which defines the connection to the HDFS system where the jar files dependent on the Job are transferred. |
Transforming data in a Spark environment
This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.
The following scenario creates a two-component Job that transforms data in
a Spark environment using a map that was previously created in
Talend Data Mapper
.
tHDFSConfiguration is used in this scenario by Spark to connect
to the HDFS system where the jar files dependent on the Job are transferred.
In the Spark Configuration tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:
-
Yarn mode (Yarn client or Yarn cluster):
-
When using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab.
-
When using HDInsight, specify the blob to be used for Job deployment in the Windows Azure Storage configuration area in the Spark configuration tab.
-
When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job deployment in the Spark configuration tab.
-
When using Qubole, add a tS3Configuration to your Job to write your actual business data in the S3 system with Qubole. Without tS3Configuration, this business data is written in the Qubole HDFS system and destroyed once you shut down your cluster.
-
When using on-premises distributions, use the configuration component corresponding to the file system your cluster is using. Typically, this system is HDFS, so use tHDFSConfiguration.
-
Standalone mode: use the configuration component corresponding to the file system your cluster is using, such as tHDFSConfiguration or tS3Configuration.
If you are using Databricks without any configuration component present in your Job, your business data is written directly in DBFS (Databricks Filesystem).
Downloading the input files
-
Retrieve the input files for this scenario from the Downloads tab of the online version of this page at https://help.talend.com.
The thmapfile_transform_scenario.zip file
contains two files: gdelt.json, a JSON file built using data
from the GDELT project (http://gdeltproject.org), and
gdelt-onerec.json, a subset of gdelt.json
containing just one record, which is used as the sample document for creating the
structure in Talend Data Mapper. -
Save the thmapfile_transform_scenario.zip file on your
local machine and unpack the .zip file.
Creating the input and output structures
-
In the Integration perspective, in the Repository tree view, expand Metadata >
Hierarchical Mapper, right-click Structures, and then click New >
Structure. -
In the New Structure dialog box that opens,
select Import a structure definition, and then click
Next. -
Select JSON Sample Document, and then click
Next. -
Select Local file, browse to the location on your
local file system where you saved the source files, import gdelt-onerec.json as your sample document, and then click Next. -
Give your new structure a name, gdelt-onerec in
this example, click Next, and then click Finish.
Creating the map
-
In the Repository tree view, expand Metadata > Hierarchical Mapper, right-click Maps, and then click New >
Map. -
In the Select Type of New Map dialog box that
opens, select Standard Map and then click Next. -
Give your new map a name, json2xml in this
example, and then click Finish. -
Drag the gdelt-onerec structure you created
earlier into both the Input and Output sides of the map. -
On the Output side of the map, change the
representation used from JSON to XML by double-clicking Output
(JSON) and selecting the XML output
representation. -
Drag the Root element from the Input side of the map to the Root element on the Output side. This
maps each element from the Input side to its
corresponding element on the Output side, resulting in
a very simple map intended just for testing purposes. - Press Ctrl+S to save your map.
Adding the components
-
In the
Integration
perspective, create a new
Job and call it thmapfile_transform. -
Click any point in the design workspace, start typing tHDFSConfiguration, and then click the name of the component when it
appears in the proposed list to select it. Note that for testing purposes, you can also perform this scenario locally. In
that case, do not add the tHDFSConfiguration
component and skip the Configuring the connection to the
file system to be used by Spark section below. -
Do the same to add a tHMapFile component, but do
not link the two components.
Configuring the connection to the file system to be used by Spark
Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration tab.
-
Double-click tHDFSConfiguration to open its Component view.
Spark uses this component to connect to the HDFS system to which the jar
files dependent on the Job are transferred. -
If you have defined the HDFS connection metadata under the Hadoop
cluster node in Repository, select
Repository from the Property
type drop-down list and then click the
[…] button to select the HDFS connection you have
defined from the Repository content wizard. For further information about setting up a reusable
HDFS connection, search for centralizing HDFS metadata on Talend Help Center
(https://help.talend.com). If you complete this step, you can skip the following steps about configuring
tHDFSConfiguration because all the required fields
should have been filled automatically. -
In the Version area, select
the Hadoop distribution you need to connect to and its version. -
In the NameNode URI field,
enter the location of the machine hosting the NameNode service of the cluster.
If you are using WebHDFS, the location should be
webhdfs://masternode:portnumber; WebHDFS with SSL is not
supported yet. Example URI forms are shown after these steps. -
In the Username field, enter
the authentication information used to connect to the HDFS system to be used.
Note that the user name must be the same as the one you entered in the Spark configuration tab.
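For illustration only, the NameNode URI typically takes one of the following forms. The host names and ports here are placeholders, and the default ports vary by Hadoop distribution and version, so check your cluster configuration:
hdfs://namenode_host:8020
webhdfs://masternode_host:50070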
Defining the properties of tHMapFile
-
In the Job, select the tHMapFile component to define its
properties. -
Select the Define a storage configuration
component check box and then select the name of the component to use
from those available in the drop-down list, tHDFSConfiguration_1 in this example. Note that if you leave the Define a storage configuration
component check box unselected, you can only transform files
locally. -
Click the […] button and, in the Component Configuration window, click the Select button next to the Record
Map field. -
In the Select a Map dialog box that opens,
select the map you want to use and then click OK.
In this example, use the json2xml map you just
created. - Click Next.
-
Select an appropriate record delimiter for your data that tells the component
where each new record begins. In this example, each record is on a new line, so select Separator and specify the newline character. - Click Finish.
- Click the […] button next to the Input field to define the path to the input file, /talend/input/gdelt.json in this example.
-
Click the […] button next to the Output field to define the path to where the output file is
to be stored, /talend/output in this
example. - Leave the other settings unchanged. If your input file is not yet on HDFS, see the sample upload commands after these steps.
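If the gdelt.json file is not yet on HDFS, you can upload it from the machine where you unpacked the archive. The following commands are a minimal sketch using the standard hdfs command-line client and the paths from this example; adjust the paths and the user to match your cluster:
# Create the input directory used in this example and upload the source file
hdfs dfs -mkdir -p /talend/input
hdfs dfs -put gdelt.json /talend/input/gdelt.json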
Selecting the Spark mode
Depending on the Spark cluster to be used, select a Spark mode for your Job.
The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly define in the
Spark Configuration tab or in the components
used in your Job.
-
Click Run to open its view and then click the
Spark Configuration tab to display its view
for configuring the Spark connection. -
Select the Use local mode check box to test your Job locally.
In local mode, the Studio builds the Spark environment itself on the fly in order to
run the Job. Each processor of the local machine is used as a Spark
worker to perform the computations. In this mode, your local file system is used; therefore, deactivate the
configuration components such as tS3Configuration or
tHDFSConfiguration that provide connection
information to a remote file system, if you have placed these components
in your Job. You can launch
your Job without any further configuration. -
Clear the Use local mode check box to display the
list of the available Hadoop distributions and from this list, select
the distribution corresponding to your Spark cluster to be used. Depending on the
distribution you select, Talend supports one or more of the following Spark modes:
-
Standalone
-
Yarn client
-
Yarn cluster
For Cloudera Altus, Talend supports:
-
Yarn cluster
Your Altus cluster should run on the following Cloud
providers:
-
Azure
The support for Altus on Azure is a technical
preview feature.
-
AWS
As a Job relies on Avro to move data among its components, it is recommended to set your
cluster to use Kryo to handle the Avro types. This not only helps avoid
this known Avro issue but also
brings inherent performance gains. The Spark property to be set in your
cluster is:
spark.serializer org.apache.spark.serializer.KryoSerializer
A sketch of how this property can be set is shown after these steps. If you cannot find the distribution corresponding to yours in this
drop-down list, this means the distribution you want to connect to is not officially
supported by Talend. In this situation, you can select Custom, then select the Spark
version of the cluster to be connected to and click the
[+] button to display the dialog box in which you can
alternatively:-
Select Import from existing
version to import an officially supported distribution as base
and then add other required jar files which the base distribution does not
provide. -
Select Import from zip to
import the configuration zip for the custom distribution to be used. This zip
file should contain the libraries of the different Hadoop/Spark elements and the
index file of these libraries. In
Talend Exchange, members of the
Talend community have shared some ready-for-use configuration zip files
which you can download from this Hadoop configuration
list and use directly in your connection. However, because of
the ongoing evolution of the different Hadoop-related projects, you might not be
able to find the configuration zip corresponding to your distribution in this
list; in that case, it is recommended to use the Import from
existing version option to take an existing distribution as base
and add the jars required by your distribution. Note that custom versions are not officially supported by
Talend.
Talend
and its community provide you with the opportunity to connect to
custom versions from the Studio but cannot guarantee that the configuration of
whichever version you choose will be easy. As such, you should only attempt to
set up such a connection if you have sufficient Hadoop and Spark experience to
handle any issues on your own.
For a step-by-step example about how to connect to a custom
distribution and share this connection, see Hortonworks.
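As a sketch only, the Kryo serializer property mentioned above can be set on the cluster side, typically in spark-defaults.conf, or added as a key/value pair in the Advanced properties table of the Spark configuration tab in the Studio; the exact location and field names depend on your distribution and Studio version:
# spark-defaults.conf (cluster side)
spark.serializer org.apache.spark.serializer.KryoSerializer
# Studio: Spark configuration tab > Advanced properties
# Property: spark.serializer    Value: org.apache.spark.serializer.KryoSerializer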
Saving and executing the Job
- Press Ctrl+S to save your Job.
- In the Run tab, click Run to execute the Job.
-
Browse to the location on your file system where the output files are stored to
check that the transformation was performed successfully, for example using the commands shown below.
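As a minimal sketch, assuming the output was written to HDFS under the /talend/output path used in this example and that the default part-file naming is kept (an assumption), you could inspect it with:
# List the part files (or the single merged file) produced by the Job
hdfs dfs -ls /talend/output
# Display the transformed XML records
hdfs dfs -cat /talend/output/part-*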