
tHMapFile – Docs for ESB 7.x

tHMapFile

Runs a Talend Data Mapper map, where input and output structures may differ, as a Spark batch execution.

tHMapFile transforms data from a single source in a Spark environment.

tHMapFile properties for Apache Spark Batch

These properties are used to configure tHMapFile running in the Spark Batch Job framework.

The Spark Batch
tHMapFile component belongs to the Processing family.

This component is available in Talend Platform products with Big Data and
in Talend Data Fabric.

Basic settings

Storage

To connect to an HDFS installation, select the Define a storage configuration component
check box and then select the name of the component to use from those
available in the drop-down list.

This option requires you to have previously configured the
connection to the HDFS installation to be used, as described in the
documentation
for the tHDFSConfiguration
component.

If you leave the Define a
storage configuration component
check box unselected,
you can only convert files locally.

Configure Component

To configure the component, click the […] button and, in the Component Configuration window, perform
the following actions.

  1. Click the Select button next to the Record Map field and then, in
    the Select a Map dialog box
    that opens, select the map you want to use and then click
    OK.

    This map must have been previously created in
    Talend Data Mapper
    .

    Note that the input and output representations
    are those defined in the map, and cannot be changed in the
    component.

  2. Click Next.

  3. Tell the component where each new record begins.
    In order for you to be able to do so, you need to fully
    understand the structure of your data.

    Exactly how you do this varies depending on the
    input representation being used, and you will be presented with
    one of the following options.

    1. Select an appropriate
      record delimiter for your data. Note that you must
      specify this value without quotes.

      • Separator lets you specify a
        separator indicator, such as \n, to identify a new
        line.

        Supported indicators are \n for a Unix-type new line,
        \r\n for Windows, \r for Mac, and \t for tab
        characters.

      • Start/End with lets you specify the
        initial characters that indicate a new record,
        such as <root, or the characters that indicate
        where a record ends. This can also be a regular
        expression.

        Start with also supports new lines: \n for a
        Unix-type new line, \r\n for Windows, \r for Mac,
        and \t for tab characters.

      • Sample
        File
        : To test the signature with a
        sample file, click the […] button, browse to the file you
        want to use as a sample, click Open, and then click
        Run to test
        your sample.

        Testing the
        signature lets you check that the total number of
        records and their minimum and maximum lengths
        correspond to what you expect based on your
        knowledge of the data. This step assumes you have
        a local subset of your data to use as a
        sample (see the sketch after these configuration steps).

      • Click Finish.

    2. If your input representation is COBOL or
      Flat with positional and/or binary encoding properties,
      define the signature for the input record structure:

      • Input Record root
        corresponds to the root element in your input
        record.
      • Minimum Record
        Size
        corresponds to the size in bytes
        of the smallest record. If you set this value too
        low, you may encounter performance issues, since
        the component will perform more checks than
        necessary when looking for a new record.

      • Maximum Record
        Size
        corresponds to the size in bytes
        of the largest record, and is used to determine
        how much memory is allocated to read the
        input.

      • Sample from Workspace or
        Sample from File System: To
        test the signature with a sample file, click the
        […]
        button, and then browse to the file you want to
        use as a sample.

        Testing the signature lets you
        check that the total number of records and their
        minimum and maximum lengths correspond to what you
        expect based on your knowledge of the data. This
        step assumes you have a local subset of your data
        to use as a sample.

      • Footer Size
        corresponds to the size in bytes of the footer, if
        any. At runtime, the footer will be ignored rather
        than being mistakenly included in the last record.
        Leave this field empty if there is no footer.

      • Click the Next button to open
        the Signature
        Parameters
        window, select the fields
        that define the signature of your record input
        structure (that is, to identify where a new record
        begins), update the Operation and Value columns as
        appropriate, and then click Next.

      • In the Record
        Signature Test
        window that opens, check
        that your Records are correctly delineated by
        scrolling through them with the
        Back and
        Next buttons and performing
        a visual check, and then click
        Finish.
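To make the signature test above more concrete, the following is a minimal, local sketch of what such a test reports: it splits a sample file on a chosen record separator and prints the record count and the minimum and maximum record lengths. This is an illustration only, not the component's own implementation; the sample file name and the \n separator are assumptions.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Rough local equivalent of testing the signature with a sample file:
    // split the sample on the chosen separator and report the record count
    // and the minimum/maximum record length for comparison with what you
    // expect from your data.
    public class SignatureCheckSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical sample file and separator, for illustration only.
            String sample = new String(
                    Files.readAllBytes(Paths.get("gdelt-onerec-sample.json")),
                    StandardCharsets.UTF_8);
            String separator = "\n";          // could also be "\r\n" or "\r"
            String[] records = sample.split(separator);
            int min = Integer.MAX_VALUE, max = 0;
            for (String record : records) {
                min = Math.min(min, record.length());
                max = Math.max(max, record.length());
            }
            System.out.printf("records=%d, min length=%d, max length=%d%n",
                    records.length, min, max);
        }
    }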

Input

Click the […]
button to define the path to where the input file is stored.

Output

Click the […]
button to define the path to where the output files will be stored.

Action

From the drop-down list, select:

  • Create if you want
    the mapping process to create a new file.

  • Overwrite if you want
    the mapping process to overwrite an existing file.

Open Map Editor

Click the […]
button to open the map for editing in the Map
Editor
of
Talend Data Mapper
.

For more information, see
Talend Data Mapper User Guide
.

Die on error

This check box is selected by default.

Clear the check box to skip any rows on error and complete the
process for error-free rows.

If you clear the check box, you can handle the rejected records in either of the following ways:

  • Row > Rejects connection: connect the tHMapFile component to an
    output component, for example tAvroOutput, using a Row > Rejects
    connection. In the output component, ensure
    that you add a fixed metadata with the following columns:

    • inputRecord: contains the input record
      rejected during the transformation.
    • recordId: refers to the record
      identifier. For a text or binary input, the recordId
      specifies the start offset of the record in the
      input file. For an AVRO input, the recordId
      specifies the timestamp when the input was
      processed.
    • errorMessage: contains the
      transformation status with details of the cause of
      the transformation error.
  • Reject file: retrieve the rejected
    records in a file. Either of these mechanisms triggers this
    feature: (1) a context variable
    (talend_transform_reject_file_path) or (2) a system variable set
    in the Advanced job parameters
    (spark.hadoop.talend.transform.reject.file.path).
    A configuration sketch follows the note below.

    When you set the file path on the Hadoop Distributed File
    System (HDFS), no further configuration is needed. When
    you set the file path on Amazon S3 or any other Hadoop-compatible
    file system, add the associated Spark advanced
    configuration parameter.

    In case of errors at runtime, tHMapFile checks if one of
    the mechanisms exists and, if so, appends the rejected
    record to the designated file. The reject file content
    includes the concatenation of the rejected records without
    any additional metadata.

    If the file system you use does not support
    appending to a file, a separate file is created for each
    rejection. The file uses the provided file path as the
    prefix and adds a suffix that is the offset of the input
    file and the size of the rejected record.

Note: Any errors that occur while storing the rejected record are logged and
processing continues.
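As a rough illustration of the reject-file mechanism described above, the sketch below shows the programmatic equivalent of the spark.hadoop.talend.transform.reject.file.path property. In the Studio you would normally add this key/value pair in the Advanced properties of the Spark configuration tab rather than in code; the application name and HDFS path are placeholders.

    import org.apache.spark.SparkConf;

    // Programmatic form of the reject-file property; in a Studio Job the same
    // key/value pair is set in the Spark configuration, not in user code.
    public class RejectPathConfigSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("thmapfile_transform")   // placeholder Job name
                    // Hypothetical HDFS path: rejected records are appended to
                    // this file (or to per-record files using this path as a
                    // prefix on file systems without append support).
                    .set("spark.hadoop.talend.transform.reject.file.path",
                         "hdfs://masternode:8020/talend/rejects/thmapfile.rej");
            System.out.println(
                    conf.get("spark.hadoop.talend.transform.reject.file.path"));
        }
    }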

Merge result to single file

By default, tHMapFile creates several part files. Select this check
box to merge these files into a single file.

The following options are used to manage the source and
the target files:

  • Merge File
    Path
    : enter the path to the file which will
    contain the merged content from all parts.
  • Remove source dir: select
    this check box to remove the source files after the merge.

  • Override target file:
    select this check box to override the file already existing in the target
    location. This option does not override the folder.

  • Include
    Header
    : select this check box to add the CSV
    header to the beginning of the merged file. This option is only
    used for CSV outputs. For other representations, it has no
    effect on the target file.
Warning: Using this option with an Avro output creates an
invalid Avro file. Since each part starts with an Avro Schema header,
the merged file would have more than one Avro Schema, which is
invalid.
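The warning applies because every Avro part file starts with its own schema header. If you need a single valid Avro file, merge the parts with a schema-aware tool that re-reads the records and writes them once under a single schema. The sketch below shows one possible approach with the Avro Java API; the part file names are hypothetical.

    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import java.io.File;

    // Schema-aware merge of Avro part files: read the schema once, then append
    // every record to a single container file with a single header.
    public class AvroMergeSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical part files produced by the Job.
            File[] parts = { new File("part-00000.avro"), new File("part-00001.avro") };

            Schema schema;
            try (DataFileReader<GenericRecord> first = new DataFileReader<>(
                    parts[0], new GenericDatumReader<GenericRecord>())) {
                schema = first.getSchema();
            }

            try (DataFileWriter<GenericRecord> writer = new DataFileWriter<>(
                    new GenericDatumWriter<GenericRecord>(schema))) {
                writer.create(schema, new File("merged.avro"));
                for (File part : parts) {
                    try (DataFileReader<GenericRecord> reader = new DataFileReader<>(
                            part, new GenericDatumReader<GenericRecord>())) {
                        for (GenericRecord record : reader) {
                            writer.append(record);  // one header, many records
                        }
                    }
                }
            }
        }
    }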

Usage

Usage rule

This component is used with a tHDFSConfiguration component which defines the
connection to the HDFS storage, or as a standalone component for mapping
local files only.

Transforming data in a Spark environment

This scenario applies only to subscription-based Talend Platform products with Big Data and Talend Data Fabric.

The following scenario creates a two-component Job that transforms data in
a Spark environment using a map that was previously created in
Talend Data Mapper
.

tHMapFile_1.png

tHDFSConfiguration is used in this scenario by Spark to connect
to the HDFS system where the jar files dependent on the Job are transferred.

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

Downloading the input files

  1. Retrieve the input files for this scenario from the Downloads tab of the online version of this page at https://help.talend.com.

    The thmapfile_transform_scenario.zip file
    contains two files: gdelt.json is a JSON file built using data
    from the GDELT project http://gdeltproject.org, and
    gdelt-onerec.json is a subset of gdelt.json
    containing just one record and is used as a sample document for creating the
    structure in Talend Data Mapper.
  2. Save the thmapfile_transform_scenario.zip file on your
    local machine and unpack the .zip file.

Creating the input and output structures

  1. In the Integration perspective, in the Repository tree view, expand Metadata >
    Hierarchical Mapper, right-click Structures, and then click New >
    Structure.
  2. In the New Structure dialog box that opens,
    select Import a structure definition, and then click
    Next.
  3. Select JSON Sample Document, and then click
    Next.
  4. Select Local file, browse to the location on your
    local file system where you saved the source files, import gdelt-onerec.json as your sample document, and then click Next.
  5. Give your new structure a name, gdelt-onerec in
    this example, click Next, and then click Finish.

Creating the map

  1. In the Repository tree view, expand Metadata > Hierarchical Mapper, right-click Maps, and then click New >
    Map.
  2. In the Select Type of New Map dialog box that
    opens, select Standard Map and then click Next.
  3. Give your new map a name, json2xml in this
    example, and then click Finish.
  4. Drag the gdelt-onerec structure you created
    earlier into both the Input and Output sides of the map.
  5. On the Output side of the map, change the
    representation used from JSON to XML by double-clicking Output
    (JSON)
    and selecting the XML output
    representation.
  6. Drag the Root element from the Input side of the map to the Root element on the Output side. This
    maps each element from the Input side with its
    corresponding element on the Output side, which is
    a very simple map just for testing purposes.
  7. Press Ctrl+S to save your map.

Adding the components

  1. In the
    Integration
    perspective, create a new
    Job and call it thmapfile_transform.
  2. Click any point in the design workspace, start typing tHDFSConfiguration, and then click the name of the component when it
    appears in the proposed list to select it.

    Note that for testing purposes, you can also perform this scenario locally. In
    that case, do not add the tHDFSConfiguration
    component and skip the Configuring the connection to the
    file system to be used by Spark
    section below.
  3. Do the same to add a tHMapFile component, but do
    not link the two components.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be used.
    Note that the user name must be the same as you have put in the Spark configuration tab.

Defining the properties of tHMapFile

  1. In the Job, select the tHMapFile component to define its
    properties.

    tHMapFile_2.png

  2. Select the Define a storage configuration
    component
    check box and then select the name of the component to use
    from those available in the drop-down list, tHDFSConfiguration_1 in this example.

    Note that if you leave the Define a storage configuration
    component
    check box unselected, you can only transform files
    locally.
  3. Click the […] button and, in the Component Configuration window, click the Select button next to the Record
    Map
    field.
  4. In the Select a Map dialog box that opens,
    select the map you want to use and then click OK.
    In this example, use the json2xml map you just
    created.
  5. Click Next.
  6. Select an appropriate record delimiter for your data that tells the component
    where each new record begins.

    In this example, each record is on a new line, so select Separator and specify the newline character, \n in this
    example.
  7. Click Finish.
  8. Click the […] button next to the Input field to define the path to the input file, /talend/input/gdelt.json in this example.
  9. Click the […] button next to the Output field to define the path to where the output file is
    to be stored, /talend/output in this
    example.
  10. Leave the other settings unchanged.
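The Job reads /talend/input/gdelt.json from the file system defined by the tHDFSConfiguration component, so the unpacked input file must be staged there before execution (for example with hdfs dfs -put, or programmatically). Below is a minimal sketch using the Hadoop Java client; the NameNode URI and user name are placeholders, so reuse the values from your tHDFSConfiguration component.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.net.URI;

    // Copy the unpacked gdelt.json from the local machine to /talend/input/
    // on HDFS so the Job can read it. URI and user name are placeholders.
    public class StageInputSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(
                    new URI("hdfs://masternode:8020"), conf, "talend_user");
            fs.copyFromLocalFile(
                    new Path("gdelt.json"),                  // local source
                    new Path("/talend/input/gdelt.json"));   // HDFS destination
            fs.close();
        }
    }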

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly defined in the
Spark Configuration tab or the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment in itself on the fly in order to
    run the Job. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate the
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection
    information to a remote file system, if you have placed these components
    in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HD
      Insight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended to set your
    cluster to use Kryo to handle the Avro types. This not only helps avoid
    this Avro known issue but also
    brings inherent performance gains. The Spark property to be set in your
    cluster is spark.serializer=org.apache.spark.serializer.KryoSerializer (see the sketch after
    these steps).

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not officially
    supported by
    Talend
    . In this situation, you can select Custom, then select the Spark
    version
    of the cluster to be connected and click the
    [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files
      which you can download from this Hadoop configuration
      list and directly use them in your connection accordingly. However, because of
      the ongoing evolution of the different Hadoop-related projects, you might not be
      able to find the configuration zip corresponding to your distribution from this
      list; then it is recommended to use the Import from
      existing version
      option to take an existing distribution as base
      to add the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration of
      whichever version you choose will be easy. As such, you should only attempt to
      set up such a connection if you have sufficient Hadoop and Spark experience to
      handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.
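For reference, here is a minimal sketch of the Kryo setting recommended in the steps above. In practice the property is usually set on the cluster or in the Advanced properties of the Spark configuration tab rather than in code; the application name is a placeholder.

    import org.apache.spark.SparkConf;

    // Enables Kryo serialization for the Spark Job, as recommended for Avro types.
    public class KryoConfigSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("thmapfile_transform")  // placeholder Job name
                    .set("spark.serializer",
                         "org.apache.spark.serializer.KryoSerializer");
            System.out.println(conf.get("spark.serializer"));
        }
    }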

Saving and executing the Job

  1. Press Ctrl+S to save your Job.
  2. In the Run tab, click Run to execute the Job.
  3. Browse to the location on your file system where the output files are stored to
    check that the transformation was performed successfully.

Source: Talend Help Center (https://help.talend.com)