tHMapFile

Runs a Talend Data Mapper map, where input and output structures may
differ, as a Spark batch execution.

tHMapFile transforms data from a single source in a Spark environment.

tHMapFile properties for Apache Spark Batch

These properties are used to configure tHMapFile running in the Spark Batch Job framework.

The Spark Batch
tHMapFile component belongs to the Processing family.

This component is available in the Palette of the Studio only if you have subscribed to any Talend Platform product with Big Data or Talend Data Fabric.

Basic settings

Storage

To connect to an HDFS installation, select the Define a storage configuration component check box and then
select the name of the component to use from those available in the
drop-down list.

This option requires you to have previously configured the connection
to the HDFS installation to be used, as described in the documentation
for the tHDFSConfiguration
component.

If you leave the Define a storage
configuration component
check box unselected, you can only
convert files locally.

Configure Component

To configure the component, click the […] button and, in the [Component
Configuration]
window, perform the following actions.

  1. Click the Select button next to the Record Map field and then, in
    the [Select a Map] dialog box that opens, select the map you want to
    use and then click OK.

    This map must have been previously created in Talend Data Mapper.

    Note that the input and output representations
    are those defined in the map, and cannot be changed in the
    component.

  2. Click Next.

  3. Tell the component where each new record begins. To do this, you
    need to fully understand the structure of your data.

    Exactly how you do this varies depending on the input representation
    being used; you will be presented with one of the following options.

    1. Select an appropriate record delimiter for your data. Note that
      you must specify this value without quotes.

      • Separator lets you specify a separator indicator, such as \n, to
        identify a new line (see the sketch after this list for a
        conceptual illustration).

        Supported indicators are \n for a Unix-type new line, \r\n for
        Windows, \r for Mac, and \t for tab characters.

      • Start/End with lets you specify the initial characters that
        indicate a new record, such as <root, or the characters that
        indicate where a record ends. This can also be a regular
        expression.

        Start with also supports new lines: \n for a Unix-type new line,
        \r\n for Windows, \r for Mac, and \t for tab characters.

      • Sample File: To test the
        signature with a sample file, click the
        […] button, browse to the
        file you want to use as a sample, click
        Open, and then click
        Run to test your
        sample.

        Testing the signature lets you check that the total number of
        records and their minimum and maximum lengths correspond to what
        you expect based on your knowledge of the data. This step assumes
        you have a local subset of your data to use as a sample.

      • Click Finish.

    2. If your input representation is COBOL or
      Flat with positional and/or binary encoding properties,
      define the signature for the input record structure:

      • Input Record root corresponds
        to the root element in your input record.
      • Minimum Record Size corresponds to
        the size in bytes of the smallest record. If you
        set this value too low, you may encounter
        performance issues, since the component will
        perform more checks than necessary when looking
        for a new record.

      • Maximum Record Size corresponds to
        the size in bytes of the largest record, and is
        used to determine how much memory is allocated to
        read the input.

      • Sample from Workspace or
        Sample from File System: To
        test the signature with a sample file, click the
        […] button,
        and then browse to the file you want to use as a
        sample.

        Testing the signature lets you check that the total number of
        records and their minimum and maximum lengths correspond to what
        you expect based on your knowledge of the data. This step assumes
        you have a local subset of your data to use as a sample.

      • Footer Size corresponds to the size in bytes of the footer, if
        any. At runtime, the footer is ignored rather than being
        mistakenly included in the last record. Leave this field empty
        if there is no footer.

      • Click the Next button to open the [Signature Parameters] window,
        select the fields that define the signature of your record input
        structure (that is, that identify where a new record begins),
        update the Operation and Value columns as appropriate, and then
        click Next.

      • In the [Record Signature Test] window that opens, check that
        your records are correctly delineated by scrolling through them
        with the Back and Next buttons and performing a visual check,
        and then click Finish.
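
The component handles record detection internally, but the Separator option is conceptually similar to Hadoop's textinputformat.record.delimiter setting, which cuts an input file into records on a chosen delimiter. The following minimal Java sketch illustrates that idea using plain Spark and Hadoop APIs; it is not the component's implementation, and the class and method names are invented for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RecordSplitSketch {
        // Reads the file at 'path' and returns one String per record, using
        // 'delimiter' (for example "\n") as the record separator, the same
        // idea as choosing Separator = \n in the component configuration.
        public static JavaRDD<String> records(JavaSparkContext sc,
                String path, String delimiter) {
            Configuration conf = new Configuration();
            conf.set("textinputformat.record.delimiter", delimiter);
            return sc.newAPIHadoopFile(path, TextInputFormat.class,
                            LongWritable.class, Text.class, conf)
                    .values()              // keep the record text, drop byte offsets
                    .map(Text::toString);  // copy out of Hadoop's reused buffers
        }
    }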

Input

Click the […] button to define the path to where the input
file is stored.

Output

Click the […] button to define the path to where the output
files will be stored.

Action

From the drop-down list, select:

  • Create if you want the
    mapping process to create a new file.

  • Overwrite if you want the
    mapping process to overwrite an existing file.

Open Map Editor

Click the […] button to open the map for editing in the Map Editor of
Talend Data Mapper.

For more information, see the Talend Data Mapper User Guide.

Advanced settings

Die on error

This check box is selected by default. Clear the check box to skip
the row in error and complete the process for error-free rows. If
needed, you can retrieve the rows in error via a Row > Rejects link.

Usage

Usage rule

This component is used with a tHDFSConfiguration component which defines the
connection to the HDFS storage, or as a standalone component for mapping
local files only.

Scenario: Transforming data in a Spark environment

This scenario applies only to a subscription-based Talend Platform solution with Big Data or Talend Data Fabric.

The following scenario creates a two-component Job that transforms data in a Spark
environment using a map that was previously created in
Talend Data Mapper
.

[Image: use_case-thmapfile_scenario1.png, showing the two-component Job]

Downloading the input files

  1. Retrieve the input files for this scenario from the Downloads tab of the online version of this page at https://help.talend.com.

    The thmapfile_transform_scenario.zip file contains two files:
    gdelt.json is a JSON file built using data from the GDELT project
    (http://gdeltproject.org), and gdelt-onerec.json is a subset of
    gdelt.json containing just one record, used as a sample document for
    creating the structure in Talend Data Mapper.
  2. Save the thmapfile_transform_scenario.zip file on your local
    machine and unpack the .zip file.

Creating the input and output structures

  1. In the Integration perspective, in the Repository tree view, expand
    Metadata > Hierarchical Mapper, right-click Structures, and then
    click New > Structure.
  2. In the New Structure dialog box that opens, select Import a
    structure definition, and then click Next.
  3. Select JSON Sample Document, and then click
    Next.
  4. Select Local file, browse to the location on your
    local file system where you saved the source files, import gdelt-onerec.json as your sample document, and then click Next.
  5. Give your new structure a name, gdelt-onerec in
    this example, click Next, and then click Finish.

Creating the map

  1. In the Repository tree view, expand Metadata > Hierarchical Mapper,
    right-click Maps, and then click New > Map.
  2. In the Select Type of New Map dialog box that
    opens, select Standard Map and then click Next.
  3. Give your new map a name, json2xml in this
    example, and then click Finish.
  4. Drag the gdelt-onerec structure you created
    earlier into both the Input and Output sides of the map.
  5. On the Output side of the map, change the
    representation used from JSON to XML by double-clicking Output
    (JSON)
    and selecting the XML output
    representation.
  6. Drag the Root element from the Input side of the map to the Root
    element on the Output side. This maps each element from the Input
    side to its corresponding element on the Output side, which is a
    very simple map just for testing purposes (a hypothetical
    before-and-after example follows this list).
  7. Press Ctrl+S to save your map.
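
With this identity map in place, each JSON record is reproduced as the equivalent XML. As a purely hypothetical illustration (the element name below is invented and is not taken from the GDELT files), an input record such as:

    {"Root": {"EventCode": "0211"}}

would be written to the output as:

    <Root><EventCode>0211</EventCode></Root>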

Adding the components

  1. In the
    Integration
    perspective, create a new
    Job and call it thmapfile_transform.
  2. Click any point in the design workspace, start typing
    tHDFSConfiguration, and then click the name of the component when it
    appears in the proposed list to select it.

    Note that for testing purposes, you can also perform this scenario
    locally. In that case, do not add the tHDFSConfiguration component
    and skip the Configuring the connection to the file system to be
    used by Spark section below.
  3. Do the same to add a tHMapFile component, but do
    not link the two components.

Configuring the connection to the file system to be used by Spark

  1. Double-click tHDFSConfiguration to open its
    Component view.
  2. In the Version area, select the Hadoop
    distribution you need to connect to and its version.
  3. In the NameNode URI field, enter the location of the machine
    hosting the NameNode service of the cluster. If you are using
    WebHDFS, the location should be webhdfs://masternode:portnumber; if
    this WebHDFS is secured with SSL, the scheme should be swebhdfs and
    you need to use a tLibraryLoad component in the Job to load the
    library required by the secured WebHDFS. Illustrative example values
    follow this list.
  4. In the Username field, enter the authentication
    information used to connect to the HDFS system to be used.
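
For reference only, here is a sketch of what these URI values typically look like; the host names and ports below are placeholders, not values taken from this scenario:

    hdfs://namenode.example.com:8020    (plain HDFS NameNode URI)
    webhdfs://masternode:50070          (WebHDFS)
    swebhdfs://masternode:50470         (WebHDFS secured with SSL; requires tLibraryLoad)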

Defining the properties of tHMapFile

  1. In the Job, select the tHMapFile component to define its
    properties.

    [Image: use_case-thmapfile_properties.png, showing the tHMapFile component properties]

  2. Select the Define a storage configuration
    component
    check box and then select the name of the component to use
    from those available in the drop-down list, tHDFSConfiguration_1 in this example.

    Note that if you leave the Define a storage configuration
    component
    check box unselected, you can only transform files
    locally.
  3. Click the […] button and, in the [Component Configuration] window, click the Select button next to the Record
    Map
    field.
  4. In the [Select a Map] dialog box that opens,
    select the map you want to use and then click OK.
    In this example, use the json2xml map you just
    created.
  5. Click Next.
  6. Select an appropriate record delimiter for your data that tells the
    component where each new record begins.

    In this example, each record is on a new line, so select Separator
    and specify the newline character, \n in this example.
  7. Click Finish.
  8. Click the […] button next to the Input field to define the path to the input file, /talend/input/gdelt.json in this example.
  9. Click the […] button next to the Output field to define the path to where the output file is
    to be stored, /talend/output in this
    example.
  10. Leave the other settings unchanged.

Saving and executing the Job

  1. Press Ctrl+S to save your Job.
  2. In the Run tab, click Run to execute the Job.
  3. Browse to the location on your file system where the output files are stored to
    check that the transformation was performed successfully.
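
If the Job wrote its output to HDFS rather than the local file system, you can list the output directory with the standard HDFS shell, using the output path from this scenario; depending on how the output is written, the result may be split across one or more files:

    hdfs dfs -ls /talend/output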
