
tConvertType – Docs for ESB 5.x

tConvertType


tConvertType properties

Component family

Processing

 

Function

tConvertType allows specific conversions at runtime from one Talend Java type to another.

Purpose

Helps to automatically convert one Talend Java type to another and thus avoid compilation errors.
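
As an illustration only (this is not the code the Studio actually generates, and the values are hypothetical), the following Java sketch shows the kind of explicit conversion that tConvertType spares you from writing by hand:

    // Assigning a String to an int directly does not compile in Java:
    // int value = "42";                            // compilation error

    String stringToInteger = "42";
    int fromString = Integer.parseInt(stringToInteger);  // String -> Integer

    float floatToInteger = 3.7f;
    int fromFloat = (int) floatToInteger;                // Float -> Integer, decimal part dropped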

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository Content] window.

 

 

Built-in: You create and store the schema locally for only the current component. Related topic: see the Talend Studio User Guide.

 

 

Repository: The schema already exists and is stored in the Repository, hence can be reused in various projects and Job flowcharts. Related topic: see the Talend Studio User Guide.

 

Auto Cast

This check box is selected by default. It performs an automatic Java type conversion.

 

Manual Cast

This mode is not visible when the Auto Cast check box is selected. It allows you to manually specify the columns for which a Java type conversion is needed.

 

Set empty values to Null before converting

Select this check box to set empty values of the String or Object type to null in the input data.
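
As a minimal sketch of this behavior (illustrative Java only, not the generated code), an empty String is set to null before any conversion is attempted:

    String input = "";                    // an empty String value from the input flow
    if (input != null && input.isEmpty()) {
        input = null;                     // empty value becomes null before converting
    }
    // A null value then passes through as null instead of making
    // a conversion such as Integer.parseInt("") fail.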

 

Die on error

Note

Not available for Map/Reduce Jobs.

Select this check box to kill the Job when an error occurs.

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a
Job level as well as at each component level. Note that this check box is not available in
the Map/Reduce version of the component.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
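
For example, in a tJava component placed after tConvertType, these variables can be read from the globalMap as follows; the component label tConvertType_1 is an assumption, so use the unique name the component actually has in your Job:

    String errorMessage = (String) globalMap.get("tConvertType_1_ERROR_MESSAGE");
    Integer nbLine = (Integer) globalMap.get("tConvertType_1_NB_LINE");
    System.out.println("Rows processed: " + nbLine);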

For further information about variables, see Talend Studio
User Guide.

Usage

This component cannot be used as a start component as it requires
an input flow to operate.

Usage in Map/Reduce Jobs

If you have subscribed to one of the Talend solutions with Big Data, you can also
use this component as a Map/Reduce component. In a Talend Map/Reduce Job, this
component is used as an intermediate step and other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections
describing how to create, convert and configure a Talend Map/Reduce Job of the
Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional, non-Map/Reduce Talend data integration Jobs.

Limitation

n/a

Scenario 1: Converting java types

This Java scenario describes a four-component Job where the tConvertType component is used to convert Java types in three columns, and a tMap is used to adapt the schema and output the first of the three columns together with the sum of the other two after conversion.

Note

In this scenario, the input schema for the input delimited file is stored in the repository; you can simply drag and drop the relevant file node from Repository > Metadata > File delimited onto the design workspace to automatically retrieve the tFileInputDelimited component's settings. For more information, see the Talend Studio User Guide.

Dropping the components

  1. Drop the following components from the Palette onto the design workspace: tConvertType, tMap, and
    tLogRow.

  2. In the Repository tree view, expand Metadata and from File delimited
    drag the relevant node, JavaTypes in this
    scenario, to the design workspace.

    The [Components] dialog box
    displays.

  3. From the component list, select tFileInputDelimited and click OK.

    A tFileInputDelimited component called JavaTypes displays in the design workspace.

  4. Connect the components using Row > Main
    links.

    Use_Case_tConvertType.png

Configuring the components

  1. Double-click tFileInputDelimited to enter
    its Basic settings view.

  2. Set Property Type to Repository since the file details are stored in
    the repository. The fields that follow are pre-filled with the fetched
    data.

    Use_Case_tConvertType1.png

    The input file used in this scenario is called input. It is a text file that holds string, integer, and float Java types.

    Use_Case_tConvertType2.png

    Fill in all other fields as needed. For more information, see tFileInputDelimited. In this scenario, the header and the
    footer are not set and there is no limit on the number of processed
    rows.

  3. Click Edit schema to describe the data
    structure of this input file. In this scenario, the schema is made of three
    columns: StringToInteger, IntegerField, and
    FloatToInteger.

    Use_Case_tConvertType3.png
  4. Click OK to close the dialog box.

  5. Double-click tConvertType to enter its
    Basic settings view.

    Use_Case_tConvertType4.png
  6. Set Schema Type to Built-In, and click Sync columns
    to automatically retrieve the columns from the tFileInputDelimited component.

  7. Click Edit schema to manually describe
    the data structure of this processing component.

    Use_Case_tConvertType5.png

    In this scenario, we want to convert string type data into the integer
    type and float type data into the integer type.

    Click OK to close the [Schema of tConvertType] dialog box.

  8. Double-click tMap to open the Map
    editor.

    The Map editor displays the input metadata of the tFileInputDelimited component.

    Use_Case_tConvertType7.png
  9. In the Schema editor panel of the Map
    editor, click the plus button of the output table to add two rows and name
    them StringToInteger and Sum.

  10. In the Map editor, drag the StringToInteger row from
    the input table to the StringToInteger row in the
    output table.

  11. In the Map editor, drag each of the IntegerField and
    the FloatToInteger rows from the input table to the
    Sum row in the output table and click OK to close the Map editor (the
    resulting output expression is sketched after this list).

    Use_Case_tConvertType6.png
  12. In the design workspace, select tLogRow
    and click the Component tab to define its
    basic settings. For more information, see tLogRow.
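
As mentioned in step 11, mapping both input rows to the single Sum column amounts to a simple Java expression on the output column in the Map editor, along the lines of the following sketch; the flow name row2 is an assumption, so use the actual name of the tConvertType output flow in your Job:

    row2.IntegerField + row2.FloatToInteger

Since both columns are of the Integer type after conversion, the expression compiles without any explicit cast.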

Executing the Job

  1. Press Ctrl+S to save the Job.

  2. Press F6 to execute it.

    Use_Case_tConvertType8.png

    The string type data is converted into an integer type and displayed in
    the StringToInteger column on the console. The float
    type data is converted into an integer and added to the
    IntegerField value to give the addition result in
    the Sum column on the console.

Scenario 2: Converting java types using Map/Reduce components

If you are a subscription-based Big Data user, you can produce the Map/Reduce version of the Job described earlier using Map/Reduce components. This Talend Map/Reduce Job generates Map/Reduce code and is run natively in Hadoop.

use_case-mr_tconverttype1.png

The sample data used in this scenario is the same as in the scenario explained
earlier.

Since Talend Studio allows you to convert a Job between its
Map/Reduce and Standard (Non Map/Reduce) versions, you can convert the previous
scenario to create this Map/Reduce Job. This way, many components used can keep their
original settings so as to reduce your workload in designing this Job.

Before starting to replicate this scenario, ensure that you have appropriate rights
and permissions to access the Hadoop distribution to be used. Then proceed as
follows:

Converting the Job

  1. In the Repository tree view of the Integration perspective of Talend Studio, right-click the
    Job you have created in the earlier scenario to open its contextual menu and
    select Edit properties.

    Then the [Edit properties] dialog box is
    displayed. Note that the Job must be closed before you are able to make any
    changes in this dialog box.

    This dialog box looks like the image below:

    use_case-mr_convert_job-common.png

    Note that you can change the Job name as well as the other descriptive
    information about the Job from this dialog box.

  2. Click Convert to Map/Reduce Job. Then a
    Map/Reduce Job using the same name appears under the Map/Reduce Jobs sub-node of the Job Design node.

If you need to create this Map/Reduce Job from scratch, you have to right-click the
Job Design node or the Map/Reduce Jobs sub-node and select Create Map/Reduce Job from the contextual menu. Then an empty Job is opened in
the workspace. For further information, see the section describing how to create a
Map/Reduce Job in the Talend Big Data Getting Started Guide.

Rearranging the components

  1. Double-click this new Map/Reduce Job to open it in the workspace. The
    Map/Reduce components’ Palette is opened
    accordingly and in the workspace, the crossed-out components, if any,
    indicate that those components do not have the Map/Reduce version.

  2. Right-click each of those components in question and select Delete to remove them from the workspace.

  3. Drop a tHDFSInput component in the
    workspace. The tHDFSInput component reads
    data from the Hadoop distribution to be used.

    If you are creating the Job from scratch, you have to drop tConvertType, tMap, and tLogRow, too.

  4. Connect tHDFSInput to tConvertType using the Row > Main
    link and accept the prompt to retrieve the schema of tConvertType.

Setting up Hadoop connection

  1. Click Run to open its view and then click the
    Hadoop Configuration tab to display its
    view for configuring the Hadoop connection for this Job.

    This view looks like the image below:

    use_case-hadoop_config-common.png
  2. From the Property type list, select Built-in. If you have created the connection to be
    used in the Repository, select Repository instead so that the Studio reuses that set of
    connection information for this Job.

    For further information about how to create a Hadoop connection in the
    Repository, see the chapter describing the Hadoop cluster node of the Talend Big Data Getting Started Guide.

  3. In the Version area, select the Hadoop
    distribution to be used and its version. If you cannot find your distribution
    in the list, select Custom so as to connect to a Hadoop distribution not officially
    supported in the Studio.

    For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

    As Hadoop evolves, note the following changes:

    • If you use Hortonworks Data Platform V2.2, the configuration files of
      your cluster might be using environment variables such as
      ${hdp.version}. If this is your situation, you need to set
      the mapreduce.application.framework.path property in the
      Hadoop properties table with the path
      value explicitly pointing to the MapReduce framework archive of your
      cluster. For example:
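
      A value of the following kind could be entered; the HDP build number 2.2.0.0-2041 shown here is hypothetical, so point the path at the actual MapReduce framework archive of your own cluster:

          mapreduce.application.framework.path=/hdp/apps/2.2.0.0-2041/mapreduce/mapreduce.tar.gz#mr-framework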

    • If you use Hortonworks Data Platform V2.0.0, the type of the operating
      system for running the distribution and a Talend Job must be the same,
      such as Windows or Linux. Otherwise, you have to use Talend JobServer to
      execute the Job in the same type of operating system in which the
      Hortonworks Data Platform V2.0.0 distribution you are using is run. For
      further information about Talend JobServer, see the Talend Installation
      and Upgrade Guide.

  4. In the Name node field, enter the location of
    the master node, the NameNode, of the distribution to be used. For example,
    hdfs://tal-qa113.talend.lan:8020.

    If you are using a MapR distribution, you can simply leave maprfs:/// as it is in this field; the MapR
    client will then take care of the rest on the fly when creating the connection. The
    MapR client must be properly installed. For further information about how to set
    up a MapR client, see the following link in MapR's documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

  5. In the Job tracker field, enter the location
    of the JobTracker of your distribution. For example, tal-qa114.talend.lan:8050.

    Note that the word Job in the term JobTracker designates the MapReduce (MR) jobs described in Apache's documentation on http://hadoop.apache.org/.

    If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform V2.0.0 or Cloudera CDH4.3 + (YARN mode), you need to specify the location
    of the Resource Manager instead of the
    JobTracker. Then you can continue to set the following parameters depending on
    the configuration of the Hadoop cluster to be used (if you leave the check box
    of a parameter clear, then at runtime the configuration of this parameter in
    the Hadoop cluster will be ignored):

    • Select the Set resourcemanager scheduler address check box and enter
      the Scheduler address in the field that appears.

    • Select the Set jobhistory address
      check box and enter the location of the JobHistory server of the
      Hadoop cluster to be used. This allows the metrics information of
      the current Job to be stored in that JobHistory server.

    • Select the Set staging directory check box and enter this directory
      defined in your Hadoop cluster for temporary files created by running
      programs. Typically, this directory can be found under the
      yarn.app.mapreduce.am.staging-dir property in configuration files such
      as yarn-site.xml or mapred-site.xml of your distribution (a sample
      excerpt is shown after these steps).

    • Select the Use datanode hostname check box to allow the Job to access
      datanodes via their hostnames. This actually sets the
      dfs.client.use.datanode.hostname property to true. When connecting to
      an S3N filesystem, you must select this check box.

  6. If you are accessing a Hadoop cluster running with Kerberos security, select this check
    box, then enter the Kerberos principal name for the NameNode in the field displayed. This
    enables you to use your user name to authenticate against the credentials stored in
    Kerberos.

    In addition, since this component performs Map/Reduce computations, you also need to
    authenticate the related services, such as the Job history server and the Resource manager or
    JobTracker depending on your distribution, in the corresponding fields. These principals can
    be found in the configuration files of your distribution. For example, in a CDH4
    distribution, the Resource manager principal is set in the yarn-site.xml file and the Job history principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains pairs of Kerberos principals
    and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the
    Keytab field.

    Note that the user that executes a keytab-enabled Job is not necessarily the one a
    principal designates but must have the right to read the keytab file being used. For
    example, if the user name you are using to execute a Job is user1 and the principal to be used is guest, ensure that user1 has the right to read the keytab file to be used.

  7. In the User name field, enter the login user
    name for your distribution. If you leave it empty, the user name of the machine
    hosting the Studio will be used.

  8. In the Temp folder field, enter the path in
    HDFS to the folder where you store the temporary files generated during
    Map/Reduce computations.

  9. Leave the default value of the Path separator in server as
    it is, unless you have changed the separator used by your Hadoop distribution's host machine
    for its PATH variable, that is, unless that separator is not a colon (:). In that
    situation, you must change this value to the one you are using in that host.

  10. Leave the Clear temporary folder check box
    selected, unless you want to keep those temporary files.

  11. Leave the Compress intermediate map output to reduce
    network traffic check box selected so as to reduce the time needed
    to transfer the mapper task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended to
    clear this check box to deactivate the compression step, because this
    compression consumes extra CPU resources.

  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or
    properties to be customized. At runtime, these changes will override the
    corresponding default properties used by the Studio for its Hadoop
    engine.

    For further information about the properties required by Hadoop, see Apache’s
    Hadoop documentation on http://hadoop.apache.org, or
    the documentation of the Hadoop distribution you need to use.

  13. If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks
    Data Platform V1.3, you need to set proper memory allocations for the map and reduce
    computations to be performed by the Hadoop system.

    In that situation, you need to enter the values you need in the Mapred
    job map memory mb and the Mapred job reduce memory mb fields, respectively. By default, both values are set to
    1000, which is normally appropriate for running the
    computations.

    If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and
    ApplicationMaster (in Mb), accordingly. These fields
    allow you to dynamically allocate memory to the map and the reduce computations and the
    ApplicationMaster of YARN.
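
As an illustration of where such values live in the cluster, the staging directory mentioned in step 5 typically appears in the configuration files as an excerpt of the following kind; the value shown is the common Hadoop default and may differ on your cluster:

    <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/tmp/hadoop-yarn/staging</value>
    </property>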

For further information about this Hadoop Configuration tab, see the section describing how to configure the Hadoop
connection for a Talend Map/Reduce Job in the Talend Big Data Getting Started Guide.

For further information about the Resource Manager, its scheduler and the
ApplicationMaster, see YARN’s documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

For further information about how to determine YARN and MapReduce memory configuration
settings, see the documentation of the distribution you are using, such as the following
link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.

Configuring components

Configuring tHDFSInput

  1. Double-click tHDFSInput to open its
    Component view.

    use_case-mr_tconverttype2.png
  2. Click the [...] button next to Edit schema to verify that the schema
    received in the earlier steps is properly defined.

    use_case-mr_tconverttype3.png

    Note that if you are creating this Job from scratch, you need to click the [+] button to manually define the schema; otherwise, if the
    schema has been defined in the Repository, you
    can select the Repository option from the
    Schema list in the Basic settings view to reuse it. For further
    information about how to define a schema in the Repository, see the chapter describing metadata management
    in the Talend Studio User Guide or the chapter describing the
    Hadoop cluster node in the Repository of the Talend Big Data Getting Started Guide.

  3. If you make changes in the schema, click OK to validate these changes and accept the propagation
    prompted by the pop-up dialog box.

  4. In the Folder/File field, enter the path,
    or browse to the source file you need the Job to read.

    If this file is not in the HDFS system to be used, you have to place it in
    that HDFS, for example, using tFileInputDelimited and tHDFSOutput in a Standard
    Job.
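
Alternatively, if you have command-line access to a machine with the Hadoop client configured, you can upload the file with the standard HDFS shell commands; the paths below are illustrative only:

    hadoop fs -mkdir -p /user/talend/input_data
    hadoop fs -put input.txt /user/talend/input_data/input.txt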

Reviewing the transformation component

  • Double-click tConvertType to open its
    Component view.

    use_case-mr_tconverttype4.png

    This component keeps both the Basic settings and the Advanced settings
    used in the original Job. Therefore, as the original one does, it converts
    the string type and the float type into the integer type.

Reviewing tMap

  • Double-click tMap to open its editor. The
    mapping configuration remains as it is in the original Job, that is to say,
    to output the converted StringToInteger
    column and to make the sum of the IntegerField and FloatToInteger columns.

    use_case-mr_tconverttype5.png

Executing the Job

Then you can run this Job.

The tLogRow component is used to present the
execution result of the Job.

  1. If you want to configure the presentation mode, double-click the tLogRow component to open its
    Component view and, in the Mode area, select the Table (print values in cells of a table) option.

  2. Press F6 to run this Job.

During the execution, the Run view is
automatically opened, where you can read how this Job progresses, including the
status of the Map/Reduce computation the Job is performing.

In the meantime in the workspace, progress bars automatically appear under the
components performing Map/Reduce to graphically show the same status of the
Map/Reduce computation.

use_case-mr_tconverttype6.png

If you need to obtain more details about the Job, it is recommended to use the web
console of the Jobtracker provided by the Hadoop distribution you are using.

