
tReplace – Docs for ESB 5.x

tReplace


tReplace Properties

Component family

Processing

 

Function

Carries out a Search & Replace operation in the input columns
defined.

Purpose

Helps to cleanse all files before further processing.

Basic settings

Schema and Edit
Schema

A schema is a row description; it defines the number of fields to be processed and
passed on to the next component. The schema is either Built-in or stored remotely in the
Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.

Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are
available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local
    changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository
    Content]
    window.

Two read-only columns, Value and Match, are added to the output
schema automatically.

 

 

Built-in: The schema will be
created and stored locally for this component only. Related topic:
see Talend Studio User Guide.

 

 

Repository: The schema already
exists and is stored in the Repository, hence can be reused in
various projects and Job flowcharts. Related topic: see Talend Studio User Guide.

 

Simple Mode Search / Replace

Click the [+] button to add as many conditions as needed. The
conditions are performed one after the other for each row.

Input column: Select the schema column
on which the search & replace operation is to be performed.

Search: Type in the value to search for
in the input column.

Replace with: Type in the
substitution value.

Whole word: Select this check box
if the searched value must match a whole word.

Case sensitive: Select this check
box to make the search case sensitive.

Note that you cannot use regular expressions in these columns.

 

Use advanced mode

Select this check box when the operation you want to perform
cannot be carried out through the simple mode. In the text field,
type in the regular expression as required.
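
To make the difference between the two modes concrete, here is a minimal plain-Java
sketch (not the code the component generates): simple mode behaves like a literal
substitution that can be restricted to whole words and made case sensitive, while
advanced mode interprets the search value as a regular expression. The class and
method names are illustrative only.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ReplaceModesSketch {

        // Simple mode: literal search & replace, optionally whole-word and case sensitive.
        static String simpleReplace(String input, String search, String replacement,
                                    boolean wholeWord, boolean caseSensitive) {
            String quoted = Pattern.quote(search);                    // the search value is taken literally
            String regex = wholeWord ? "\\b" + quoted + "\\b" : quoted;
            int flags = caseSensitive ? 0 : Pattern.CASE_INSENSITIVE;
            return Pattern.compile(regex, flags)
                          .matcher(input)
                          .replaceAll(Matcher.quoteReplacement(replacement));
        }

        // Advanced mode: the expression typed in the text field is a regular expression.
        static String advancedReplace(String input, String regex, String replacement) {
            return input.replaceAll(regex, replacement);
        }

        public static void main(String[] args) {
            System.out.println(simpleReplace("12 streat", "streat", "Street", true, true)); // 12 Street
            System.out.println(advancedReplace("10.5$", "\\$", "£"));                       // 10.5£
        }
    }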

Advanced settings

tStatCatcher Statistics

Select this check box to gather the job processing metadata at a
job level as well as at each component level. Note that this check box is not available in
the Map/Reduce version of the component.

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.
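
For illustration, these After variables can also be read in Java code placed later in the
Job (for instance in a tJava component) through the globalMap, using the component's
label as a prefix. The label tReplace_1 below is only an assumed example.

    // Snippet for a tJava component, assuming the tReplace component is labelled tReplace_1:
    Integer processed = (Integer) globalMap.get("tReplace_1_NB_LINE");        // After variable
    String lastError = (String) globalMap.get("tReplace_1_ERROR_MESSAGE");    // After variable

    System.out.println("Rows processed by tReplace_1: " + processed);
    if (lastError != null) {
        System.out.println("Last error: " + lastError);
    }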

For further information about variables, see Talend Studio
User Guide.

Usage

This component is not startable as it requires an input flow, and
it also requires an output component.

Usage in Map/Reduce Jobs

If you have subscribed to one of the Talend solutions with Big Data, you can also
use this component as a Map/Reduce component. In a Talend Map/Reduce Job, this
component is used as an intermediate step and other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections
describing how to create, convert and configure a Talend Map/Reduce Job of the
Talend Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data
integration Jobs, and non Map/Reduce Jobs.

Log4j

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User
Guide
.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.
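
As a small illustration of the log4j 1.2 levels linked above, the logging verbosity can
also be tuned programmatically; the logger name used here is an assumption, adjust it to
the logger you actually want to control.

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class Log4jLevelSketch {
        public static void main(String[] args) {
            // Hypothetical logger name; pick the logger whose activity you want to see.
            Logger logger = Logger.getLogger("org.talend");
            logger.setLevel(Level.DEBUG);   // OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE or ALL
            logger.debug("Component activity would now be logged at DEBUG level.");
        }
    }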

Scenario 1: Multiple replacements and column filtering

The following Job searches for and replaces various typos and defects in a CSV file, then
filters columns before producing a new CSV file with the final
output.

UseCasetReplace1.png
  • Drop the following components from the Palette onto the design workspace: tFileInputDelimited, tReplace,
    tFilterColumn and tFileOutputDelimited.

  • Connect the components using Main Row
    connections via a right-click on each component.

  • Select the tFileInputDelimited component and
    set the input flow parameters.

UseCasetReplace2.png
  • The File is a simple CSV file stored
    locally. The Row Separator is a carriage return
    and the Field Separator is a semicolon. The
    Header row holds the column names, and no
    Footer nor Limit is set.

  • The file contains defects such as *t, dots used as decimal
    separators, Nikson, which we want to turn into Nixon, and
    streat, which we want to turn into Street.

UseCasetReplace3.png
  • The schema for this file is also built-in and is made of four columns of various
    types (string or int).

  • Now select the tReplace component to set the
    search & replace parameters.

UseCasetReplace4.png
  • The schema can be synchronized with the incoming flow.

  • Select the Simple mode check box as the
    search parameters can be easily set without requiring the use of regexp.

  • Click the plus sign to add some lines to the parameters table.

  • On the first parameter line, select Amount as InputColumn. Type “.” in the Search field, and
    “,” in the Replace
    field.

  • On the second parameter line, select Street as InputColumn. Type “streat” in the Search field,
    and “Street” in the Replace field.

  • On the third parameter line, select again Amount as
    InputColumn. Type “$” in the Search field, and
    “£” in the Replace field.

  • On the fourth parameter line, select Name
    as InputColumn. Type “Nikson” in the Search field,
    and “Nixon” in the Replace field.

  • On the fifth parameter line, select Firstname as
    InputColumn. Type “*t” in the Search field, and
    replace them with nothing between double quotes.

  • The advanced mode isn’t used in this scenario; a plain-Java sketch of the
    replacements configured above is given after the scenario results below.

  • Select the next component in the Job, tFilterColumn.

UseCasetReplace5.png
  • The tFilterColumn component holds a schema
    editor that allows you to build the output schema based on the column names of the
    input schema. In this use case, add one new column named empty_field and change the order of the input schema columns to
    obtain a schema as follows: empty_field, Firstname, Name, Street,
    Amount.

  • Click OK to validate.

Use_Case_tReplace6.png
  • Set the tFileOutputDelimited properties
    manually.

  • The schema is built-in for this scenario, and comes from the preceding
    component in the Job.

  • Save the Job and press F6 to execute
    it.

UseCasetReplace7.png

The first column is empty, the rest of the columns have been cleaned of the
parasitic characters, and Nikson was replaced with
Nixon. The Street column was moved and the
decimal delimiter has been changed from a dot to a comma, along with the currency
sign.
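
As a plain-Java sketch of what the five replacement rules above do (the sample row
values are made up for illustration; this is not the code the Job generates):

    public class Scenario1ReplacementsSketch {
        public static void main(String[] args) {
            // One hypothetical row of the four-column schema: Firstname, Name, Street, Amount.
            String firstname = "Ri*tchard";
            String name = "Nikson";
            String street = "main streat";
            String amount = "10.5$";

            firstname = firstname.replace("*t", "");         // 5th rule: remove the parasitic *t
            name = name.replace("Nikson", "Nixon");           // 4th rule
            street = street.replace("streat", "Street");      // 2nd rule
            amount = amount.replace(".", ",")                  // 1st rule: decimal dot to comma
                           .replace("$", "£");                // 3rd rule: currency sign

            System.out.println(firstname + ";" + name + ";" + street + ";" + amount);
            // Richard;Nixon;main Street;10,5£
        }
    }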

Scenario 2: Replacing values and filtering columns using Map/Reduce
components

You can use the Map/Reduce version of the Job described earlier using Map/Reduce
components. This Talend Map/Reduce Job generates Map/Reduce code and is run
natively in Hadoop.

use_case-mr_treplace1.png

Note that the Talend Map/Reduce components are available to
subscription-based Big Data users only and this scenario can be replicated only with
Map/Reduce components.

The sample data to be used in this scenario is the same as in the Job described
earlier.

Since Talend Studio allows you to convert a Job between its
Map/Reduce and Standard (Non Map/Reduce) versions, you can convert the scenario
explained earlier to create this Map/Reduce Job. This way, many components used can keep
their original settings so as to reduce your workload in designing this Job.

Before starting to replicate this scenario, ensure that you have appropriate rights
and permissions to access the Hadoop distribution to be used. Then proceed as
follows:

Converting the Job

  1. In the Repository tree view of the Integration perspective of Talend Studio, right-click the
    Job you have created in the earlier scenario to open its contextual menu and
    select Edit properties.

    Then the [Edit properties] dialog box is
    displayed. Note that the Job must be closed before you are able to make any
    changes in this dialog box.

    This dialog box looks like the image below:

    use_case-mr_convert_job-common.png

    Note that you can change the Job name as well as the other descriptive
    information about the Job from this dialog box.

  2. Click Convert to Map/Reduce Job. Then a
    Map/Reduce Job using the same name appears under the Map/Reduce Jobs sub-node of the Job
    Design
    node.

If you need to create this Map/Reduce Job from scratch, you have to right-click the
Job Design node or the Map/Reduce Jobs sub-node and select Create
Map/Reduce Job
from the contextual menu. Then an empty Job is opened in
the workspace. For further information, see the section describing how to create a
Map/Reduce Job of the Talend Big Data Getting Started Guide.

Rearranging the components

  1. Double-click this new Map/Reduce Job to open it in the workspace. The
    Map/Reduce components’ Palette is opened
    accordingly and in the workspace, the crossed-out components, if any,
    indicate that those components do not have the Map/Reduce version.

  2. Right-click each of those components in question and select Delete to remove them from the workspace.

  3. Drop a tHDFSInput component and a
    tHDFSOutput component in the workspace.
    The tHDFSInput component reads data from
    the Hadoop distribution to be used and the tHDFSOutput component writes data in that
    distribution.

    If you are creating the Job from scratch, you also have to drop a tReplace component and a tFilterColumns component.

  4. Connect tHDFSInput to tReplace using the Row >
    Main
    link and accept to get the schema of tReplace.

  5. Connect tFilterColumns to tHDFSOutput using Row >
    Main
    link.

Setting up Hadoop connection

  1. Click Run to open its view and then click the
    Hadoop Configuration tab to display its
    view for configuring the Hadoop connection for this Job.

    This view looks like the image below:

    use_case-hadoop_config-common.png
  2. From the Property type list, select Built-in. If you have created the connection to be
    used in Repository, then select Repository and thus the Studio will reuse that set of
    connection information for this Job.

    For further information about how to create an Hadoop connection in
    Repository, see the chapter describing the Hadoop
    cluster
    node of the Talend Big Data Getting Started Guide.

  3. In the Version area, select the Hadoop
    distribution to be used and its version. If you cannot find the
    distribution corresponding to yours in the list, select Custom so as to connect to a Hadoop distribution not officially
    supported in the Studio.

    For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.

    Along with the evolution of Hadoop, please note the
    following changes:

    • If you use Hortonworks Data Platform
      V2.2
      , the configuration files of your cluster might be using
      environment variables such as ${hdp.version}. If this is your situation, you need to set
      the mapreduce.application.framework.path property in the
      Hadoop properties table with a path
      value explicitly pointing to the MapReduce framework archive of your
      cluster.

    • If you use Hortonworks Data Platform
      V2.0.0
      , the type of the operating system for running the
      distribution and a Talend Job must be the same,
      such as Windows or Linux. Otherwise, you have to use Talend Jobserver to execute the Job in the same
      type of operating system in which the Hortonworks
      Data Platform V2.0.0
      distribution you are using is run. For
      further information about Talend Jobserver, see
      Talend
      Installation and Upgrade Guide
      .

  4. In the Name node field, enter the location of
    the master node, the NameNode, of the distribution to be used. For example,
    hdfs://tal-qa113.talend.lan:8020.

    If you are using a MapR distribution, you can simply leave maprfs:/// as it is in this field; then the MapR
    client will take care of the rest on the fly for creating the connection. The
    MapR client must be properly installed. For further information about how to set
    up a MapR client, see the following link in MapR’s documentation: http://doc.mapr.com/display/MapR/Setting+Up+the+Client

  5. In the Job tracker field, enter the location
    of the JobTracker of your distribution. For example, tal-qa114.talend.lan:8050.

    Note that the notion Job in this term JobTracker designates the MR or the
    MapReduce jobs described in Apache’s documentation on http://hadoop.apache.org/.

    If you use YARN in your Hadoop cluster, such as Hortonworks Data Platform V2.0.0 or Cloudera CDH4.3 + (YARN mode), you need to specify the location
    of the Resource Manager instead of the
    Jobtracker. Then you can continue to set the following parameters depending on
    the configuration of the Hadoop cluster to be used (if you leave the check box
    of a parameter clear, then at runtime, the configuration of this parameter in
    the Hadoop cluster to be used will be ignored):

    • Select the Set resourcemanager scheduler
      address
      check box and enter the Scheduler address in
      the field that appears.

    • Select the Set jobhistory address
      check box and enter the location of the JobHistory server of the
      Hadoop cluster to be used. This allows the metrics information of
      the current Job to be stored in that JobHistory server.

    • Select the Set staging directory
      check box and enter this directory defined in your Hadoop cluster
      for temporary files created by running programs. Typically, this
      directory can be found under the yarn.app.mapreduce.am.staging-dir property in the
      configuration files such as yarn-site.xml or mapred-site.xml of your distribution.

    • Select the Use datanode hostname
      check box to allow the Job to access datanodes via their hostnames.
      This actually sets the dfs.client.use.datanode.hostname property to
      true. When connecting to an
      S3N filesystem, you must select this check box.

  6. If you are accessing the Hadoop cluster running with Kerberos security, select this check
    box, then enter the Kerberos principal name for the NameNode in the field displayed. This
    enables you to use your user name to authenticate against the credentials stored in
    Kerberos.

    In addition, since this component performs Map/Reduce computations, you also need to
    authenticate the related services such as the Job history server and the Resource manager or
    Jobtracker depending on your distribution in the corresponding field. These principals can
    be found in the configuration files of your distribution. For example, in a CDH4
    distribution, the Resource manager principal is set in the yarn-site.xml file and the Job history principal in the mapred-site.xml file.

    If you need to use a Kerberos keytab file to log in, select Use a
    keytab to authenticate
    . A keytab file contains pairs of Kerberos principals
    and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the
    Keytab field.

    Note that the user that executes a keytab-enabled Job is not necessarily the one a
    principal designates but must have the right to read the keytab file being used. For
    example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.

  7. In the User name field, enter the login user
    name for your distribution. If you leave it empty, the user name of the machine
    hosting the Studio will be used.

  8. In the Temp folder field, enter the path in
    HDFS to the folder where you store the temporary files generated during
    Map/Reduce computations.

  9. Leave the default value of the Path separator in server as
    it is, unless you have changed the separator used by your Hadoop distribution’s host machine
    for its PATH variable or in other words, that separator is not a colon (:). In that
    situation, you must change this value to the one you are using in that host.

  10. Leave the Clear temporary folder check box
    selected, unless you want to keep those temporary files.

  11. Leave the Compress intermediate map output to reduce
    network traffic
    check box selected, so as to reduce the time needed to
    transfer the mapper task partitions to the multiple reducers.

    However, if the data transfer in the Job is negligible, it is recommended to
    clear this check box to deactivate the compression step, because this
    compression consumes extra CPU resources.

  12. If you need to use custom Hadoop properties, complete the Hadoop properties table with the property or
    properties to be customized. Then at runtime, these changes will override the
    corresponding default properties used by the Studio for its Hadoop
    engine.

    For further information about the properties required by Hadoop, see Apache’s
    Hadoop documentation on http://hadoop.apache.org, or
    the documentation of the Hadoop distribution you need to use.

  13. If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks
    Data Platform V1.3, you need to set proper memory allocations for the map and reduce
    computations to be performed by the Hadoop system.

    In that situation, you need to enter the values you need in the Mapred
    job map memory mb
    and the Mapred job reduce memory
    mb
    fields, respectively. By default, the values are both 1000, which is normally appropriate for running the
    computations.

    If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and
    ApplicationMaster (in Mb), accordingly. These fields
    allow you to dynamically allocate memory to the map and the reduce computations and the
    ApplicationMaster of YARN.

For further information about this Hadoop
Configuration
tab, see the section describing how to configure the Hadoop
connection for a Talend Map/Reduce Job of the Talend Big Data Getting Started Guide.

For further information about the Resource Manager, its scheduler and the
ApplicationMaster, see YARN’s documentation such as http://hortonworks.com/blog/apache-hadoop-yarn-concepts-and-applications/.

For further information about how to determine YARN and MapReduce memory configuration
settings, see the documentation of the distribution you are using, such as the following
link provided by Hortonworks: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html.
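
Outside the Studio, the connection parameters discussed in this procedure map onto
standard Hadoop client properties. The sketch below is only an illustration of those
properties with the plain Hadoop Java API, using hypothetical host names, principal and
keytab path; it is not what the generated Map/Reduce Job executes.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class HadoopConnectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // NameNode location (hypothetical host, same form as hdfs://tal-qa113.talend.lan:8020).
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

            // Resource Manager address for a YARN cluster (or the JobTracker for MR1 distributions).
            conf.set("yarn.resourcemanager.address", "resourcemanager.example.com:8050");

            // Equivalent of the "Use datanode hostname" check box.
            conf.set("dfs.client.use.datanode.hostname", "true");

            // Equivalent of "Use a keytab to authenticate"; principal and keytab path are assumptions.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab("guest@EXAMPLE.COM", "/path/to/guest.keytab");

            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf)) {
                System.out.println("Home directory: " + fs.getHomeDirectory());
            }
        }
    }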

Configuring input and output components

Configuring tHDFSInput

  1. Double-click tHDFSInput to open its
    Component view.

    use_case-mr_treplace2.png
  2. Click the [...] button next to Edit
    schema
    to verify that the schema received in the earlier
    steps is properly defined.

    use_case-mr_treplace3.png

    Note that if you are creating this Job from scratch, you need to click the
    [+] button to manually add these schema columns; otherwise,
    if the schema has been defined in Repository, you can select the Repository option from the Schema list in the Basic
    settings
    view to reuse it. For further information about how
    to define a schema in Repository, see the
    chapter describing metadata management in the Talend Studio User Guide or the chapter describing the
    Hadoop cluster node in Repository of the Talend Big Data Getting Started Guide.

  3. If you make changes in the schema, click OK to validate these changes and accept the propagation
    prompted by the pop-up dialog box.

  4. In the Folder/File field, enter the path,
    or browse to the source file you need the Job to read.

    If this file is not in the HDFS system to be used, you have to place it in
    that HDFS, for example, using tFileInputDelimited and tHDFSOutput in a Standard
    Job.
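
As noted above, the source file must already be in HDFS, for example by running
tFileInputDelimited and tHDFSOutput in a Standard Job. Purely as an alternative
illustration, the HDFS Java API can upload a local file directly; the NameNode URI and
paths below are assumptions.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StageFileInHdfsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode URI and paths; adjust them to your cluster.
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf)) {
                Path localSource = new Path("/tmp/customers.csv");
                Path hdfsTarget = new Path("/user/talend/input/customers.csv");
                fs.copyFromLocalFile(localSource, hdfsTarget);   // upload the source file to HDFS
                System.out.println("File staged in HDFS: " + fs.exists(hdfsTarget));
            }
        }
    }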

Reviewing the transformation components

  1. Double-click tReplace to open its
    Component view.

    use_case-mr_treplace4.png

    This component keeps its configuration used by the original Job. It
    searches incoming entries and replaces the ones you have specified in the
    Search column with the values given in
    the Replace with column.

  2. Double-click tFilterColumns to open its
    Component view.

    use_case-mr_treplace5.png

    The component keeps its schema from the original Job, but the order of
    its columns no longer stays as it was rearranged in the earlier scenario:
    it has automatically changed back to its original order.

    use_case-mr_treplace6.png

Configuring tHDFSOutput

  1. Double-click tHDFSOutput to open its
    Component view.

    use_case-mr_treplace7.png
  2. As explained earlier for verifying the schema of tHDFSInput, do the same to verify the schema of tHDFSOutput. If it is not consistent with that of
    its preceding component, tFilterColumns,
    click Sync columns to retrieve the schema
    of tFilterColumns.

    use_case-mr_treplace8.png
  3. In the Folder field, enter the path, or
    browse to the folder you want to write the result in.

  4. From the Action list, select the
    operation you need to perform on the folder in question. If the folder
    already exists, select Overwrite;
    otherwise, select Create.
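
The Overwrite/Create choice roughly corresponds to the following decision when writing
with the HDFS Java API; this is only a hedged sketch with an assumed NameNode URI and
folder path, not the component's actual implementation.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OutputFolderActionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf)) {
                Path outputFolder = new Path("/user/talend/output");   // assumed target folder

                boolean overwrite = true;            // select Overwrite if the folder already exists
                if (overwrite && fs.exists(outputFolder)) {
                    fs.delete(outputFolder, true);   // remove the existing folder and its content
                }
                fs.mkdirs(outputFolder);             // Create the (now absent) target folder
            }
        }
    }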

Executing the Job

Then you can press F6 to run this Job.

Once done, view the execution results in the web console of HDFS.

use_case-mr_treplace9.png

If you need to obtain more details about the Job, it is recommended to use the web
console of the Jobtracker provided by the Hadoop distribution you are using.

