
tDuplicateRow

Creates duplicates with meaningful data for data quality functional testing
purposes.

tDuplicateRow generates duplicate
data from an input flow. It groups similar duplicates together and
identifies the original record of each group by setting the read-only
ORIGINAL_MARK output column to true.

This component can be used in combination with the tRowGenerator component to generate
duplicate data.

Date functions in tDuplicateRow

The Function list for a Date
column is date-specific. It allows you to decide the type of modification you want to do
on date values.

There are three different ways to generate the date value for duplicated
records:

  • Modify date value: randomly selects the day,
    month or year to modify and puts a random value in place.

  • Switch day month value: switches the day and
    month values. If the original day value is greater than 12, then the new month
    value will be ((N-1) mod 12) + 1. For example, if the
    original day is equal to 13, then the new month is equal to 1 (see the
    sketch after this list).

  • Replace by random date: generates a random
    date with day, month and year values.
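
The date behaviors above can be summarized in a short Java sketch. This is only an
illustrative approximation of the described rules, not the component's generated code;
the switchDayMonth and modifyDatePart helper names are hypothetical.

    import java.time.LocalDate;
    import java.util.Random;

    public class DateFunctionSketch {

        // Switch day month value: swap the day and month; when the original day
        // is greater than 12, wrap it into a valid month with ((N - 1) mod 12) + 1.
        static LocalDate switchDayMonth(LocalDate date) {
            int newMonth = ((date.getDayOfMonth() - 1) % 12) + 1; // day 13 -> month 1
            int newDay = date.getMonthValue();                    // old month becomes the day
            return LocalDate.of(date.getYear(), newMonth, newDay);
        }

        // Modify date value: randomly pick the day, month or year and replace it
        // with a random value (withMonth/withYear adjust invalid days automatically).
        static LocalDate modifyDatePart(LocalDate date, Random rnd) {
            switch (rnd.nextInt(3)) {
                case 0:  return date.withDayOfMonth(1 + rnd.nextInt(28));
                case 1:  return date.withMonth(1 + rnd.nextInt(12));
                default: return date.withYear(1950 + rnd.nextInt(70));
            }
        }

        public static void main(String[] args) {
            // Original day 13 -> new month 1, as in the example above.
            System.out.println(switchDayMonth(LocalDate.of(1980, 5, 13))); // 1980-01-05
        }
    }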

tDuplicateRow Standard properties

These properties are used to configure tDuplicateRow running
in the Standard Job framework.

The Standard
tDuplicateRow component belongs to the Data Quality family.

This component is available in Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Sync columns
to retrieve the schema from the previous component in the Job.

The output schema of this component contains one read-only
column, ORIGINAL_MARK. This column identifies, by
true or false, whether the record is an original record or a duplicate
record respectively. There is only one original record per group.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Percentage of duplicated records

Enter the percentage of the duplicate rows you want to
have in the output flow.

Distribution of duplicates

Name: Select the probability
distribution you want to use to generate duplicates: Bernoulli
distribution, Poisson distribution, or Geometric distribution.

Average group size: Set the
average number of duplicate records you want to generate in the groups
of duplicates.
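
As a rough illustration of how the Average group size parameter can drive such
distributions, the sketch below samples group sizes from a Poisson distribution
(mean = average) and a geometric distribution (mean = average). It is a generic
sampling sketch under those assumptions, not Talend's internal implementation.

    import java.util.Random;

    public class GroupSizeSketch {

        // Poisson(lambda): group sizes cluster around lambda, with an asymmetric
        // tail (Knuth's sampling method).
        static int drawPoisson(double lambda, Random rnd) {
            double limit = Math.exp(-lambda), product = 1.0;
            int k = 0;
            do {
                k++;
                product *= rnd.nextDouble();
            } while (product > limit);
            return k - 1;
        }

        // Geometric(p = 1 / average): many small groups and a few very large ones.
        static int drawGeometric(double average, Random rnd) {
            double p = 1.0 / average;
            return 1 + (int) (Math.log(rnd.nextDouble()) / Math.log(1.0 - p));
        }

        public static void main(String[] args) {
            Random rnd = new Random(12345L);
            double averageGroupSize = 4.0;
            for (int group = 1; group <= 5; group++) {
                System.out.printf("group %d: poisson=%d geometric=%d%n",
                        group, drawPoisson(averageGroupSize, rnd),
                        drawGeometric(averageGroupSize, rnd));
            }
        }
    }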

Modifications

Define in the table what fields to change in a row and
how to change them:

Input Column:
Select the column from the input flow from which you want to generate
duplicates through modifying its values.

Modification Rate: Enter the rate of modifications you want to have in
the duplicate records generated from an input column. The rate is a value
between 0 and 1. If you
set the rate to 0, no modification is done.
If you set the rate to 0.5, a modification is done
on average every two rows. If you set the rate to 1, a modification is done on every row.

These modifications are based on the function you select
in the Function column and the
number of modifications you set in the Max
Modification Count
column.
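
A minimal sketch of how such a rate can be read: each duplicate row gets an
independent random draw against the rate. This only illustrates the probability
under that assumption and is not the component's actual logic.

    import java.util.Random;

    public class ModificationRateSketch {
        public static void main(String[] args) {
            double modificationRate = 0.5; // value between 0 and 1, as in the Modifications table
            Random rnd = new Random();
            int modified = 0, rows = 10000;
            for (int i = 0; i < rows; i++) {
                // With a rate of 0.5, a value is modified on average every second row.
                if (rnd.nextDouble() < modificationRate) {
                    modified++;
                }
            }
            System.out.printf("modified %d of %d duplicate rows (about %.0f%%)%n",
                    modified, rows, 100.0 * modified / rows);
        }
    }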

Function: Select
the function that will decide what modification to do on a value to
duplicate it. For example, you can decide to have exact or approximate
duplicate values through replacing or adding letters or numbers,
replacing values with synonyms from an index file or deleting values by
setting the function to null or blank.

The Function list
will vary according to the column type. For example, a column of a String type will have an Add
letters
option in the list while a column of an Integer type will have an Add
digits
option in the list. Also, the Function list for a Date column
is date-specific. For further information about the functions used on Date columns, see Date functions in tDuplicateRow.

Max Modification Count: Enter the maximum number of values to be
modified.

Synonym Index Path: Set the path to the index file from which the
synonyms are taken.

This field is available if you select the Synonym replace function, which means that the
value in the duplicate record is replaced by one of its synonyms,
according to the given rate.

You must use the tSynonymOutput component to create a Lucene index and
feed it with synonyms. For further information about how to create a
synonym index and define the reference entries, see tSynonymOutput.

Advanced settings

Seed for random generator

Set a random number if you want to generate the same
sample of duplicates in each execution of the Job.

Repeating the execution with a different value for the
seed will result in a different duplicate sample being generated.

Keep this field empty if you want to generate a different
duplicate sample each time you execute the Job.
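
The effect of the seed can be illustrated with plain java.util.Random; this is a
general illustration of seeded generation, not the component's own code.

    import java.util.Random;

    public class SeedSketch {
        public static void main(String[] args) {
            // The same seed always produces the same sequence, so the same
            // duplicate sample is generated on every execution.
            Random first = new Random(42L);
            Random second = new Random(42L);
            System.out.println(first.nextInt(100) == second.nextInt(100)); // true
            System.out.println(first.nextDouble() == second.nextDouble()); // true

            // A different seed (or no seed at all) produces a different sequence,
            // hence a different duplicate sample.
            Random other = new Random(43L);
            System.out.println(new Random(42L).nextInt(100) == other.nextInt(100)); // very likely false
        }
    }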

tStatCatcher Statistics

Select this check box to collect log data at the
component level.

Usage

Usage rule

This component helps you to generate duplicate data of an
input flow according to certain criteria and use it for testing
purposes.

Generating duplicate data from an input flow

This scenario applies only to Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and Talend Data Fabric.

This scenario describes a basic Job that generates a sample of duplicate data from an
input flow by using probability theories and specific criteria on three columns:
Name, City and DOB (date
of birth).

Below is a capture of sample data from the input flow:

tDuplicateRow_1.png

Setting up the Job

  1. Drop the following components from the Palette onto the design workspace: tFileInputDelimited, tDuplicateRow and tFileOutputDelimited.

    tDuplicateRow_2.png

  2. Connect all the components together using the Row
    > Main link.

Configuring the input data

  1. Double-click tFileInputDelimited to display
    the Basic settings view and define the
    component properties.

    tDuplicateRow_3.png

  2. In the File name/Stream field, browse to the
    file to be used as the main input.

    This file provides some information about customers.
  3. Define the row and field separators, and the header and footer, in the corresponding
    fields, if any.
  4. Click the […] button next to Edit schema to open a dialog box and define the input
    schema. According to the input file structure, the schema is made of ten
    columns.

    tDuplicateRow_4.png

  5. Click the [+] button and define the input
    columns in the dialog box as in the above figure. Click OK to close the dialog box.
  6. If needed, right-click tFileInputDelimited
    and select Data Viewer to display a view of the
    input data.

Configuring the duplicate data

  1. Double-click tDuplicateRow to display the
    Basic settings view and define the
    component properties.

    tDuplicateRow_5.png

  2. Click the Edit schema button to view the
    input and output columns and do any modifications in the output schema, if
    needed.

    tDuplicateRow_6.png

    The output schema of this component contains one read-only column,
    ORIGINAL_MARK. This column identifies, by
    true or false, if the record is an original or a
    duplicate record. There is only one original record per group of
    duplicates.
  3. In the Percentage of duplicated records
    field, enter the percentage of the duplicate rows you want to have in the output
    flow.
  4. In the Distribution of duplicates area,
    select the probability distribution you want to use to generate duplicates,
    Bernoulli distribution in this example. Set the average number of
    duplicate records to have in each group.
  5. Click the plus button below the Modifications
    table and add four lines in the table.

    This table enables you to define what values to change in a given column and
    how to change them in order to generate duplicates:
    1. In the Input Column, select the
      column from the input flow from which you want to generate duplicates,
      Name, City and
      DOB in this example.

      When you add a column twice in the table and select different
      functions, you generate duplicates from the same field with different
      values. For example, in this scenario you modify the duplicate names
      with the Soundex replace function with a probability
      of 50%, and then you modify them again with the Exchange
      characters function, also with a probability of 50%. This
      means the Name field of a duplicate record may
      still be unmodified after the second function, with the following
      probability: (1-0.5)*(1-0.5) = 0.25 (see the sketch after these
      configuration steps).
      If you want to make sure that all the duplicated records are modified,
      you must set the Modification Rate to
      1.
    2. In the Modification Rate column,
      enter the rate of modifications you want to apply to the duplicate
      records generated from the input column.
    3. From the Function list, select the
      function that will decide what modification to do on a value to
      duplicate it.

      In this example, there will be duplicate names with different sounds
      and characters and duplicate city names with different sounds. Date
      values in the date of birth column will be randomly changed here.
    4. In the Max Modification Count column,
      enter a maximum number of the values to be modified in each
      field.
  6. Click the Advanced settings tab and enter a
    random number in the Seed for random generator
    field.

    By setting a number in this field, you will generate the same sample of
    duplicate data in each execution of the Job. Change the value if you want to
    generate a different sample.
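
The probability mentioned in the first sub-step above can be checked with a short
simulation. It mirrors the example of two independent functions on the Name column,
each with a 0.5 modification rate, and is only an illustrative sketch under that
assumption, not the component's generated code.

    import java.util.Random;

    public class CombinedRateSketch {
        public static void main(String[] args) {
            double soundexReplaceRate = 0.5;
            double exchangeCharactersRate = 0.5;
            Random rnd = new Random(1L);
            int untouched = 0, rows = 100000;
            for (int i = 0; i < rows; i++) {
                boolean soundexApplied = rnd.nextDouble() < soundexReplaceRate;
                boolean exchangeApplied = rnd.nextDouble() < exchangeCharactersRate;
                if (!soundexApplied && !exchangeApplied) {
                    untouched++; // Name left unmodified by both functions
                }
            }
            // Expected proportion: (1 - 0.5) * (1 - 0.5) = 0.25
            System.out.printf("unmodified Name fields: %.3f%n", (double) untouched / rows);
        }
    }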

Configuring the output component

  1. Double-click tFileOutputDelimited to display
    its Basic settings view and define the
    component properties.

    tDuplicateRow_7.png

  2. In the File Name field, specify the path to
    the file to which you want to write the duplicate data,
    duplicated_records in this example.
  3. Define the row and field separators in the corresponding fields, if
    any.

Executing the Job

  1. Save your Job and press F6 to execute
    it.

    Duplicate data is generated and written to the output file.
  2. Right-click the output component and select Data
    Viewer
    to display the duplicate data.

    Duplicate records have been marked as false in the
    ORIGINAL_MARK column.
    tDuplicateRow_8.png

    Some data has been modified in the Name,
    City and DOB fields according to
    the criteria you set in the Modifications table
    and duplicate records have been generated based on these modifications.
    For example, if you compare the original name Mrs Morgan Ross
    and the duplicate name Mrs M rganosRiss,
    you will see that the two functions have been used on
    this duplicate record: the letter o has been exchanged with
    a space, and also the sound has been replaced in Ross and
    Riss. However, the soundex code has not been changed
    for the replaced sound.
  3. In the tDuplicateRow basic settings and in
    the Distribution of duplicates area, select a
    different distribution and run the Job.

    Different duplicates are generated from the same input flow according to the
    selected distribution as shown in the below figure.
    tDuplicateRow_9.png

Showing chart results of each of the probability distributions

The best way to see how duplicates are generated according to each of the three
probability distributions is to create a match analysis on each of the results and
compare the charts.

Define the match analysis

  1. From the
    Profiling
    perspective,
    right-click Metadata and create a file
    connection to the duplicated_records output file generated
    by the Job.

    For further information, check the Data Profiling part in the
    Talend Studio User Guide.
  2. Expand the new file connection under Metadata
    and select Analyze matches.
  3. Follow the steps in the wizard to define the analysis metadata and click
    Finish to open the analysis editor.
  4. In the Matching Key table, define a match key
    on the Code column to group records by their
    identification; records that have the same code are grouped together.

    tDuplicateRow_10.png

  5. Click Chart below the table to show the
    duplicates generated according to the Bernoulli
    distribution
    selected previously in the Job.

Run the analysis with different probability distributions

  1. Switch back to the
    Integration
    perspective,
    select Poisson distribution in the basic
    settings of tDuplicateRow and run the
    Job.
  2. In the
    Profiling
    perspective, click Chart below the Matching Key
    table to show the duplicates generated according to the Poisson distribution.
  3. Run the Job with the Geometric distribution,
    then click Chart in the Profiling perspective to show the duplicates generated according
    to the Geometric distribution.

    The table below shows how results of the generated duplicates differ according
    to the probability distribution you select in the tDuplicateRow component.

    Bernoulli distribution (tDuplicateRow_11.png)

    The curve is symmetrical. The groups of duplicates are
    distributed evenly on each side of an average value, 4 in
    this example. This average value is the average number of
    duplicates in a group of duplicates, and it is the number you
    set in the Average group size field in the basic settings of
    the tDuplicateRow component.

    Poisson distribution (tDuplicateRow_12.png)

    The curve is not symmetrical. The groups of duplicates are
    distributed unevenly.

    Geometric distribution (tDuplicateRow_13.png)

    The form of the curve is decided by the percentage you set
    for the duplicated records in the tDuplicateRow basic settings. The higher the
    percentage is, the fewer groups with many records you will
    have.

    In this example the percentage for the duplicate records
    is set to 80%. This is why many groups
    of two-record duplicates are generated
    (148 groups), while there is only
    one group each with 14, 15 and 16
    duplicates.

