August 16, 2023

tTop – Docs for ESB 6.x

tTop

Sorts data and outputs a given number of rows, starting from the first row of the sorted data.

tTop sorts input records based on their schema columns and sends a given number of
the first rows of the sorted records to the component that follows it.

Depending on the Talend solution you
are using, this component can be used in one, some or all of the following Job
frameworks:

  • MapReduce: see tTop MapReduce properties.

    The component in this framework is available only if you have subscribed to one
    of the Talend solutions with Big Data.

  • Spark Batch: see tTop properties for Apache Spark Batch.

    The component in this framework is available only if you have subscribed to one
    of the Talend solutions with Big Data.

  • Spark Streaming: see tTop properties for Apache Spark Streaming.

    The component in this framework is available only if you have subscribed to
    Talend Real-time Big Data Platform or Talend Data Fabric.

tTop MapReduce properties

These properties are used to configure tTop running in the MapReduce Job framework.

The MapReduce
tTop component belongs to the Processing family.

The component in this framework is available only if you have subscribed to one
of the Talend solutions with Big Data.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

 

Built-In: You create and store the schema locally for this component only.
Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository.
You can reuse it in various projects and Job designs. Related topic: see the
Talend Studio User Guide.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the schema column on which the sort is based.
Note that the order is essential, as it determines the sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.
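The combined effect of Number of line selected and the Criteria table can be sketched in plain Python. This is only an illustrative model of the behavior (the column values and the criteria below are made up); tTop itself generates the equivalent distributed code.

```python
# Hypothetical sketch of tTop: sort the rows by each criterion in the
# order listed (first criterion = highest priority), then keep only the
# first N rows of the sorted data.
rows = [
    ("fr", "Bob"), ("us", "Alice"), ("fr", "Alice"), ("us", "Carol"),
]

n = 3  # "Number of line selected"
# Criteria: country ascending (priority 1), then name ascending (priority 2).
top = sorted(rows, key=lambda r: (r[0], r[1]))[:n]
print(top)  # [('fr', 'Alice'), ('fr', 'Bob'), ('us', 'Alice')]
```

Because the tuple key lists the criteria in priority order, swapping the two criteria rows would change the result, which is why the order of the Criteria table matters.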

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as an intermediate step, and
the other components used along with it must be Map/Reduce components too. They
generate native Map/Reduce code that can be executed directly in Hadoop.


Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is, traditional Talend data integration Jobs,
not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tTop properties for Apache Spark Batch

These properties are used to configure tTop running in the Spark Batch Job framework.

The Spark Batch
tTop component belongs to the Processing family.

The component in this framework is available only if you have subscribed to one
of the Talend solutions with Big Data.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

 

Built-In: You create and store the schema locally for this component only.
Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository.
You can reuse it in various projects and Job designs. Related topic: see the
Talend Studio User Guide.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the schema column on which the sort is based.
Note that the order is essential, as it determines the sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is, traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google
    Storage staging bucket field in the Spark configuration tab; when using
    other distributions, use a tHDFSConfiguration component to specify the
    directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Performing download analysis using a Spark Batch Job

This scenario applies only to a subscription-based Talend solution with Big Data.

In this scenario, you create a Spark Batch Job to analyze how often a given product is downloaded.

use_case-ttop1.png

In this Job, you analyze the download preferences of some specific customers in your
customer base.

The sample data used as the customer base is as
follows:

This data contains these customers’ ID numbers, their first and last names and
country codes, their support levels and registration dates, and their email
addresses and phone numbers.

The sample web-click log of some of these customers reads as follows:

This data contains the ID numbers of the customers who visited different Talend
web pages, and the pages they visited.

By reading this data, you can find that the visits come from customers of different
support levels and for different purposes. The Job to be designed identifies the
sources of these visits against the sample customer base and analyzes which product
is most downloaded by the Silver-level customers.

Note that the sample data is created for demonstration purposes only.

To replicate this scenario, proceed as follows:

Linking the components

  1. In the
    Integration
    perspective of the Studio, create an empty Spark Batch Job from the Job
    Designs
    node in the Repository tree view.

    For further information about how to create a Spark Batch Job, see the
    Talend Open Studio for Big Data Getting Started Guide.
  2. In the workspace, enter the name of the component to be used and select this
    component from the list that appears. In this scenario, the components are
    tHDFSConfiguration, two tFixedFlowInput components (label one customer_base
    and the other web_data), tSqlRow, tCacheOut, tCacheIn, tMap,
    tExtractDelimitedFields, tAggregateRow, tTop and tLogRow.

    The tFixedFlowInput components are used to load the sample data into the
    data flow. In real-world practice, you can use other components such as
    tMysqlInput, alone or together with a tMap, in place of tFixedFlowInput to
    design a more sophisticated process to prepare your data.
  3. Connect customer_base (tFixedFlowInput), tSqlRow and
    tCacheOut using the Row >
    Main
    link. In this Subjob, the records about the Silver-level
    customers are selected and stored in cache.
  4. Connect web_data (tFixedFlowInput) to tMap using
    the Row > Main link. This is the main input
    flow to the tMap component.
  5. Do the same to connect tCacheIn to tMap. This is the lookup flow to tMap.
  6. Connect tMap to tExtractDelimitedFields using the Row >
    Main
    link and name this connection in the dialog box that is
    displayed. For example, name it output.
  7. Connect tExtractDelimitedFields, tAggregateRow, tTop and
    tLogRow using the Row >
    Main
    link.
  8. Connect customer_base to web_data using the Trigger >
    OnSubjobOk
    link.
  9. Leave the tHDFSConfiguration component alone
    without any connection.

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.
  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment on the fly in
    order to run the Job. Each processor of the local machine is used as a
    Spark worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate
    configuration components such as tS3Configuration or tHDFSConfiguration
    that provide connection information to a remote file system, if you have
    placed these components in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    If you cannot find the distribution corresponding to yours in this
    drop-down list, the distribution you want to connect to is not officially
    supported by Talend. In this situation, select Custom, then select the
    Spark version of the cluster to be connected, and click the [+] button to
    display the dialog box in which you can alternatively:

    1. Select Import from existing version to import an officially supported
      distribution as a base and then add the other required jar files that
      the base distribution does not provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared
      ready-to-use configuration zip files, which you can download from this
      Hadoop configuration list and use directly in your connection. However,
      because of the ongoing evolution of the different Hadoop-related
      projects, you might not be able to find the configuration zip
      corresponding to your distribution in this list; in that case, it is
      recommended to use the Import from existing version option to take an
      existing distribution as a base and add the jars required by your
      distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the
      configuration of whichever version you choose will be easy. As such, you
      should only attempt to set up such a connection if you have sufficient
      Hadoop and Spark experience to handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Connecting to a custom Hadoop distribution.

Configuring the connection to the file system to be used by Spark

  1. Double-click tHDFSConfiguration to open its
    Component view. Note that tHDFSConfiguration is used because the Spark Yarn client mode is used to run Spark Jobs in this scenario.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. In the Version area, select the Hadoop distribution
    you need to connect to and its version.
  3. In the NameNode URI field, enter the location of the
    machine hosting the NameNode service of the cluster. If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; if this WebHDFS is secured
    with SSL, the scheme should be swebhdfs and you need to use
    a tLibraryLoad in the Job to load the library required by
    the secured WebHDFS.
  4. In the Username field, enter the authentication information used to
    connect to the HDFS system. Note that the user name must be the same as
    the one you entered in the Spark configuration tab.

Loading the customer base

  1. Double-click the tFixedFlowInput component labeled
    customer_base to open its Component view.

    use_case-ttop3.png

  2. Click the […] button next to Edit
    schema
    to open the schema editor.
  3. Click the [+] button to add the schema columns as shown in
    this image.

    use_case-ttop4.png

  4. Click OK to validate these changes and accept the
    propagation prompted by the pop-up dialog box.
  5. In the Mode area, select the Use Inline
    Content
    radio button and paste the above-mentioned sample
    customer base data into the Content field that
    is displayed.
  6. In the Field separator field, enter a vertical bar (|).
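What tFixedFlowInput does with the inline content and the "|" field separator can be sketched in plain Python. The two sample lines and the column names below are made up, since the document's sample customer data is not shown:

```python
import csv
import io

# Sketch of reading inline content with "|" as the field separator:
# each line becomes one record whose fields map onto the schema columns.
# The content and column names here are illustrative assumptions.
content = "1|Alice|fr|Silver\n2|Bob|us|Gold\n"
columns = ["id", "name", "country", "support"]

reader = csv.reader(io.StringIO(content), delimiter="|")
records = [dict(zip(columns, row)) for row in reader]
print(records)
```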

Selecting the Silver-level customer data

  1. Double-click tSqlRow to open its Component view.

    use_case-ttop5.png

  2. Click the […] button next to Edit schema to open the schema editor.
  3. In the schema on the output side (the right side), change the column name
    Support to Silver_Support.

    use_case-ttop6.png

  4. From the SQL context drop-down list, select
    SQL Spark Context.
  5. In the SQL Query field, enter the query
    statement to be used to select the records about the Silver-level
    customers.

    Note that the input link row1 is taken as the table on which this query is
    performed.
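The document does not show the query itself. A hypothetical equivalent, using an in-memory SQLite table as a stand-in for the row1 input link of the Spark SQL context (the column names and data are assumptions), might look like:

```python
import sqlite3

# Hypothetical stand-in for the tSqlRow step: the input link row1 is
# queried as a table; SQLite replaces the Spark SQL context here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE row1 (id INTEGER, name TEXT, support TEXT)")
conn.executemany(
    "INSERT INTO row1 VALUES (?, ?, ?)",
    [(1, "Alice", "Silver"), (2, "Bob", "Gold"), (3, "Carol", "Silver")],
)

# Select only the Silver-level customers, renaming the Support column
# to Silver_Support as done in the output schema.
silver = conn.execute(
    "SELECT id, name, support AS Silver_Support FROM row1 "
    "WHERE support = 'Silver'"
).fetchall()
print(silver)
```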

Accessing the selected data

  1. Double-click tCacheOut to open its
    Component view.

    use_case-ttop7.png

    This component stores the selected data into the cache.
  2. Click the […] button next to Edit schema to open the schema editor and
    verify that the schema is identical to the input one. If not, click
    Sync columns.
  3. On the output side of the schema editor, click the

    export.png

    button to export the schema to the local file system and
    click OK to close the editor.

  4. From the Storage level list, select Memory only.

    For further information about each of the storage levels, see https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.

  5. Double-click tCacheIn to open its
    Component view.

    use_case-ttop10.png

  6. Click the […] button next to Edit
    schema
    to open the schema editor and click the

    import.png

    button to import the schema you exported in the previous
    step. Then click OK to close the editor.

  7. From the Output cache list, select the
    tCacheOut component from which you need to read
    the cached data. At runtime, this data is loaded into the lookup flow of the
    Subjob that is used to process the web-click log.

Loading the web-click log

  1. Double-click the tFixedFlowInput component labeled
    web_data to open its Component view.

    use_case-ttop8.png

  2. Click the […] button next to Edit
    schema
    to open the schema editor.
  3. Click the [+] button to add the schema columns as shown in
    this image.

    use_case-ttop9.png

  4. Click OK to validate these changes and accept the
    propagation prompted by the pop-up dialog box.
  5. In the Mode area, select the Use Inline
    Content
    radio button and paste the above-mentioned sample data
    about the web-click log into the Content field
    that is displayed.
  6. In the Field separator field, enter a vertical bar (|).

Joining the data

  1. Double-click tMap to open its
    Map editor.

    use_case-ttop11.png

    On the input side (the left side), the main flow (labeled row3 in this example) and the lookup flow (labeled
    row4 in this example) are presented as two
    tables.
    On the output side (the right side), an empty table is present.
  2. Drop all of the columns of the schema of the lookup flow into the output
    flow table on the right side, except the User_id column, and drop the
    user_id column and the url column from the schema of the main flow into
    the same output flow table.
  3. On the left side, drop the user_id column
    from the main flow table into the Expr.key
    column in the User_id row in the lookup flow
    table. This makes the ID numbers of the customers the key for the join of the
    two input flows.
  4. In the lookup flow table, click the wrench icon to display the panel for the
    lookup settings and select Inner Join for the
    Join model property.
  5. Click Apply to validate these changes and
    click OK to close this editor.
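The inner join configured in the steps above can be sketched in plain Python. The records below are made up; only the column names (user_id, url, User_id) mirror the scenario:

```python
# Sketch of the tMap inner join: the lookup flow (cached customer
# records) is indexed by User_id, and each main-flow click record is
# matched on its user_id. Data values are illustrative assumptions.
lookup = [
    {"User_id": 1, "Name": "Alice", "Silver_Support": "Silver"},
]
main = [
    {"user_id": 1, "url": "/download/esb"},
    {"user_id": 2, "url": "/docs/faq"},  # no lookup match: dropped
]

# Index the lookup flow by the join key, as dropping user_id onto the
# Expr.key of the User_id row does in the Map editor.
index = {r["User_id"]: r for r in lookup}

# Inner Join model: unmatched main rows are discarded; matched rows get
# every lookup column except the key copied into the output.
output = []
for m in main:
    match = index.get(m["user_id"])
    if match is None:
        continue
    row = {"user_id": m["user_id"], "url": m["url"]}
    row.update({k: v for k, v in match.items() if k != "User_id"})
    output.append(row)
print(output)
```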

Extracting the fields about the categories of the visited pages

  1. Double-click tExtractDelimitedFields to open its Component view.

    use_case-ttop12.png

  2. Click the […] button next to Edit
    schema
    to open the schema editor.

    use_case-ttop13.png

  3. On the output side, click the [+] button four
    times to add four rows to the output schema table and rename these new schema
    columns to root, page, specialization and
    product, respectively. These columns are used
    to carry the fields extracted from the url
    column in the input flow.
  4. Click OK to validate these changes.
  5. From the Prev.Comp.Column.List list, select
    the column you need to extract data from. In this example, it is url from the input schema.
  6. In the Field separator field, enter a slash
    (/).
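The extraction configured above amounts to splitting the url column on "/" into the four new columns. A minimal sketch (the sample URL is a hypothetical value, not taken from the document's elided data):

```python
# Sketch of tExtractDelimitedFields with "/" as the field separator:
# the url column is split into the root, page, specialization and
# product columns. The URL below is an illustrative assumption.
url = "www.talend.com/download/specialized/esb"
root, page, specialization, product = url.split("/")
print(product)
```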

Counting the download of each product

  1. Double-click tAggregateRow to open its
    Component view.

    use_case-ttop14.png

  2. Click the […] button next to Edit
    schema
    to open the schema editor.

    use_case-ttop15.png

  3. On the output side, click the [+] button two times to
    add two rows to the output schema table and rename these new schema columns to
    product and nb_download, respectively.
  4. Click OK to validate these changes and accept
    the propagation prompted by the pop-up dialog box.
  5. In the Group by table, add one row by
    clicking the [+] button and select product for both the Output
    column
    column and the Input column
    position
    column. This passes data from the product column of the input schema to the product column of the output schema.
  6. In the Operations table, add one row by
    clicking the [+] button.
  7. In the Output column column, select nb_download, in the Function column, select count and
    in the Input column position column, select
    product.
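The aggregation configured above groups the rows by product and counts the rows in each group. A minimal sketch with made-up product values:

```python
from collections import Counter

# Sketch of the tAggregateRow step: group by the product column and
# count the rows per group; the count becomes the nb_download column.
# The product values below are illustrative assumptions.
products = ["tos", "esb", "tos", "mdm", "esb", "tos"]

counts = Counter(products)
aggregated = [{"product": p, "nb_download": n} for p, n in counts.items()]
print(aggregated)
```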

Selecting the most downloaded product

  1. Double-click tTop to open its
    Component view.

    use_case-ttop16.png

  2. In the Number of line selected field, enter the number of rows to be
    output to the next component, counting from the first row of the data
    sorted by tTop.
  3. In the Criteria table, add one row by clicking the [+] button.
  4. In the Schema column column, select nb_download, the column by which the
    data is sorted; in the sort num or alpha column, select num, which means
    the data to be sorted are numbers; and in the Order asc or desc column,
    select desc to arrange the data in descending order.
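This tTop configuration can be sketched in plain Python: sort numerically on nb_download in descending order, then keep only the first row. The aggregated counts below are illustrative (they echo the two-of-five result described later, but are not the document's actual data):

```python
# Sketch of this scenario's tTop settings: Schema column = nb_download,
# sort num, Order desc, Number of line selected = 1.
# The aggregated rows are illustrative assumptions.
aggregated = [
    {"product": "esb", "nb_download": 1},
    {"product": "tos", "nb_download": 2},
    {"product": "mdm", "nb_download": 1},
    {"product": "di", "nb_download": 1},
]

n = 1  # Number of line selected
top = sorted(aggregated, key=lambda r: r["nb_download"], reverse=True)[:n]
print(top[0]["product"])
```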

Executing the Job

Then you can run this Job.

The tLogRow component is used to present the execution
result of the
Job.

  1. Double-click the tLogRow component to open the Component view.
  2. Select the Define a storage configuration component check box
    and select tHDFSConfiguration.
  3. Press F6 to run this Job.

Once done, in the console of the Run view, you can check the execution result.

use_case-ttop17.png

You can read that the most downloaded product is Talend Open Studio. It accounts
for two of the five downloads in total.

Note that you can manage the level of execution information output to this
console by selecting the log4jLevel check box in the Advanced settings tab and
then selecting the level of information you want to display.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

tTop properties for Apache Spark Streaming

These properties are used to configure tTop running in the Spark Streaming Job framework.

The Spark Streaming
tTop component belongs to the Processing family.

The component in this framework is available only if you have subscribed to
Talend Real-time Big Data Platform or Talend Data Fabric.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

 

Built-In: You create and store the schema locally for this component only.
Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository.
You can reuse it in various projects and Job designs. Related topic: see the
Talend Studio User Guide.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the schema column on which the sort is based.
Note that the order is essential, as it determines the sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is, traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google
    Storage staging bucket field in the Spark configuration tab; when using
    other distributions, use a tHDFSConfiguration component to specify the
    directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

For a related scenario, see Analyzing a Twitter flow in near real-time.


Document retrieved from Talend https://help.talend.com