
tTop – Docs for ESB 7.x

tTop

Sorts data and outputs a given number of rows, counting from the first row of the sorted data.

tTop sorts input records based on the schema columns you define and sends a given
number of rows, counting from the first row of the sorted records, to the
component that follows.

Depending on the Talend product you are using, this component can be used in
one, some or all of the following Job frameworks: MapReduce (deprecated), Spark
Batch and Spark Streaming.

tTop MapReduce properties (deprecated)

These properties are used to configure tTop running in the MapReduce Job framework.

The MapReduce
tTop component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the column of your schema on which the sort
is based. Note that the order of the criteria is essential, as it determines the
sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.

Usage

Usage rule

In a
Talend
Map/Reduce Job, this component is used as an intermediate
step and other components used along with it must be Map/Reduce components, too. They
generate native Map/Reduce code that can be executed directly in Hadoop.


Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs, and not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tTop properties for Apache Spark Batch

These properties are used to configure tTop running in the Spark Batch Job framework.

The Spark Batch
tTop component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the column of your schema on which the sort
is based. Note that the order of the criteria is essential, as it determines the
sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.
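
For illustration only, the following PySpark sketch shows the behaviour these settings
describe; it is not the code the Studio generates. It assumes a hypothetical DataFrame
with a name column, a sort criterion set to alpha and asc, and a Number of line
selected of 10.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("ttop_sketch").getOrCreate()
    df = spark.createDataFrame([("Charlie",), ("Alice",), ("Bob",)], ["name"])

    # Sort alphabetically in ascending order (alpha / asc), then keep the
    # first 10 rows (Number of line selected = 10).
    top_rows = df.orderBy(col("name").asc()).limit(10)
    top_rows.show()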

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Performing download analysis using a Spark Batch Job

This scenario applies only to subscription-based Talend products with Big Data.

In this scenario, you create a Spark Batch Job to analyze how often a given product is downloaded.


In this Job, you analyze the download preference of some specific customers known to your
customer base.

The sample data used as the customer base contains the ID numbers of the customers
known to this customer base, their first and last names, country codes, support
levels, registration dates, email addresses and phone numbers.

The sample web-click log of some of these customers contains the ID numbers of the
customers who visited different Talend web pages and the pages they visited.

From this data, you can see that the visits come from customers of different
support levels and for different purposes. The Job to be designed identifies the
sources of these visits against the sample customer base and analyzes which product
is downloaded most by the Silver-level customers.

Note that the sample data is created for demonstration purposes only.

To replicate this scenario, proceed as follows:

Linking the components

  1. In the
    Integration
    perspective of the Studio, create an empty Spark Batch Job from the Job
    Designs
    node in the Repository tree view.

    For further information about how to create a Spark Batch Job, see
    Talend Open Studio for Big Data Getting Started Guide
    .
  2. In the workspace, enter the name of the component to be used and select this
    component from the list that appears. In this scenario, the components are
    tHDFSConfiguration, two tFixedFlowInput components (label one customer_base
    and the other web_data), tSqlRow, tCacheOut, tCacheIn, tMap,
    tExtractDelimitedFields, tAggregateRow, tTop and tLogRow.

    The tFixedFlowInput components are used to load
    the sample data into the data flow. In the real-world practice, you can use
    other components such as tMysqlInput, alone or
    even with a tMap, in the place of tFixedFlowInput to design a sophisticated process to
    prepare your data to be processed.
  3. Connect customer_base (tFixedFlowInput), tSqlRow and
    tCacheOut using the Row >
    Main
    link. In this subJob, the records about the Silver-level
    customers are selected and stored in cache.
  4. Connect web_data (tFixedFlowInput) to tMap using
    the Row > Main link. This is the main input
    flow to the tMap component.
  5. Do the same to connect tCacheIn to tMap. This is the lookup flow to tMap.
  6. Connect tMap to tExtractDelimitedFields using the Row >
    Main
    link and name this connection in the dialog box that is
    displayed. For example, name it output.
  7. Connect tExtractDelimitedFields, tAggregateRow, tTop and
    tLogRow using the Row >
    Main
    link.
  8. Connect customer_base to web_data using the Trigger >
    OnSubjobOk
    link.
  9. Leave the tHDFSConfiguration component alone
    without any connection.

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly define in the
Spark Configuration tab or in the components used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment on the fly in order
    to run the Job in it. Each processor of the local machine is used as a Spark
    worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate
    configuration components such as tS3Configuration or tHDFSConfiguration that
    provide connection information to a remote file system, if you have placed
    these components in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HDInsight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is recommended
    to set your cluster to use Kryo to handle the Avro types. This not only helps
    avoid a known Avro issue but also brings inherent performance gains. The Spark
    property to set in your cluster is typically spark.serializer, pointing to
    org.apache.spark.serializer.KryoSerializer.

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is not
    officially supported by Talend. In this situation, you can select
    Custom, then select the Spark version of the cluster to be connected
    and click the [+] button to display the dialog box in which you can
    alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared
      ready-for-use configuration zip files which you can download from this
      Hadoop configuration list and use directly in your connection. However,
      because of the ongoing evolution of the different Hadoop-related projects,
      you might not be able to find the configuration zip corresponding to your
      distribution in this list; in that case, it is recommended to use the
      Import from existing version option to take an existing distribution as a
      base and add the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect to
      custom versions from the Studio but cannot guarantee that the configuration
      of whichever version you choose will be easy. As such, you should only
      attempt to set up such a connection if you have sufficient Hadoop and Spark
      experience to handle any issues on your own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these two
distributions, this connection is configured in the Spark
configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster
    node in Repository, select
    Repository from the Property
    type
    drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the cluster.
    If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter the authentication information used to connect
    to the HDFS system to be used. Note that the user name must be the same as
    the one you entered in the Spark configuration tab.

Loading the customer base

  1. Double-click the tFixedFlowInput component labeled
    customer_base to open its Component view.


  2. Click the […] button next to Edit
    schema
    to open the schema editor.
  3. Click the [+] button to add the schema columns as shown in
    this image.

    [Image: schema columns of the customer_base input]

  4. Click OK to validate these changes and accept the
    propagation prompted by the pop-up dialog box.
  5. In the Mode area, select the Use Inline
    Content
    radio button and paste the above-mentioned sample
    customer base data into the Content field that
    is displayed.
  6. In the Field separator field, enter a vertical bar (|).
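
For reference, the inline load configured in the steps above corresponds roughly to
the following PySpark sketch. It is not the code generated by the Studio; the record
and the column names are hypothetical, chosen to match the fields described for the
customer base.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("download_analysis_sketch").getOrCreate()

    # Hypothetical pipe-delimited line standing in for the sample customer base,
    # which is not reproduced in this document.
    inline_content = [
        "101|Sage|Wieser|US|Silver|2015-02-18|sage@example.com|555-0168",
    ]
    columns = ["User_id", "first_name", "last_name", "country_code",
               "Support", "registration_date", "email", "phone"]

    # Split each line on the field separator "|" and build a DataFrame.
    customer_base = spark.createDataFrame(
        [line.split("|") for line in inline_content], columns)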

Selecting the Silver-level customer data

  1. Double-click tSqlRow to open its Component view.


  2. Click the […] button next to Edit schema to open the schema editor.
  3. In the schema on the output side (the right side), change the column name
    Support to Silver_Support.


  4. From the SQL context drop-down list, select
    SQL Spark Context.
  5. In the SQL Query field, enter the query
    statement to be used to select the records about the Silver-level
    customers.

    Note that the input link row1 is used as the table against which this query
    is run (see the sketch after this list).
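
A rough Spark SQL sketch of this selection, reusing spark and customer_base from the
previous sketch; the actual query is not reproduced in this document and the column
names are hypothetical:

    # tSqlRow exposes the incoming link row1 as a table; with plain Spark you
    # would register the DataFrame under that name first.
    customer_base.createOrReplaceTempView("row1")

    # Keep only the Silver-level customers and rename Support to Silver_Support.
    silver_customers = spark.sql(
        "SELECT User_id, first_name, last_name, country_code, "
        "Support AS Silver_Support, registration_date, email, phone "
        "FROM row1 WHERE Support = 'Silver'"
    )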

Accessing the selected data

  1. Double-click tCacheOut to open its
    Component view.


    This component stores the selected data into the cache.
  2. Click the […] button next to Edit
    schema
    to open the schema editor to verify the schema is identical to
    the input one. If not so, click Sync
    columns
    .
  3. On the output side of the schema editor, click the export icon to export the
    schema to the local file system and click OK to close the editor.

  4. From the Storage level list, select Memory only.

    For further information about each of the storage levels, see https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.

  5. Double-click tCacheIn to open its
    Component view.


  6. Click the […] button next to Edit schema to open the schema editor and click
    the import icon to import the schema you exported in the previous step. Then
    click OK to close the editor.

  7. From the Output cache list, select the
    tCacheOut component from which you need to read
    the cached data. At runtime, this data is loaded into the lookup flow of the
    subJob that is used to process the web-click log.
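
In plain Spark terms, the tCacheOut and tCacheIn pair corresponds roughly to
persisting the DataFrame in memory and reusing it later, as in this sketch that
continues the previous ones:

    from pyspark import StorageLevel

    # Memory only: keep the Silver-level records in executor memory so the
    # lookup flow of the next subJob can reuse them without recomputation.
    silver_customers = silver_customers.persist(StorageLevel.MEMORY_ONLY)
    silver_customers.count()  # trigger an action so the cache is populated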

Loading the web-click log

  1. Double-click the tFixedFlowInput component labeled
    web_data to open its Component view.


  2. Click the […] button next to Edit
    schema
    to open the schema editor.
  3. Click the [+] button to add the schema columns as shown in
    this image.

    [Image: schema columns of the web_data input]

  4. Click OK to validate these changes and accept the
    propagation prompted by the pop-up dialog box.
  5. In the Mode area, select the Use Inline
    Content
    radio button and paste the above-mentioned sample data
    about the web-click log into the Content field
    that is displayed.
  6. In the Field separator field, enter a vertical bar (|).

Joining the data

  1. Double-click tMap to open its
    Map editor.


    On the input side (the left side), the main flow (labeled row3 in this example) and the lookup flow (labeled
    row4 in this example) are presented as two
    tables.
    On the output side (the right side), an empty table is present.
  2. Drop all of the columns of the schema of the lookup flow into the output flow
    table on the right side, except the User_id column, and drop the user_id
    column and the url column from the schema of the main flow into the same
    output flow table.
  3. On the left side, drop the user_id column
    from the main flow table into the Expr.key
    column in the User_id row in the lookup flow
    table. This makes the ID numbers of the customers the key for the join of the
    two input flows.
  4. In the lookup flow table, click the wrench icon to display the panel for the
    lookup settings and select Inner Join for the
    Join model property.
  5. Click Apply to validate these changes and
    click OK to close this editor.
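
The inner join configured in these steps can be pictured with the following PySpark
sketch, where web_clicks is a hypothetical DataFrame built from the web-click log
(with user_id and url columns) and silver_customers comes from the earlier sketches:

    # Inner join: keep only the clicks whose user_id matches a Silver-level
    # customer, then drop the duplicated key column from the lookup side.
    joined = web_clicks.join(
        silver_customers,
        web_clicks["user_id"] == silver_customers["User_id"],
        "inner",
    ).drop(silver_customers["User_id"])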

Extracting the fields about the categories of the visited pages

  1. Double-click tExtractDelimitedFields to open its Component view.


  2. Click the […] button next to Edit
    schema
    to open the schema editor.


  3. On the output side, click the [+] button four
    times to add four rows to the output schema table and rename these new schema
    columns to root, page, specialization and
    product, respectively. These columns are used
    to carry the fields extracted from the url
    column in the input flow.
  4. Click OK to validate these changes.
  5. From the Prev.Comp.Column.List list, select
    the column you need to extract data from. In this example, it is url from the input schema.
  6. In the Field separator field, enter a slash
    (/).
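
The extraction described above is roughly equivalent to splitting the url column on
the slash, as in this sketch; the positions of root, page, specialization and
product in the split result depend on the exact shape of the sample urls, which are
not reproduced here:

    from pyspark.sql.functions import split, col

    # Split the url column on "/" and expose the pieces as four new columns.
    parts = split(col("url"), "/")
    extracted = (joined
                 .withColumn("root", parts.getItem(0))
                 .withColumn("page", parts.getItem(1))
                 .withColumn("specialization", parts.getItem(2))
                 .withColumn("product", parts.getItem(3)))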

Counting the download of each product

  1. Double-click tAggregateRow to open its
    Component view.


  2. Click the […] button next to Edit
    schema
    to open the schema editor.


  3. On the output side, click the [+] button two times to
    add two rows to the output schema table and rename these new schema columns to
    product and nb_download, respectively.
  4. Click OK to validate these changes and accept
    the propagation prompted by the pop-up dialog box.
  5. In the Group by table, add one row by
    clicking the [+] button and select product for both the Output
    column
    column and the Input column
    position
    column. This passes data from the product column of the input schema to the product column of the output schema.
  6. In the Operations table, add one row by
    clicking the [+] button.
  7. In the Output column column, select nb_download, in the Function column, select count and
    in the Input column position column, select
    product.
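
In DataFrame terms, this aggregation is a group-by with a count, as in this sketch
continuing the previous ones:

    from pyspark.sql.functions import count

    # One output row per product, with the number of download records counted
    # into nb_download.
    counts = extracted.groupBy("product").agg(
        count("product").alias("nb_download"))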

Selecting the most downloaded product

  1. Double-click tTop to open its
    Component view.


  2. In the Number of line selected field, enter
    the number of rows to be output to the next component, counting down from the
    first row of the data sorted by tTop.
  3. In the Criteria table, add one row by
    clicking the [+] button.
  4. In the Schema column column, select nb_download, the column for which the data is sorted,
    in the sort num or alpha column, select
    num, which means the data to be sorted are
    numbers, and in the Order asc or desc column,
    select desc to arrange the data in descending
    order.
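
With these settings, tTop behaves roughly like the following sketch, assuming
Number of line selected is set to 1 since the scenario looks for the single most
downloaded product:

    from pyspark.sql.functions import col

    # Sort on nb_download (num / desc) and keep only the first row.
    top_product = counts.orderBy(col("nb_download").desc()).limit(1)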

Executing the Job

Then you can run this Job.

The tLogRow component is used to present the execution
result of the
Job.

  1. Double-click the tLogRow component to open the Component view.
  2. Select the Define a storage configuration component check box
    and select tHDFSConfiguration.
  3. Press F6 to run this Job.

Once done, in the console of the Run view, you can check the execution result.

[Image: execution result shown in the console of the Run view]

You can read that the most downloaded product is Talend Open Studio, which
accounts for two of the five downloads in total.

Note that you can manage the level of the execution information to be output in this
console by selecting the log4jLevel check box in the
Advanced settings tab and then selecting the level of
the information you want to display.

For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html.

tTop properties for Apache Spark Streaming

These properties are used to configure tTop running in the Spark Streaming Job framework.

The Spark Streaming
tTop component belongs to the Processing family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Click Sync
columns
to retrieve the schema from the previous component connected in the
Job.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Number of line selected

Enter the number of rows to be output. The component selects this number of rows,
counting from the first row of the sorted data.

Criteria

Click [+] to add as many lines as required for the sort
to be completed.

In the Schema column column, select the column of your schema on which the sort
is based. Note that the order of the criteria is essential, as it determines the
sorting priority.

In the other columns, select how you need the data to be sorted. For example, if you need
to sort the data in ascending alphabetical order (from A to Z), select alpha and asc in the corresponding
columns.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a related scenario, see Analyzing a Twitter flow in near real-time.

