tPartition

Allows you to visually define how an input dataset is partitioned.

The tPartition component splits the input dataset
into a given number of partitions.

tPartition properties for Apache Spark Batch

These properties are used to configure tPartition running in the Spark Batch Job framework.

The Spark Batch tPartition component belongs to the Processing
family.

The component in this framework is available only if you have
subscribed to one of the Talend solutions with Big Data.

Basic settings

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Click Sync columns to retrieve the schema from
the previous component connected in the Job.

Number of partitions

Enter the number of partitions into which you want to split the input dataset.
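
For reference, this setting plays the same role as a plain Spark
repartition call, which redistributes a dataset across the requested
number of partitions through a full shuffle. A minimal sketch in Java
(the application name and input path are hypothetical placeholders,
not code generated by the Studio):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class RepartitionSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("RepartitionSketch").getOrCreate();
            // Hypothetical input; any DataFrame works the same way.
            Dataset<Row> input = spark.read().parquet("hdfs:///tmp/input");
            // Redistribute the dataset across 4 partitions (full shuffle).
            Dataset<Row> repartitioned = input.repartition(4);
            System.out.println(repartitioned.rdd().getNumPartitions());
            spark.stop();
        }
    }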

Partition key

Complete this table to define the key to be used for the partitioning.

In the Partition key table, the schema columns are automatically
added to the Column column. In the Partition column column, select
the check box(es) corresponding to the column(s) you want to use as
the key of the partitioning.

This partitioning proceeds in hash mode, that is to say, records
with the same key value are dispatched into the same partition.
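
Continuing the sketch above, hash partitioning on a key column
corresponds in plain Spark to repartitioning on column expressions.
The column name customerId is a hypothetical example:

    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // input is the same hypothetical DataFrame as in the previous
    // sketch. Rows sharing the same customerId value land in the
    // same one of the 4 partitions.
    Dataset<Row> byKey = input.repartition(4, col("customerId"));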

Use custom partitioner

Select this check box to use a Spark partitioner that you import
from outside the Studio, for example, a partitioner you have
developed yourself. In this situation, you need to provide the
following information (a minimal sketch of such a partitioner
follows this list):

  • Custom partitioner FQCN:
    enter the fully qualified class name of the partitioner to
    be imported.

  • Custom partitioner JAR:
    click the [+] button as many times as needed to add new rows. In
    each row, click the […] button to import the jar file containing
    this partitioner class and its dependent jar files.
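
A custom partitioner is a class extending
org.apache.spark.Partitioner and implementing its two abstract
methods. The following minimal sketch uses hypothetical class and
package names; the FQCN to enter in the Studio would then be
com.example.ModuloPartitioner:

    package com.example;

    import org.apache.spark.Partitioner;

    // A minimal custom partitioner: routes each key to a partition
    // according to its hash code.
    public class ModuloPartitioner extends Partitioner {

        private final int numPartitions;

        public ModuloPartitioner(int numPartitions) {
            this.numPartitions = numPartitions;
        }

        @Override
        public int numPartitions() {
            return numPartitions;
        }

        @Override
        public int getPartition(Object key) {
            if (key == null) {
                return 0;
            }
            // Mask the sign bit so the result is in [0, numPartitions).
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }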

Sort within partitions

Select this check box to sort the records within each
partition.

This feature is useful when a partition contains several distinct
key values.

  • Natural key order: keys
    are sorted in their natural order, for example, in
    alphabetical order.

  • Custom comparator: this
    allows you to use a custom comparator class to sort the keys.

    You need to enter the fully qualified class name of the
    comparator to be imported in the Custom comparator FQCN field
    and add the jar files to be loaded in the Custom comparator JAR
    table. A minimal sketch of such a comparator follows this list.
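
The comparator is a class implementing java.util.Comparator for the
key type. The sketch below uses hypothetical class and package names
and hypothetical sort criteria; note that it also implements
Serializable so that Spark can ship it to the executors. In plain
Spark, sorting within partitions corresponds to
JavaPairRDD.repartitionAndSortWithinPartitions, which accepts such a
comparator:

    package com.example;

    import java.io.Serializable;
    import java.util.Comparator;

    // Orders string keys by length first, then alphabetically.
    public class LengthThenAlphaComparator
            implements Comparator<String>, Serializable {

        @Override
        public int compare(String a, String b) {
            int byLength = Integer.compare(a.length(), b.length());
            return byLength != 0 ? byLength : a.compareTo(b);
        }
    }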

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated,
a scenario presents only Standard Jobs, that is to say traditional
Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration
    tab; when using other distributions, use a tHDFSConfiguration
    component to specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Batch version of this component
yet.

