
tPubSubInput – Docs for ESB 6.x

tPubSubInput

Connects to the Google Cloud Pub/Sub service and transmits the messages it
consumes to the components that run transformations over them.

tPubSubInput properties for Apache Spark Streaming

These properties are used to configure tPubSubInput running in the Spark Streaming Job framework.

The Spark Streaming tPubSubInput component belongs to the Messaging family.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Define a Google Cloud configuration component

If you are using Dataproc as your Spark cluster, clear this check box.

Otherwise, select this check box to allow the Pub/Sub component to use the Google Cloud
configuration information provided by a
tGoogleCloudConfiguration component.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Note that the schema of this component is read-only. It stores the
message body sent from the message producer.

Output type

Select the type of the data to be sent to the next component.

Typically, using String is recommended, because tPubSubInput can
automatically translate the Pub/Sub byte[] messages into strings to be
processed by the Job. However, if the format of the messages is not known to
tPubSubInput, such as Protobuf, you can select byte[] and then use a custom
code component such as tJavaRow to deserialize the messages into strings so
that the other components of the same Job can process them.
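
For example, a minimal Java sketch of the kind of byte[]-to-String
deserialization a tJavaRow component would perform; the class, method, and
sample payload below are illustrative only, not part of tPubSubInput, and a
Protobuf payload would be parsed with its generated classes instead:

  import java.nio.charset.StandardCharsets;

  public class DecodePayload {

      // Equivalent to what a tJavaRow snippet would do with a byte[]
      // column, e.g. output_row.message = new String(input_row.payload, ...).
      public static String decode(byte[] payload) {
          return new String(payload, StandardCharsets.UTF_8);
      }

      public static void main(String[] args) {
          byte[] payload = "hello from Pub/Sub".getBytes(StandardCharsets.UTF_8);
          System.out.println(decode(payload)); // prints: hello from Pub/Sub
      }
  }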

Topic name

Enter the name of the topic from which you want to consume the messages.

Subscription name

Enter the name of the subscription to use to consume messages from the specified topic.

If the subscription exists, it must be connected to the given topic; if
the subscription does not exist, it is created and connected to the
given topic at runtime.
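
To picture this check-and-create behavior, here is a sketch using the Google
Cloud Pub/Sub Java client; it is not the component's own code, and the
project, topic, and subscription names are placeholders:

  import com.google.api.gax.rpc.NotFoundException;
  import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
  import com.google.pubsub.v1.ProjectSubscriptionName;
  import com.google.pubsub.v1.PushConfig;
  import com.google.pubsub.v1.TopicName;

  public class EnsureSubscription {
      public static void main(String[] args) throws Exception {
          String project = "my-project"; // placeholder
          TopicName topic = TopicName.of(project, "my-topic");
          ProjectSubscriptionName sub =
                  ProjectSubscriptionName.of(project, "my-subscription");

          try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
              try {
                  // If the subscription exists, it must already be
                  // attached to the given topic.
                  admin.getSubscription(sub.toString());
              } catch (NotFoundException e) {
                  // Otherwise, create it and attach it to the topic (the
                  // 10-second ack deadline is arbitrary for this sketch).
                  admin.createSubscription(sub.toString(), topic.toString(),
                          PushConfig.getDefaultInstance(), 10);
              }
          }
      }
  }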

Advanced settings

Storage level

From the Storage level drop-down list, select how the cached RDDs are
stored, for example in memory only or in memory and on disk.

For further information about each storage level, see https://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence.
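
In Spark terms, this setting corresponds to the storage level passed to
persist(). A minimal, self-contained Java sketch; the local master and the
sample data are for illustration only:

  import java.util.Arrays;

  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaRDD;
  import org.apache.spark.api.java.JavaSparkContext;
  import org.apache.spark.storage.StorageLevel;

  public class StorageLevelDemo {
      public static void main(String[] args) {
          SparkConf conf = new SparkConf()
                  .setAppName("storage-level-demo")
                  .setMaster("local[2]"); // local master, illustration only
          try (JavaSparkContext sc = new JavaSparkContext(conf)) {
              JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3));
              // MEMORY_ONLY keeps partitions in memory and recomputes any
              // that are evicted; MEMORY_AND_DISK spills them to disk instead.
              rdd.persist(StorageLevel.MEMORY_AND_DISK());
              System.out.println(rdd.count());
          }
      }
  }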

Usage

Usage rule

This component is used as a start component and requires an output link.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job needs its
dependent jar files at execution time, you must specify the directory in the
file system to which these jar files are transferred so that Spark can
access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the
    Spark configuration tab; when using other
    distributions, use a tHDFSConfiguration component to
    specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

PubSub access permissions

When you use Pub/Sub with a Dataproc cluster, ensure that this cluster
has the appropriate permissions to access the Pub/Sub service.

To do this, you can create the Dataproc cluster with the
Allow API access to all Google Cloud services in
the same project check box selected in the advanced options on
Google Cloud Platform, or via the command line, assigning the scopes
explicitly, as in the following example for a low-resource test cluster.
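
A sketch of such a command; the cluster name, zone, machine types, and disk
sizes below are placeholders, and it is the --scopes flag, granting the
cloud-platform scope, that gives the cluster access to the Pub/Sub API:

  gcloud dataproc clusters create test-cluster \
      --zone europe-west1-d \
      --master-machine-type n1-standard-2 \
      --master-boot-disk-size 50 \
      --num-workers 2 \
      --worker-machine-type n1-standard-2 \
      --worker-boot-disk-size 50 \
      --scopes 'https://www.googleapis.com/auth/cloud-platform'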

