August 15, 2023

tGoogleCloudConfiguration – Docs for ESB 6.x

tGoogleCloudConfiguration

Provides the connection configuration to Google Cloud Platform for a Spark
Job.

tGoogleCloudConfiguration properties for Apache Spark Streaming

These properties are used to configure tGoogleCloudConfiguration
running in the Spark Streaming Job framework.

The Spark Streaming tGoogleCloudConfiguration component belongs to the Storage family.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Project identifier

Enter the ID of your Google Cloud Platform project.

If you are not certain about your project ID, check it in the Manage
Resources page of your Google Cloud Platform services.

Use Google Cloud Platform credentials file

Leave this check box clear when you launch your Job from a machine on which the
Google Cloud SDK has been installed and authorized to use your user account
credentials to access Google Cloud Platform. In this situation, this machine is
often your local machine.

When you launch your Job from a remote machine, such as a Jobserver, select
this check box, then select the format of your credentials file and, in the
Path to Google Credentials file field that is displayed, enter the directory
in which this credentials file is stored on the Jobserver machine.

If you select the P12 format, in the Service account Id field that is
displayed, enter the ID of the service account for which this P12 credentials
file has been created.

For further information about this Google Credentials file, see the
administrator of your Google Cloud Platform project or visit the Google Cloud
Platform Auth Guide.
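Whichever format you choose, the credentials file identifies both the project and the service account that the component's fields ask for. As a minimal sketch, assuming the JSON key format (the helper name and error handling below are illustrative, not part of the component), those identifiers can be read with the Python standard library:

```python
import json

def read_service_account_key(path):
    """Return (project_id, client_email) from a Google service account
    JSON key file.

    A JSON key downloaded from the Google Cloud console contains, among
    other fields, "type", "project_id", "private_key_id", "private_key"
    and "client_email".
    """
    with open(path) as f:
        key = json.load(f)
    # Guard against being handed some other kind of JSON file.
    if key.get("type") != "service_account":
        raise ValueError("not a service account key file: %s" % path)
    return key["project_id"], key["client_email"]
```

The project_id value is what the Project identifier field expects, and client_email is the service account identity the key was created for.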

Usage

Usage rule

This component is used only when your Job needs to connect to Google Cloud
Platform and the cluster you use to run Spark is not Dataproc.

It works standalone in a Subjob to provide the connection configuration
for the whole Job.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, because the Job needs its
dependent jar files for execution, you must specify the directory in the file
system to which these jar files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the
    Spark configuration tab; when using other
    distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.
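Outside the Studio, the same kind of per-Job connection setup can be sketched on a plain spark-submit command line. The property names below come from the Google Cloud Storage connector for Hadoop, and the project name, key path, and jar name are placeholders; the exact properties depend on the connector version installed on your cluster, so treat this as an assumption-laden config fragment rather than the component's actual output:

```shell
# Sketch only: point Spark's Hadoop layer at a GCP project and a
# service account JSON key file via the GCS connector properties.
spark-submit \
  --conf spark.hadoop.fs.gs.project.id=my-gcp-project \
  --conf spark.hadoop.google.cloud.auth.service.account.enable=true \
  --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=/path/to/key.json \
  my_streaming_job.jar
```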


Document retrieved from Talend: https://help.talend.com