tBigQueryConfiguration – Docs for ESB 6.x

tBigQueryConfiguration

Provides the connection configuration to Google BigQuery and Google Cloud Storage
for a Spark Job.

tBigQueryConfiguration properties for Apache Spark Batch

These properties are used to configure tBigQueryConfiguration running in the Spark Batch Job framework.

The Spark Batch tBigQueryConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

Basic settings

When you use this component with Google Dataproc:

BigQuery temp GCS path

Enter the directory on Google Storage in which to temporarily
store the data to be used with Google BigQuery. If this directory does not
exist yet, it is created on the fly, but the bucket that contains it
must already exist. The directory must follow the syntax
gs://my_bucket/my_directory.

When you use BigQuery with Dataproc, make sure that, in
Google Cloud Platform, your BigQuery dataset and the Dataproc cluster
to be run are in the same region.
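
The bucket itself is never created for you. As a pre-flight check outside Talend, you can verify that it exists with the google-cloud-storage Python client; a minimal sketch, in which the bucket name my_bucket is a placeholder:

    # Pre-flight check: the temp directory is created on the fly,
    # but its enclosing bucket must already exist.
    from google.cloud import storage

    client = storage.Client()  # picks up the credentials configured for the Job

    # lookup_bucket() returns None instead of raising when the bucket is absent.
    if client.lookup_bucket("my_bucket") is None:
        raise RuntimeError(
            "Create the bucket 'my_bucket' before pointing "
            "'BigQuery temp GCS path' at gs://my_bucket/my_directory"
        )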

When you use this component with the other distributions:

Project identifier

Enter the ID of your Google Cloud Platform project.

If you are not certain about your project ID, check it on the Manage
Resources page of your Google Cloud Platform services.

Path to Google Credentials file

Enter the path to the credentials file associated with the account to
be used. This file must be stored on the machine on which your Talend Job is
launched and executed.

If you use Talend JobServer to run your
Job, store the credentials file not only on the JobServer machine,
where the Job is launched, but also on the worker machines
of the Spark cluster, where the Job is executed. If you do not use
JobServer, store the credentials file on your local machine, from
which you launch the Job, and on the worker machines of the Spark
cluster.
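
For reference, outside Talend this setting is usually passed to the Google Cloud Storage connector through the Hadoop configuration of the Spark context; Talend generates this wiring for you. A minimal PySpark sketch, assuming the connector is on the classpath and that its standard service-account property names apply to your version:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("bq-credentials-sketch").getOrCreate()
    hconf = spark.sparkContext._jsc.hadoopConfiguration()

    # The same path must be valid on every machine that runs part of the Job.
    hconf.set("google.cloud.auth.service.account.enable", "true")
    hconf.set("google.cloud.auth.service.account.json.keyfile",
              "/path/to/credentials.json")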

Use P12 credentials file format

When the Google credentials file to be used is in P12 format, select this
check box; then, in the Service account Id
field that is displayed, enter the ID of the service account for which
this P12 credentials file was created.

BigQuery temp GCS path

Enter the directory on Google Storage in which to temporarily
store the data to be used with Google BigQuery. If this directory does not
exist yet, it is created on the fly, but the bucket that contains it
must already exist. The directory must follow the syntax
gs://my_bucket/my_directory.

When you use BigQuery with Dataproc, make sure that, in
Google Cloud Platform, your BigQuery dataset and the Dataproc cluster
to be run are in the same region.
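
Outside Talend, this field maps to the temporary-path settings of the Hadoop BigQuery connector. A sketch under the assumption that the connector's mapred.bq.* property names match the connector version bundled with your distribution:

    from pyspark.sql import SparkSession

    hconf = (SparkSession.builder.appName("bq-temp-path-sketch")
             .getOrCreate().sparkContext._jsc.hadoopConfiguration())

    # Property names are an assumption; verify them against your connector.
    hconf.set("mapred.bq.project.id", "my-project-id")
    hconf.set("mapred.bq.gcs.bucket", "my_bucket")  # bucket must already exist
    hconf.set("mapred.bq.temp.gcs.path", "gs://my_bucket/my_directory")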

Usage

Usage rule

This component is used standalone in a Subjob to provide the
connection configuration to Google BigQuery and Google Storage for the
whole Job.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job needs its dependent
JAR files for execution, you must specify the directory in the file system to
which these JAR files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: choose the configuration component depending on
    the file system you are using, such as tHDFSConfiguration or
    tS3Configuration.

This connection is effective on a per-Job basis.

tBigQueryConfiguration properties for Apache Spark Streaming

These properties are used to configure tBigQueryConfiguration running
in the Spark Streaming Job framework.

The Spark Streaming tBigQueryConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.

Basic settings

When you use this component with Google Dataproc:

BigQuery temp GCS path

Enter the directory on Google Storage in which to temporarily
store the data to be used with Google BigQuery. If this directory does not
exist yet, it is created on the fly, but the bucket that contains it
must already exist. The directory must follow the syntax
gs://my_bucket/my_directory.

When you use BigQuery with Dataproc, make sure that, in
Google Cloud Platform, your BigQuery dataset and the Dataproc cluster
to be run are in the same region.

When you use this component with the other distributions:

Project identifier

Enter the ID of your Google Cloud Platform project.

If you are not certain about your project ID, check it on the Manage
Resources page of your Google Cloud Platform services.
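
If you prefer to look the ID up programmatically rather than in the console, the google-auth Python package can report the project associated with your active credentials; a minimal sketch:

    import google.auth

    # google.auth.default() returns the active credentials and, when it can
    # be determined, the project ID tied to them.
    credentials, project_id = google.auth.default()
    print(project_id)  # the value to enter in the Project identifier field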

Path to Google Credentials file

Enter the path to the credentials file associated with the account to
be used. This file must be stored on the machine on which your Talend Job is
launched and executed.

If you use Talend JobServer to run your
Job, store the credentials file not only on the JobServer machine,
where the Job is launched, but also on the worker machines
of the Spark cluster, where the Job is executed. If you do not use
JobServer, store the credentials file on your local machine, from
which you launch the Job, and on the worker machines of the Spark
cluster.

Use P12 credentials file format

When the Google credentials file to be used is in P12 format, select this
check box; then, in the Service account Id
field that is displayed, enter the ID of the service account for which
this P12 credentials file was created.
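
For reference, P12 authentication pairs the keyfile with the service account ID instead of a JSON key; Talend derives these settings from the check box and the Service account Id field. A PySpark sketch, assuming the Google Cloud Storage connector's P12 property names apply to your version:

    from pyspark.sql import SparkSession

    hconf = (SparkSession.builder.appName("p12-auth-sketch")
             .getOrCreate().sparkContext._jsc.hadoopConfiguration())

    # P12 credentials need both the keyfile and the service account ID
    # (property names are an assumption; verify against your connector).
    hconf.set("google.cloud.auth.service.account.enable", "true")
    hconf.set("google.cloud.auth.service.account.keyfile", "/path/to/key.p12")
    hconf.set("google.cloud.auth.service.account.email",
              "my-service-account@my-project.iam.gserviceaccount.com")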

BigQuery temp GCS path

Enter the directory on Google Storage in which to temporarily
store the data to be used with Google BigQuery. If this directory does not
exist yet, it is created on the fly, but the bucket that contains it
must already exist. The directory must follow the syntax
gs://my_bucket/my_directory.

When you use BigQuery with Dataproc, make sure that, in
Google Cloud Platform, your BigQuery dataset and the Dataproc cluster
to be run are in the same region.

Usage

Usage rule

This component is used standalone in a Subjob to provide the
connection configuration to Google BigQuery and Google Storage for the
whole Job.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job needs its dependent
JAR files for execution, you must specify the directory in the file system to
which these JAR files are transferred so that Spark can access them:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: choose the configuration component depending on
    the file system you are using, such as tHDFSConfiguration or
    tS3Configuration.

This connection is effective on a per-Job basis.


Source: Talend documentation, https://help.talend.com