tBigQueryBulkExec Standard properties
These properties are used to configure tBigQueryBulkExec running in the Standard Job framework.
The Standard tBigQueryBulkExec component belongs to the Big Data family.
The component in this framework is available in all Talend products.
Basic settings
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Note: If you make changes, the schema automatically becomes built-in.
Authentication mode | Select the mode to be used to authenticate to your project.
Service account credentials file | Enter the path to the credentials file created for the service account to be used. This file must be stored on the machine where your Talend Job is actually launched and executed. For further information about how to create a Google service account and obtain the credentials file, see the Google Cloud documentation.
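For context, the following is a minimal sketch (not the component's generated code) of how a service-account JSON key file is typically used to authenticate to BigQuery with the Google Cloud Java client; the file path and project ID are placeholders.

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import java.io.FileInputStream;

public class ServiceAccountAuthSketch {
    public static void main(String[] args) throws Exception {
        // Load the service-account credentials from the JSON key file.
        GoogleCredentials credentials;
        try (FileInputStream keyFile = new FileInputStream("/path/to/service-account.json")) {
            credentials = GoogleCredentials.fromStream(keyFile);
        }
        // Build a BigQuery client bound to the target project (see the Project ID property).
        BigQuery bigquery = BigQueryOptions.newBuilder()
                .setCredentials(credentials)
                .setProjectId("my-project-id")
                .build()
                .getService();
        System.out.println("Authenticated to project: " + bigquery.getOptions().getProjectId());
    }
}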
Client ID and Client secret | Paste the client ID and the client secret, both created and viewable in the Google API Console of the project hosting the BigQuery service to be used. To enter the client secret, click the […] button next to the client secret field, and then in the pop-up dialog box enter the client secret between double quotation marks and click OK to save the settings.
Project ID | Paste the ID of the project hosting the Google BigQuery service you need to use. The ID of your project can be found in the URL of the Google API Console.
Authorization code | Paste the authorization code provided by Google for the access you are building. To obtain the authorization code, you need to execute the Job using this component: when the Job pauses execution to print out a URL address, navigate to this address to copy the authorization code displayed.
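As an illustration only, here is a hedged sketch of the standard Google OAuth 2.0 installed-application exchange that the Client ID, Client secret and Authorization code settings take part in; the scope, redirect URI and placeholder values are assumptions, and the component's generated code may differ.

import com.google.api.client.auth.oauth2.TokenResponse;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import java.util.Collections;

public class AuthorizationCodeSketch {
    public static void main(String[] args) throws Exception {
        GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                GsonFactory.getDefaultInstance(),
                "my-client-id",          // Client ID
                "my-client-secret",      // Client secret
                Collections.singletonList("https://www.googleapis.com/auth/bigquery"))
                .setAccessType("offline") // requests a refresh token for later executions
                .build();

        // The first execution prints an authorization URL; opening it in a browser
        // displays the code to paste into the Authorization code field.
        System.out.println(flow.newAuthorizationUrl()
                .setRedirectUri("urn:ietf:wg:oauth:2.0:oob").build());

        // The pasted code is then exchanged for access and refresh tokens.
        TokenResponse tokens = flow.newTokenRequest("pasted-authorization-code")
                .setRedirectUri("urn:ietf:wg:oauth:2.0:oob").execute();
        System.out.println("Refresh token: " + tokens.getRefreshToken());
    }
}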
Dataset | Enter the name of the dataset you need to transfer data to.
Table | Enter the name of the table you need to transfer data to. If this table does not exist, select the Create the table if it doesn’t exist check box.
Action on data | Select the action to be performed from the drop-down list when transferring data to the target table.
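For orientation, the table-creation check box and the action on data roughly correspond to the create and write dispositions of a BigQuery load job. The sketch below uses the Google Cloud Java client; the mapping, dataset, table and source URI are assumptions for illustration, not the component's actual implementation.

import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class ActionOnDataSketch {
    // Builds a load-job configuration whose dispositions mirror the two settings above
    // (placeholder dataset, table and source URI).
    static LoadJobConfiguration configFor(boolean createIfMissing, boolean truncateFirst) {
        return LoadJobConfiguration
                .newBuilder(TableId.of("my_dataset", "my_table"), "gs://my_bucket/my_file.csv")
                .setCreateDisposition(createIfMissing
                        ? JobInfo.CreateDisposition.CREATE_IF_NEEDED
                        : JobInfo.CreateDisposition.CREATE_NEVER)
                .setWriteDisposition(truncateFirst
                        ? JobInfo.WriteDisposition.WRITE_TRUNCATE
                        : JobInfo.WriteDisposition.WRITE_APPEND)
                .build();
    }
}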
Bulk file already exists in Google storage | Select this check box to reuse the authentication information for Google Cloud Storage.
Access key and Secret key | Paste the authentication information obtained from Google for making requests to Google Cloud Storage. To enter the secret key, click the […] button next to the secret key field, and then in the pop-up dialog box enter the secret key between double quotation marks and click OK to save the settings. These keys can be consulted on the Interoperable Access tab view under the Google Cloud Storage tab of the project.
File to upload | When the data to be transferred to Google BigQuery is not stored on Google Cloud Storage, browse to, or enter the path to, the local file that holds this data.
Bucket | Enter the name of the bucket, the Google Cloud Storage container, which holds the data to be transferred to Google BigQuery.
File | Enter the directory of the data stored on Google Cloud Storage and to be transferred to Google BigQuery. If the data is not on Google Cloud Storage, this directory is used as the intermediate destination of the data before it is loaded into Google BigQuery.
Header | Set values to ignore the header of the transferred data. For example, enter 0 to ignore no rows for data without a header.
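For context, the following hedged sketch shows what a bulk load from Cloud Storage into BigQuery looks like with the Google Cloud Java client: the gs:// URI combines the Bucket and File values, the Header value corresponds to the number of leading rows skipped, and the field delimiter and encoding set in Advanced settings map to the same CSV options. All names are placeholders; this is not the component's generated code.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CsvOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        CsvOptions csv = CsvOptions.newBuilder()
                .setSkipLeadingRows(1)      // Header: number of leading rows to ignore
                .setFieldDelimiter(";")     // Set the field delimiter (Advanced settings)
                .setEncoding("UTF-8")       // Encoding (Advanced settings)
                .build();

        LoadJobConfiguration load = LoadJobConfiguration
                .newBuilder(TableId.of("my_dataset", "my_table"),
                        "gs://my_bucket/my_dir/my_file.csv")  // Bucket + File
                .setFormatOptions(csv)
                .build();

        // Run the load job and wait for completion.
        Job job = bigquery.create(JobInfo.of(load)).waitFor();
        System.out.println(job.getStatus().getError() == null
                ? "Load done" : "Load failed: " + job.getStatus().getError());
    }
}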
Die on error | This check box is cleared by default, meaning to skip the row on error and to complete the process for error-free rows.
Advanced settings
token properties File Name | Enter the path to, or browse to, the refresh token file you need to use. At the first Job execution using the Authorization code from the Basic settings view, a refresh token is generated and stored in this file; subsequent executions reuse it so that you do not need to obtain a new authorization code. With only the token file name entered, the directory of the token file is considered to be the root of the Studio folder. For further information about the refresh token, see the manual of Google BigQuery.
Set the field delimiter | Enter the character, string, or regular expression used to separate fields in the transferred data.
Drop table if exists | Select this check box to remove the table specified in the Table field, if this table already exists.
Encoding | Select the encoding from the list or select Custom and define it manually.
tStatCatcher Statistics | Select this check box to collect the log data at the component level.
Global Variables
Global Variables | ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
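As an illustration, here is a minimal sketch of reading the ERROR_MESSAGE variable from a tJava component placed after this component; the label tBigQueryBulkExec_1 is an assumption and should be adjusted to match your Job.

// Code of a tJava component triggered after tBigQueryBulkExec_1 (label assumed).
// After variables become available in the globalMap once the component has finished.
String bqError = (String) globalMap.get("tBigQueryBulkExec_1_ERROR_MESSAGE");
if (bqError != null) {
    System.err.println("BigQuery bulk load reported an error: " + bqError);
}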
Usage
Usage rule | This is a standalone component. This component automatically detects and supports both multi-regional locations and regional locations.