tBigQueryBulkExec
Transfers given data to Google BigQuery.
The tBigQueryOutputBulk and tBigQueryBulkExec components are generally used together as parts of a two-step
process. In the first step, an output file is generated. In the second step, this file is
used to feed a dataset. These two steps are fused together in the tBigQueryOutput component, detailed in a separate section. The advantage of
using two separate components is that the data can be transformed before it is loaded into the
dataset.
This component transfers a given file from Google Cloud Storage to
Google BigQuery, or uploads a given file into Google Cloud Storage
and then transfers it to Google BigQuery.
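Under the hood, this kind of transfer corresponds to a BigQuery load job that reads from a gs:// URI. The sketch below (plain Python; the helper names and the mapping from the component's settings are illustrative assumptions, not part of the component) shows roughly how the Bucket, File, Project ID, Dataset, Table, and Header settings could be assembled into the body of a BigQuery REST jobs.insert load request:

```python
def gcs_uri(bucket, path):
    """Build the gs:// URI BigQuery loads from (illustrative helper)."""
    return "gs://{}/{}".format(bucket, path.lstrip("/"))

def build_load_job_body(project_id, dataset, table, bucket, path,
                        header_rows=0, field_delimiter=","):
    """Sketch of a jobs.insert request body for a CSV load job.

    The keys follow the BigQuery REST API's JobConfigurationLoad;
    how the component fills them in is an assumption.
    """
    return {
        "configuration": {
            "load": {
                "sourceUris": [gcs_uri(bucket, path)],
                "destinationTable": {
                    "projectId": project_id,
                    "datasetId": dataset,
                    "tableId": table,
                },
                "skipLeadingRows": header_rows,    # the Header setting
                "fieldDelimiter": field_delimiter, # Advanced settings
            }
        }
    }

body = build_load_job_body("my-project", "my_dataset", "my_table",
                           "my-bucket", "exports/data.csv", header_rows=1)
```

The same shape applies whether the file was already in Google Cloud Storage or was uploaded there first by the component.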
tBigQueryBulkExec Standard properties
These properties are used to configure tBigQueryBulkExec running in the Standard Job framework.
The Standard
tBigQueryBulkExec component belongs to the Big Data family.
The component in this framework is generally available.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Client ID and Client secret
Paste the client ID and the client secret, both created and viewable in the Google API Console of the project hosting the BigQuery service. To enter the client secret, click the […] button next to the Client secret field and enter the secret between double quotation marks.
Project ID
Paste the ID of the project hosting the BigQuery service you need to use. The ID of your project can be found in the URL of the Google API Console.
Authorization code
Paste the authorization code provided by Google for the access you are building. To obtain the authorization code, you need to execute the Job using this component for the first time: the Job pauses and prints a URL address; open that address in a browser, accept the access request, and copy the authorization code displayed.
Dataset
Enter the name of the dataset you need to transfer data to.
Table
Enter the name of the table you need to transfer data to. If this table does not exist, select the Create the table if it doesn’t exist check box.
Action on data
Select the action to be performed on the target table from the drop-down list when the data is transferred.
Bulk file already exists in Google Cloud Storage
Select this check box to transfer a file that is already stored in Google Cloud Storage, reusing the authentication information for Google Cloud Storage, rather than uploading a local file first; then complete the Bucket and File fields.
Access key and Secret key
Paste the authentication information obtained from Google for making requests to Google Cloud Storage. To enter the secret key, click the […] button next to the Secret key field and enter the key between double quotation marks. These keys can be consulted on the Interoperable Access tab view under the Google Cloud Storage tab of the project.
File to upload
When the data to be transferred to BigQuery is not stored on Google Cloud Storage, browse to, or enter the path to, the local file to be uploaded.
Bucket
Enter the name of the bucket, the Google Cloud Storage container, that holds the data to be transferred.
File
Enter the directory of the data stored on Google Cloud Storage. If the data is not yet on Google Cloud Storage, this directory is used as the intermediate destination for the file to be uploaded before it is transferred to BigQuery.
Header
Set values to ignore the header of the transferred data. For example, enter 0 to ignore no rows for data without a header.
Die on error
This check box is cleared by default, meaning to skip the row on error and complete the process for error-free rows.
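The Die on error behavior can be pictured as follows: when the box is cleared (the default), a failing row is skipped and the rest are completed; when it is selected, the first error aborts the run. A minimal stdlib-only sketch (the process_rows helper and the parse function are illustrative, not part of the component):

```python
def process_rows(rows, parse, die_on_error=False):
    """Process rows one by one, skipping or dying on error (sketch).

    die_on_error=False mirrors the component's default: a row that
    fails is skipped and error-free rows are still completed.
    die_on_error=True makes the first error stop the whole run.
    """
    loaded, skipped = [], []
    for row in rows:
        try:
            loaded.append(parse(row))
        except ValueError:
            if die_on_error:
                raise            # die on error: abort the run
            skipped.append(row)  # otherwise: skip the row on error
    return loaded, skipped

# Example: load integer rows, one of which is malformed.
loaded, skipped = process_rows(["1", "oops", "3"], int)
```

With `die_on_error=True`, the same input would raise on `"oops"` and `"3"` would never be processed.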
Advanced settings
token properties File Name
Enter the path to, or browse to, the refresh token file you need to use. At the first Job execution using the Authorization code, this token file is created automatically and can be reused in subsequent executions. With only the token file name entered, the file is read from the Job's working directory. For further information about the refresh token, see the manual of Google BigQuery.
Set the field delimiter
Enter a character, string, or regular expression to separate fields of the transferred data.
Encoding
Select the encoding from the list or select Custom and define it manually. This field is compulsory for database data handling.
tStatCatcher Statistics
Select this check box to collect the log data at the component level.
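Taken together, the Header, field delimiter, and Encoding settings describe how the bulk file is parsed. A stdlib-only sketch of equivalent parsing (the setting values below are examples, not defaults of the component):

```python
import csv
import io

def read_bulk_file(text, header_rows=1, delimiter=";"):
    """Parse delimited text the way the Header and field-delimiter
    settings describe: split on delimiter, skip header_rows lines."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    rows = list(reader)
    return rows[header_rows:]  # Header: leading rows to ignore

sample = "id;name\n1;Alice\n2;Bob\n"
rows = read_bulk_file(sample, header_rows=1, delimiter=";")
```

In a real Job, decoding the file with the selected Encoding would happen before this parsing step.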
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
Usage
Usage rule
This is a standalone component.
Related Scenario
For a related topic, see Scenario: Writing data in BigQuery.