tSnowflakeOutput
Uses the data incoming from its preceding component to insert, update, upsert, or delete data in a Snowflake table.
tSnowflakeOutput uses the bulk loader provided by Snowflake for high-performance database operations.
Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:
- Standard: see tSnowflakeOutput Standard properties. The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
- Spark Batch: see tSnowflakeOutput properties for Apache Spark Batch (technical preview). The component in this framework is available in all subscription-based Talend products with Big Data and in Talend Data Fabric.
tSnowflakeOutput Standard properties
These properties are used to configure tSnowflakeOutput running in the Standard Job framework.
The Standard tSnowflakeOutput component belongs to the Cloud family.
The component in this framework is available in all Talend products.
This component also supports the dynamic database connector. The properties related to database settings vary depending on your database type selection. For more information about dynamic database connectors, see Dynamic database components.
Basic settings
Database | Select a type of database from the list and click Apply.
Property Type | Select the way the connection details will be set:
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
This property is not available when a connection component is selected from the Connection Component drop-down list.
Connection Component | Select the component that opens the database connection to be reused by this component.
Account | In the Account field, enter, in double quotation marks, the account name assigned to you by Snowflake.
Snowflake Region | Select an AWS region or an Azure region from the drop-down list.
User Id and Password | Enter, in double quotation marks, your authentication information used to log in to Snowflake.
Warehouse | Enter, in double quotation marks, the name of the Snowflake warehouse to be used.
Schema | Enter, within double quotation marks, the name of the database schema to be used.
Database | Enter, in double quotation marks, the name of the Snowflake database to be used.
Table | Click the […] button and in the displayed wizard, select the Snowflake table to be used.
Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
If the Snowflake data type to be handled is VARIANT, OBJECT or ARRAY, select String for the corresponding data in the schema.
Click Edit schema to make changes to the schema.
Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.
Output Action | Select the operation to insert, delete, update, or merge (upsert) data in the Snowflake table. The Upsert operation allows you to merge data in a Snowflake table: a row whose key matches an existing row is updated; otherwise it is inserted.
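Conceptually, an upsert resolves to a Snowflake MERGE statement: matched rows are updated, unmatched rows are inserted. The sketch below only builds the SQL text for a hypothetical table and key (tSnowflakeOutput generates the actual statement internally; the table, staging alias, and column names here are illustrative assumptions, not the component's real output):

```java
// Sketch: the shape of MERGE statement an upsert corresponds to.
// Table, key, and column names are hypothetical; "staged" stands in
// for the staged source data that Snowflake's bulk loader provides.
public class UpsertSketch {
    public static String mergeSql(String table, String key, String[] cols) {
        StringBuilder set = new StringBuilder();
        StringBuilder names = new StringBuilder();
        StringBuilder vals = new StringBuilder();
        for (int i = 0; i < cols.length; i++) {
            if (i > 0) { set.append(", "); names.append(", "); vals.append(", "); }
            set.append("t.").append(cols[i]).append(" = s.").append(cols[i]);
            names.append(cols[i]);
            vals.append("s.").append(cols[i]);
        }
        return "MERGE INTO " + table + " t USING staged s ON t." + key + " = s." + key
            + " WHEN MATCHED THEN UPDATE SET " + set
            + " WHEN NOT MATCHED THEN INSERT (" + key + ", " + names + ")"
            + " VALUES (s." + key + ", " + vals + ")";
    }
}
```

For example, an upsert keyed on ID with a single NAME column produces a statement beginning `MERGE INTO CUSTOMERS t USING staged s ON t.ID = s.ID`.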
Advanced settings
Additional JDBC Parameters | Specify additional JDBC parameters for the database connection being created.
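Snowflake's JDBC connection string takes the form `jdbc:snowflake://<account>.snowflakecomputing.com/?key=value&...`, and additional parameters are appended as query-string pairs. A minimal sketch, assuming a placeholder account name and example parameters (`warehouse` and `loginTimeout` are documented Snowflake JDBC parameters; the values are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JdbcUrlSketch {
    // Builds a Snowflake JDBC URL with additional parameters appended
    // as key=value pairs. Account and parameter values are placeholders.
    public static String build(String account, Map<String, String> params) {
        StringBuilder url = new StringBuilder("jdbc:snowflake://")
            .append(account).append(".snowflakecomputing.com/?");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) url.append("&");
            url.append(e.getKey()).append("=").append(e.getValue());
            first = false;
        }
        return url.toString();
    }
}
```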
Use Custom Snowflake Region | Select this check box to specify a custom Snowflake region. This option is available only when you select Use This Component from the Connection Component drop-down list in the Basic settings view.
Login Timeout | Specify the timeout period (in minutes) for Snowflake login attempts. An error is generated if no response is received within this period.
Tracing | Select the log level for the Snowflake JDBC driver.
Role | Enter, in double quotation marks, the default access control role to use for the Snowflake session. This role must already exist and must have been granted to the user ID you are using to connect.
Allow Snowflake to convert columns and tables to uppercase | Select this check box to convert lowercase letters in the defined column names and table name to uppercase. If you deselect the check box, all identifiers are automatically quoted and treated as case-sensitive. For more information on the Snowflake Identifier Syntax, see Identifier Requirements in the Snowflake documentation.
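The behavior behind this check box follows Snowflake's identifier resolution rule: unquoted identifiers fold to uppercase, while double-quoted identifiers keep their exact case (which is why deselecting the box forces quoting). A minimal sketch of that rule, for illustration only:

```java
public class IdentifierSketch {
    // Mimics Snowflake's identifier resolution: unquoted identifiers
    // fold to uppercase; double-quoted identifiers keep their exact case.
    public static String resolve(String identifier) {
        if (identifier.length() >= 2
                && identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        return identifier.toUpperCase();
    }
}
```

So a table created as `my_table` is stored as MY_TABLE, while one created as `"my_table"` keeps its lowercase name and must always be quoted.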
tStatCatcher Statistics | Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Global Variables
NB_LINE | The number of rows processed. This is an After variable and it returns an integer.
NB_SUCCESS | The number of rows successfully processed. This is an After variable and it returns an integer.
NB_REJECT | The number of rows rejected. This is an After variable and it returns an integer.
ERROR_MESSAGE | The error message generated by the component when an error occurs. This is an After variable and it returns a string.
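In a Job, these After variables are read from the globalMap using the component's unique name as a prefix, for example `tSnowflakeOutput_1` (the component name below is an example, and the globalMap is simulated here so the lookup pattern can run standalone; in a real Job it is provided by the Talend runtime):

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalVarsSketch {
    // Simulated globalMap; the Talend runtime supplies the real one.
    static Map<String, Object> globalMap = new HashMap<>();

    // Reads NB_SUCCESS for a component named tSnowflakeOutput_1
    // (example name), defaulting to 0 when the variable is unset.
    public static int nbSuccess() {
        Object v = globalMap.get("tSnowflakeOutput_1_NB_SUCCESS");
        return v == null ? 0 : (Integer) v;
    }
}
```

The same pattern applies to NB_LINE, NB_REJECT, and ERROR_MESSAGE (the last returning a String rather than an Integer).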
Usage
Usage rule | This component is the end component of a data flow in your Job. It receives data from a Row > Main link. It can also send error messages to other components via a Row > Rejects link.
Related scenario
For a related scenario, see Writing data into and reading data from a Snowflake table.
tSnowflakeOutput properties for Apache Spark Batch (technical preview)
These properties are used to configure tSnowflakeOutput running in the Spark Batch Job framework.
The Spark Batch tSnowflakeOutput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data and in Talend Data Fabric.
Basic settings
Use an existing configuration | Select this check box and in the Component List, click the relevant connection component to reuse the connection details you already defined.
Account | In the Account field, enter, in double quotation marks, the account name assigned to you by Snowflake.
Snowflake Region | Select an AWS region or an Azure region from the drop-down list.
User Id and Password | Enter, in double quotation marks, your authentication information used to log in to Snowflake.
Warehouse | Enter, in double quotation marks, the name of the Snowflake warehouse to be used.
Schema | Enter, within double quotation marks, the name of the database schema to be used.
Database | Enter, in double quotation marks, the name of the Snowflake database to be used.
Table | Click the […] button and in the displayed wizard, select the Snowflake table to be used.
Schema and Edit Schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
If the Snowflake data type to be handled is VARIANT, OBJECT or ARRAY, select String for the corresponding data in the schema.
Click Edit schema to make changes to the schema.
Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected.
Output Action | Only the Insert action is supported by Snowflake on Spark.
Usage
Usage rule |
This component is used as an end component and requires an input link. Use a tSnowflakeConfiguration component in the same Job to connect to Snowflake.
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent JAR files for execution, you must specify the directory in the file system to which these JAR files are transferred so that Spark can access them.
This connection is effective on a per-Job basis.