tSnowflakeInput properties for Apache Spark Batch (technical preview)
These properties are used to configure tSnowflakeInput running in the
Spark Batch Job framework.
The Spark Batch
tSnowflakeInput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Use an existing configuration | Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined. |
Account | In the Account field, enter, in double quotation marks, the account name that has been assigned to you by Snowflake. |
Region | Select an AWS or Azure region from the drop-down list. |
Username and Password | Enter, in double quotation marks, your authentication information to log in to Snowflake. |
Database | Enter, in double quotation marks, the name of the Snowflake database to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Warehouse | Enter, in double quotation marks, the name of the Snowflake warehouse to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
If the Snowflake data type to be handled is VARIANT, OBJECT or ARRAY, select String for the corresponding data in the Type column of the schema editor.
Click Edit schema to make changes to the schema.
Note that if the input value of any non-nullable primitive field is null, the row of data including that field will be rejected. |
Table Name | Enter, within double quotation marks, the name of the Snowflake table to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Read from | Select either Table or Query from the drop-down list. |
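For reference, the Basic settings above correspond closely to the options of the open source Snowflake connector for Spark. The following is a minimal, hypothetical sketch in plain Spark (Scala), not Talend-generated code; the account, region, credentials, and object names are placeholders to replace with your own values.

```scala
import org.apache.spark.sql.SparkSession

object SnowflakeReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tSnowflakeInput-equivalent")
      .getOrCreate()

    // Each option mirrors a Basic settings field:
    // Account + Region form the sfURL host; the rest map one to one.
    val sfOptions = Map(
      "sfURL"       -> "MYACCOUNT.us-east-1.snowflakecomputing.com", // Account + Region
      "sfUser"      -> "MYUSER",     // Username
      "sfPassword"  -> "MYPASSWORD", // Password
      "sfDatabase"  -> "MYDB",       // Database
      "sfSchema"    -> "PUBLIC",     // Snowflake schema
      "sfWarehouse" -> "MYWH"        // Warehouse
    )

    // Read from = Table: pass the (normally upper-case) table name as "dbtable".
    val df = spark.read
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "MYTABLE")
      .load()

    df.show()
  }
}
```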
Advanced settings
Allow Snowflake to convert columns and tables to uppercase | Select this check box to convert lowercase letters in the defined table name and schema column names to uppercase. If you deselect the check box, all identifiers are automatically quoted. This property is not available when you select Query from the Read from drop-down list. For more information on Snowflake identifier syntax, see the Snowflake documentation. |
Use Custom Region | Select this check box to use a custom Snowflake region. |
Custom Region | Enter, within double quotation marks, the name of the region to be used. This name is case-sensitive and is normally upper case in Snowflake. |
Trim all the String/Char columns | Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
Trim column | Remove the leading and trailing whitespace from the defined columns. |
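To illustrate what the two trim options do, here is a minimal sketch in plain Spark (Scala); df is assumed to be the DataFrame from the previous sketch, and the column names passed in are placeholders.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, trim}
import org.apache.spark.sql.types.StringType

// "Trim all the String/Char columns": trim every string-typed column.
def trimAllStringColumns(df: DataFrame): DataFrame =
  df.schema.fields.foldLeft(df) { (acc, field) =>
    if (field.dataType == StringType) acc.withColumn(field.name, trim(col(field.name)))
    else acc
  }

// "Trim column": trim only the columns you name, for example NAME.
def trimColumns(df: DataFrame, columns: Seq[String]): DataFrame =
  columns.foldLeft(df)((acc, c) => acc.withColumn(c, trim(col(c))))
```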
Usage
Usage rule | This component is used as a start component and requires an output link. Use a tSnowflakeConfiguration component in the same Job to connect to Snowflake. |
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
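For completeness, a minimal sketch of the Read from = Query variant in the same plain-Spark style, reusing the spark session and sfOptions map from the first sketch; the SELECT statement is a placeholder. The resulting DataFrame plays the role of the component's output link and is handed to whatever processing follows in the Job.

```scala
// Read from = Query: push a SELECT down to Snowflake instead of naming a table.
val queried = spark.read
  .format("net.snowflake.spark.snowflake")
  .options(sfOptions) // connection options from the first sketch
  .option("query", "SELECT ID, NAME FROM MYTABLE WHERE ID > 100") // placeholder query
  .load()

queried.show()
```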