tRedshiftUnload
Unloads data from Amazon Redshift to files on Amazon S3.
This component runs a specified query in Amazon Redshift and then unloads the result of
the query to one or more files on Amazon S3.
tRedshiftUnload Standard properties
These properties are used to configure tRedshiftUnload running in the Standard Job framework.
The Standard tRedshiftUnload component belongs to the Cloud and the Databases families.
The component in this framework is generally available.
Basic settings
Property Type
Either Built-In or Repository.
Built-In: No property data is stored centrally.
Repository: Select the repository file where the properties are stored.
Use an existing connection
Select this check box and in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined.
Host
Type in the IP address or hostname of the database server.
Port
Type in the listening port number of the database server.
Database
Type in the name of the database.
Schema
Type in the name of the schema.
Username and Password
Type in the database user authentication data. To enter the password, click the [...] button next to the Password field, enter the password between double quotes in the dialog box that opens, and click OK to save the settings.
Additional JDBC Parameters
Specify additional JDBC properties for the connection you are creating. The properties are key-value pairs separated by an ampersand (&), for example, ssl=true.
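The role of these parameters can be sketched as key-value pairs appended to the JDBC URL after a question mark. A minimal Python sketch, assuming illustrative host, database, and parameter values (none of them come from this document):

```python
# Sketch: how additional JDBC parameters extend a Redshift connection URL.
# Host, database, and parameter values below are illustrative placeholders.
def build_jdbc_url(host, port, database, params=None):
    """Append key=value pairs, separated by '&', to the base JDBC URL."""
    url = f"jdbc:redshift://{host}:{port}/{database}"
    if params:
        url += "?" + "&".join(f"{k}={v}" for k, v in params.items())
    return url

url = build_jdbc_url(
    "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    5439, "mydb", {"ssl": "true", "tcpKeepAlive": "true"})
print(url)
```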
Table Name
Type in the name of the table from which the data will be read.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
Click Edit schema to make changes to the schema.
Query Type and Query
Enter the database query, paying particular attention to the proper sequence of the fields so that it matches the schema definition.
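Under the hood, unloading relies on Amazon Redshift's UNLOAD statement, which wraps the query and points its result at an S3 location. A minimal Python sketch of how such a statement is assembled; the bucket, prefix, and credential values are placeholders, not values from this document:

```python
# Sketch: the shape of the Redshift UNLOAD statement used to export a
# query result to S3. Bucket, prefix, and credentials are placeholders.
# Note: in a real statement, any single quote inside the query must be
# doubled, since the query is embedded in a quoted string literal.
def build_unload(query, bucket, prefix, access_key, secret_key):
    creds = (f"aws_access_key_id={access_key};"
             f"aws_secret_access_key={secret_key}")
    return (f"UNLOAD ('{query}') "
            f"TO 's3://{bucket}/{prefix}' "
            f"CREDENTIALS '{creds}'")

stmt = build_unload("SELECT * FROM orders", "my-bucket", "unload/orders_",
                    "AKIAEXAMPLE", "secretExample")
print(stmt)
```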
Guess Query
Click the button to generate the query which corresponds to the table schema in the Query field.
Access Key
Specify the Access Key ID that uniquely identifies an AWS account. For further information about how to get your Access Key and Secret Key, see Getting Your AWS Access Keys.
Secret Key
Specify the Secret Access Key, constituting the security credentials in combination with the Access Key. To enter the secret key, click the [...] button next to the Secret Key field, enter the secret key between double quotes in the dialog box that opens, and click OK to save the settings.
Bucket
Type in the name of the Amazon S3 bucket, namely the top level folder, to which the data is unloaded.
Key prefix
Type in the name prefix for the unload files on Amazon S3. Redshift appends a slice number and a part number to this prefix when naming each unload file.
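Per the Amazon Redshift UNLOAD documentation, each unload file key is the prefix followed by a four-digit slice number and a two-digit part number. A small Python sketch of the resulting keys; the prefix and slice count are illustrative:

```python
# Sketch: S3 keys produced by a Redshift unload, following the
# <prefix><slice>_part_<part> naming described in the UNLOAD docs.
# The prefix and slice count are illustrative placeholders.
def unload_keys(prefix, slices, parts_per_slice=1):
    """Return the S3 object keys an unload with this prefix would produce."""
    return [f"{prefix}{s:04d}_part_{p:02d}"
            for s in range(slices) for p in range(parts_per_slice)]

keys = unload_keys("unload/orders_", slices=2)
print(keys)
```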
Advanced settings
File type
Select the type of the unload files on Amazon S3 from the list: Delimited file or CSV, or Fixed width.
Fields terminated by
Enter the character used to separate fields. This field appears only when Delimited file or CSV is selected from the File type list.
Enclosed by
Select the character in a pair of which the fields are enclosed. This list appears only when Delimited file or CSV is selected from the File type list.
Fixed width mapping
Enter a string that specifies a user-defined column label and column width for each column, in the form label:width with pairs separated by commas.
Note that the column label in the string has no relation to the table column name. This field appears only when Fixed width is selected from the File type list.
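The mapping string follows the label:width,label:width form of Redshift's FIXEDWIDTH option. A Python sketch that parses such a string and pads one row of values accordingly; the labels and widths are illustrative:

```python
# Sketch: parsing a fixed-width mapping string of the form
# 'label1:width1,label2:width2,...' and padding one row to those widths.
# The labels and widths used below are illustrative placeholders.
def parse_fixedwidth_spec(spec):
    """Return a list of (label, width) pairs parsed from the mapping string."""
    pairs = []
    for item in spec.split(","):
        label, width = item.split(":")
        pairs.append((label, int(width)))
    return pairs

def format_row(values, spec):
    """Left-pad/truncate each value to its column width, one unload line."""
    widths = [w for _, w in parse_fixedwidth_spec(spec)]
    return "".join(str(v).ljust(w)[:w] for v, w in zip(values, widths))

spec = "id:6,name:12,city:10"
print(format_row([42, "Alice", "Berlin"], spec))
```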
Compressed by
Select this check box and from the list displayed select the compression type for the unload file(s).
Encrypt
Select this check box to encrypt unload file(s) using Amazon S3 client-side encryption.
Specify null string
Select this check box and from the list displayed select a string that represents a null value in the unload file(s).
Escape
Select this check box to place an escape character (\) before every occurrence of the delimiter, a newline character, a carriage return, and the escape character itself in the unloaded data.
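The effect of this option can be sketched in Python: each sensitive character in a field value receives a backslash prefix. The pipe delimiter below is an illustrative choice:

```python
# Sketch: backslash-escaping as applied when the Escape option is set.
# A backslash is placed before the delimiter, newline, carriage return,
# and the backslash itself. The '|' delimiter is an illustrative choice.
def escape_field(value, delimiter="|"):
    out = []
    for ch in value:
        if ch in (delimiter, "\n", "\r", "\\"):
            out.append("\\")
        out.append(ch)
    return "".join(out)

print(escape_field("a|b\\c"))
```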
Overwrite s3 object if exist
Select this check box to overwrite the existing Amazon S3 object with the same key.
Parallel
Select this check box to write data in parallel to multiple unload files, according to the number of slices in the cluster.
tStatCatcher Statistics
Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Global Variables
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
Usage
Usage rule
This component covers all possible SQL queries for the Amazon Redshift database.
Dynamic settings
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.
For examples on using dynamic parameters, see Scenario: Reading data from databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Related Scenario
For a related scenario, see Scenario: loading/unloading data from/to Amazon S3.