tDynamoDBOutput
Writes data into an Amazon DynamoDB table.
tDynamoDBOutput creates, updates or deletes data in an Amazon DynamoDB table.
Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tDynamoDBOutput Standard properties. The component in this framework is available when you are using one of the Talend solutions with Big Data.
- Spark Batch: see tDynamoDBOutput properties for Apache Spark Batch. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Streaming: see tDynamoDBOutput properties for Apache Spark Streaming. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
tDynamoDBOutput Standard properties
These properties are used to configure tDynamoDBOutput running in the Standard Job framework.
The Standard tDynamoDBOutput component belongs to the Big Data family.
The component in this framework is available when you are using one of the Talend solutions with Big Data.
Basic settings
Access
Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.

Secret
Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Secret key field, then enter the secret key between double quotes in the dialog box that opens and click OK to save it.
Inherit credentials from AWS role
Select this check box to leverage the instance profile credentials. These credentials can be used on Amazon EC2 instances, and are provided through the Amazon EC2 metadata service. To use this option, your Job must be running within Amazon EC2 or other services that can leverage IAM roles for access to resources.
Assume role
Select this check box and specify the values for the parameters used to create a new assumed role session. For more information about assuming roles, see AssumeRole.
Use End Point
Select this check box and in the Server Url field displayed, specify the Web service URL of the DynamoDB database service.
Region
Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example) in the list. For more information about AWS regions, see Regions and Endpoints.
Action on table
Select an operation to be performed on the defined table:
- None: No operation is carried out.
- Drop and create table: The table is removed and created again.
- Create table: The table does not exist and gets created.
- Create table if does not exist: The table is created if it does not exist.
- Drop table if exist and create: The table is removed if it already exists and created again.

Action on data
On the data of the defined table, you can perform one of the following operations:
- Insert: Insert new entries to the table.
- Update: Update existing entries.
- Delete: Delete entries corresponding to the input flow.
A sketch of the equivalent write calls follows this property list.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
Table Name
Specify the name of the table to be written.

Partition Key
Specify the partition key of the specified table.

Sort Key
Specify the sort key of the specified table.
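To make these settings concrete, here is a minimal hand-written sketch of the underlying DynamoDB calls using the AWS SDK for Java (v1). It illustrates the Insert and Delete data actions, not the code the component actually generates; the table name orders, the key names customer_id and order_date, and the credential strings are placeholder assumptions.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.DeleteItemRequest;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

public class DynamoDBOutputSketch {
    public static void main(String[] args) {
        // Access and Secret: static credentials identifying the AWS account.
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("MY_ACCESS_KEY", "MY_SECRET_KEY");

        // Region: the region hosting the table, for example "us-east-1".
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion("us-east-1")
                .build();

        // Action on data = Insert: the item carries the partition key,
        // the sort key, and the remaining schema columns.
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("customer_id", new AttributeValue("C-0001"));     // Partition Key
        item.put("order_date", new AttributeValue("2021-06-01"));  // Sort Key
        item.put("amount", new AttributeValue().withN("42"));      // payload column
        client.putItem(new PutItemRequest("orders", item));

        // Action on data = Delete: only the key attributes are needed.
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("customer_id", new AttributeValue("C-0001"));
        key.put("order_date", new AttributeValue("2021-06-01"));
        client.deleteItem(new DeleteItemRequest("orders", key));
    }
}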
Advanced settings
STS Endpoint
Select this check box and in the field displayed, specify the AWS Security Token Service (STS) endpoint, for example sts.amazonaws.com, from which session credentials are retrieved. This check box is available only when the Assume role check box is selected.
Read Capacity Unit
Specify the number of read capacity units. For more information, see the provisioned throughput section of the Amazon DynamoDB documentation.

Write Capacity Unit
Specify the number of write capacity units. For more information, see the provisioned throughput section of the Amazon DynamoDB documentation. A sketch showing how these values map onto a table creation request follows this section.
tStatCatcher Statistics
Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
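The assume-role, STS endpoint and capacity settings above map onto SDK calls roughly as in the following sketch, again using the AWS SDK for Java (v1). The role ARN, session name, STS endpoint, and the 5-unit read/write capacities are illustrative assumptions, not values the component prescribes.

import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;

public class DynamoDBAdvancedSketch {
    public static void main(String[] args) {
        // Assume role + STS Endpoint: session credentials are retrieved
        // from an explicitly chosen STS endpoint.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "sts.us-east-1.amazonaws.com", "us-east-1"))
                .build();
        STSAssumeRoleSessionCredentialsProvider provider =
                new STSAssumeRoleSessionCredentialsProvider.Builder(
                        "arn:aws:iam::123456789012:role/my-role", "talend-session")
                        .withStsClient(sts)
                        .build();

        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(provider)
                .withRegion("us-east-1")
                .build();

        // Action on table = Create table, with the Read and Write
        // Capacity Units applied as provisioned throughput.
        client.createTable(new CreateTableRequest()
                .withTableName("orders")
                .withKeySchema(
                        new KeySchemaElement("customer_id", KeyType.HASH),  // partition key
                        new KeySchemaElement("order_date", KeyType.RANGE))  // sort key
                .withAttributeDefinitions(
                        new AttributeDefinition("customer_id", ScalarAttributeType.S),
                        new AttributeDefinition("order_date", ScalarAttributeType.S))
                .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));
    }
}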
Global Variables
Global Variables
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
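In generated Talend code these values live in the globalMap, so a downstream tJava component can read them roughly as below. The component label tDynamoDBOutput_1 is an assumed example; adjust it to the actual label in your Job.

// Inside a tJava component executed after the subjob containing
// tDynamoDBOutput_1 (an assumed component name):
Integer nbLine = (Integer) globalMap.get("tDynamoDBOutput_1_NB_LINE");
String query = (String) globalMap.get("tDynamoDBOutput_1_QUERY");
String error = (String) globalMap.get("tDynamoDBOutput_1_ERROR_MESSAGE");
System.out.println("Rows processed: " + nbLine);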
Usage
Usage rule
This component is usually used as an end component of a Job or Subjob and it always needs an input link.
Related scenarios
No scenario is available for the Standard version of this component yet.
tDynamoDBOutput properties for Apache Spark Batch
These properties are used to configure tDynamoDBOutput running in the Spark Batch Job framework.
The Spark Batch tDynamoDBOutput component belongs to the Databases family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Use an existing connection
Select this check box and in the Component List drop-down list, select the relevant connection component to reuse the connection details you already defined.
Access
Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.

Secret
Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Secret key field, then enter the secret key between double quotes in the dialog box that opens and click OK to save it.

Use End Point
Select this check box and in the Server Url field displayed, specify the Web service URL of the DynamoDB database service. A sketch of an endpoint override follows this property list.

Region
Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example) in the list. For more information about AWS regions, see Regions and Endpoints.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
Table Name
Specify the name of the table in which you need to write data. This table must already exist.

Die on error
Select the check box to stop the execution of the Job when an error occurs.
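When Use End Point is selected, the component targets an explicit service URL instead of the default regional endpoint. As an illustration with the AWS SDK for Java (v1), the sketch below points a client at a hypothetical local DynamoDB instance used for testing; the URL and region are placeholder assumptions.

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoDBEndpointSketch {
    public static void main(String[] args) {
        // Server Url: an explicit endpoint override, here a local
        // DynamoDB instance. Credentials come from the default
        // provider chain in this sketch.
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "http://localhost:8000", "us-east-1"))
                .build();
        System.out.println(client.listTables().getTableNames());
    }
}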
Advanced settings
Throughput write percent
Enter, without using quotation marks, the percentage (expressed in decimal) of the table's write capacity to be used by this component. A worked example follows this section.

Advanced properties
Add properties to define extra operations you need tDynamoDBOutput to perform when writing data. This table is present for future evolution of the component and using it requires advanced knowledge of DynamoDB development.
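Throughput write percent throttles how much of the table's provisioned write capacity the component may consume. A quick worked example under assumed numbers:

public class ThroughputPercentExample {
    public static void main(String[] args) {
        // Hypothetical values: the table is provisioned with 100 write
        // capacity units (WCU) and Throughput write percent is 0.5.
        long provisionedWcu = 100L;
        double throughputWritePercent = 0.5;

        // The component then consumes at most 50 WCU, leaving headroom
        // for other writers on the same table.
        long usableWcu = (long) (provisionedWcu * throughputWritePercent);
        System.out.println("Usable WCU: " + usableWcu); // prints 50
    }
}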
Usage
Usage rule
This component is used as an end component and requires an input link.
This component should use a tDynamoDBConfiguration component present in the same Job to connect to DynamoDB.
This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection
You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.
tDynamoDBOutput properties for Apache Spark Streaming
These properties are used to configure tDynamoDBOutput running in the Spark Streaming Job framework.
The Spark Streaming tDynamoDBOutput component belongs to the Databases family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
Basic settings
Use an existing connection
Select this check box and in the Component List drop-down list, select the relevant connection component to reuse the connection details you already defined.
Access
Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.

Secret
Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Secret key field, then enter the secret key between double quotes in the dialog box that opens and click OK to save it.

Use End Point
Select this check box and in the Server Url field displayed, specify the Web service URL of the DynamoDB database service. See the endpoint sketch in the Spark Batch section above.

Region
Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example) in the list. For more information about AWS regions, see Regions and Endpoints.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields.
Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
Table Name
Specify the name of the table in which you need to write data. This table must already exist.

Die on error
Select the check box to stop the execution of the Job when an error occurs.
Advanced settings
Throughput write percent
Enter, without using quotation marks, the percentage (expressed in decimal) of the table's write capacity to be used by this component. A worked example is given in the corresponding Spark Batch section above.

Advanced properties
Add properties to define extra operations you need tDynamoDBOutput to perform when writing data. This table is present for future evolution of the component and using it requires advanced knowledge of DynamoDB development.
Usage
Usage rule
This component is used as an end component and requires an input link.
This component should use a tDynamoDBConfiguration component present in the same Job to connect to DynamoDB.
This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection
You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Streaming Job, see Reading and writing data in MongoDB using a Spark Streaming Job.