tDynamoDBInput retrieves data from an Amazon DynamoDB table and sends it to the component that follows for transformation.
Depending on the Talend solution you
are using, this component can be used in one, some or all of the following Job
frameworks:
- Standard: see tDynamoDBInput Standard properties. The component in this framework is available when you are using one of the Talend solutions with Big Data.
- Spark Batch: see tDynamoDBInput properties for Apache Spark Batch. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
tDynamoDBInput Standard properties
These properties are used to configure tDynamoDBInput running in the Standard Job framework.
The Standard tDynamoDBInput component belongs to the Big Data family.
The component in this framework is available when you are using one of the Talend solutions with Big Data.
Basic settings
Access Key | Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.
Secret Key | Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Secret key field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
Inherit credentials from AWS role | Select this check box to leverage the instance profile credentials. These credentials can be used on Amazon EC2 instances, and are provided through the Amazon EC2 metadata service. To use this option, your Job must be running within Amazon EC2 or another service that can leverage IAM roles for access to resources.
Assume role | Select this check box and specify the values for the following parameters used to create a new assumed role session. For more information about assuming roles, see AssumeRole.
Use End Point | Select this check box and in the Server Url field displayed, specify the Web service URL of the DynamoDB database service.
Region | Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example) in the list. For more information about the AWS regions, see AWS Regions and Endpoints.
Action | Select the operation to be performed from the drop-down list, either Query or Scan.
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
Table Name | Specify the name of the table to be queried or scanned.
Use advanced key condition expression | Select this check box and in the Advanced key condition expression field displayed, specify the advanced key condition expressions used to determine the items to be read.
Key condition expression | Specify the key condition expressions used to determine the items to be read. Note that only the items that meet all the key conditions defined in this table can be returned. This table is not available when the Use advanced key condition expression check box is selected.
Use filter expression | Select this check box to use the filter expression for the query or scan operation.
Use advanced filter expression | Select this check box and in the Advanced filter expression field displayed, specify the advanced filter expressions used to refine the results returned to you. This check box is available when the Use filter expression check box is selected.
Filter expression | Specify the filter expressions used to refine the results returned to you. Note that only the items that meet all the filter conditions defined in this table can be returned. This table is available when the Use filter expression check box is selected and the Use advanced filter expression check box is cleared.
Value mapping | Specify the placeholders for the expression attribute values. For more information, see Expression Attribute Values.
Name mapping | Specify the placeholders for the attribute names that conflict with the DynamoDB reserved words. For more information, see Expression Attribute Names. A sketch showing how these expressions and mappings combine in a query follows this table.
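To make the settings above concrete, here is a minimal sketch using the AWS SDK for Java (v1) that combines static credentials, a region, a key condition expression, a filter expression, and the value and name mappings. It is an illustration under stated assumptions, not the component's internal code: the table my_table, the key attribute id, and the attribute status are hypothetical, with status standing in for a DynamoDB reserved word that needs a name mapping.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

import java.util.HashMap;
import java.util.Map;

public class DynamoDbQuerySketch {
    public static void main(String[] args) {
        // Access Key / Secret Key fields: static credentials (placeholders here).
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY");

        // Region field; a custom Server Url would instead go through
        // withEndpointConfiguration(...).
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion("us-east-1")
                .build();

        // Value mapping: placeholders (":v_id", ":v_status") for attribute values.
        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":v_id", new AttributeValue().withS("1001"));
        values.put(":v_status", new AttributeValue().withS("ACTIVE"));

        // Name mapping: "#s" stands in for "status", a DynamoDB reserved word.
        Map<String, String> names = new HashMap<>();
        names.put("#s", "status");

        QueryRequest request = new QueryRequest()
                .withTableName("my_table")                 // Table Name field
                .withKeyConditionExpression("id = :v_id")  // Key condition expression
                .withFilterExpression("#s = :v_status")    // Filter expression
                .withExpressionAttributeValues(values)
                .withExpressionAttributeNames(names);

        // Only items matching both the key condition and the filter are returned.
        QueryResult result = client.query(request);
        result.getItems().forEach(System.out::println);
    }
}
```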
Advanced settings
STS Endpoint | Select this check box and in the field displayed, specify the AWS Security Token Service (STS) endpoint from which the session credentials are retrieved. This check box is available only when the Assume role check box is selected. A sketch of an assume-role call against a regional STS endpoint follows this table.
tStatCatcher Statistics | Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
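As an illustration of what the Assume role and STS Endpoint settings do, the following sketch assumes a role through a regional STS endpoint with the AWS SDK for Java (v1). The role ARN, session name, endpoint URL, and credentials are hypothetical placeholders.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.Credentials;

public class AssumeRoleSketch {
    public static void main(String[] args) {
        // STS Endpoint field: a regional STS endpoint instead of the global default.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://sts.us-east-1.amazonaws.com", "us-east-1"))
                .build();

        // Assume role settings: role ARN and session name (both hypothetical).
        AssumeRoleRequest request = new AssumeRoleRequest()
                .withRoleArn("arn:aws:iam::123456789012:role/dynamodb-reader")
                .withRoleSessionName("talend-dynamodb-session");

        // The temporary session credentials would then back the DynamoDB client.
        Credentials session = sts.assumeRole(request).getCredentials();
        System.out.println("Temporary access key: " + session.getAccessKeyId());
    }
}
```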
Global Variables
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
Usage
Usage rule | This component is usually used as a start component of a Job or Subjob and it always needs an output link.
Related scenarios
No scenario is available for the Standard version of this component yet.
tDynamoDBInput properties for Apache Spark Batch
These properties are used to configure tDynamoDBInput running in the Spark Batch Job framework.
The Spark Batch tDynamoDBInput component belongs to the Databases family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Use an existing connection | Select this check box and in the Component List drop-down list, select the desired connection component to reuse the connection details you already defined.
Access Key | Enter the access key ID that uniquely identifies an AWS account. For further information about how to get your access key and secret key, see Getting Your AWS Access Keys.
Secret Key | Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Secret key field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
Region | Specify the AWS region by selecting a region name from the list or entering a region between double quotation marks ("us-east-1" for example) in the list. For more information about the AWS regions, see AWS Regions and Endpoints.
Use End Point | Select this check box and in the Server Url field displayed, specify the Web service URL of the DynamoDB database service.
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. Click Edit schema to make changes to the schema. Note that if you make changes, the schema automatically becomes built-in.
Table Name | Specify the name of the table from which you need to read data.
Advanced settings
Number of scan segments | Enter, without using quotation marks, the number of segments for the parallel scan.
Number of partitions | Enter, without using quotation marks, the maximum number of partitions into which you want the input data to be split.
Throughput read percent | Enter, without using quotation marks, the percentage (expressed in decimal) of the read capacity pre-defined in Amazon to be used.
Advanced properties | Add properties to define extra operations you need tDynamoDBInput to perform when reading data. This table is present for future evolution of the component and using it requires advanced knowledge of DynamoDB development. A sketch of the parallel scan that the settings above tune follows this table.
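To illustrate how scan segments drive parallelism, here is a minimal sketch of a DynamoDB parallel scan with the AWS SDK for Java (v1): each worker scans one disjoint segment of the table, which is the kind of split the Number of scan segments and Number of partitions settings control. The table my_table and the segment count of 4 are hypothetical, and throttling reads to a fraction of the provisioned capacity (Throughput read percent) is left out.

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

public class ParallelScanSketch {
    // Plays the role of the "Number of scan segments" setting.
    private static final int TOTAL_SEGMENTS = 4;

    public static void main(String[] args) throws InterruptedException {
        // Credentials and region are resolved from the default provider chain here.
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // One worker per segment; each segment is a disjoint slice of the table,
        // which is what allows segments to map onto separate partitions.
        Thread[] workers = new Thread[TOTAL_SEGMENTS];
        for (int i = 0; i < TOTAL_SEGMENTS; i++) {
            final int segment = i;
            workers[i] = new Thread(() -> {
                ScanRequest request = new ScanRequest()
                        .withTableName("my_table")          // hypothetical table
                        .withTotalSegments(TOTAL_SEGMENTS)
                        .withSegment(segment);
                ScanResult result = client.scan(request);
                System.out.println("Segment " + segment + ": "
                        + result.getCount() + " items");
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }
    }
}
```

Note that a complete scan would also follow LastEvaluatedKey pagination within each segment; the sketch stops at the first page to keep the segmentation logic in focus.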
Usage
Usage rule | This component is used as a start component and requires an output link. This component should use a tDynamoDBConfiguration component present in the same Job to connect to DynamoDB. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection | You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis.
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.