tDynamoDBConfiguration – Docs for ESB 7.x

tDynamoDBConfiguration

Stores connection information and credentials to be reused by other DynamoDB
components.

You define the connection to a DynamoDB database in tDynamoDBConfiguration and configure the
other DynamoDB components to reuse this configuration. At runtime, the
Spark executors read this configuration in order to connect to
DynamoDB.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tDynamoDBConfiguration properties for Apache Spark Batch

These properties are used to configure tDynamoDBConfiguration running in the Spark Batch Job framework.

The Spark Batch
tDynamoDBConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Access key

Enter the access key ID that uniquely identifies an AWS account. For further
information about how to get your access key and secret key, see Getting Your
AWS Access Keys.

Secret key

Enter the secret access key, which, combined with the access key, forms your
AWS security credentials.

To enter the secret key, click the […] button next to the Secret key field,
enter the key between double quotes in the dialog box that opens, and click
OK to save the setting.

Region

Specify the AWS region by selecting a region name from the list. For more
information about AWS regions, see Regions and Endpoints.

Use End Point

Select this check box and, in the field that is displayed, specify the Web
service URL of the DynamoDB database service.
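
For context, these Basic settings correspond one-to-one to the pieces of a
DynamoDB client configuration. The sketch below shows an equivalent connection
built with the AWS SDK for Java v1; the SDK choice is an assumption for
illustration (not necessarily what the component uses internally), the key
values, region, and endpoint URL are placeholders, and this is not the code
that Talend Studio generates.

    import com.amazonaws.auth.AWSStaticCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

    public class DynamoDbConfigSketch {
        public static void main(String[] args) {
            // Access key / Secret key: static credentials (placeholder values).
            BasicAWSCredentials credentials =
                    new BasicAWSCredentials("MY_ACCESS_KEY", "MY_SECRET_KEY");

            AmazonDynamoDBClientBuilder builder = AmazonDynamoDBClientBuilder
                    .standard()
                    .withCredentials(new AWSStaticCredentialsProvider(credentials));

            boolean useEndPoint = false; // mirrors the "Use End Point" check box
            if (useEndPoint) {
                // Explicit Web service URL; a region is still needed for signing.
                builder.setEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(
                                "https://dynamodb.us-east-1.amazonaws.com",
                                "us-east-1"));
            } else {
                // Region: pick the region name from the list.
                builder.setRegion("us-east-1");
            }

            AmazonDynamoDB client = builder.build();
            System.out.println(client.listTables().getTableNames());
        }
    }

The region and the explicit endpoint are set exclusively above because the
SDK rejects a builder that carries both at once.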

Usage

Usage rule

This component is used standalone; it does not need to be connected to other
components.

The configuration in a tDynamoDBConfiguration component applies only to the
DynamoDB-related components in the same Job. In other words, the DynamoDB
components used in a parent or child Job called via tRunJob cannot reuse this
configuration.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to say, traditional Talend data
integration Jobs.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tDynamoDBConfiguration properties for Apache Spark Streaming

These properties are used to configure tDynamoDBConfiguration running in the Spark Streaming Job framework.

The Spark Streaming
tDynamoDBConfiguration component belongs to the Storage and the Databases families.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Access key

Enter the access key ID that uniquely identifies an AWS account. For further
information about how to get your access key and secret key, see Getting Your
AWS Access Keys.

Secret key

Enter the secret access key, which, combined with the access key, forms your
AWS security credentials.

To enter the secret key, click the […] button next to the Secret key field,
enter the key between double quotes in the dialog box that opens, and click
OK to save the setting.

Region

Specify the AWS region by selecting a region name from the list. For more
information about AWS regions, see Regions and Endpoints.

Use End Point

Select this check box and, in the field that is displayed, specify the Web
service URL of the DynamoDB database service.

Advanced settings

Connection pool

In this area, you configure, for each Spark executor, the connection pool
used to control the number of connections that stay open simultaneously. The
default values of the following connection pool parameters suit most use
cases; a sketch mapping these parameters onto a generic Java pool
configuration follows the list.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of
    connections to be open at the same time.

  • Max waiting time (ms): enter the maximum amount of time the connection
    pool waits to return a connection in response to a request. By default, it
    is -1, that is to say, the wait is unlimited.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.
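
These four parameters are the classic sizing knobs of a generic object pool.
Assuming Apache Commons Pool 2 as an illustrative stand-in (the component's
internal pooling library is not documented here), they map onto a pool
configuration as follows; the minIdle and maxIdle values are example values,
not documented defaults.

    import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

    public class PoolSizingSketch {
        public static void main(String[] args) {
            GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
            config.setMaxTotal(8);       // Max total number of connections (-1 = unlimited)
            config.setMaxWaitMillis(-1); // Max waiting time (ms): -1 = wait indefinitely
            config.setMinIdle(0);        // Min number of idle connections (example value)
            config.setMaxIdle(8);        // Max number of idle connections (example value)
            System.out.println("maxTotal=" + config.getMaxTotal()
                    + ", maxWaitMillis=" + config.getMaxWaitMillis());
        }
    }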

Evict connections

Select this check box to define criteria to destroy connections in the
connection pool. The following fields are displayed once you have selected
it; a sketch of the equivalent pool settings follows the list.

  • Time between two eviction runs: enter the time interval (in milliseconds)
    between two runs in which the component checks the status of the
    connections and destroys the idle ones.

  • Min idle time for a connection to be eligible to eviction: enter the time
    (in milliseconds) a connection must remain idle before it can be
    destroyed.

  • Soft min idle time for a connection to be eligible to eviction: this
    parameter works the same way as Min idle time for a connection to be
    eligible to eviction, but it preserves the minimum number of idle
    connections, the number you define in the Min number of idle connections
    field.
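
Continuing the same assumption (Apache Commons Pool 2 as an illustrative
stand-in), the three eviction criteria map onto the pool configuration like
this; the interval values are hypothetical examples.

    import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

    public class PoolEvictionSketch {
        public static void main(String[] args) {
            GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
            // Time between two eviction runs: check the connections every 30 s.
            config.setTimeBetweenEvictionRunsMillis(30_000L);
            // Min idle time for a connection to be eligible to eviction:
            // a connection idle for 120 s or more is always destroyed.
            config.setMinEvictableIdleTimeMillis(120_000L);
            // Soft min idle time: a connection idle for 60 s or more is
            // destroyed only while the pool holds more idle connections than
            // "Min number of idle connections".
            config.setSoftMinEvictableIdleTimeMillis(60_000L);
            config.setMinIdle(2); // hypothetical floor preserved by soft eviction
            System.out.println("evictor runs every "
                    + config.getTimeBetweenEvictionRunsMillis() + " ms");
        }
    }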

Usage

Usage rule

This component is used standalone; it does not need to be connected to other
components.

The configuration in a tDynamoDBConfiguration component applies only to the
DynamoDB-related components in the same Job. In other words, the DynamoDB
components used in a parent or child Job called via tRunJob cannot reuse this
configuration.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to say, traditional Talend data
integration Jobs.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.


Document retrieved from Talend: https://help.talend.com