tS3Configuration properties for Apache Spark Streaming
These properties are used to configure tS3Configuration running in the Spark Streaming Job framework.
The Spark Streaming tS3Configuration component belongs to the Storage family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Access Key |
Enter the access key ID that uniquely identifies an AWS account. |
||
Access Secret |
Enter the secret access key, constituting the security credentials in combination with the access key. To enter the secret key, click the [...] button next to the Access Secret field and enter the key in the dialog box that opens. |
||
Bucket name |
Enter the name of the bucket and the folder in it that you need to use. Separate the bucket name and the folder name with a slash (/). |
||
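As an illustration only, the Access Key, Access Secret and Bucket name fields correspond conceptually to standard Hadoop S3A properties that a Spark Job can consume. The following sketch is not the code generated by tS3Configuration; the application name, key values and bucket path are placeholders.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class S3aCredentialsSketch {
    public static void main(String[] args) {
        // Placeholder values standing in for the Access Key and Access Secret fields.
        SparkConf conf = new SparkConf()
                .setAppName("s3a-credentials-sketch")
                .setMaster("local[*]")
                // spark.hadoop.* properties are copied into the Hadoop configuration used by Spark.
                .set("spark.hadoop.fs.s3a.access.key", "MY_ACCESS_KEY_ID")
                .set("spark.hadoop.fs.s3a.secret.key", "MY_SECRET_ACCESS_KEY");

        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // With real credentials, data under the bucket/folder pair from the Bucket name
            // field could then be read as, for example:
            //   sc.textFile("s3a://mybucket/myfolder/part-*");
            System.out.println(sc.version());
        }
    }
}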
Use s3a filesystem |
Select this check box to use the S3A filesystem instead of the S3N filesystem used by default. This feature is available when you are using one of the supported distributions with Spark. |
||
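For background, the s3a:// scheme is served by the Hadoop class org.apache.hadoop.fs.s3a.S3AFileSystem, while s3n:// uses the older S3N implementation. The snippet below only illustrates this mapping; the explicit fs.s3a.impl setting is usually unnecessary in recent Hadoop releases.

import org.apache.hadoop.conf.Configuration;

public class S3aSchemeSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly bind the s3a:// scheme to the S3A implementation
        // (usually already the default in recent Hadoop releases).
        conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
        // With this in place, paths such as s3a://mybucket/myfolder go through S3A,
        // whereas s3n://mybucket/myfolder would use the older S3N implementation.
        System.out.println(conf.get("fs.s3a.impl"));
    }
}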
Inherit credentials from AWS | If you are using the S3A filesystem with EMR, you can select this check box to obtain AWS security credentials from your EMR instance metadata. To use this option, the Amazon EMR cluster must be started and your Job must be running on this cluster. For more information, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances. This option enables you to develop your Job without having to put any AWS credentials in the Job, so that the security policy of your organization can be more easily respected. |
||
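At the Hadoop S3A level, inheriting credentials from the EMR instance metadata roughly corresponds to switching the credentials provider, as in the following sketch. The provider class names come from the Hadoop S3A connector and the AWS SDK for Java v1; everything else is a placeholder.

import org.apache.spark.SparkConf;

public class S3aInstanceProfileSketch {
    public static void main(String[] args) {
        // Instead of static keys, ask the AWS SDK to read credentials from the
        // EC2/EMR instance metadata service, that is, from the role attached to the instances.
        SparkConf conf = new SparkConf()
                .setAppName("s3a-instance-profile-sketch")
                .set("spark.hadoop.fs.s3a.aws.credentials.provider",
                     "com.amazonaws.auth.InstanceProfileCredentialsProvider");
        // No fs.s3a.access.key / fs.s3a.secret.key properties are needed in this mode.
        System.out.println(conf.toDebugString());
    }
}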
Use SSE-KMS encryption with CMK | If you are using the S3A filesystem with EMR, you can select this check box to use the SSE-KMS encryption service enabled on AWS to read or write the encrypted data on S3. On the EMR side, the SSE-KMS service must have been enabled with the Default Encryption feature and a customer managed key (CMK) specified for the encryption. For further information about the AWS SSE-KMS encryption, see Protecting Data Using Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS). For further information about how to enable the Default Encryption feature for an Amazon S3 bucket, see Amazon S3 Default Encryption for S3 Buckets. |
||
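For reference, Hadoop S3A exposes SSE-KMS through the fs.s3a.server-side-encryption-algorithm and fs.s3a.server-side-encryption.key properties. The sketch below is illustrative only and the key ARN is a placeholder; it is not the configuration generated by the component.

import org.apache.spark.SparkConf;

public class S3aSseKmsSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("s3a-sse-kms-sketch")
                // Ask S3A to request SSE-KMS server-side encryption on writes.
                .set("spark.hadoop.fs.s3a.server-side-encryption-algorithm", "SSE-KMS")
                // Placeholder ARN of the customer managed key (CMK) enabled on the AWS side.
                .set("spark.hadoop.fs.s3a.server-side-encryption.key",
                     "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID");
        System.out.println(conf.toDebugString());
    }
}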
Use S3 bucket policy | If you have defined a bucket policy for the bucket to be used, select this check box and add the following parameter about AWS signature versions to the JVM argument list of your Job in the Advanced settings of the Run tab:
|
||
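The exact parameter is not reproduced above. As an assumption based on the AWS SDK for Java v1 rather than on this document, forcing Signature Version 4 for S3 is typically done with the com.amazonaws.services.s3.enableV4 system property, passed as a JVM argument or set programmatically:

// Usually passed in the JVM arguments of the Run tab rather than in code, for example:
//   -Dcom.amazonaws.services.s3.enableV4
// The same switch can also be set programmatically before any S3 client is created:
public class SigV4Sketch {
    public static void main(String[] args) {
        System.setProperty("com.amazonaws.services.s3.enableV4", "true");
        System.out.println(System.getProperty("com.amazonaws.services.s3.enableV4"));
    }
}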
Assume Role |
If you are using the S3A filesystem, you can select this check box to make your Job temporarily assume a role and use the permissions associated with this role. Ensure that access to this role has been granted to your user account by the trust policy associated with this role. After selecting this check box, specify the parameters that the administrator of the AWS system to be used has defined for this role.
The External ID parameter is required only if an external ID has been defined in the trust policy of this role. In addition, if the AWS administrator has enabled the STS endpoints for given regions, you can set the STS region or the STS endpoint to be used in the Advanced settings tab. This check box is available only for the distributions Talend supports.
This check box is also available when you are using Spark V1.6 and later versions. |
||
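For comparison, the Hadoop S3A connector implements role assumption through org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider and the fs.s3a.assumed.role.* properties (Hadoop 3.x). The role ARN and session name below are placeholders, and this is not the component's generated code.

import org.apache.spark.SparkConf;

public class S3aAssumeRoleSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("s3a-assume-role-sketch")
                // Route S3A authentication through STS role assumption.
                .set("spark.hadoop.fs.s3a.aws.credentials.provider",
                     "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
                // Placeholder ARN of the role to assume and an arbitrary session name.
                .set("spark.hadoop.fs.s3a.assumed.role.arn",
                     "arn:aws:iam::111122223333:role/example-role")
                .set("spark.hadoop.fs.s3a.assumed.role.session.name", "talend-job-session");
        System.out.println(conf.toDebugString());
    }
}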
Set region |
Select this check box and select the region to connect to. This feature is available when you are using one of the supported distributions with Spark. |
||
Set endpoint |
Select this check box and, in the Endpoint field that is displayed, enter the Amazon S3 region endpoint you need to use. If you leave this check box clear, the default endpoint is used. This feature is available when you are using one of the supported distributions with Spark. |
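As an indication of what the region and endpoint settings map to in Hadoop S3A, a regional endpoint can be supplied through the fs.s3a.endpoint property. The region and endpoint values below are examples only.

import org.apache.spark.SparkConf;

public class S3aEndpointSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("s3a-endpoint-sketch")
                // Point S3A at a specific regional endpoint (here eu-west-1, as an example).
                .set("spark.hadoop.fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com");
        System.out.println(conf.toDebugString());
    }
}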
Advanced settings
Set STS region and Set STS endpoint |
If the AWS administrator has enabled the STS endpoints for the regions you want to use, select the Set STS region check box and then select the regional endpoint to be used. If the endpoint you want to use is not available in this regional endpoint list, clear the Set STS region check box, then select the Set STS endpoint check box and enter the endpoint to be used. This service allows you to request temporary, limited-privilege credentials for the AWS user authentication. For a list of the STS endpoints you can use, see AWS Security Token Service. These check boxes are available only when you have selected the Assume Role check box. |
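In Hadoop S3A terms (Hadoop 3.x property names), the STS endpoint and region used for role assumption can be set as shown below; the values are placeholders and this sketch is only an analogy to the Set STS region and Set STS endpoint options.

import org.apache.spark.SparkConf;

public class S3aStsEndpointSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("s3a-sts-endpoint-sketch")
                // Regional STS endpoint used when assuming the role (example values).
                .set("spark.hadoop.fs.s3a.assumed.role.sts.endpoint", "sts.eu-west-1.amazonaws.com")
                .set("spark.hadoop.fs.s3a.assumed.role.sts.endpoint.region", "eu-west-1");
        System.out.println(conf.toDebugString());
    }
}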
Usage
Usage rule |
This component is used with no need to be connected to other components. Drop tS3Configuration along with the file system related subJob to be run in the same Job, so that the configuration is used by the whole Job at runtime. |
Spark Connection |
In the Spark Configuration tab of the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |