tDataEncrypt properties for Apache Spark Batch
These properties are used to configure tDataEncrypt running in the Spark Batch Job
framework.
The Spark Batch
tDataEncrypt component belongs to the Data Quality family.
The component in this framework is available in Talend Data Management Platform, Talend Big Data Platform, Talend Real Time Big Data Platform, Talend Data Services Platform, Talend MDM Platform and in Talend Data Fabric.
Basic settings
Schema and Edit Schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Sync columns to retrieve the schema from the previous component connected in the Job. Click Edit schema to make changes to the schema. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Password |
This value must be enclosed in double quotes. When using an existing cryptographic file, enter the password required to use that file. When generating a cryptographic file, enter a new password. This password is required to decrypt back the data encrypted by this component. |
Cryptographic file |
This value must be enclosed in double quotes. When using an existing cryptographic file, enter the file path. When generating a cryptographic file, enter the destination file path. This must be a local file path. The cryptographic file is encrypted using AES-GCM. This cryptographic file is required to decrypt back the data using the tDataDecrypt component. |
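As background for the AES-GCM mention above, the following is a minimal, self-contained Java sketch of password-based AES-GCM encryption and decryption using only the JDK. The password, salt handling, iteration count, and the choice of PBKDF2 for key derivation are illustrative assumptions for the sketch, not Talend's actual implementation of the cryptographic file.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class GcmSketch {

    // Encrypt then decrypt a value with a key derived from the password;
    // returns the recovered plaintext to show the round trip works.
    static String roundTrip(String plaintext, char[] password) throws Exception {
        SecureRandom rng = new SecureRandom();
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];            // 96-bit nonce, the usual size for GCM
        rng.nextBytes(salt);
        rng.nextBytes(iv);

        // Derive a 256-bit AES key from the password (PBKDF2 is an assumption here)
        SecretKeyFactory kf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = kf.generateSecret(
                new PBEKeySpec(password, salt, 65_536, 256)).getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        // AES-GCM is authenticated encryption: any tampering makes decryption fail
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("key material", "s3cret".toCharArray())); // prints "key material"
    }
}
```

Because GCM is authenticated, a wrong password or a modified file does not yield garbage output; decryption throws an exception instead, which is why the password entered here must match the one used when the file was generated.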
Generate cryptographic file |
Click this button to generate the cryptographic file. In the dialog box, select the cryptographic
method used to encrypt the input data. |
Encryption |
Select the corresponding Encrypt check boxes to encrypt column data. You can encrypt all column data types. Encrypted columns are output as String data, so configure the output schema of the component accordingly. The columns that are not selected are not encrypted and are passed through unchanged. |
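To illustrate the schema behavior described above, here is a toy Java sketch: the selected column is carried as a Base64 String (standing in for the real AES-GCM ciphertext, which is binary), while the unselected column passes through unchanged. The column names, values, and the stand-in "cipher" are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

public class ColumnSketch {

    // Stand-in for the real encryption: any cipher output is binary bytes,
    // so the encrypted column must be carried as a String (here, Base64).
    static String encryptToString(String plain) {
        byte[] ciphertext = plain.getBytes(StandardCharsets.UTF_8); // placeholder "ciphertext"
        return Base64.getEncoder().encodeToString(ciphertext);
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", 42);          // not selected: passes through unchanged
        row.put("name", "Alice");   // selected for encryption

        Map<String, Object> out = new LinkedHashMap<>();
        out.put("id", row.get("id"));                               // unchanged, keeps its type
        out.put("name", encryptToString((String) row.get("name"))); // becomes a String column
        System.out.println(out);    // prints {id=42, name=QWxpY2U=}
    }
}
```

This is why the output schema types of encrypted columns differ from the input: whatever the input type, the ciphertext can only be represented as a String.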
Advanced settings
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Usage
Usage rule |
This component is usually used as an intermediate component, and it requires an input component and an output component. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |