August 16, 2023

tAzureFSConfiguration properties for Apache Spark Batch – Docs for ESB 6.x

These properties are used to configure tAzureFSConfiguration running in the Spark Batch Job framework.

The Spark Batch tAzureFSConfiguration component belongs to the Storage family.

The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.

Basic settings

Azure FileSystem

Select the file system to be used. The parameters to be defined are then displayed accordingly.

The Azure Data Lake Store option is available only when you are using Hortonworks Data Platform V2.6.0 or Cloudera CDH 5.12.

When you use this component with Azure Blob Storage:

Blob storage account

Enter the name of the storage account you need to access. The storage account name can be found on the Storage accounts dashboard of the Microsoft Azure Storage system to be used. Ensure that the administrator of the system has granted you the appropriate access permissions to this storage account.

Account key

Enter the key associated with the storage account you need to access. Two keys are available for each account; by default, either of them can be used for this access.

Container

Enter the name of the blob container you need to use.
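Behind these settings, the Blob Storage connection resolves to the standard Hadoop-Azure (WASB) configuration properties. The sketch below shows the property key format and the wasbs:// URI scheme involved; the account, key, and container names are placeholders, not values from this document.

```python
def blob_storage_conf(account, account_key):
    """Return the Hadoop configuration entry mapping an Azure Blob
    storage account to its access key (hadoop-azure WASB connector)."""
    return {
        "fs.azure.account.key.%s.blob.core.windows.net" % account: account_key,
    }

def blob_path(container, account, path):
    """Build a wasbs:// URI pointing into the given blob container."""
    return "wasbs://%s@%s.blob.core.windows.net/%s" % (
        container, account, path.lstrip("/"))

# Placeholder values for illustration only.
conf = blob_storage_conf("mystorageaccount", "BASE64KEY==")
uri = blob_path("mycontainer", "mystorageaccount", "data/in.csv")
# uri is "wasbs://mycontainer@mystorageaccount.blob.core.windows.net/data/in.csv"
```

In a Talend Job, these values are entered in the component fields above rather than set by hand; the sketch only illustrates what the Blob storage account, Account key, and Container fields map to.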

When you use this component with Azure Data Lake Store:

Data Lake Store account

Enter the name of the Data Lake Store account you need to access. Ensure that
the administrator of the system has granted you the appropriate access
permissions to this account.

Application ID and Application key

Enter the authentication ID and the authentication key generated upon the registration of the application that the current Job you are developing uses to access Azure Data Lake Store.

Ensure that the application to be used has the appropriate permissions to access Azure Data Lake. You can check this on the Required permissions view of this application on Azure. For further information, see the Azure documentation on application authentication.

Token endpoint

Copy and paste the OAuth 2.0 token endpoint, which you can obtain from the Endpoints list accessible on the App registrations page.
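The three Data Lake Store settings above (Application ID, Application key, and Token endpoint) correspond to the client-credential OAuth 2.0 properties of the Hadoop adl:// connector. A minimal sketch, assuming that connector's standard property names; the IDs and endpoint below are placeholders:

```python
def adls_oauth_conf(client_id, client_key, token_endpoint):
    """Return the Hadoop configuration entries for client-credential
    OAuth 2.0 access to Azure Data Lake Store (adl:// connector)."""
    return {
        "fs.adl.oauth2.access.token.provider.type": "ClientCredential",
        "fs.adl.oauth2.client.id": client_id,        # Application ID
        "fs.adl.oauth2.credential": client_key,      # Application key
        "fs.adl.oauth2.refresh.url": token_endpoint, # OAuth 2.0 token endpoint
    }

# Placeholder values for illustration only.
conf = adls_oauth_conf(
    "00000000-aaaa-bbbb-cccc-dddddddddddd",
    "app-secret",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)
```

As with the Blob settings, the component fields fill in these values for you; the sketch only shows what each field maps to in the underlying configuration.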

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is used standalone in a Subjob to provide connection configuration to your Azure file system for the whole Job.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Parent topic: tAzureFSConfiguration

Document retrieved from Talend: https://help.talend.com