
tKuduConfiguration

Enables the reuse of the connection configuration to Cloudera Kudu in the same
Job.

tKuduConfiguration provides Kudu connection information to the Kudu components
used in the same Spark Job. The Spark cluster to be used reads this
configuration to connect to Kudu.

tKuduConfiguration properties for Apache Spark Batch

These properties are used to configure tKuduConfiguration running in the Spark Batch Job framework.

The Spark Batch
tKuduConfiguration component belongs to the Storage and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Server connection

Click the [+] button to add as many rows as there are Kudu masters to use, one row per master.

Then enter the locations and the listening ports of the master nodes of the Kudu service to be used.

This component supports only the Apache Kudu service installed on Cloudera.

For compatibility information between Apache Kudu and Cloudera, see the related
Cloudera documentation: Compatibility Matrix for Apache Kudu.
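
For reference, the sketch below shows what a Spark batch read from Kudu looks
like with the Apache kudu-spark connector, which consumes the same kind of
master address list you enter in the Server connection table. The master host
names, the port (7051 is the Kudu master default), and the table name are
placeholders, and this is not the exact code that Talend generates.

  // A minimal sketch of reading a Kudu table from a Spark batch job with
  // the Apache kudu-spark connector. Host names, the port, and the table
  // name below are placeholder assumptions.
  import org.apache.spark.sql.SparkSession

  object KuduReadSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("kudu-read-sketch")
        .getOrCreate()

      // Comma-separated "host:port" list of Kudu master addresses -- the
      // same values entered row by row in the Server connection table.
      val kuduMasters = "kudu-master-1:7051,kudu-master-2:7051,kudu-master-3:7051"

      val df = spark.read
        .format("org.apache.kudu.spark.kudu") // data source provided by kudu-spark
        .option("kudu.master", kuduMasters)
        .option("kudu.table", "my_table")     // placeholder table name
        .load()

      df.show()
      spark.stop()
    }
  }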

Usage

Usage rule

This component is used standalone: it does not need to be connected to other
components.

Use it only when you need to connect to a Cloudera Kudu cluster.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a
given Spark cluster for the whole Job. In addition, since the Job expects its
dependent jar files for execution, you must specify the directory in the file
system to which these jar files are transferred so that Spark can access them:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage
      staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in
      the Windows Azure Storage configuration area in the Spark configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage
      for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your
      actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole HDFS
      system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically, this
      system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file
    system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.
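
To make the role of this staging directory concrete, the sketch below shows
how an equivalent hand-written Spark session might point a Yarn Job at an HDFS
directory for its transferred jar files, using the standard
spark.yarn.stagingDir property. The master URL, deploy mode, and HDFS path are
placeholder assumptions; in a Talend Job this configuration is built for you
from the Spark configuration tab.

  // A minimal sketch of the kind of per-Job connection settings the Spark
  // configuration tab produces for a Yarn deployment backed by HDFS.
  // The namenode address and staging path are placeholders.
  import org.apache.spark.sql.SparkSession

  object SparkConnectionSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("spark-connection-sketch")
        .master("yarn") // Yarn client or Yarn cluster mode
        .config("spark.submit.deployMode", "client")
        // Directory to which the dependent jar files are transferred so the
        // cluster can fetch them -- the role tHDFSConfiguration plays on HDFS.
        .config("spark.yarn.stagingDir", "hdfs://namenode:8020/user/talend/staging")
        .getOrCreate()

      // ... the components of the Job run against this session ...
      spark.stop()
    }
  }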

Related scenario

No scenario is available for the Spark Batch version of this component yet.