
Reading and writing data in MongoDB using a Spark Streaming Job

This scenario applies only to Talend Real Time Big Data Platform and Talend Data Fabric.

In this scenario, you create a Spark Streaming Job to extract data about
given movie directors from MongoDB, use this data to filter and complete movie
information and then write the result into a MongoDB collection.

[Image: Reading and writing data in MongoDB using a Spark Streaming Job_1.png]
The sample data about movie directors contains the directors' names and the ID
numbers assigned to them.
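As an illustration only (the original listing is not reproduced in this excerpt,
so the values below are placeholders), each record pairs an ID with a name:

  1;Gregg Araki
  2;P.J. Hogan
  3;Alan Rudolph
  4;Alex Proyas
  5;Alex Sichel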

In the MongoDB collection, each of these records is stored as a document.
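A minimal sketch of such a document, with assumed field names and a placeholder
ObjectId:

  {
    "_id" : ObjectId("..."),
    "id" : 3,
    "name" : "Alan Rudolph"
  }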

Note that the sample data is created for demonstration purposes only.
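The Job itself is built graphically in the Studio rather than written by hand.
Purely for reference, a rough equivalent of its lookup-and-write logic in plain
Spark could look like the sketch below. It assumes the MongoDB Spark connector
10.x API; all connection URIs, database, collection, path and field names are
placeholders, and for readability it uses a simple batch read instead of a
streaming source.

  // Illustrative sketch only, not the code generated by Talend Studio.
  // Assumes the MongoDB Spark connector 10.x is on the classpath; names
  // and URIs below are placeholders.
  import org.apache.spark.sql.SparkSession

  object DirectorLookupSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("mongo_director_lookup")
        .getOrCreate()

      // Read the director reference data from MongoDB.
      val directors = spark.read
        .format("mongodb")
        .option("connection.uri", "mongodb://localhost:27017")
        .option("database", "moviedb")        // placeholder database name
        .option("collection", "directors")    // placeholder collection name
        .load()

      // Movie records coming from another source (placeholder path and schema).
      val movies = spark.read
        .option("header", "true")
        .csv("hdfs://namenode:8020/user/talend/movies")

      // Keep only the movies whose directorID matches a known director and
      // complete each record with that director's name.
      val enriched = movies.join(directors, movies("directorID") === directors("id"))

      // Write the result into the target MongoDB collection.
      enriched.write
        .format("mongodb")
        .mode("append")
        .option("connection.uri", "mongodb://localhost:27017")
        .option("database", "moviedb")
        .option("collection", "movies_enriched")  // placeholder collection name
        .save()

      spark.stop()
    }
  }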

In this scenario, Spark uses tHDFSConfiguration to connect to the HDFS system to
which the jar files that the Job depends on are transferred.

In the Spark Configuration tab in the Run view, define the connection to a given
Spark cluster for the whole Job. In addition, since the Job requires its
dependent jar files at execution time, you must specify the directory in the
file system to which these jar files are transferred so that Spark can access
them (a minimal sketch of the underlying Spark settings is given after the list
below):

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage staging
      bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in
      the Windows Azure Storage configuration area in the Spark configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for
      Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your actual
      business data in the S3 system with Qubole. Without tS3Configuration, this
      business data is written in the Qubole HDFS system and destroyed once you
      shut down your cluster.

    • When using on-premise distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file
    system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present in
    your Job, your business data is written directly in DBFS (Databricks
    Filesystem).
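Behind these Studio settings, the underlying idea is simply to tell Spark which
file system and directory to use when staging the Job's dependencies. As a
minimal sketch only, for a Yarn/HDFS setup this corresponds to standard Spark
properties such as the following (the values are placeholders, and the Studio
does not necessarily write these exact properties):

  spark.master             yarn
  spark.submit.deployMode  client
  spark.yarn.stagingDir    hdfs://namenode:8020/user/talend/staging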

Prerequisites:

  • The Spark cluster and the MongoDB database to be used have
    been properly installed and are running.

  • The above-mentioned data has been loaded in the MongoDB
    collection to be used.

To replicate this scenario, proceed as follows:


Source: Talend documentation, https://help.talend.com