
Writing and reading data from MongoDB using a Spark Batch Job

This scenario applies only to a subscription-based Talend solution with Big Data.

In this scenario, you create a Spark Batch Job that writes data about some movie directors into the default MongoDB database and then reads that data back from the database.

[Figure: use_case-mongodbinput_spark_batch1.png — the Spark Batch Job used in this scenario]
The sample data about movie directors contains each director's name and the ID number assigned to them; an illustration of this shape is shown after the note below.

Note that the sample data is created for demonstration purposes only.
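As an illustration only (the actual sample values are not reproduced here, and the semicolon delimiter is an assumption of this sketch), each record pairs an ID number with a director's name:

1;Director A
2;Director B
3;Director C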

Prerequisite: ensure that the Spark cluster and the MongoDB database to be used have been properly installed and are running.
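For orientation, the write-then-read flow that this Job implements can be sketched directly in Spark code. This is a minimal sketch rather than the code Talend Studio generates: it assumes the MongoDB Spark Connector 10.x (format name "mongodb") is on the classpath, and the connection URI, database name, and collection name are illustrative placeholders.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

import java.util.Arrays;
import java.util.List;

public class MongoDirectorsBatch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("mongo_directors_batch")
                .getOrCreate();

        // Illustrative director records: an ID number paired with a name.
        List<Row> rows = Arrays.asList(
                RowFactory.create(1, "Director A"),
                RowFactory.create(2, "Director B"),
                RowFactory.create(3, "Director C"));
        StructType schema = new StructType()
                .add("id", DataTypes.IntegerType)
                .add("name", DataTypes.StringType);
        Dataset<Row> directors = spark.createDataFrame(rows, schema);

        // Write the records into MongoDB. "test" is MongoDB's default
        // database; the collection name "directors" is a placeholder.
        directors.write()
                .format("mongodb")
                .option("connection.uri", "mongodb://localhost:27017")
                .option("database", "test")
                .option("collection", "directors")
                .mode("append")
                .save();

        // Read the same collection back and display it.
        Dataset<Row> readBack = spark.read()
                .format("mongodb")
                .option("connection.uri", "mongodb://localhost:27017")
                .option("database", "test")
                .option("collection", "directors")
                .load();
        readBack.show();

        spark.stop();
    }
}

In the Studio, the same flow is instead built graphically, by wiring MongoDB input and output components into the Spark Batch Job shown in the figure above.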

To replicate this scenario, proceed as follows:


Source: Talend documentation, https://help.talend.com