Writing data to and reading data from MongoDB using a Spark Batch Job
This scenario applies only to a subscription-based Talend solution with Big Data.
In this scenario, you create a Spark Batch Job that writes data about some movie directors
into the default MongoDB database and then reads that data
back from the database.
This data contains the names of these directors and the ID numbers distributed to them.
Note that the sample data is created for demonstration purposes only.
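As a rough illustration, records of this shape can be sketched in a few lines of Python. The field names and values below are hypothetical placeholders, not the actual demonstration data shipped with the scenario:

```python
# Hypothetical sample records: each pairs a director's ID number with a name.
# These placeholders stand in for the scenario's demonstration data.
sample_directors = [
    {"id": "1", "name": "Director One"},
    {"id": "2", "name": "Director Two"},
    {"id": "3", "name": "Director Three"},
]

# Rendered as semicolon-delimited lines, a common shape for
# the flat input files such Jobs read:
lines = [f"{record['id']};{record['name']}" for record in sample_directors]
for line in lines:
    print(line)
```

Each line maps naturally onto a two-column schema (an ID column and a name column), which is how the Job's components would describe the data before writing it to MongoDB.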
Prerequisite: ensure that the Spark cluster and the
MongoDB database to be used have been properly installed and are running.
To replicate this scenario, proceed as follows: