tMongoDBOutput

Executes the action defined on the collection in the MongoDB database.

tMongoDBOutput
inserts, updates, upserts or deletes documents in a MongoDB database collection based on
the incoming flow from the preceding component in the Job.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

  • Standard

  • Spark Batch

  • Spark Streaming

tMongoDBOutput Standard properties

These properties are used to configure tMongoDBOutput running in the Standard Job framework.

The Standard
tMongoDBOutput component belongs to the Big Data and the Databases NoSQL families.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Use existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

DB Version

List of the database versions.

Available when the Use existing
connection
check box is not selected.

Use replica set address

Select this check box to show the Replica
address
table.

In the Replica address table, you can
define multiple MongoDB database servers for failover.

Available when the Use existing
connection
check box is not selected.

Server and Port

IP address and listening port of the database server.

Available when neither the Use existing connection check box nor the
Use replica set address check box is selected.

Database

Name of the database.

Use SSL connection

Select this check box to enable the SSL or TLS encrypted connection.

Then you need to use the tSetKeystore
component in the same Job to specify the encryption information.

Note that the SSL connection is available only for MongoDB version 2.4 and later.
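
In the Studio, the encryption details are supplied through tSetKeystore. For reference only, the following is a minimal sketch of the equivalent setting with the MongoDB Java driver; the host, port and database name are placeholders, not values taken from this documentation.

    import com.mongodb.ConnectionString;
    import com.mongodb.MongoClientSettings;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    public class SslConnectionSketch {
        public static void main(String[] args) {
            MongoClientSettings settings = MongoClientSettings.builder()
                    .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                    // counterpart of the Use SSL connection check box
                    .applyToSslSettings(ssl -> ssl.enabled(true))
                    .build();
            try (MongoClient client = MongoClients.create(settings)) {
                System.out.println(client.getDatabase("talend").getName());
            }
        }
    }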

Set write concern

Select this check box to set the level of acknowledgement requested from MongoDB for write
operations. Then you need to select the level for this operation.

For further information, see the related MongoDB documentation on http://docs.mongodb.org/manual/core/write-concern/.
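
As an illustration only, here is a minimal sketch of what an acknowledgement level such as MAJORITY looks like with the MongoDB Java driver; the database, collection and document values are assumptions made for this example.

    import com.mongodb.WriteConcern;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class WriteConcernSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // The write returns only after a majority of replica-set members acknowledge it.
                MongoCollection<Document> blog = client.getDatabase("talend")
                        .getCollection("blog")
                        .withWriteConcern(WriteConcern.MAJORITY);
                blog.insertOne(new Document("id", 1).append("title", "example"));
            }
        }
    }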

Bulk write

Select this check box to insert, update or remove data in bulk. Note that this feature is available only with MongoDB 2.6 or later.

Then you need to select Ordered or Unordered to define how the MongoDB database processes the data
sent by the Studio.

  • If you select Ordered,
    MongoDB processes the queries sequentially.

  • If you select Unordered,
    MongoDB optimizes the bulk write operations without keeping the order in which
    the individual operations were inserted in the bulk write.

In the Bulk write size field, enter the size of each
query group to be processed by MongoDB. In the documentation of MongoDB, some restrictions
and expected behaviors as to this size are explained. You can find the details on http://docs.mongodb.org/manual/core/bulk-write-operations/.
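
To illustrate the difference between the two modes, here is a minimal sketch of an unordered bulk write with the MongoDB Java driver; the collection name and the documents are illustrative assumptions, not the code generated by the Studio.

    import java.util.Arrays;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.BulkWriteOptions;
    import com.mongodb.client.model.DeleteOneModel;
    import com.mongodb.client.model.Filters;
    import com.mongodb.client.model.InsertOneModel;
    import com.mongodb.client.model.UpdateOneModel;
    import com.mongodb.client.model.Updates;
    import org.bson.Document;

    public class BulkWriteSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> blog = client.getDatabase("talend").getCollection("blog");
                // ordered(false) is the counterpart of Unordered: MongoDB may reorder the
                // operations for efficiency and keeps processing after an individual failure.
                // ordered(true) corresponds to Ordered: the queries are processed sequentially.
                blog.bulkWrite(Arrays.asList(
                        new InsertOneModel<>(new Document("id", 4).append("title", "new post")),
                        new UpdateOneModel<>(Filters.eq("id", 3), Updates.set("author", "anna")),
                        new DeleteOneModel<>(Filters.eq("id", 1))),
                        new BulkWriteOptions().ordered(false));
            }
        }
    }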

Required authentication

Select this check box to enable the database authentication.

Among the mechanisms listed in the Authentication mechanism
drop-down list, NEGOTIATE is recommended if
you are not using Kerberos, because it automatically selects the authentication mechanism
best suited to the MongoDB version you are using.

For details about the other mechanisms in this list, see MongoDB Authentication from the MongoDB
documentation.

Set Authentication database

If the username to be used to connect to MongoDB has been created in a specific
Authentication database of MongoDB, select this check box to enter the name of this
Authentication database in the Authentication database
field that is displayed.

For further information about the MongoDB Authentication database, see User Authentication database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Available when the Required
authentication
check box is selected.

If the security system you have selected from the Authentication mechanism drop-down list is Kerberos, you need to
fill in the User principal, Realm and KDC server
fields instead of the Username and Password
fields.
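
For reference only, here is a minimal sketch of a credential-based connection with the MongoDB Java driver, assuming a user created in the admin Authentication database; the user name, password, host and database name are placeholders.

    import java.util.Collections;
    import com.mongodb.MongoClientSettings;
    import com.mongodb.MongoCredential;
    import com.mongodb.ServerAddress;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    public class AuthenticationSketch {
        public static void main(String[] args) {
            // createCredential() lets the driver negotiate the authentication mechanism,
            // similar to NEGOTIATE; "admin" stands in for the Authentication database.
            MongoCredential credential =
                    MongoCredential.createCredential("talend_user", "admin", "secret".toCharArray());
            MongoClientSettings settings = MongoClientSettings.builder()
                    .applyToClusterSettings(cluster ->
                            cluster.hosts(Collections.singletonList(new ServerAddress("localhost", 27017))))
                    .credential(credential)
                    .build();
            try (MongoClient client = MongoClients.create(settings)) {
                for (String name : client.getDatabase("talend").listCollectionNames()) {
                    System.out.println(name);
                }
            }
        }
    }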

Collection

Name of the collection in the MongoDB database.

Drop collection if exist

Select this check box to drop the collection if it already
exists.

Action on data

The following operations are available:

  • Insert: inserts documents.

  • Set: modifies the existing fields of an existing
    document and appends a field if it does not exist in this document.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

  • Update: replaces the existing documents with the
    incoming data but keeps the technical ID of these documents.

  • Upsert: inserts a document if it does not exist;
    otherwise, it applies the same rules as Update.

  • Upsert with set: inserts a document if it does
    not exist; otherwise, it applies the same rules as Set.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

  • Delete: deletes documents.
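
For readers who want to relate these actions to plain MongoDB operations, here is a minimal sketch of roughly equivalent calls with the MongoDB Java driver; the collection name, key and field values are assumptions made for this example, and this is not the code the Studio generates.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.ReplaceOptions;
    import com.mongodb.client.model.UpdateOptions;
    import com.mongodb.client.model.Updates;
    import org.bson.Document;

    public class ActionOnDataSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> blog = client.getDatabase("talend").getCollection("blog");
                Document key = new Document("id", 3);                            // key column(s) of the schema
                Document row = new Document("id", 3).append("author", "anna");   // incoming row

                blog.insertOne(row);                                             // Insert
                blog.updateOne(key, Updates.set("author", "anna"));              // Set (first matching document)
                blog.updateMany(key, Updates.set("author", "anna"));             // Set with Update all document
                blog.replaceOne(key, row);                                       // Update (document replaced, _id kept)
                blog.replaceOne(key, row, new ReplaceOptions().upsert(true));    // Upsert
                blog.updateOne(key, Updates.set("author", "anna"),
                               new UpdateOptions().upsert(true));                // Upsert with set
                blog.deleteOne(key);                                             // Delete
            }
        }
    }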

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Click Sync columns to retrieve the
schema from the previous component connected in the Job.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

Mapping

Each column of the schema defined for this component represents a field of the documents
to be read. In this table, you need to specify the parent nodes of these fields, if
any.

For example, for a document of the form { "_id" : ..., "person" : { "first" : ..., "last" : ... } }, the
first and the last fields have person as their parent node but the _id field does not
have any parent node. So once completed, the Mapping table maps first and last to the
parent node "person" and leaves the parent node of _id empty.
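
To make the effect of the parent-node mapping concrete, here is a minimal sketch of the nested document such a mapping produces; the field values are illustrative assumptions.

    import org.bson.Document;

    public class MappingSketch {
        public static void main(String[] args) {
            // first and last mapped to the parent node "person"; _id left without a parent node.
            Document doc = new Document("_id", 1)
                    .append("person", new Document("first", "John").append("last", "Doe"));
            System.out.println(doc.toJson());
            // {"_id": 1, "person": {"first": "John", "last": "Doe"}}
        }
    }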

Not available when the Generate JSON
Document
check box is selected in Advanced settings.

Die on error

This check box is cleared by default, meaning that rows on error are skipped and the
process is completed for the error-free rows.

Advanced settings

Generate JSON Document

Select this check box for JSON configuration:

Configure JSON Tree: click the
[…] button to open the interface
for JSON tree configuration. For more information, see Configuring a JSON Tree.

Group by: click the [+] button to add lines and choose the input
columns for grouping the records.

Remove root node: select this check
box to remove the root node.

Data node and Query node (available for update and upsert actions):
type in the names of the data node and the query node configured in the JSON
tree.

Warning:

These nodes are mandatory for update and upsert actions. They
enable the update and upsert actions but are not stored in the
database.
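
As a rough illustration, and assuming the query node carries the match criteria while the data node carries the fields to write (which is how the upsert scenario below uses them), the generated document is applied conceptually as in the following MongoDB Java driver sketch; the names and values are placeholders.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.UpdateOptions;
    import org.bson.Document;

    public class DataQueryNodeSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> blog = client.getDatabase("talend").getCollection("blog");
                Document query = new Document("id", 3);                            // fields under the query node
                Document data = new Document("id", 3).append("author", "anna");    // fields under the data node
                // The wrapper nodes themselves are not stored; only the fields they contain are.
                blog.updateOne(query, new Document("$set", data), new UpdateOptions().upsert(true));
            }
        }
    }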

No query timeout

Select this check box to prevent MongoDB servers from closing idle
cursors after 10 minutes of inactivity. In this situation, an idle
cursor stays open until either the results of this cursor are
exhausted or you close it manually using the
cursor.close() method.

A cursor for MongoDB is a pointer to the result set of a query. By
default, that is to say, when this check box is cleared, a MongoDB
server automatically closes idle cursors after a given inactivity period
to avoid excess memory use. For further information about MongoDB
cursors, see https://docs.mongodb.org/manual/core/cursors/.
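
For reference only, a minimal sketch of the corresponding option with the MongoDB Java driver is shown below; the database and collection names are placeholders.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCursor;
    import org.bson.Document;

    public class NoCursorTimeoutSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // noCursorTimeout(true) is the driver-level counterpart of No query timeout:
                // the server no longer closes this cursor after its idle-time limit.
                MongoCursor<Document> cursor = client.getDatabase("talend")
                        .getCollection("blog")
                        .find()
                        .noCursorTimeout(true)
                        .iterator();
                try {
                    while (cursor.hasNext()) {
                        System.out.println(cursor.next().toJson());
                    }
                } finally {
                    cursor.close();   // close such cursors explicitly once you are done with them
                }
            }
        }
    }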

tStatCatcher Statistics

Select this check box to collect the log data at the component
level.

Global Variables

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.
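
As an illustration, the snippet below reads these variables from a tJava component placed after this component; it assumes the output component instance is named tMongoDBOutput_1, which is a hypothetical name.

    // Paste into a tJava component linked after tMongoDBOutput_1 with an OnSubjobOk trigger.
    Integer written = (Integer) globalMap.get("tMongoDBOutput_1_NB_LINE");
    String error = (String) globalMap.get("tMongoDBOutput_1_ERROR_MESSAGE");
    System.out.println("Documents written: " + written);
    if (error != null) {
        System.out.println("Last error: " + error);
    }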

Usage

Usage rule

tMongoDBOutput executes the action
defined on the collection in the MongoDB database based on the flow
incoming from the preceding component in the Job.

Limitation

Note:

  • The “multi” parameter, which allows multiple documents to be
    updated at a time, is not supported. Therefore, if two
    documents have the same key, the first is always updated,
    but the second never is.

  • For the update operation, the key cannot be a JSON
    array.

Creating a collection and writing data to it

This scenario applies only to Talend products with Big Data.

This scenario creates the collection blog and
writes post data to it.

Linking the components

  1. Drop tMongoDBConnection, tFixedFlowInput, tMongoDBOutput, tMongoDBClose, tMongoDBInput and tLogRow
    onto the workspace.
  2. Rename tFixedFlowInput as blog_post_data, tMongoDBOutput as write_data_to_collection, tMongoDBInput as read_data_from_collection and tLogRow as show_data_from_collection.
  3. Link tMongoDBConnection to tFixedFlowInput using the OnSubjobOk trigger.
  4. Link tFixedFlowInput to tMongoDBOutput using a Row > Main
    connection.
  5. Link tFixedFlowInput to tMongoDBInput using the OnSubjobOk trigger.
  6. Link tMongoDBInput to tMongoDBClose using the OnSubjobOk trigger.
  7. Link tMongoDBInput to tLogRow using a Row > Main
    connection.

    tMongoDBOutput_1.png

Configuring the components

  1. Double-click tMongoDBConnection to open
    its Basic settings view.

    tMongoDBOutput_2.png

  2. From the DB Version list, select the
    MongoDB version you are using.
  3. In the Server and Port fields, enter the connection details.

    In the Database field, enter the name of the MongoDB
    database.
  4. Double-click tFixedFlowInput to open its
    Basic settings view.

    tMongoDBOutput_3.png

    Select Use Inline Content (delimited
    file)
    in the Mode
    area.
    In the Content field, enter the data to write to the
    MongoDB database, for example:
  5. Double-click tMongoDBOutput to open its
    Basic settings view.

    tMongoDBOutput_4.png

    Select the Use existing connection and
    Drop collection if exist check
    boxes.
    In the Collection field, enter the name
    of the collection, namely blog.
  6. Click the […] button next to Edit schema to open the schema editor.

    tMongoDBOutput_5.png

  7. Click the [+] button to add five columns
    in the right part, namely id, author, title, keywords and
    contents, setting the Type of id to Integer and that of the other columns to String.

    Click

    tMongoDBOutput_6.png

    to copy all the columns to the input table.

    Click OK to close the editor.
  8. The columns now appear in the left part of the Mapping area.

    For columns author, title, keywords and
    contents, enter their parent node
    post. By doing so, those nodes reside
    under the node post in the MongoDB
    collection.
  9. Double-click tMongoDBInput to open its
    Basic settings view.

    tMongoDBOutput_7.png

    Select the Use existing connection check
    box.
    In the Collection field, enter the name
    of the collection, namely blog.
  10. Click the […] button next to Edit schema to open the schema editor.

    tMongoDBOutput_8.png

  11. Click the [+] button to add five columns,
    namely id, author, title, keywords and contents, setting the Type of id to Integer and
    that of the other columns to String.

    Click OK to close the editor.
  12. The columns now appear in the left part of the Mapping area.

    For columns author, title, keywords and contents,
    enter their parent node post so that the
    data can be retrieved from the correct positions.
  13. In the Sort by area, click the [+] button to add one line and enter id under Column.

    Select asc from the Order asc or desc? column to the right of the id column. This way, the retrieved records will
    appear in ascending order of the id
    column.
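
The Sort by setting above corresponds, roughly, to the following query with the MongoDB Java driver; this is only an illustrative sketch, not the code the Studio generates.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.model.Sorts;
    import org.bson.Document;

    public class SortByIdSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                // Read the blog collection ordered by id ascending.
                for (Document doc : client.getDatabase("talend")
                                          .getCollection("blog")
                                          .find()
                                          .sort(Sorts.ascending("id"))) {
                    System.out.println(doc.toJson());
                }
            }
        }
    }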

Executing the Job

  1. Press Ctrl+S to save the Job.
  2. Press F6 to run the Job.

    tMongoDBOutput_9.png

  3. Switch to the database talend and read data from the
    collection blog in the MongoDB command
    line client. You can find that author,
    title, keywords and contents all
    reside under the node post. Meanwhile,
    the records are stored in the same order as the source input.

    tMongoDBOutput_10.png

Upserting records in a collection

This scenario applies only to Talend products with Big Data.

This scenario upserts records in the collection blog: an existing
record has its author changed and a new record is added. Before the upsert, the collection blog looks like:

Such records can be inserted into the database by following the instructions of
Creating a collection and writing data to it.

Linking the components

  1. Drop tMongoDBConnection, tFixedFlowInput, tMongoDBOutput, tMongoDBClose, tMongoDBInput and tLogRow
    from the Palette onto the design
    workspace.
  2. Rename tFixedFlowInput as blog_post_data, tMongoDBOutput as write_data_to_collection, tMongoDBInput as read_data_from_collection and tLogRow as show_data_from_collection.
  3. Link tMongoDBConnection to tFixedFlowInput using the OnSubjobOk trigger.
  4. Link tFixedFlowInput to tMongoDBOutput using a Row > Main
    connection.
  5. Link tFixedFlowInput to tMongoDBInput using the OnSubjobOk trigger.
  6. Link tMongoDBInput to tMongoDBClose using the OnSubjobOk trigger.
  7. Link tMongoDBInput to tLogRow using a Row > Main
    connection.

    tMongoDBOutput_11.png

Configuring the components

  1. Double-click tMongoDBConnection to open
    its Basic settings view.

    tMongoDBOutput_12.png

  2. From the DB Version list, select the
    MongoDB version you are using.
  3. In the Server and Port fields, enter the connection details.

    In the Database field, enter the name of the MongoDB
    database.
  4. Double-click tFixedFlowInput to open its
    Basic settings view.

    tMongoDBOutput_13.png

    Select Use Inline Content (delimited
    file)
    in the Mode
    area.
    In the Content field, enter the data for upserting the
    MongoDB database, for example:
    As shown above, the 3rd record has its author changed and the 4th record
    is new.
  5. Double-click tMongoDBOutput to open its
    Basic settings view.

    tMongoDBOutput_14.png

    Select the Use existing connection and
    Die on error check boxes.
    In the Collection field, enter the name
    of the collection, namely blog.
    Select Upsert from the Action on data list.
  6. Click the […] button next to Edit schema to open the schema editor.

    tMongoDBOutput_5.png

  7. Click the [+] button to add five columns
    in the right part, namely id, author, title, keywords and
    contents, setting the Type of id to Integer and that of the other columns to String.

    Click

    tMongoDBOutput_6.png

    to copy all the columns to the input table.

    Click OK to close the editor.
  8. In the Advanced Settings view, select the
    Generate JSON Document check
    box.

    Select the Remove root node check box.
    In the Data node and Query node fields, enter “data” and “query”.
    tMongoDBOutput_17.png

  9. Click the […] button next to Configure JSON Tree to open the configuration
    interface.

    tMongoDBOutput_18.png

  10. Right-click the node rootTag and select
    Add Sub-element from the contextual
    menu.

    In the dialog box that appears, type in data for the Data
    node
    :
    tMongoDBOutput_19.png

    Click OK to close the window.
    Repeat this operation to define query
    as the Query node.
    Right-click the node data and select
    Set As Loop Element from the contextual
    menu.
    Warning:

    These nodes are mandatory for update and upsert actions. They
    enable the update and upsert actions but are not stored in the
    database.

  11. Select all the columns under the Schema
    list
    and drop them to the data node.

    In the window that appears, select Create as
    sub-element of target node
    .
    tMongoDBOutput_20.png

    Click OK to close the window.
    Repeat this operation to drop the id
    column from the Schema list under the
    Query node.
  12. Right-click the node id under data and select Add
    Attribute
    from the contextual menu.

    In the dialog box that appears, type in type as the attribute name:
    tMongoDBOutput_21.png

    Click OK to close the window.
    Right-click the node @type under
    id and select Set A Fix Value from the contextual menu.
    In the dialog box that appears, type in integer as the attribute value, ensuring the id values are stored as integers in the
    database.
    tMongoDBOutput_22.png

    Click OK to close the window.
    Repeat this operation to set this attribute for the id node under Query.
    Click OK to close the JSON Tree
    configuration interface.
  13. Double-click tMongoDBInput to open its
    Basic settings view.

    tMongoDBOutput_23.png

    Select the Use existing connection check
    box.
    In the Collection field, enter the name
    of the collection, namely blog.
    Click the […] button next to Edit schema to open the schema editor.
    tMongoDBOutput_24.png

    Click the [+] button to add five columns,
    namely id, author, title, keywords and contents, setting the Type of id to Integer and
    that of the other columns to String.
    Click OK to close the editor.
    The columns now appear in the left part of the Mapping area.
    For columns author, title, keywords and contents,
    enter their parent node post so that the
    data can be retrieved from the correct positions.
  14. Double-click tLogRow to open its
    Basic settings view.

    tMongoDBOutput_25.png

    In the Mode area, select Table (print values in cells of a table) for
    better display.

Executing the Job

  1. Press Ctrl+S to save the Job.
  2. Press F6 to run the Job.

    tMongoDBOutput_26.png

    As shown above, the 3rd record has its author updated and the 4th record
    is inserted.

tMongoDBOutput properties for Apache Spark Batch

These properties are used to configure tMongoDBOutput running in the Spark Batch Job framework.

The Spark Batch
tMongoDBOutput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

MongoDB configuration

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

If a column in the database is a JSON document and you need to read
the entire document, put an asterisk (*) in the DB
column
column, without surrounding quotation marks.

Collection

Enter the name of the collection to be used.

A MongoDB collection is the equivalent of an RDBMS table and contains
documents.

Set write concern

Select this check box to set the level of acknowledgement requested from MongoDB for write
operations. Then you need to select the level for this operation.

For further information, see the related MongoDB documentation on http://docs.mongodb.org/manual/core/write-concern/.

Action on data

The following operations are available:

  • Insert: inserts documents.

  • Set: modifies the existing fields of an existing
    document and appends a field if it does not exist in this document.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

    If you need to append a new field to a parent node you specify in the Mapping table, select Append to
    parent
    . If you leave this check box clear, this new field is appended
    to the root of the document being updated.

  • Upsert with set: inserts a document if it does
    not exist; otherwise, it applies the same rules as Set.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

    If you need to append a new field to a parent node you specify in the Mapping table, select Append to
    parent
    . If you leave this check box clear, this new field is appended
    to the root of the document being updated.

Mapping

Each column of the schema defined for this component represents a field of the documents
to be read. In this table, you need to specify the parent nodes of these fields, if
any.

For example, for a document of the form { "_id" : ..., "person" : { "first" : ..., "last" : ... } }, the
first and the last fields have person as their parent node but the _id field does not
have any parent node. So once completed, the Mapping table maps first and last to the
parent node "person" and leaves the parent node of _id empty.

Advanced settings

Advanced Hadoop MongoDB
properties

Add properties to define extra operations you need tMongoDBOutput to perform when writing data.

The available properties are listed and explained in MongoDB Connector for
Hadoop
.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tMongoDBConfiguration component present in the same Job to connect
to a MongoDB database. You need to drop a tMongoDBConfiguration component alongside this component and
configure the Basic settings of this component
to use tMongoDBConfiguration.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario in which tMongoDBOutput is used, see Writing and reading data from MongoDB using a Spark Batch Job.

tMongoDBOutput properties for Apache Spark Streaming

These properties are used to configure tMongoDBOutput running in the Spark Streaming Job framework.

The Spark Streaming
tMongoDBOutput component belongs to the Databases family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

MongoDB configuration

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Schema and Edit
Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

If a column in the database is a JSON document and you need to read
the entire document, put an asterisk (*) in the DB
column
column, without surrounding quotation marks.

Collection

Enter the name of the collection to be used.

A MongoDB collection is the equivalent of an RDBMS table and contains
documents.

Set write concern

Select this check box to set the level of acknowledgement requested from MongoDB for write
operations. Then you need to select the level for this operation.

For further information, see the related MongoDB documentation on http://docs.mongodb.org/manual/core/write-concern/.

Action on data

The following operations are available:

  • Insert: inserts documents.

  • Set: modifies the existing fields of an existing
    document and appends a field if it does not exist in this document.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

    If you need to append a new field to a parent node you specify in the Mapping table, select Append to
    parent
    . If you leave this check box clear, this new field is appended
    to the root of the document being updated.

  • Upsert with set: inserts a document if it does
    not exist; otherwise, it applies the same rules as Set.

    If you need to apply this action on all the documents in the collection to be
    used, select the Update all document check box that
    is displayed; otherwise, only the first document is updated.

    If you need to append a new field to a parent node you specify in the Mapping table, select Append to
    parent
    . If you leave this check box clear, this new field is appended
    to the root of the document being updated.

Mapping

Each column of the schema defined for this component represents a field of the documents
to be read. In this table, you need to specify the parent nodes of these fields, if
any.

For example, for a document of the form { "_id" : ..., "person" : { "first" : ..., "last" : ... } }, the
first and the last fields have person as their parent node but the _id field does not
have any parent node. So once completed, the Mapping table maps first and last to the
parent node "person" and leaves the parent node of _id empty.

Advanced settings

Advanced Hadoop MongoDB
properties

Add properties to define extra operations you need tMongoDBOutput to perform when writing data.

The available properties are listed and explained in MongoDB Connector for
Hadoop
.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tMongoDBConfiguration component present in the same Job to connect
to a MongoDB database. You need to drop a tMongoDBConfiguration component alongside this component and
configure the Basic settings of this component
to use tMongoDBConfiguration.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario in which tMongoDBOutput is used, see Reading and writing data in MongoDB using a Spark Streaming Job.

