July 30, 2023

tMapRDBOutput – Docs for ESB 7.x

tMapRDBOutput

Writes columns of data into a given MapRDB database.

tMapRDBOutput
receives data from its preceding component, creates a table in a given MapRDB database
and writes the received data into this table.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

  • Standard: see tMapRDBOutput Standard properties.

  • MapReduce (deprecated): see tMapRDBOutput MapReduce properties (deprecated).

  • Spark Batch: see tMapRDBOutput properties for Apache Spark Batch.

  • Spark Streaming: see tMapRDBOutput properties for Apache Spark Streaming.

tMapRDBOutput Standard properties

These properties are used to configure tMapRDBOutput running in the Standard Job framework.

The Standard
tMapRDBOutput component belongs to the Big Data and the Databases NoSQL families.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Distribution and
Version

Select the MapR distribution to be used. Only MapR V5.2 onwards is supported
by the MapRDB components.

If the distribution you need to use with your MapRDB database is not
officially supported by this MapRDB component, that is to say, this distribution is MapR
but is not listed in the Version drop-down list of
this component, or this distribution is not MapR at all, select Custom.

  1. Select Import from existing
    version
    to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and use directly in your connection. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from
    existing version option to take an existing distribution as a base
    and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend.
    Talend and its community provide you with the opportunity to connect to
    custom versions from the Studio but cannot guarantee that the configuration of
    whichever version you choose will be easy, due to the wide range of different
    Hadoop distributions and versions that are available. As such, you should only
    attempt to set up such a connection if you have sufficient Hadoop experience to
    handle any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Hadoop version of the
distribution

This list is displayed only when you have selected Custom from the distribution list to
connect to a cluster not yet officially supported by the Studio. In this situation, you
need to select the Hadoop version of this custom cluster, that is to say, Hadoop 1 or
Hadoop 2.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.
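
Behind these two fields sit standard HBase client properties. The sketch below, written against the plain HBase 1.x client API, only illustrates the kind of configuration involved and is not the component's actual code; the host name, port and znode path are hypothetical values to be replaced with those of your cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ZookeeperSettingsSketch {
        public static Configuration build() {
            // Hypothetical values; use the ones of your own MapR cluster.
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "maprdemo.example.com");   // Zookeeper quorum
            conf.set("hbase.zookeeper.property.clientPort", "5181");      // Zookeeper client port
            conf.set("zookeeper.znode.parent", "/hbase");                 // Set Zookeeper znode parent
            return conf;
        }
    }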

Use Kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then, enter the principal names in the displayed fields. You should be
able to find the information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by following
    the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username
    defined in the Job in each execution. If you need to reuse an existing ticket issued for the
    same username, leave both the Force MapR ticket
    authentication
    check box and the Use Kerberos
    authentication
    check box clear, and then MapR should be able to automatically
    find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to
be used in the Principal field and the access
path to the keytab file itself in the Keytab
field. This keytab file must be stored in the machine in which your Job actually
runs, for example, on a Talend
Jobserver.

Note that the user that executes a keytab-enabled Job is not necessarily
the one a principal designates but must have the right to read the keytab file being
used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
situation, ensure that user1 has the right to read the keytab
file to be used.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

Set table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Table name

Type in the name of the HBase table you need to create.

Action on table

Select the action you need to take for creating a table.

Custom Row Key

Select this check box to use the customized row keys. Once selected, the corresponding
field appears. Then type in the user-defined row key to index the rows
of the table being created.

For example, you can type in
"France"+Numeric.sequence("s1",1,1) to produce the
row key series: France1,
France2, France3 and so on.
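
A row key can also be composed from incoming schema columns. The expression below is a hypothetical sketch only: it assumes the input flow is named row1 and carries country and id columns.

    row1.country + "_" + row1.id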

Families

Complete this table to specify the column or columns to be created
and the corresponding column family or families they belong to
respectively. The Column column of
this table is automatically filled once you have defined the
schema.
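
Conceptually, each schema column mapped in this table ends up as one HBase cell stored under its column family. The sketch below, based on the HBase 1.x client API, only illustrates that mapping and is not the component's implementation; the family name f1 and column name col1 are hypothetical.

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamilyMappingSketch {
        public static Put toPut(String rowKey, String value) {
            // One cell per schema column, grouped under its column family.
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("col1"), Bytes.toBytes(value));
            return put;
        }
    }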

Die on error

This check box is cleared by default, meaning that rows in error are skipped
and the process is completed for error-free rows.

Advanced settings

Use batch mode

Select this check box to activate the batch mode for data processing.

Batch size

Specify the number of records to be processed in each batch.

This field appears only when the Use batch mode
check box is selected.
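
Batch mode is comparable to buffering writes on the HBase client side and flushing them in groups instead of sending each row individually. The following sketch, using the HBase 1.x client API, only illustrates that idea and is not the component's actual implementation; the table name mytable is hypothetical.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.BufferedMutator;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Put;

    public class BatchWriteSketch {
        public static void writeBatch(Connection connection, Iterable<Put> puts) throws Exception {
            // Writes are buffered and flushed in groups rather than sent one by one.
            try (BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("mytable"))) {
                for (Put put : puts) {
                    mutator.mutate(put);
                }
                mutator.flush();
            }
        }
    }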

Properties

If you need to use custom configuration for your database, complete this table with the
property or properties to be customized. Then at runtime, the customized property or
properties will override the corresponding ones used by the Studio.

For example, you need to define the value of the dfs.replication property as 1 for the
database configuration. Then you need to add one row to this table using the plus button and
type in the name and the value of this property in this row.

Note:

This table is not available when you are using an existing
connection by selecting the Use an
existing connection
check box in the Basic settings view.
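
At runtime, each row of this table amounts to one key/value override applied to the Hadoop/HBase configuration used for the connection. The sketch below illustrates that effect with the dfs.replication example above; it is an assumption about the behavior, not the Studio's actual code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CustomPropertySketch {
        public static Configuration withOverrides() {
            Configuration conf = HBaseConfiguration.create();
            // Each Properties row becomes one override, for example dfs.replication = 1.
            conf.set("dfs.replication", "1");
            return conf;
        }
    }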

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Family parameters

Type in the names and, when need be, the custom performance options of the column
families to be created. This feature leverages attributes defined by the
HBase data model, so for further explanation about these options, see the
Apache HBase documentation.

Note: The parameter Compression type allows you to select the
format for output data compression.
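
These options map onto column family attributes of the HBase data model. As a rough illustration with the HBase 1.x administration API (not the component's actual code), a family created with GZ compression could look like the sketch below; the table name mytable and family name f1 are hypothetical.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression;

    public class FamilyParametersSketch {
        public static HTableDescriptor describeTable() {
            HColumnDescriptor family = new HColumnDescriptor("f1");
            // The Compression type parameter corresponds to this family attribute.
            family.setCompressionType(Compression.Algorithm.GZ);
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("mytable"));
            table.addFamily(family);
            return table;
        }
    }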

Global Variables

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.
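
For example, a component placed after tMapRDBOutput, such as tJava, can read the NB_LINE variable through the globalMap; the instance name tMapRDBOutput_1 below is hypothetical and depends on your Job.

    // Number of rows written by the tMapRDBOutput_1 instance (After variable).
    Integer written = (Integer) globalMap.get("tMapRDBOutput_1_NB_LINE");
    System.out.println("Rows written: " + written);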

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component is normally an end component of a Job and always
needs an input link.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR's documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box of the Window menu, as shown in
    the hedged example after this list. This argument provides the Studio with the path
    to the native library of that MapR client and allows the subscription-based users to
    make full use of the Data viewer to view, locally in the Studio, the data stored
    in MapR.
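
A minimal sketch of that VM argument, assuming a typical Linux installation path; the actual location depends on your MapR client installation and version:

    -Djava.library.path=/opt/mapr/hadoop/hadoop-VERSION/lib/native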

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Related scenario

This component is similar to tHBaseOutput. For a related scenario, see Exchanging customer data with HBase.

tMapRDBOutput MapReduce properties (deprecated)

These properties are used to configure tMapRDBOutput running in the MapReduce Job framework.

The MapReduce
tMapRDBOutput component belongs to the MapReduce and the Databases families.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

The properties are stored centrally under the Hadoop
Cluster
node of the Repository
tree.

Distribution and
Version

Select the MapR distribution to be used. Only MapR V5.2 onwards is supported
by the MapRDB components.

If the distribution you need to use with your MapRDB database is not
officially supported by this MapRDB component, that is to say, this distribution is MapR
but is not listed in the Version drop-down list of
this component, or this distribution is not MapR at all, select Custom.

  1. Select Import from existing
    version
    to import an officially supported distribution as base
    and then add other required jar files which the base distribution does not
    provide.

  2. Select Import from zip to
    import the configuration zip for the custom distribution to be used. This zip
    file should contain the libraries of the different Hadoop elements and the index
    file of these libraries.

    In Talend Exchange, members of the Talend community have shared some
    ready-for-use configuration zip files which you can download from this Hadoop
    configuration list and use directly in your connection. However, because of
    the ongoing evolution of the different Hadoop-related projects, you might not be
    able to find the configuration zip corresponding to your distribution in this
    list; in that case, it is recommended to use the Import from
    existing version option to take an existing distribution as a base
    and add the jars required by your distribution.

    Note that custom versions are not officially supported by Talend.
    Talend and its community provide you with the opportunity to connect to
    custom versions from the Studio but cannot guarantee that the configuration of
    whichever version you choose will be easy, due to the wide range of different
    Hadoop distributions and versions that are available. As such, you should only
    attempt to set up such a connection if you have sufficient Hadoop experience to
    handle any issues on your own.

    Note:

    In this dialog box, the active check box must be kept
    selected so as to import the jar files pertinent to the connection to be
    created between the custom distribution and this component.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.

Zookeeper quorum

Type in the name or the URL of the Zookeeper service you use to coordinate the transaction
between your Studio and your database. Note that when you configure the Zookeeper, you might
need to explicitly set the zookeeper.znode.parent
property to define the path to the root znode that contains all the znodes created and used
by your database; then select the Set Zookeeper znode
parent
check box to define this property.

Zookeeper client port

Type in the number of the client listening port of the Zookeeper service you are
using.

Use Kerberos authentication

If the database to be used is running with Kerberos security, select this
check box, then, enter the principal names in the displayed fields. You should be
able to find the information in the hbase-site.xml file of the
cluster to be used.

  • If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the
    MapR ticket authentication configuration in addition or as an alternative by following
    the explanation in Connecting to a security-enabled MapR.

    Keep in mind that this configuration generates a new MapR security ticket for the username
    defined in the Job in each execution. If you need to reuse an existing ticket issued for the
    same username, leave both the Force MapR ticket
    authentication
    check box and the Use Kerberos
    authentication
    check box clear, and then MapR should be able to automatically
    find that ticket on the fly.

If you need to use a Kerberos keytab file to log in, select Use a keytab to authenticate. A keytab file contains
pairs of Kerberos principals and encrypted keys. You need to enter the principal to
be used in the Principal field and the access
path to the keytab file itself in the Keytab
field. This keytab file must be stored in the machine in which your Job actually
runs, for example, on a Talend
Jobserver.

Note that the user that executes a keytab-enabled Job is not necessarily
the one a principal designates but must have the right to read the keytab file being
used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this
situation, ensure that user1 has the right to read the keytab
file to be used.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Type in the name of the table in which you need to write data. This table must already
exist.

Table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Row key column

Select the column used as the row key column of the table.

Then if needs be, select the Store row key column to HBase
column
check box to make the row key column a column
belonging to a specific column family.

Families

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

The Column column of this table is automatically filled
once you have defined the schema; in the Family name
column, enter the column families you want to create or use to group the columns in the
Column column. For further information about a column
family, see Apache documentation at Column families.

Advanced settings

Properties

If you need to use custom configuration for your database, complete this table with the
property or properties to be customized. Then at runtime, the customized property or
properties will override the corresponding ones used by the Studio.

For example, you need to define the value of the dfs.replication property as 1 for the
database configuration. Then you need to add one row to this table using the plus button and
type in the name and the value of this property in this row.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as an end component and requires
a transformation component as its input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

The Hadoop configuration you use for the whole Job and the Hadoop distribution you use for
this component must be the same. This component requires its Hadoop distribution parameter
to be defined separately so that its database driver is loaded only when the component is
used.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Prerequisites

Before starting, ensure that you have met the Loopback IP prerequisites expected by your
database.

The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio. The following list presents MapR-related information as an
example.

  • Ensure that you have installed the MapR client in the machine where the Studio is,
    and added the MapR client library to the PATH variable of that machine. According
    to MapR's documentation, the library or libraries of a MapR client corresponding to
    each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native.
    For example, the library for Windows is lib\native\MapRClient.dll in the MapR
    client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.

    Without adding the specified library or libraries, you may encounter the following
    error: no MapRClient in java.library.path.

  • Set the -Djava.library.path argument, for example in the Job Run VM arguments area
    of the Run/Debug view in the Preferences dialog box of the Window menu. This argument
    provides the Studio with the path to the native library of that MapR client and allows
    the subscription-based users to make full use of the Data viewer to view, locally in
    the Studio, the data stored in MapR.

For further information about how to install a Hadoop distribution, see the manuals
corresponding to the Hadoop distribution you are using.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tMapRDBOutput properties for Apache Spark Batch

These properties are used to configure tMapRDBOutput running in the Spark Batch Job framework.

The Spark Batch
tMapRDBOutput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Storage configuration

Select the tMapRDBConfiguration component from which the
Spark system to be used reads the configuration information to connect to MapRDB.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Type in the name of the table in which you need to write data. This
table must already exist.

Table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Row key column

Select the column used as the row key column of the table.

Then if needs be, select the Store row key
column to HBase column
check box to make the row key column a
column belonging to a specific column family.

Families

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

The Column column of this table is automatically filled
once you have defined the schema; in the Family name
column, enter the column families you want to create or use to group the columns in the
Column column. For further information about a column
family, see Apache documentation at Column families.

Advanced settings

Use batch mode

Select this check box to activate the batch mode for data processing.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component uses a tMapRDBConfiguration component present in the same Job to connect to
MapR-DB.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tMapRDBOutput properties for Apache Spark Streaming

These properties are used to configure tMapRDBOutput running in the Spark Streaming Job framework.

The Spark Streaming
tMapRDBOutput component belongs to the Databases family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Storage configuration

Select the tMapRDBConfiguration component from which the
Spark system to be used reads the configuration information to connect to MapRDB.

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Type in the name of the table in which you need to write data. This
table must already exist.

Table Namespace mappings

Enter the string to be used to construct the mapping between an Apache HBase table and a
MapR table.

For the valid syntax you can use, see http://doc.mapr.com/display/MapR40x/Mapping+Table+Namespace+Between+Apache+HBase+Tables+and+MapR+Tables.

Row key column

Select the column used as the row key column of the table.

Then if needs be, select the Store row key
column to HBase column
check box to make the row key column a
column belonging to a specific column family.

Families

Complete this table to map the columns of the table to be used with the schema columns you
have defined for the data flow to be processed.

The Column column of this table is automatically filled
once you have defined the schema; in the Family name
column, enter the column families you want to create or use to group the columns in the
Column column. For further information about a column
family, see Apache documentation at Column families.

Advanced settings

Use batch mode

Select this check box to activate the batch mode for data processing.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component uses a tMapRDBConfiguration component present in the same Job to connect to
MapR-DB.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

