tMapRStreamsOutput
Publishes messages into a MapR Streams system. Only MapR V5.2 onwards is supported
by this component.
This component receives messages serialized into byte arrays by its preceding component and issues these messages into a given MapR Streams system.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tMapRStreamsOutput Standard properties. The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
- Spark Streaming: see tMapRStreamsOutput properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and in Talend Data Fabric.
tMapRStreamsOutput Standard properties
These properties are used to configure tMapRStreamsOutput running in the Standard Job framework.
The Standard
tMapRStreamsOutput component belongs to the Internet family.
The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Note that the schema of this component is read-only. It stores the messages to be published.
Use an existing connection |
Select this check box and, from the list displayed, select the relevant connection component to reuse the connection details you already defined.
Distribution and Version |
Select the MapR distribution to be used. Only MapR V5.2 onwards is supported by this component. If the distribution you need to use is not displayed in this list, it is not officially supported by this component.
Topic name |
Enter the name of the topic you want to publish messages to. This topic must already exist.
Compress the data |
Select the Compress the data check box to compress the output data.
Advanced settings
Producer properties |
Add the MapR Streams producer properties you need to customize to this table. For further information about the producer configuration you can define in this table, see the MapR Streams documentation. An illustration follows this table.
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
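As an illustration only, two entries you might add to the Producer properties table. The property names come from the Kafka producer API, which MapR Streams implements; check the MapR documentation for the exact subset your cluster version supports:

    Property        Value
    "batch.size"    "16384"
    "client.id"     "talend_producer_1"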
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide. An example of reading the After variable follows this table.
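For example, a tJava component placed after this one could read that variable as follows; the instance name tMapRStreamsOutput_1 is whatever the component is actually called in your Job:

    // tJava code: read the After variable from the Job's global map.
    String errorMessage = (String) globalMap.get("tMapRStreamsOutput_1_ERROR_MESSAGE");
    if (errorMessage != null) {
        System.err.println("Publishing to MapR Streams failed: " + errorMessage);
    }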
Usage
Usage rule |
This component is an end component. It requires a tJavaRow or tJava component to transform the incoming data into serialized byte arrays. The following sample shows how to construct a statement to perform this transformation in tJavaRow, where users is an illustrative column name:
output_row.serializedValue = input_row.users.getBytes();
In this code, the output_row variable represents the schema of the data to be output to tMapRStreamsOutput and output_row.serializedValue its single read-only column; the input_row variable represents the schema of the incoming data and input_row.users the column called users of that schema. A fuller sketch follows this table.
Prerequisites |
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using.
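Expanding on the one-line sample in the Usage rule above, a tJavaRow body can combine several incoming columns before serializing them. A minimal sketch, assuming hypothetical input columns id and name:

    // tJavaRow main code: build a delimited record from two hypothetical
    // columns, then serialize it into the read-only serializedValue column
    // consumed by tMapRStreamsOutput.
    output_row.serializedValue = (input_row.id + ";" + input_row.name)
            .getBytes(java.nio.charset.StandardCharsets.UTF_8);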
Related scenarios
No scenario is available for the Standard version of this component yet.
tMapRStreamsOutput properties for Apache Spark Streaming
These properties are used to configure tMapRStreamsOutput running in the Spark Streaming Job framework.
The Spark Streaming
tMapRStreamsOutput component belongs to the Messaging family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Note that the schema of this component is read-only. It stores the messages to be published.
Topic name |
Enter the name of the topic you want to publish messages to. This topic must already exist.
Compress the data |
Select the Compress the data check box to compress the output data.
Advanced settings
Producer properties |
Add the MapR Streams producer properties you need to customize to this table. For further information about the producer configuration you can define in this table, see the MapR Streams documentation and the illustration under the Standard Advanced settings above.
Connection pool |
In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously. The default values given to the connection pool parameters are good enough for most use cases.
Evict connections |
Select this check box to define criteria to destroy connections in the connection pool. The related parameters are displayed once you have selected this check box.
Usage
Usage rule |
This component is used as an end component and requires an input link. It needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema, which carries the serialized data to be sent. An illustrative flow is shown after this table.
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them.
This connection is effective on a per-Job basis.
Prerequisites |
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using.
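As the Usage rule above notes, the incoming schema must carry a serializedValue column. An illustrative flow, with the component choice being an assumption rather than something prescribed by the documentation:

    tMapRStreamsInput --> tWriteJSONField (writes the serializedValue column) --> tMapRStreamsOutput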
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.