tKafkaOutput
Publishes messages into a Kafka system.
This component receives messages serialized into byte arrays by its preceding component and issues these messages into a given Kafka system.
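For orientation, the following is a minimal sketch of what publishing byte-array messages looks like with the plain Apache Kafka producer API, which is the kind of client this component drives internally. The broker address, topic name and payload are placeholders, not values taken from a real Job.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class ByteArrayPublishSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address in hostname:port form (placeholder).
            props.put("bootstrap.servers", "localhost:9092");
            // The component sends raw byte arrays, so key and value
            // both use the byte-array serializer.
            props.put("key.serializer", ByteArraySerializer.class.getName());
            props.put("value.serializer", ByteArraySerializer.class.getName());
            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                byte[] payload = "hello".getBytes();
                // The target topic must already exist (placeholder name).
                producer.send(new ProducerRecord<>("my_topic", payload));
            }
        }
    }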
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tKafkaOutput Standard properties. The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
- Spark Streaming: see tKafkaOutput properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tKafkaOutput Standard properties
These properties are used to configure tKafkaOutput running in the Standard Job framework.
The Standard tKafkaOutput component belongs to the Internet family.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Note that the schema of this component is read-only. It stores the messages to be published.
Use an existing connection | Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.
Version | Select the version of the Kafka cluster to be used.
Broker list | Enter the addresses of the broker nodes of the Kafka cluster to be used. The form of this address should be hostname:port. If you need to specify several addresses, separate them using a comma (,).
Topic name | Enter the name of the topic you want to publish messages to. This topic must already exist.
Compress the data | Select the Compress the data check box to compress the output data.
Use SSL/TLS | Select this check box to enable the SSL or TLS encrypted connection. Then you need to use the tSetKeystore component in the same Job to specify the encryption information. This check box is available since Kafka 0.9.0.1.
Use Kerberos authentication | If the Kafka cluster to be used is secured with Kerberos, select this check box to display the related parameters to be defined. For further information about how a Kafka cluster is secured with Kerberos, see Apache Kafka's security documentation. This check box is available since Kafka 0.9.0.1.
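To make the mapping between these Basic settings and the underlying producer configuration concrete, here is a hedged sketch using standard Kafka producer property keys; the addresses, codec and keystore path are example values only, and in a real Job the SSL details come from tSetKeystore rather than hand-written properties.

    import java.util.Properties;

    public class BasicSettingsSketch {
        public static Properties producerConfig() {
            Properties props = new Properties();
            // Broker list: comma-separated hostname:port addresses.
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            // Compress the data: pick a producer compression codec.
            props.put("compression.type", "gzip");
            // Use SSL/TLS: encryption settings (placeholder path).
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/path/to/truststore.jks");
            // Use Kerberos authentication: SASL on top of SSL.
            // props.put("security.protocol", "SASL_SSL");
            // props.put("sasl.kerberos.service.name", "kafka");
            return props;
        }
    }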
Advanced settings
Kafka properties | Add the Kafka new producer properties you need to customize to this table. For further information about the new producer properties you can define in this table, see the section describing the new producer configuration in Kafka's documentation.
Set Headers | Select this check box to add headers to messages to be sent. This feature is available from Kafka 1.1.0 onwards.
tStatCatcher Statistics | Select this check box to gather the processing metadata at the Job level as well as at each component level.
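As an illustration of these Advanced settings, the sketch below overrides two common new-producer properties and attaches a header to an outgoing record; the property values, topic and header content are placeholders, and this is not the code the Studio generates.

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.ByteArraySerializer;

    public class AdvancedSettingsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", ByteArraySerializer.class.getName());
            props.put("value.serializer", ByteArraySerializer.class.getName());
            // Kafka properties table: customized new-producer properties.
            props.put("acks", "all");    // wait for full acknowledgement
            props.put("linger.ms", "5"); // allow a small batching delay
            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(
                        "my_topic", "hello".getBytes(StandardCharsets.UTF_8));
                // Set Headers: attach a header to the message (placeholder
                // header name and value).
                record.headers().add("source", "talend-job".getBytes(StandardCharsets.UTF_8));
                producer.send(record);
            }
        }
    }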
Usage
Usage rule | This component is an end component. It requires a tJavaRow or tJava component to transform the incoming data into serialized byte arrays. The following sample shows how to construct a statement to perform this transformation:

    output_row.serializedValue = input_row.users.getBytes();

In this code, the output_row variable represents the schema of the data to be output to tKafkaOutput, and output_row.serializedValue its single read-only column; the input_row variable represents the schema of the incoming data, and input_row.users the input column called users to be transformed into byte arrays by the getBytes() method.
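For readers who want the transformation in isolation, here is a small self-contained variant of the same string-to-bytes step with an explicit charset; the class and method names are hypothetical and are not part of the generated Job code.

    import java.nio.charset.StandardCharsets;

    public class SerializeSketch {
        // Mirrors output_row.serializedValue = input_row.users.getBytes():
        // turns an incoming string column into the byte array that
        // tKafkaOutput publishes. An explicit charset avoids depending
        // on the JVM default encoding.
        public static byte[] toSerializedValue(String users) {
            return users.getBytes(StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            byte[] serializedValue = toSerializedValue("alice;bob");
            System.out.println(serializedValue.length + " bytes ready to publish");
        }
    }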
Related scenarios
No scenario is available for the Standard version of this component yet.
tKafkaOutput properties for Apache Spark Streaming
These properties are used to configure tKafkaOutput running in the Spark Streaming Job framework.
The Spark Streaming tKafkaOutput component belongs to the Messaging family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Note that the schema of this component is read-only. It stores the messages to be published.
Broker list | Enter the addresses of the broker nodes of the Kafka cluster to be used. The form of this address should be hostname:port. If you need to specify several addresses, separate them using a comma (,).
Topic name | Enter the name of the topic you want to publish messages to. This topic must already exist.
Compress the data | Select the Compress the data check box to compress the output data.
Advanced settings
Kafka properties | Add the Kafka new producer properties you need to customize to this table. For further information about the new producer properties you can define in this table, see the section describing the new producer configuration in Kafka's documentation.
Connection pool | In this area, you configure, for each Spark executor, the connection pool used to control the number of connections that stay open simultaneously.
Evict connections | Select this check box to define criteria to destroy connections in the connection pool. The related parameters are displayed once you have selected this check box.
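The pooling and eviction semantics here are analogous to a generic object pool. As a hedged illustration only (Talend's internal pooling is not necessarily Apache Commons Pool), equivalent criteria expressed with Commons Pool 2 would look like this:

    import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

    public class PoolEvictionSketch {
        public static GenericObjectPoolConfig<Object> config() {
            GenericObjectPoolConfig<Object> cfg = new GenericObjectPoolConfig<>();
            cfg.setMaxTotal(8); // connections open simultaneously per executor
            cfg.setMinIdle(1);
            cfg.setMaxIdle(4);
            // Eviction criteria: how often the evictor runs, and how long a
            // connection must sit idle before it may be destroyed.
            cfg.setTimeBetweenEvictionRunsMillis(30_000L);
            cfg.setMinEvictableIdleTimeMillis(300_000L);
            return cfg;
        }
    }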
Usage
Usage rule | This component is used as an end component and requires an input link. This component needs a Write component such as tWriteJSONField to define a serializedValue column in the input schema to send serialized data (a standalone sketch of this serialization step follows this table). This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection | In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
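Since the serializedValue column typically carries a JSON document rendered as bytes (for example produced by tWriteJSONField upstream), the following standalone sketch shows the same idea with Jackson; the field names are hypothetical and this is an illustration, not the component's internals.

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.Map;

    public class JsonToBytesSketch {
        public static void main(String[] args) throws Exception {
            // A map standing in for one row of the input flow
            // (hypothetical fields).
            Map<String, Object> user = Map.of("name", "alice", "age", 30);
            ObjectMapper mapper = new ObjectMapper();
            // What the serializedValue column conceptually carries:
            // the row rendered as JSON bytes.
            byte[] serializedValue = mapper.writeValueAsBytes(user);
            System.out.println(new String(serializedValue));
        }
    }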
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.