tKafkaConnection
Opens a reusable Kafka connection.
The tKafkaConnection component opens a connection to a given Kafka cluster so that the other Kafka components in subJobs can reuse this connection.
tKafkaConnection Standard properties
These properties are used to configure tKafkaConnection running in the Standard Job framework.
The Standard tKafkaConnection component belongs to the Internet family.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Version | Select the version of the Kafka cluster to be used.
Zookeeper quorum list | Enter the address of the Zookeeper service of the Kafka cluster to be used. The form of this address should be hostname:port. If you need to specify several addresses, separate them using a comma (,).
Broker list | Enter the addresses of the broker nodes of the Kafka cluster to be used. The form of this address should be hostname:port. If you need to specify several addresses, separate them using a comma (,).
Use SSL/TLS | Select this check box to enable the SSL- or TLS-encrypted connection. Then you need to use the tSetKeystore component in the same Job to specify the encryption information. This check box is available since Kafka 0.9.0.1.
Use Kerberos authentication | If the Kafka cluster to be used is secured with Kerberos, select this check box to use Kerberos authentication. For further information about how a Kafka cluster is secured with Kerberos, see the Kafka documentation. This check box is available since Kafka 0.9.0.1.
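For reference, these Basic settings map onto standard Apache Kafka client properties. Below is a minimal sketch, assuming the plain Kafka Java client rather than the Talend component (which manages these properties for you); the host names, ports, paths, and passwords are placeholders.

    import java.util.Properties;

    public class TKafkaConnectionProps {
        // Rough client-side equivalents of the Basic settings above.
        public static Properties basicSettingsAsClientProps() {
            Properties props = new Properties();
            // Broker list: comma-separated hostname:port pairs
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            // Zookeeper quorum list (used by the old consumer API only)
            // props.put("zookeeper.connect", "zk1:2181,zk2:2181");
            // Use SSL/TLS: encrypt the connection and trust the brokers' certificates
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/path/to/truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            // Use Kerberos authentication: authenticate via SASL/GSSAPI
            // (use "SASL_SSL" as the protocol to combine it with SSL above)
            // props.put("security.protocol", "SASL_PLAINTEXT");
            // props.put("sasl.kerberos.service.name", "kafka");
            return props;
        }
    }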
Advanced settings
tStatCatcher Statistics | Select this check box to gather the Job processing metadata at the Job level as well as at each component level.
Usage
Usage rule | This component is used standalone to create the Kafka connection that the other Kafka components in subJobs can reuse.
Related scenarios
No scenario is available for the Standard version of this component yet.
Kafka and AVRO in a Job
The regular Kafka components and the Kafka components for AVRO handle AVRO data differently, reflecting the two approaches AVRO provides to (de)serialize data in the AVRO format.
- The regular Kafka components read and write the JSON format only. Therefore, if your Kafka cluster produces or consumes AVRO data and, for some reason, the Kafka components for AVRO are not available, you must use an avro-tools library to convert your data between AVRO and JSON outside your Job. For example:
  java -jar C:\2_Prod\Avro\avro-tools-1.8.2.jar tojson out.avro
  You can download the avro-tools-1.8.2.jar library used in this example from the MVN Repository. This command converts the out.avro file to JSON, written to the standard output. Or:
  java -jar avro-tools-1.8.2.jar fromjson --schema-file twitter.avsc twitter.json > twitter.avro
  This command converts the twitter.json file to twitter.avro using the schema from twitter.avsc.
- The Kafka components for AVRO are available in the Spark
framework only; they handle data directly in the AVRO format. If your Kafka cluster produces
and consumes AVRO data, use tKafkaInputAvro to read data directly from
Kafka and tWriteAvroFields to send AVRO data to
tKafkaOutput. However, these components do not handle the AVRO data
created by an avro-tools library, because the avro-tools libraries and the components for
AVRO do not use the same approach provided by AVRO.
The approaches AVRO provides to (de)serialize data in the AVRO format are as follows (see the sketch after this list):
- AVRO files are generated with the AVRO schema embedded in each file (via org.apache.avro.file.{DataFileWriter/DataFileReader}). The avro-tools libraries use this approach.
- AVRO records are generated without embedding the schema in each record (via org.apache.avro.io.{BinaryEncoder/BinaryDecoder}). The Kafka components for AVRO use this approach. It is highly recommended when AVRO-encoded messages are constantly written to a Kafka topic, because no overhead is incurred to re-embed the AVRO schema in every single message. This is a significant advantage over the other approach when using Spark Streaming to read data from or write data to Kafka: records (messages) are usually small while the AVRO schema is relatively large, so embedding the schema in each message is not cost-effective.
The outputs of the two approaches cannot be mixed in the same read-write process.
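To make the difference concrete, here is a minimal sketch using the Apache Avro Java API: the first part writes a container file with the schema embedded in its header (the approach of the avro-tools libraries), while the second encodes a single record as raw bytes without any schema (the approach of the Kafka components for AVRO). The Tweet schema, field, and file names are placeholders for illustration only.

    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class AvroApproaches {
        public static void main(String[] args) throws Exception {
            // Placeholder schema with a single string field
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Tweet\","
              + "\"fields\":[{\"name\":\"text\",\"type\":\"string\"}]}");
            GenericRecord record = new GenericData.Record(schema);
            record.put("text", "hello");

            // Approach 1: container file; the schema is written once into the
            // file header, so any reader can decode it (what avro-tools expects)
            try (DataFileWriter<GenericRecord> fileWriter =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
                fileWriter.create(schema, new File("out.avro"));
                fileWriter.append(record);
            }

            // Approach 2: raw binary encoding; no schema travels with the bytes,
            // so the consumer must already know it (what the Kafka components
            // for AVRO produce and consume)
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
            encoder.flush();
            byte[] kafkaMessage = out.toByteArray(); // payload of one Kafka record
        }
    }

Reading the raw bytes back requires the same schema on the consumer side, typically shared out of band, which is why bytes produced by one approach cannot be decoded by the reader of the other.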