tKafkaInputAvro
Transmits Avro-formatted messages to the component that follows it in the Job you are designing.
tKafkaInputAvro is a generic message broker that transmits messages in the Avro format to the Job that runs transformations over these messages.
This component cannot handle Avro messages created by the avro-tools libraries.
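The incompatibility with avro-tools can be recognized from the payload itself: avro-tools writes Avro Object Container Files, which begin with the 4-byte magic `Obj\x01`, whereas the messages this component expects are raw binary-encoded Avro datums with no such header. A minimal sketch of that check (the function name is illustrative, not part of Talend or Kafka):

```python
def looks_like_avro_container(payload: bytes) -> bool:
    """Return True if the payload starts with the Avro Object
    Container File magic bytes b"Obj\\x01" (the format written by
    avro-tools), which this component cannot consume."""
    return payload[:4] == b"Obj\x01"

# A file produced by avro-tools starts with the container magic:
print(looks_like_avro_container(b"Obj\x01\x02rest-of-file"))  # True
# A raw binary-encoded Avro datum does not:
print(looks_like_avro_container(b"\x06foo\x02"))              # False
```

Such a check is only a heuristic for diagnosing input problems; the reliable fix is to produce messages with a plain Avro binary encoder rather than with avro-tools.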
tKafkaInputAvro properties for Apache Spark Streaming
These properties are used to configure tKafkaInputAvro running in the Spark Streaming Job framework.
The Spark Streaming tKafkaInputAvro component belongs to the Messaging family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. |
Broker list |
Enter the addresses of the broker nodes of the Kafka cluster to be used. Each address should be in the form hostname:port. If you need to specify several addresses, separate them using a comma (,). |
Starting offset |
Select the starting point from which the messages of a topic are consumed. In Kafka, the sequential ID number of a message is called its offset. From this list, you can select From beginning to consume the topic from its oldest available message, or From latest to consume only the messages produced after the consumer starts. Note that in order to enable the component to remember the position of a consumed message, the consumer must belong to a consumer group: each consumer group has its own counter to remember the position of the messages it has consumed. |
Topic name |
Enter the name of the topic from which tKafkaInputAvro receives the feed of messages. |
Group ID |
Enter the name of the consumer group to which you want the current consumer (the tKafkaInputAvro component) to belong. This consumer group will be created at runtime if it does not exist at that moment. This property is available only when you are using Spark 2.0 or the Hadoop distribution to be used runs Spark 2.0. |
Set number of records per second to read from each partition |
Enter this number within double quotation marks to limit the size of each batch to be sent for processing. For example, if you put 100 and the batch interval defined in the Spark configuration is 2 seconds, the size of each batch is 200 messages. If you leave this check box clear, the component tries to read all the available messages into one single batch before sending it. |
Use SSL/TLS |
Select this check box to enable the SSL or TLS encrypted connection. Then you need to use the tSetKeystore component in the same Job to specify the encryption information. This property is available only when you are using Spark 2.0 or the Hadoop distribution to be used runs Spark 2.0. The TrustStore file and any used KeyStore file must be stored locally on every Spark node that runs the Job. |
Use Kerberos authentication |
If the Kafka cluster to be used is secured with Kerberos, select this check box to display the related parameters to be defined.
For further information about how a Kafka cluster is secured with Kerberos, see the Kafka documentation. This check box is available since Kafka 0.9.0.1. |
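The per-second read limit interacts with the Spark batch interval as simple multiplication, as in the example given for Set number of records per second to read from each partition. A sketch of that arithmetic (the function name is illustrative, not a Talend API):

```python
def records_per_batch(rate_per_second: int, batch_interval_seconds: float) -> int:
    """Upper bound on the number of records in one Spark Streaming
    batch, given the configured per-second read limit. For example,
    a limit of 100 records/second with a 2-second batch interval
    yields batches of at most 200 records."""
    return int(rate_per_second * batch_interval_seconds)

print(records_per_batch(100, 2))  # 200
```

When sizing batches this way, remember the limit applies per partition being read, so the total batch size also scales with the number of partitions of the topic.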
Advanced settings
Kafka properties |
Add the Kafka consumer properties you need to customize to this table, for example, a specific zookeeper.connection.timeout.ms value to avoid ZooKeeper timeout exceptions. For further information about the consumer properties you can define in this table, see the consumer configuration section of the Kafka documentation. |
Use hierarchical mode |
Select this check box to map the binary (including hierarchical) Avro schema to the flat schema defined in the schema editor of the current component. Once you select it, you need to set the following parameter(s): |
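Conceptually, hierarchical mode maps a nested Avro structure onto flat columns. The toy function below (purely illustrative; Talend's actual mapping is configured in the component, not written as code) shows the idea of flattening nested fields into dot-separated column names:

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten a nested record into dot-separated
    column names, analogous to mapping a hierarchical Avro
    structure onto a flat schema."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

print(flatten({"user": {"id": 1, "address": {"city": "Paris"}}}))
# {'user.id': 1, 'user.address.city': 'Paris'}
```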
Usage
Usage rule |
This component is used as a start component and requires an output link. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. In the implementation of the current component in Spark, the Kafka offsets are managed by Spark itself rather than being committed to Kafka. |
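To illustrate why the Group ID matters for offset handling, here is a toy in-memory model of per-consumer-group offsets (purely illustrative; in a real deployment Spark tracks the offsets itself, not application code):

```python
class OffsetTracker:
    """Toy model of consumer-group offsets: each group advances its
    own counter for a topic partition independently, so two groups
    reading the same topic do not affect each other's position."""

    def __init__(self):
        # (group, topic, partition) -> next offset to read
        self.offsets = {}

    def commit(self, group, topic, partition, count=1):
        key = (group, topic, partition)
        self.offsets[key] = self.offsets.get(key, 0) + count
        return self.offsets[key]

    def position(self, group, topic, partition):
        return self.offsets.get((group, topic, partition), 0)

tracker = OffsetTracker()
tracker.commit("analytics", "events", 0, 5)  # group "analytics" reads 5 messages
tracker.commit("audit", "events", 0, 2)      # group "audit" reads 2 messages
print(tracker.position("analytics", "events", 0))  # 5
print(tracker.position("audit", "events", 0))      # 2
```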
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.