
tKafkaConnection – Docs for ESB 7.x

tKafkaConnection

Opens a reusable Kafka connection.

The tKafkaConnection component opens
a connection to a given Kafka cluster so that the other Kafka components
in subJobs can reuse this connection.

tKafkaConnection Standard properties

These properties are used to configure tKafkaConnection running in the Standard Job framework.

The Standard
tKafkaConnection component belongs to the Internet family.

The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.

Basic settings

Version

Select the version of the Kafka cluster to be used.

Zookeeper quorum list

Enter the address of the Zookeeper service of the Kafka cluster to be used.

The form of this address should be hostname:port, that is, the name
and the port of the hosting node in this Kafka cluster.

If you need to specify several addresses, separate them using a comma (,).

Broker list

Enter the addresses of the broker nodes of the Kafka cluster to be used.

The form of this address should be hostname:port, that is, the name
and the port of the hosting node in this Kafka cluster.

If you need to specify several addresses, separate them using a comma (,).
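For reference, here is a minimal sketch, in plain Kafka client terms, of the
address format this field expects; the host names, port, and client settings
below are placeholder assumptions, not values from this documentation:

    // A minimal sketch (not the component's internals): the Broker list
    // field uses the same comma-separated hostname:port format as the
    // standard Kafka client property bootstrap.servers.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BrokerListSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Comma-separated hostname:port pairs, as in the Broker list field.
            props.put("bootstrap.servers",
                    "kafka1.example.com:9092,kafka2.example.com:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The client is now configured against the listed brokers.
            }
        }
    }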

Use SSL/TLS

Select this check box to enable the SSL or TLS encrypted connection.

Then you need to use the tSetKeystore
component in the same Job to specify the encryption information.

This check box is available when the Kafka version selected is 0.9.0.1 or later.
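For context, here is a minimal sketch of the standard Kafka SSL client settings
this check box corresponds to; in a Job, tSetKeystore supplies this information
for you, and the paths and password below are placeholder assumptions:

    import java.util.Properties;

    // A sketch of standard Kafka SSL client settings; the truststore path
    // and password are placeholders to adapt to your environment.
    public class SslSettingsSketch {
        public static Properties sslProps() {
            Properties props = new Properties();
            props.put("security.protocol", "SSL");
            props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
            props.put("ssl.truststore.password", "changeit");
            return props;
        }
    }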

Use Kerberos authentication

If the Kafka cluster to be used is secured with Kerberos, select this
check box to display the related parameters to be defined:

  • JAAS configuration path: enter the
    path to, or browse to, the JAAS configuration file to be used by the Job
    to authenticate as a client to Kafka.

    This JAAS file describes how the clients, that is, the Kafka-related Jobs
    in terms of Talend, can connect to the Kafka broker nodes, using either
    the kinit mode or the keytab mode. It must be stored on the machine where
    these Jobs are executed; a minimal sketch of such a file follows this
    list.

    Neither Talend, Kerberos, nor Kafka provides this JAAS file. You need to
    create it by following the explanation in Configuring Kafka client,
    according to the security strategy of your organization.

  • Kafka brokers principal name: enter
    the primary part of the Kerberos principal you defined for the brokers
    when you created the broker cluster. For example, in the principal
    kafka/kafka1.hostname.com@EXAMPLE.COM, the primary part to enter in this
    field is kafka.

  • Set kinit command path: Kerberos
    uses a default path to its kinit executable. If you have changed this path,
    select this check box and enter the custom access path.

    If you leave this check box clear, the default path is
    used.

  • Set Kerberos configuration path:
    Kerberos uses a default path to its configuration file, for example the
    krb5.conf file (or krb5.ini on Windows) for Kerberos 5. If you have
    changed this path, select this check box and enter the custom access path
    to the Kerberos configuration file.

    If you leave this check box clear, Kerberos applies a given strategy to
    attempt to find the configuration information it requires. For details
    about this strategy, see the Locating the krb5.conf Configuration File
    section in Kerberos requirements.
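For illustration, here is a minimal sketch of such a JAAS configuration file in
keytab mode, written by analogy with the Configuring Kafka client
documentation; the keytab path and principal are placeholder assumptions:

    // Hypothetical kafka_client_jaas.conf (keytab mode); adapt the keytab
    // path and principal to your environment.
    KafkaClient {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/etc/security/keytabs/kafka_client.keytab"
      principal="kafkaclient@EXAMPLE.COM";
    };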

For further information about how a Kafka cluster is secured with
Kerberos, see Authenticating using SASL.

This check box is available when the Kafka version selected is 0.9.0.1 or later.

Advanced settings

tStatCatcher Statistics

Select this check box to gather the processing metadata at the Job
level as well as at each component level.

Usage

Usage rule

This component is used standalone to create the Kafka connection that
the other Kafka components can reuse.

Related scenarios

No scenario is available for the Standard version of this component yet.

Kafka and AVRO in a Job

In a Talend Job, the regular Kafka components and the Kafka components for AVRO handle
AVRO data differently, reflecting the two approaches AVRO provides to (de)serialize
AVRO-formatted data.

  • The regular Kafka components read and write the JSON format only. Therefore, if your
    Kafka cluster produces or consumes AVRO data and, for some reason, the Kafka components
    for AVRO are not available, you must use an avro-tools library to convert your data
    between AVRO and JSON outside your Job.

    For example, you can download the avro-tools-1.8.2.jar library from the MVN Repository
    and use it to convert the out.avro file to JSON, or to convert the twitter.json file to
    twitter.avro using the schema from twitter.avsc; see the command sketch after this
    list.
  • The Kafka components for AVRO are available in the Spark
    framework only; they handle data directly in the AVRO format. If your Kafka cluster produces
    and consumes AVRO data, use tKafkaInputAvro to read data directly from
    Kafka and tWriteAvroFields to send AVRO data to
    tKafkaOutput.

    However, these components do not handle the AVRO data
    created by an avro-tools library, because the avro-tools libraries and the components for
    AVRO do not use the same approach provided by AVRO.
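As mentioned in the first bullet above, here is a sketch of the two
conversions; tojson and fromjson are standard avro-tools commands, while the
jar location and the output file names are assumptions:

    # AVRO container file -> JSON (assumes avro-tools-1.8.2.jar is in the
    # current directory; the output file name is a placeholder).
    java -jar avro-tools-1.8.2.jar tojson out.avro > out.json

    # JSON -> AVRO container file, using the schema from twitter.avsc.
    java -jar avro-tools-1.8.2.jar fromjson --schema-file twitter.avsc twitter.json > twitter.avro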

The two approaches AVRO provides to (de)serialize AVRO-formatted data are as
follows:

  1. AVRO files are generated with the embedded AVRO schema in each file (via
    org.apache.avro.file.{DataFileWriter/DataFileReader}). The
    avro-tools libraries use this approach.
  2. AVRO records are generated without embedding the schema in each record (via
    org.apache.avro.io.{BinaryEncoder/BinaryDecoder}). The Kafka
    components for AVRO use this approach.

    This approach is highly recommended when AVRO-encoded messages are constantly
    written to a Kafka topic, because it incurs no overhead to re-embed the AVRO
    schema in every single message. This is a significant advantage over the other
    approach when using Spark Streaming to read data from or write data to Kafka,
    since records (messages) are usually small while the AVRO schema is relatively
    large, so embedding the schema in each message is not cost-effective.

The outputs of the two approaches cannot be mixed in the same read-write process.
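To make the difference concrete, here is a minimal Java sketch contrasting the
two approaches through the Avro classes named above; the one-field schema and
the file name are hypothetical:

    import java.io.ByteArrayOutputStream;
    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class AvroApproachesSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical one-field schema, for illustration only.
            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Tweet\","
                    + "\"fields\":[{\"name\":\"text\",\"type\":\"string\"}]}");
            GenericRecord record = new GenericData.Record(schema);
            record.put("text", "hello");

            // Approach 1: container file, schema embedded once per file
            // (what the avro-tools libraries produce).
            try (DataFileWriter<GenericRecord> fileWriter =
                    new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
                fileWriter.create(schema, new File("out.avro"));
                fileWriter.append(record);
            }

            // Approach 2: raw binary record, no schema embedded (what the
            // Kafka components for AVRO use); the reader must already know
            // the schema.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
            encoder.flush();
            System.out.println("Schema-less record: " + out.size() + " bytes");
        }
    }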

