

tFlumeInput

Acts as an interface between Flume and the Spark Streaming Job developed with the
Studio in order to continuously read data from a given Flume agent.

tFlumeInput streams data from a given Flume agent and sends this data to the
components that follow it.

tFlumeInput properties for Apache Spark Streaming

These properties are used to configure tFlumeInput running in the Spark Streaming Job framework.

The Spark Streaming tFlumeInput component belongs to the Messaging family.

The streaming version of this component is available in Talend Real Time Big Data Platform and in
Talend Data Fabric.

Basic settings

Host and Port

Enter the hostname and the port of the machine used as the sink (the data output point
bound to the channel of a Flume agent) to receive data from Flume.

  • If you select As Receiver from the Type drop-down list, this machine must be one of the machines on which a
    Spark worker runs and the hostname must be the same as the one used by the resource
    manager of the Spark cluster to be used.

  • If you select As Sink from the Type drop-down list, this machine must be a sink in a Flume agent and be
    accessible to the Spark cluster.
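For reference, the host and port entered here correspond to the sink definition in the Flume agent's configuration file. The following is a minimal sketch of such a configuration, not taken from this documentation; the agent name (a1), sink name (spark1), channel name (c1), hosts and port (9988) are illustrative placeholders:

    # Push-based (As Receiver): an Avro sink pushes events to the Spark receiver.
    a1.sinks.spark1.type = avro
    a1.sinks.spark1.hostname = spark-worker-host
    a1.sinks.spark1.port = 9988
    a1.sinks.spark1.channel = c1

    # Pull-based (As Sink): a custom Spark sink buffers events until the Job pulls them.
    # a1.sinks.spark1.type = org.apache.spark.streaming.flume.sink.SparkSink
    # a1.sinks.spark1.hostname = flume-sink-host
    # a1.sinks.spark1.port = 9988
    # a1.sinks.spark1.channel = c1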

Type

Select the approach to read data from Flume.

  • As Receiver: this is the Push-based approach typically
    employed by Flume. In this approach, a machine from the Spark cluster is set up as
    an agent to receive data pushed by Flume and the Spark Streaming Job you are
    designing reads data from this agent.

  • As Sink: this is the Pull-based approach. In this approach, a
    machine is set up as sink to buffer data pushed by Flume and the Spark Streaming Job
    you are designing pulls data from this sink.

For further information about these two approaches, see https://spark.apache.org/docs/1.3.1/streaming-flume-integration.html.
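For comparison, the underlying Spark Streaming API exposes these two approaches through the spark-streaming-flume library. The following Java sketch illustrates the distinction only; it is not the code generated by the Studio, and the host names, port and batch interval are placeholder assumptions:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.flume.FlumeUtils;
    import org.apache.spark.streaming.flume.SparkFlumeEvent;

    public class FlumeInputSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("FlumeInputSketch");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

            // As Receiver (push-based): a Spark worker runs a receiver that Flume pushes events to.
            JavaReceiverInputDStream<SparkFlumeEvent> pushed =
                    FlumeUtils.createStream(jssc, "spark-worker-host", 9988);

            // As Sink (pull-based): the Job polls events buffered by Flume's SparkSink.
            // JavaReceiverInputDStream<SparkFlumeEvent> pulled =
            //         FlumeUtils.createPollingStream(jssc, "flume-sink-host", 9988);

            pushed.count().print();

            jssc.start();
            jssc.awaitTermination();
        }
    }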

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Built-In: You create and store the schema locally for this component
only.

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

This read-only line column is used by tFlumeInput to automatically store the extracted
body of each input Flume event; together with the other columns, which store the headers
of the same event, it is used to construct the RDD.

Advanced settings

Encoding

Select the encoding from the list or select Custom and define it manually.

This encoding is used by tFlumeInput to decode the input
event arrays.
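Conceptually, this decoding turns the byte array of each event body into the string stored in the read-only line column, while the event headers feed the other schema columns. A minimal Java sketch of this idea, assuming the AvroFlumeEvent class shipped with the spark-streaming-flume dependency (the class and its methods are that library's; the helper itself is illustrative, not Studio-generated code):

    import java.nio.ByteBuffer;
    import java.nio.charset.Charset;
    import java.util.Map;

    import org.apache.flume.source.avro.AvroFlumeEvent;

    public class FlumeEventDecodeSketch {

        // Decode the event body with the configured encoding; this is the value
        // that would populate the read-only "line" column.
        public static String decodeBody(AvroFlumeEvent event, String encoding) {
            ByteBuffer body = event.getBody();
            byte[] bytes = new byte[body.remaining()];
            body.duplicate().get(bytes);
            return new String(bytes, Charset.forName(encoding));
        }

        // The event headers map to the other schema columns.
        public static Map<CharSequence, CharSequence> headers(AvroFlumeEvent event) {
            return event.getHeaders();
        }
    }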

Usage

Usage rule

This component is used as a start component and requires an output link.

At runtime, the tFlumeInput component keeps listening to
the sink and reads new events once they are buffered in this sink.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark
cluster for the whole Job. In addition, since the Job expects its dependent jar files for
execution, you must specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage staging bucket
      field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment in the
      Windows Azure Storage configuration area in the Spark configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake Storage for Job
      deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your actual business
      data in the S3 system with Qubole. Without tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut down your cluster.

    • When using on-premise distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this system is HDFS and so use
      tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the file system
    your cluster is using, such as tHDFSConfiguration or tS3Configuration.

    If you are using Databricks without any configuration component present in your Job,
    your business data is written directly in DBFS (Databricks Filesystem).

This connection is effective on a per-Job basis.

Limitation

Due to license incompatibility, one or more JARs required to use this component are not
provided. You can install the missing JARs for this particular component by clicking the
Install button on the Component tab view. You can also find out and add all missing JARs
easily on the Modules tab in the Integration perspective of your studio. You can find
more details about how to install external modules in Talend Help Center
(https://help.talend.com).

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

