
tFlumeOutput

Acts as an interface between Flume and the Spark Streaming Job developed with the
Studio to continuously send data to a given Flume agent.

tFlumeOutput
receives RDDs from its preceding component, constructs Flume events out of these RDDs and
sends them to the source (input point) of a Flume agent.

tFlumeOutput properties for Apache Spark Streaming

These properties are used to configure tFlumeOutput running in the Spark Streaming Job framework.

The Spark Streaming
tFlumeOutput component belongs to the Messaging family.

The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Host and Port

Enter the hostname and the port of the machine to be used as the RPC client of the
Flume system.

The RPC client of Flume allows tFlumeOutput to send data
to Flume. For further information about this RPC client, see the Flume documentation at
https://flume.apache.org/FlumeDeveloperGuide.html.
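
For reference, the interaction the component performs can be pictured with Flume's RPC
client API. The sketch below is only an illustration, not the component's generated code;
the hostname, port, and event body are placeholder values:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeRpcSketch {
    public static void main(String[] args) throws EventDeliveryException {
        // Host and port of the Flume agent's source (placeholder values).
        RpcClient client = RpcClientFactory.getDefaultInstance("flume-host.example.com", 41414);
        try {
            // Build an event whose body carries one record and send it to the agent.
            Event event = EventBuilder.withBody("hello flume", StandardCharsets.UTF_8);
            client.append(event);
        } finally {
            client.close();
        }
    }
}
```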

Schema and Edit Schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

Built-In: You create and store the schema locally for this component only. Related
topic: see the Talend Studio User Guide.

Repository: You have already created the schema and stored it in the Repository. You
can reuse it in various projects and Job designs. Related topic: see the Talend Studio
User Guide.

This read-only line column is used by tFlumeOutput to write the body of a Flume event.
Note that you must define the same line column in the schema of the preceding component
in order to send data to this read-only column.

The other columns are added as headers to the event to be output.
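
To illustrate how one schema row maps onto a Flume event, the sketch below builds an
event whose body comes from the line column and whose headers come from the remaining
columns. The column names host and level are hypothetical and only serve the example:

```java
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class FlumeEventMappingSketch {
    // Builds a Flume event from one row: the "line" column becomes the event body,
    // every other column (hypothetical "host" and "level") becomes a header.
    static Event toEvent(String line, String host, String level, Charset encoding) {
        Map<String, String> headers = new HashMap<>();
        headers.put("host", host);
        headers.put("level", level);
        return EventBuilder.withBody(line, encoding, headers);
    }
}
```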

Advanced settings

Encoding

Select the encoding from the list or select Custom and define it manually.

This encoding is used by tFlumeOutput to encode the event arrays to be output.

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control
the number of connections that stay open simultaneously. The default values of the
following connection pool parameters are good enough for most use cases; a configuration
sketch follows the list below.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open
    connections at the same time.

  • Max waiting time (ms): enter the maximum amount of time to
    wait for the connection pool to return a connection when one is requested. By
    default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define the criteria used to destroy connections in the connection
pool. The following fields are displayed once you have selected it; a sketch of the
corresponding settings follows the list below.

  • Time between two eviction runs: enter the time interval
    (in milliseconds) at the end of which the component checks the status of the connections and
    destroys the idle ones.

  • Min idle time for a connection to be eligible to eviction: enter the time interval
    (in milliseconds) at the end of which the idle connections are destroyed.

  • Soft min idle time for a connection to be eligible to eviction: this parameter works
    the same way as Min idle time for a connection to be eligible to eviction but it keeps
    the minimum number of idle connections, the number you define in the Min number of
    idle connections field.
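
Under the same Apache Commons Pool 2 assumption as the sketch above, these eviction
criteria correspond to the following settings (the millisecond values are placeholders):

```java
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class PoolEvictionSketch {
    public static void main(String[] args) {
        GenericObjectPoolConfig<Object> config = new GenericObjectPoolConfig<>();
        // Time between two eviction runs (ms).
        config.setTimeBetweenEvictionRunsMillis(600000L);
        // Min idle time for a connection to be eligible to eviction (ms).
        config.setMinEvictableIdleTimeMillis(1800000L);
        // Soft min idle time: evict only connections beyond the minimum idle count below.
        config.setSoftMinEvictableIdleTimeMillis(1800000L);
        config.setMinIdle(0); // Min number of idle connections to keep.
    }
}
```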

Usage

Usage rule

This component is used as an end component and requires an input link.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage
    staging bucket field in the Spark configuration tab; when using other
    distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the
    file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Limitation

Due to license incompatibility, one or more JARs required to use this component are not
provided. You can install the missing JARs for this particular component by clicking the
Install button on the Component tab view. You can also find and add all missing JARs
easily on the Modules tab in the Integration perspective of your Studio. You can find
more details about how to install external modules in the Talend Help Center
(https://help.talend.com).

Related scenarios

No scenario is available for the Spark Streaming version of this component yet.

