tSocketTextStreamInput

Creates a textual input stream by connecting to a network.

tSocketTextStreamInput uses a TCP socket to receive data from a given network server, decodes the incoming data as UTF-8 text with "\n" as the line delimiter, and sends the data to the following component for processing.
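Under the hood this corresponds to Spark Streaming's socket text stream source. The sketch below is an illustration of that mechanism, not the code the Studio generates; it assumes a server listening on localhost:9999 and a local Spark master, both of which are placeholders.

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class SocketTextStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        // Local Spark Streaming context with a 5-second batch interval (values are placeholders).
        SparkConf conf = new SparkConf()
                .setAppName("tSocketTextStreamInputSketch")
                .setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Rough equivalent of the component's Host name and Port settings:
        // connect to the server and read UTF-8 text, one record per "\n"-terminated line.
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

        // Each micro-batch is handed to the next processing step; here it is simply printed.
        lines.print();

        jssc.start();
        jssc.awaitTermination();
    }
}

For a quick test, a plain netcat server (nc -lk 9999) can feed newline-terminated text into such a stream.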

tSocketTextStreamInput properties for Apache Spark Streaming

These properties are used to configure tSocketTextStreamInput running in the Spark Streaming Job framework.

The Spark Streaming
tSocketTextStreamInput component belongs to the Internet family.

The streaming version of this component is available in the Palette of the Studio only if you have subscribed to Talend Real-time Big Data Platform or Talend Data
Fabric.

Basic settings

Host name

Enter the name or the IP address of the server to be connected to.

Port

Enter the number of the listening port of the server to be connected to.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

The schema of this component is read-only. You can click Edit schema to view the schema.

The schema contains a single read-only column, line, which carries the strings to be passed to the next component in the Job. Depending on the format of these strings, select the corresponding component to process them, for example tExtractJSONFields for JSON strings.
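For instance, if the incoming lines are JSON objects, the downstream component extracts fields from each string carried by line. The minimal sketch below performs that kind of extraction with Jackson on a single hypothetical record; the payload and field names are made up for illustration, and this is not the tExtractJSONFields implementation.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LineColumnJsonSketch {
    public static void main(String[] args) throws Exception {
        // One record as it might arrive in the read-only "line" column (hypothetical payload).
        String line = "{\"id\": 42, \"name\": \"sensor-a\", \"value\": 17.5}";

        // Parse the JSON string and pull out individual fields, the kind of work
        // a downstream extraction component does on each line.
        ObjectMapper mapper = new ObjectMapper();
        JsonNode record = mapper.readTree(line);
        System.out.println(record.get("id").asInt() + " -> " + record.get("value").asDouble());
    }
}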

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the Google Storage staging bucket field in the Spark configuration tab; when using other distributions, use a tHDFSConfiguration component to specify the directory.

  • Standalone mode: you need to choose the configuration component depending on the file system you are using, such as tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.
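As an illustration only, the sketch below shows roughly what such a connection amounts to at the Spark level in Yarn mode. The application name, master, and staging directory are placeholder assumptions; in the Studio these values come from the Spark Configuration tab and the configuration components rather than from hand-written code.

import org.apache.spark.SparkConf;

public class SparkConnectionSketch {
    public static void main(String[] args) {
        // Placeholder configuration comparable to what the Spark Configuration tab defines
        // for the whole Job when running against a Yarn cluster.
        SparkConf conf = new SparkConf()
                .setAppName("MySparkStreamingJob")
                .setMaster("yarn")
                // Directory to which the dependent jar files are transferred so that the
                // cluster can access them (path is a placeholder).
                .set("spark.yarn.stagingDir", "hdfs:///user/talend/staging");

        System.out.println("Spark master: " + conf.get("spark.master"));
    }
}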


Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.


Source: Talend documentation, https://help.talend.com