
tFixedFlowInput – Docs for ESB 7.x

tFixedFlowInput

Generates a fixed flow from internal variables.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks: Standard, MapReduce (deprecated), Spark Batch, Spark Streaming, and Storm (deprecated).

tFixedFlowInput Standard properties

These properties are used to configure tFixedFlowInput running in the Standard Job framework.

The Standard tFixedFlowInput component belongs to the Misc family.

The component in this framework is available in all Talend products.

Basic settings

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have already defined in the Row and Field Separator fields. An example is given below.
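
For example, assuming an illustrative two-column schema made of an id column (Integer) and a name column (String), with the Row Separator set to "\n" and the Field Separator set to ";" (both values hypothetical), the inline content could read:

    1;Alice
    2;Bob
    3;Carol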

Number of rows

Enter the number of lines to be generated.

Values

Between inverted commas, enter the values corresponding to the
columns you defined in the schema dialog box via the Edit schema button.
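
The Value cells take Java expressions. For instance, with a hypothetical schema made of an id column (Integer), a name column (String) and a created column (Date), you could enter:

    id:      1
    name:    "Alice"
    created: TalendDate.getCurrentDate()

String values are typed between double quotation marks while numeric values are entered as-is; TalendDate.getCurrentDate() is one of the standard Talend system routines.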

Advanced settings

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a
Job level as well as at each component level.

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
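
For example, in a component that accepts Java code, such as tJava, placed after this component, the NB_LINE variable can be read from the globalMap once the component has finished; the label tFixedFlowInput_1 below is illustrative and depends on your Job:

    // NB_LINE is an After variable: read it once tFixedFlowInput_1 has run.
    Integer count = (Integer) globalMap.get("tFixedFlowInput_1_NB_LINE");
    System.out.println("Rows generated: " + count);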

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component can be used as a start or intermediate component
and thus requires an output component.
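
For instance, a minimal Standard Job could connect this component to a console output as follows (component labels are illustrative):

    tFixedFlowInput_1 --(row1, Main)--> tLogRow_1

Here tFixedFlowInput_1 generates the configured rows and tLogRow_1 prints them to the console.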

tFixedFlowInput MapReduce properties (deprecated)

These properties are used to configure tFixedFlowInput running in the MapReduce Job framework.

The MapReduce tFixedFlowInput component belongs to the Misc family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have already defined in the Row and Field Separator fields.

Number of rows

Enter the number of lines to be generated.

Values

Between inverted commas, enter the values corresponding to the columns you defined in the schema dialog box via the Edit schema button.

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as an output link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs rather than Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tFixedFlowInput properties for Apache Spark Batch

These properties are used to configure tFixedFlowInput running in the Spark Batch Job framework.

The Spark Batch tFixedFlowInput component belongs to the Misc family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have already defined in the Row and Field Separator fields.

Number of rows

Enter the number of lines to be generated.

Values

Between inverted commas, enter the values corresponding to the columns you defined in the schema dialog box via the Edit schema button.

Advanced settings

Set the number of partitions

Select this check box and then enter the number of partitions into
which you want to dispatch the input rows.

If you leave this check box clear, each input row forms a partition. For example, with 5 in the Number of rows field, each row is handled as one partition, making 5 partitions in total. A conceptual sketch is given below.
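
Conceptually, this option plays the same role as the numSlices argument of Spark's parallelize method. The sketch below is plain Spark Java code given for illustration only, under the assumption of an existing JavaSparkContext named sc; it is not the code Talend generates:

    // Dispatch 5 generated rows into 2 partitions.
    java.util.List<Integer> rows = java.util.Arrays.asList(1, 2, 3, 4, 5);
    org.apache.spark.api.java.JavaRDD<Integer> rdd = sc.parallelize(rows, 2);
    // With the check box cleared, each row forms its own partition:
    org.apache.spark.api.java.JavaRDD<Integer> onePerRow =
        sc.parallelize(rows, rows.size());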

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage
      staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment
      in the Windows Azure Storage configuration area in the Spark
      configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake
      Storage for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your
      actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole HDFS
      system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically,
      this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the
    file system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

tFixedFlowInput properties for Apache Spark Streaming

These properties are used to configure tFixedFlowInput running in the Spark Streaming Job framework.

The Spark Streaming tFixedFlowInput component belongs to the Misc family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have already defined in the Row and Field Separator fields.

Number of rows

Enter the number of lines to be generated.

Input repetition interval

Enter the time interval, in milliseconds, after which the input data is sent to the following component again.

This allows you to generate a continuous stream of data. For example (illustrative values), with Number of rows set to 3 and this interval set to 1000, the three configured rows are emitted once every second.

Values

Between inverted commas, enter the values corresponding to the columns you defined in the schema dialog box via the Edit schema button.

Usage

Usage rule

This component is used as a start component and requires an output link.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection

In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the Google Storage
      staging bucket field in the Spark configuration tab.

    • When using HDInsight, specify the blob to be used for Job deployment
      in the Windows Azure Storage configuration area in the Spark
      configuration tab.

    • When using Altus, specify the S3 bucket or the Azure Data Lake
      Storage for Job deployment in the Spark configuration tab.

    • When using Qubole, add a tS3Configuration to your Job to write your
      actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is written in the Qubole HDFS
      system and destroyed once you shut down your cluster.

    • When using on-premises distributions, use the configuration component
      corresponding to the file system your cluster is using. Typically,
      this system is HDFS, so use tHDFSConfiguration.

  • Standalone mode: use the configuration component corresponding to the
    file system your cluster is using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

tFixedFlowInput Storm properties (deprecated)

These properties are used to configure tFixedFlowInput running in the Storm Job framework.

The Storm tFixedFlowInput component belongs to the Misc family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

The Storm framework is deprecated from Talend 7.1 onwards. Use Talend Jobs for Apache Spark Streaming to accomplish your Streaming related tasks.

Basic settings

Schema and Edit Schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either built-in or stored remotely in the Repository.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-in: The schema will be created and stored locally for this component only. Related topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository, so it can be reused in various projects and Job designs. Related topic: see the Talend Studio User Guide.

Mode

From the three options, select the mode that you want to use.

Use Single Table: Enter the data that you want to generate in the relevant value field.

Use Inline Table: Add the row(s) that you want to generate.

Use Inline Content: Enter the data that you want to generate, separated by the separators that you have already defined in the Row and Field Separator fields.

Number of rows

Enter the number of lines to be generated.

Values

Between inverted commas, enter the values corresponding to the columns you defined in the schema dialog box via the Edit schema button.

Usage

Usage rule

In a Talend Storm Job, it is used as a start component. The other components used along with it must be Storm components, too. They generate native Storm code that can be executed directly in a Storm system.

The Storm version of this component does not support the use of global variables.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Storm Connection

You need to use the Storm Configuration tab in the Run view to define the connection to a given Storm system for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Storm version of this component
yet.


Source: Talend Help Center, https://help.talend.com