tFixedFlowInput
Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:
- Standard: see tFixedFlowInput Standard properties. The component in this framework is available in all Talend products.
- MapReduce: see tFixedFlowInput MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tFixedFlowInput properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tFixedFlowInput properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
- Storm: see tFixedFlowInput Storm properties (deprecated). This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tFixedFlowInput Standard properties
These properties are used to configure tFixedFlowInput running in the Standard Job framework.
The Standard
tFixedFlowInput component belongs to the Misc family.
The component in this framework is available in all Talend
products.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs. |
Mode |
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Values field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate as delimited content. |
Number of rows |
Enter the number of lines to be generated. |
Values |
Between inverted commas, enter the values corresponding to the columns defined in the schema dialog box (a sketch of the resulting rows follows this table). |
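For illustration, the sketch below shows in plain Java the rows that a Use Single Table configuration would emit. The two-column schema (id: Integer, name: String), the Number of rows value of 3, and the values 1 and "Alice" are assumptions made for this example only; this is not the code the component generates.

```java
// Illustrative sketch only -- not the component's generated code.
// Assumed configuration: schema (id: Integer, name: String),
// Number of rows = 3, Values entered as 1 and "Alice".
import java.util.ArrayList;
import java.util.List;

public class FixedFlowSketch {

    // Simple holder matching the assumed two-column schema.
    static class Row {
        final Integer id;
        final String name;
        Row(Integer id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    public static void main(String[] args) {
        int numberOfRows = 3; // the "Number of rows" setting
        List<Row> flow = new ArrayList<>();

        // In Use Single Table mode, the same set of values is emitted
        // once per requested row.
        for (int i = 0; i < numberOfRows; i++) {
            flow.add(new Row(1, "Alice"));
        }

        for (Row r : flow) {
            System.out.println(r.id + "|" + r.name);
        }
    }
}
```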
Advanced settings
tStatCatcher Statistics |
Select this check box to gather the Job processing metadata at the Job level as well as at each component level. |
Global Variables
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see the Talend Studio User Guide. |
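As an example of reading these variables, the snippet below could be placed in a tJava component running after tFixedFlowInput. It is a sketch only: the component name tFixedFlowInput_1 is an assumption, so replace it with the unique name shown in your own Job; globalMap is the map that generated Talend Jobs use to expose component variables.

```java
// Sketch for a tJava component placed downstream of tFixedFlowInput_1
// (the component name is an assumption; use the name shown in your Job).
// After variables are read from globalMap once the component has finished.
Integer generatedRows = (Integer) globalMap.get("tFixedFlowInput_1_NB_LINE");
String lastError = (String) globalMap.get("tFixedFlowInput_1_ERROR_MESSAGE");

System.out.println("Rows generated: " + generatedRows);
if (lastError != null) {
    System.out.println("Error reported: " + lastError);
}
```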
Usage
Usage rule |
This component can be used as a start or intermediate component in a Job flow. |
Related scenarios
tFixedFlowInput MapReduce properties (deprecated)
These properties are used to configure tFixedFlowInput running in the MapReduce Job framework.
The MapReduce
tFixedFlowInput component belongs to the Misc family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs. |
Mode |
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Values field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate as delimited content. |
Number of rows |
Enter the number of lines to be generated. |
Values |
Between inverted commas, enter the values corresponding to the columns defined in the schema dialog box. |
Global Variables
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see the Talend Studio User Guide. |
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components too; they generate native Map/Reduce code that can be executed directly in Hadoop. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tFixedFlowInput properties for Apache Spark Batch
These properties are used to configure tFixedFlowInput running in the Spark Batch Job framework.
The Spark Batch
tFixedFlowInput component belongs to the Misc family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs. |
Mode |
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Values field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate as delimited content. |
Number of rows |
Enter the number of lines to be generated. |
Values |
Between inverted commas, enter the values corresponding to the columns defined in the schema dialog box. |
Advanced settings
Set the number of partitions |
Select this check box and then enter the number of partitions into which you want to split the input data (see the sketch after this table). If you leave this check box clear, each input row forms a partition. |
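To illustrate the effect of this option in Spark terms, the sketch below splits a small fixed data set into a chosen number of partitions, and then into one partition per row, mirroring the two behaviours described above. It is a conceptual example under assumed data and a local master, not the code the component generates.

```java
// Conceptual sketch only -- not the component's generated code.
// Shows how a partition count changes how fixed rows are distributed.
import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class PartitionSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("PartitionSketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            List<String> rows = Arrays.asList("Alice", "Bob", "Carol", "Dave");

            // Check box selected: split the fixed rows into the requested
            // number of partitions (here, 2).
            JavaRDD<String> partitioned = sc.parallelize(rows, 2);
            System.out.println("Partitions when set: " + partitioned.getNumPartitions());

            // Check box cleared: each input row forms its own partition,
            // i.e. one partition per row.
            JavaRDD<String> onePerRow = sc.parallelize(rows, rows.size());
            System.out.println("Partitions when cleared: " + onePerRow.getNumPartitions());
        }
    }
}
```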
Usage
Usage rule |
This component is used as a start component and requires an output link. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
For a related scenario, see Performing download analysis using a Spark Batch Job.
tFixedFlowInput properties for Apache Spark Streaming
These properties are used to configure tFixedFlowInput running in the Spark Streaming Job framework.
The Spark Streaming
tFixedFlowInput component belongs to the Misc family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs. |
Mode |
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Values field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate as delimited content. |
Number of rows |
Enter the number of lines to be generated. |
Input repetition interval |
Enter the time interval, in milliseconds, at the end of which the input data is generated another time. This allows you to generate a stream of data flow (see the sketch after this table). |
Values |
Between inverted commas, enter the values corresponding to the columns defined in the schema dialog box. |
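The sketch below illustrates the repetition behaviour in plain Java: the same fixed row is re-emitted each time the interval elapses, which is how a continuous stream is produced. The 500 ms interval and the row content are assumptions for the example; this is not the Spark Streaming code the component generates.

```java
// Conceptual sketch only -- not the component's generated code.
// Re-emits the same fixed row every time the repetition interval elapses.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RepetitionIntervalSketch {
    public static void main(String[] args) throws InterruptedException {
        long intervalMs = 500; // the "Input repetition interval" setting (assumed value)
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Emit the fixed row again every intervalMs milliseconds.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("1|Alice"),
                0, intervalMs, TimeUnit.MILLISECONDS);

        Thread.sleep(3000); // let a few repetitions happen, then stop
        scheduler.shutdown();
    }
}
```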
Usage
Usage rule |
This component is used as a start component and requires an output link. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark
Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files.
This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Spark Streaming version of this component
yet.
tFixedFlowInput Storm properties (deprecated)
These properties are used to configure tFixedFlowInput running in the Storm Job framework.
The Storm
tFixedFlowInput component belongs to the Misc family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
The Storm framework is deprecated from Talend 7.1 onwards. Use Talend Jobs for Apache Spark Streaming to accomplish your Streaming-related tasks.
Basic settings
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it in various projects and Job designs. |
Mode |
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Values field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate as delimited content. |
Number of rows |
Enter the number of lines to be generated. |
Values |
Between inverted commas, enter the values corresponding to the columns defined in the schema dialog box. |
Usage
Usage rule |
In a Talend Storm Job, this component is used as a start component and requires an output link. The Storm version does not support the use of the global variables. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Storm Connection |
You need to use the Storm Configuration tab in the Run view to define the connection to a given Storm system for the whole Job. This connection is effective on a per-Job basis. |
Related scenarios
No scenario is available for the Storm version of this component
yet.