tFixedFlowInput
Depending on the Talend solution you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tFixedFlowInput Standard properties. The component in this framework is generally available.
- MapReduce: see tFixedFlowInput MapReduce properties. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Batch: see tFixedFlowInput properties for Apache Spark Batch. The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
- Spark Streaming: see tFixedFlowInput properties for Apache Spark Streaming. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
- Storm: see tFixedFlowInput Storm properties. The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
tFixedFlowInput Standard properties
These properties are used to configure tFixedFlowInput running in the Standard Job framework.
The Standard tFixedFlowInput component belongs to the Misc family.
The component in this framework is generally available.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema will be created and stored locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it.

Mode
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Value field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate, separated by the row and field separators you define.

Number of rows
Enter the number of lines to be generated.

Values
Between inverted commas, enter the values corresponding to the columns defined in the schema.
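The settings above combine in a simple way: a fixed set of values is replayed once per generated line. The sketch below is an illustration of that behavior (not Talend's generated code); the function name and the sample separators are made up for the example, mimicking the Use Inline Content mode.

```python
# Hypothetical illustration of tFixedFlowInput's row generation:
# inline content is split into rows and fields, then the whole set
# is emitted "Number of rows" times.

def fixed_flow_input(content, row_sep, field_sep, number_of_rows):
    """Expand inline content into field lists, repeated number_of_rows times."""
    template = [row.split(field_sep) for row in content.split(row_sep) if row]
    rows = []
    for _ in range(number_of_rows):
        rows.extend(template)
    return rows

rows = fixed_flow_input("1;hello\n2;world", row_sep="\n", field_sep=";",
                        number_of_rows=2)
# Each inline row is replayed once per generated line:
# [['1', 'hello'], ['2', 'world'], ['1', 'hello'], ['2', 'world']]
```

In Use Single Table and Use Inline Table modes the template would instead come from the Value field or the table editor, but the replication step is the same idea.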
Advanced settings
tStatCatcher Statistics
Select this check box to gather the Job processing metadata at a Job level as well as at each component level.
Global Variables
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
Usage
Usage rule
This component can be used as a start or intermediate component and it requires an output link.
Related scenarios
tFixedFlowInput MapReduce properties
These properties are used to configure tFixedFlowInput running in the MapReduce Job framework.
The MapReduce tFixedFlowInput component belongs to the Misc family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema will be created and stored locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it.

Mode
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Value field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate, separated by the row and field separators you define.

Number of rows
Enter the number of lines to be generated.

Values
Between inverted commas, enter the values corresponding to the columns defined in the schema.
Global Variables
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
Usage
Usage rule
In a Talend Map/Reduce Job, this component is used as a start component and requires an output link. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Hadoop Connection
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tFixedFlowInput properties for Apache Spark Batch
These properties are used to configure tFixedFlowInput running in the Spark Batch Job framework.
The Spark Batch tFixedFlowInput component belongs to the Misc family.
The component in this framework is available only if you have subscribed to one of the Talend solutions with Big Data.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema will be created and stored locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it.

Mode
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Value field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate, separated by the row and field separators you define.

Number of rows
Enter the number of lines to be generated.

Values
Between inverted commas, enter the values corresponding to the columns defined in the schema.
Advanced settings
Set the number of partitions
Select this check box and then enter the number of partitions into which you want to split the input data flow. If you leave this check box clear, each input row forms a partition.
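The two behaviors described above can be sketched as follows. This is a plain-Python illustration of the partitioning choice, not Spark's actual partitioner; the function name and the round-robin distribution are assumptions made for the example.

```python
# Hypothetical sketch of the "Set the number of partitions" option:
# with the check box clear, each input row forms its own partition;
# with it selected, rows are distributed into the requested number
# of partitions (round-robin here, for illustration only).

def partition_rows(rows, num_partitions=None):
    if num_partitions is None:          # check box left clear
        return [[row] for row in rows]  # one partition per input row
    parts = [[] for _ in range(num_partitions)]
    for i, row in enumerate(rows):
        parts[i % num_partitions].append(row)
    return parts

rows = ["r1", "r2", "r3", "r4"]
# partition_rows(rows)    -> [['r1'], ['r2'], ['r3'], ['r4']]
# partition_rows(rows, 2) -> [['r1', 'r3'], ['r2', 'r4']]
```

Fewer, larger partitions reduce scheduling overhead for small fixed flows, which is the usual reason to set this option.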
Usage
Usage rule
This component is used as a start component and requires an output link. This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection
You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
For a related scenario, see Performing download analysis using a Spark Batch Job.
tFixedFlowInput properties for Apache Spark Streaming
These properties are used to configure tFixedFlowInput running in the Spark Streaming Job framework.
The Spark Streaming tFixedFlowInput component belongs to the Misc family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema will be created and stored locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it.

Mode
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Value field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate, separated by the row and field separators you define.

Number of rows
Enter the number of lines to be generated.

Input repetition interval
Enter the time interval, in milliseconds, at the end of which the input data is generated again. This allows you to generate a stream of data.

Values
Between inverted commas, enter the values corresponding to the columns defined in the schema.
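The repetition interval is what turns the fixed row set into a stream: the same rows are re-emitted at each interval. The sketch below illustrates that idea only; the function name, the row data, and the finite repetition count are assumptions for the example (a real Spark Streaming Job repeats indefinitely).

```python
# Hypothetical illustration of the Input repetition interval: the
# fixed row set is replayed every interval, producing a stream.
import time

def fixed_flow_stream(rows, interval_ms, repetitions):
    """Yield the fixed row set `repetitions` times, pausing in between."""
    for i in range(repetitions):
        if i > 0:
            time.sleep(interval_ms / 1000.0)
        for row in rows:
            yield row

emitted = list(fixed_flow_stream([("id1", "a"), ("id2", "b")],
                                 interval_ms=1, repetitions=3))
# 2 rows x 3 repetitions = 6 emitted rows
```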
Usage
Usage rule
This component is used as a start component and requires an output link. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Spark Connection
You need to use the Spark Configuration tab in the Run view to define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.
tFixedFlowInput Storm properties
These properties are used to configure tFixedFlowInput running in the Storm Job framework.
The Storm tFixedFlowInput component belongs to the Misc family.
The component in this framework is available only if you have subscribed to Talend Real-time Big Data Platform or Talend Data Fabric.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema will be created and stored locally for this component only.
Repository: You have already created the schema and stored it in the Repository, so you can reuse it.

Mode
From the three options, select the mode that you want to use.
Use Single Table: Enter the data that you want to generate in the Value field.
Use Inline Table: Add the row(s) that you want to generate.
Use Inline Content: Enter the data that you want to generate, separated by the row and field separators you define.

Number of rows
Enter the number of lines to be generated.

Values
Between inverted commas, enter the values corresponding to the columns defined in the schema.
Usage
Usage rule
In a Talend Storm Job, this component is used as a start component and requires an output link. The Storm version does not support the use of global variables. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.

Storm Connection
You need to use the Storm Configuration tab in the Run view to define the connection to a given Storm system for the whole Job. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Storm version of this component yet.