tTeradataInput
Executes a database query with a strictly defined order, which must correspond to the schema definition.
tTeradataInput reads a database and
extracts fields based on a query. It passes on the field list to the next component via a
Main row link.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Standard: see tTeradataInput Standard properties. The component in this framework is available in all Talend products.
- MapReduce: see tTeradataInput MapReduce properties (deprecated). The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Batch: see tTeradataInput properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
tTeradataInput Standard properties
These properties are used to configure tTeradataInput running in the Standard Job framework.
The Standard
tTeradataInput component belongs to the Databases family.
The component in this framework is available in all Talend
products.
This component is a dynamic database connector. The properties related to database settings vary depending on your database type selection. For more information about dynamic database connectors, see Dynamic database components.
Basic settings
Database |
Select a type of database from the list and click Apply. |
Property type |
Either Built-in or Repository |
 |
Built-in: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. |
Use an existing connection |
Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:
1. In the parent level, register the database connection to be shared in the Basic settings of the connection component which creates that very database connection.
2. In the child level, use a dedicated connection component to read that registered database connection.
For an example about how to share a database connection across Job levels, see Talend Studio User Guide. |
|
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.
For more information about setting up and storing database connection parameters, see Talend Studio User Guide. |
Host |
Database server IP address |
Database |
Name of the database |
Username and Password |
DB user authentication data.
To enter the password, click the […] button next to the password field, enter the password between double quotes in the dialog box that opens, and click OK to save the settings. |
Schema and Edit |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.
This component offers the advantage of the dynamic schema feature, which lets you retrieve unknown columns from source tables or copy batches of columns from a source without mapping each column individually.
This dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended to be used for this purpose only. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
 |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection. |
Table name |
Browse to, or enter the name of the table to be used. |
Query type and Query |
Enter your DB query, paying particular attention to sequencing the fields properly so that they match the schema definition.
If using the dynamic schema feature, the SELECT query must include the * wildcard, to retrieve all of the columns from the selected table. |
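For illustration, assuming a hypothetical table mytable and a schema defined with the columns id, name and age in that order, a matching query could look like the first sketch below; with a dynamic schema, the * wildcard retrieves all columns, as in the second:

SELECT id, name, age FROM mytable

SELECT * FROM mytable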
Advanced settings
Additional JDBC parameters |
Specify additional connection properties in the existing DB connection, to allow specific character set support, for example. |
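As a sketch, Teradata JDBC connection parameters take the name=value form and are comma-separated when several are given; CHARSET and TMODE are examples of such parameters (check your driver documentation for the values your installation supports):

CHARSET=UTF8,TMODE=ANSI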
Trim all the String/Char columns |
Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
Trim column |
Remove leading and trailing whitespace from defined columns. |
Query band |
Select this check box to use the Teradata Query Banding feature to add metadata to the query to be processed, for example, to identify the user, application or Job that issued it.
Once you select the check box, the Query Band parameters table appears, in which you enter the key/value pairs to be used.
This check box actually generates the SET QUERY_BAND FOR SESSION statement with the key/value pairs you provide.
This check box is not available when you have selected the Use an existing connection check box. |
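Behind the scenes, the key/value pairs from the table are assembled into a standard Teradata statement. With the hypothetical pairs ApplicationName=TalendJob and JobID=42, the generated statement would take roughly this form:

SET QUERY_BAND = 'ApplicationName=TalendJob;JobID=42;' FOR SESSION;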
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
Global Variables
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide. |
Usage
Usage rule |
This component covers all possible SQL queries for Teradata databases. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job.
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.
For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |
Related scenarios
tTeradataInput MapReduce properties (deprecated)
These properties are used to configure tTeradataInput running in the MapReduce Job framework.
The MapReduce
tTeradataInput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.
Basic settings
Property type |
Either Built-in or Repository |
 |
Built-in: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. |
Host |
Database server IP address |
Database |
Name of the database |
Username and Password |
DB user authentication data.
To enter the password, click the […] button next to the password field, enter the password between double quotes in the dialog box that opens, and click OK to save the settings. |
Schema and Edit |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
 |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection. |
Table name |
Browse to, or enter the name of the table to be used. |
Query type and Query |
Enter your DB query, paying particular attention to sequencing the fields properly so that they match the schema definition. |
Die on error |
Select the check box to stop the execution of the Job when an error occurs.
Clear the check box to skip any rows on error and complete the process for error-free rows. |
Advanced settings
Additional JDBC parameters |
Specify additional connection properties in the existing DB connection, to allow specific character set support, for example. |
Trim all the String/Char columns |
Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
Trim column |
Remove leading and trailing whitespace from defined columns. |
Usage
Usage rule |
In a Talend Map/Reduce Job, this component is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components, too, as they generate native Map/Reduce code that can be executed directly in Hadoop.
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and non Map/Reduce Jobs. |
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job.
This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |
Related scenarios
No scenario is available for the Map/Reduce version of this component yet.
tTeradataInput properties for Apache Spark Batch
These properties are used to configure tTeradataInput running in the Spark Batch Job framework.
The Spark Batch
tTeradataInput component belongs to the Databases family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Property type |
Either Built-in or Repository |
 |
Built-in: No property data stored centrally. |
 |
Repository: Select the repository file where the properties are stored. |
Use an existing configuration |
Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined. |
|
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view.
For more information about setting up and storing database connection parameters, see Talend Studio User Guide. |
Host |
Database server IP address |
Database |
Name of the database |
Username and Password |
DB user authentication data.
To enter the password, click the […] button next to the password field, enter the password between double quotes in the dialog box that opens, and click OK to save the settings. |
Schema and Edit |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. When you create a Spark Job, avoid the reserved word line when naming the fields. |
 |
Built-In: You create and store the schema locally for this component only. |
 |
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
 |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available: View schema, Change to built-in property, and Update repository connection. |
Table Name |
Type in the name of the table from which you need to read data. |
Query type and Query |
Specify the database query statement, paying particular attention to sequencing the fields properly so that they match the schema definition.
If you are using Spark V2.0 onwards, Spark SQL does not recognize the prefix of a database table anymore. This means that you must enter only the table name, without any prefix that indicates, for example, the schema this table belongs to.
For example, if you need to perform a query in a table system.mytable, in which the system prefix indicates the schema that the mytable table belongs to, in the query you must enter mytable only. |
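As a sketch, with this system.mytable example, the first form fails from Spark V2.0 onwards and the second form must be used instead:

SELECT * FROM system.mytable  -- schema prefix no longer recognized
SELECT * FROM mytable         -- enter the table name only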
Advanced settings
Additional JDBC parameters |
Specify additional connection properties in the existing DB connection, to allow specific character set support, for example. |
Spark SQL JDBC parameters |
Add the JDBC properties supported by Spark SQL to this table.
This component automatically sets the url, dbtable and driver properties by using the connection configuration defined in the Basic settings view. |
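For instance, fetchsize is a JDBC property supported by Spark SQL that controls how many rows are fetched per database round trip; a plausible entry in this table (hypothetical value) would pair the key and value as follows:

fetchsize    "10000"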
Trim all the String/Char columns |
Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
Trim column |
Remove leading and trailing whitespace from defined columns. |
Enable partitioning |
Select this check box to read data in partitions. Define, within double quotation marks, the following parameters to configure the partitioning: the numeric column used as the partition key, the lower bound of the partition stride, the upper bound of the partition stride, and the number of partitions.
The average size of the partitions is the result of the difference between the upper bound and the lower bound divided by the number of partitions, that is to say, (upperBound - lowerBound)/partitionNumber, while the first and the last partitions also include all the other rows that are not contained in these partitions.
For example, to partition 1000 rows into 4 partitions, if you enter 0 for the lower bound and 1000 for the upper bound, each partition will contain 250 rows and so be averagely sized. |
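Conceptually, the read is then split into one range-restricted query per partition. For the example above, with a hypothetical numeric partition column id on a hypothetical table mytable, the generated queries take roughly the following form; the first and last ranges are left open so that rows falling outside the bounds are still read:

SELECT * FROM mytable WHERE id < 250 OR id IS NULL
SELECT * FROM mytable WHERE id >= 250 AND id < 500
SELECT * FROM mytable WHERE id >= 500 AND id < 750
SELECT * FROM mytable WHERE id >= 750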
Usage
Usage rule |
This component is used as a start component and requires an output link.
This component should use a tTeradataConfiguration component present in the same Job to connect to Teradata. You need to select the Use an existing configuration check box and then select the tTeradataConfiguration component to be used.
This component, along with the Spark Batch component Palette it belongs to, appears only when you are creating a Spark Batch Job.
Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs. |
Spark Connection |
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access these files; the way you do this depends on whether you run the Job in Yarn mode or in Standalone mode.
This connection is effective on a per-Job basis. |
Limitation |
Due to license incompatibility, one or more JARs required to use this component are not provided. You can install the missing JARs for this particular component by clicking the Install button on the Component tab view. |
Related scenarios
For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.