
Component family |
Databases/JDBC |
|
Function |
tJDBCInput reads any database through a JDBC connection and extracts fields based on a query. If you have subscribed to one of the Talend solutions with Big Data, you can also use this component in a Talend Map/Reduce Job, as described in the Map/Reduce part of this section. |
|
Purpose |
tJDBCInput executes a database query with a strictly defined order which must correspond to the schema definition. Then it passes the field list on to the next component via a Main row link. |
|
Basic settings |
Property type |
Either Built-in or Repository. |
|
|
Built-in: No property data stored centrally. |
|
|
Repository: Select the repository file in which the properties are stored. |
|
Use an existing connection |
Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined. Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, register the database connection to be shared at the parent level and read that registered connection with a dedicated connection component at the child level.
For an example about how to share a database connection across Job levels, see the related scenario in the Talend documentation. |
|
[Database connection wizard icon] |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see Talend Studio User Guide. |
|
JDBC URL |
Type in the database location path. |
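The URL format depends on the database vendor; the host, port and database names below are placeholders, not values prescribed by this documentation:

```java
// Common JDBC URL formats (placeholder host/port/database values).
String mysqlUrl    = "jdbc:mysql://localhost:3306/mydb";
String postgresUrl = "jdbc:postgresql://localhost:5432/mydb";
String oracleUrl   = "jdbc:oracle:thin:@localhost:1521:orcl";
```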
|
Driver JAR |
Click the plus button under the table to add as many lines as the count of driver JARs required, one line per driver JAR, and in each line select or enter the driver JAR to be loaded. |
|
Class Name |
Type in the Class name to be pointed to in the driver. |
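The class name must match the driver JAR loaded in the Driver JAR table. For example (standard driver class names, shown here only as illustrations):

```java
// Standard driver class names for some common databases.
Class.forName("com.mysql.jdbc.Driver");      // MySQL Connector/J (legacy class name)
Class.forName("org.postgresql.Driver");      // PostgreSQL
Class.forName("oracle.jdbc.OracleDriver");   // Oracle
```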
|
Username and Password |
Database user authentication data. To enter the password, click the […] button next to the password field, then enter the password between double quotes in the dialog box that opens and click OK to save it. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields to be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository. This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source tables or to copy batches of columns from a source without mapping each column individually. The dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and is recommended to be used for this purpose only; it is not recommended for creating tables. |
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: The schema already exists and is stored in the Repository; you can reuse it in various projects and Job designs. |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, you can choose to view the schema only, change it to a Built-in schema for local changes, or update the schema stored in the Repository.
|
|
Table Name |
Type in the name of the table. |
|
Query type and Query |
Enter your database query, paying particular attention to the proper sequencing of the fields so that they match the schema definition. Warning
If using the dynamic schema feature, the SELECT query must include the * wildcard so that all of the columns of the selected table are retrieved. |
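As a rough sketch of how these settings work together at run time, the following plain JDBC program reads a query result the way tJDBCInput does conceptually. The connection values, the employee table and the column names are made up for illustration; this is not the code Talend actually generates:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcInputSketch {
    public static void main(String[] args) throws Exception {
        // Values below correspond to the Basic settings fields; all of them are made up.
        String url = "jdbc:mysql://localhost:3306/mydb";        // JDBC URL
        String user = "talend";                                 // Username
        String password = "secret";                             // Password
        Class.forName("com.mysql.jdbc.Driver");                 // Class Name

        // The column order in the SELECT must match the schema definition.
        String query = "SELECT id, name, salary FROM employee"; // Query

        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            while (rs.next()) {
                // Each row read here would be passed on to the next component in the Job.
                System.out.println(rs.getInt("id") + ";" + rs.getString("name")
                        + ";" + rs.getDouble("salary"));
            }
        }
    }
}
```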
Specify a data source alias |
Select this check box and specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. Warning: If you use the component's own DB configuration, your data source connection will be closed at the end of the component. This check box is not available when the Use an existing connection check box is selected. |
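On the runtime side a data source alias is typically resolved through JNDI. A minimal sketch, assuming a data source has been registered under the hypothetical alias jdbc/myDatasource (the alias name and lookup prefix are assumptions, not values from this documentation):

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceAliasSketch {
    public static void main(String[] args) throws Exception {
        // Resolve the shared connection pool by its alias instead of opening a new connection.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/myDatasource"); // hypothetical alias
        try (Connection conn = ds.getConnection()) {
            // The component's query would run against the pooled connection here.
        }
    }
}
```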
|
Advanced settings |
Use cursor |
When selected, helps to decide the row set to work with at a time and thus optimizes performance. |
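In plain JDBC terms this roughly corresponds to setting a fetch size on the statement so that rows are retrieved from the database in batches rather than all at once (the value 1000 and the variable names are arbitrary illustrations):

```java
// Pull rows from the server in batches of 1000 instead of loading the whole result set.
Statement stmt = conn.createStatement();
stmt.setFetchSize(1000);
ResultSet rs = stmt.executeQuery(query);
```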
|
Trim all the String/Char columns |
Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
|
Trim column |
Remove leading and trailing whitespace from the defined columns. |
|
Enable Mapping File for Dynamic Type |
Select this check box to use the specified metadata mapping file when reading data of the dynamic type from the database. For more information about metadata mapping files, see Talend Studio User Guide. |
|
Mapping File |
Specify the metadata mapping file to use by selecting a type of database from the list. This list field appears only when the Enable Mapping File for Dynamic Type check box is selected. |
|
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. For more information on Dynamic settings and context variables, see Talend Studio User Guide. |
|
Global Variables |
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
QUERY: the SQL query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component. To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it. For further information about variables, see Talend Studio User Guide. |
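For instance, a component placed after tJDBCInput can read these variables from the globalMap; the component label tJDBCInput_1 below is illustrative and depends on your own Job:

```java
// Read the variables exposed by a tJDBCInput component labelled tJDBCInput_1.
Integer rowCount = (Integer) globalMap.get("tJDBCInput_1_NB_LINE");
String  query    = (String)  globalMap.get("tJDBCInput_1_QUERY");
String  error    = (String)  globalMap.get("tJDBCInput_1_ERROR_MESSAGE");
System.out.println(rowCount + " rows read by query: " + query);
```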
|
Usage |
This component covers all possible SQL queries for any database accessible through a JDBC connection. |
|
Log4j |
The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide. For more information on the log4j logging levels, see the Apache documentation at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html. |
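For instance, with the log4j 1.2 API referenced above you can change a logger's level at run time; the logger name used here is only an illustration:

```java
// Raise an illustrative logger to DEBUG so the component's activity is logged in detail.
org.apache.log4j.Logger.getLogger("org.talend").setLevel(org.apache.log4j.Level.DEBUG);
```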
Warning
The information in this section is only for users that have subscribed to one of
the Talend solutions with Big Data and is not applicable to
Talend Open Studio for Big Data users.
In a Talend Map/Reduce Job, tJDBCInput, as well as the other Map/Reduce components preceding it,
generates native Map/Reduce code. This section presents the specific properties of
tJDBCInput when it is used in that situation. For
further information about a Talend Map/Reduce Job, see Talend Big Data Getting Started Guide.
Component family |
MapReduce/Input |
|
Basic settings |
Property type |
Either Built-in or Repository. |
|
|
Built-in: No property data stored centrally. |
|
|
Repository: Select the repository file in which the properties are stored. |
|
[Database connection wizard icon] |
Click this icon to open a database connection wizard and store the database connection parameters you set in the component Basic settings view. For more information about setting up and storing database connection parameters, see Talend Studio User Guide. |
|
JDBC URL |
Type in the database location path. For example, if a MySQL database is used, the URL typically takes the form jdbc:mysql://<host>:<port>/<database>. |
|
Driver JAR |
Click the plus button under the table to add as many lines as the count of driver JARs required, one line per driver JAR, and in each line select or enter the driver JAR to be loaded. |
|
Class Name |
Type in the Class name to be pointed to in the driver. For example, org.gjt.mm.mysql.Driver for the MySQL database. |
|
Username and Password |
Database user authentication data. To enter the password, click the […] button next to the password field, then enter the password between double quotes in the dialog box that opens and click OK to save it. |
|
Schema and Edit schema |
A schema is a row description. It defines the number of fields to be processed and passed on to the next component. |
|
|
Built-In: You create and store the schema locally for this component only. |
|
|
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs. |
Click Edit schema to make changes to the schema. If the current schema is of the Repository type, you can choose to view the schema only, change it to a Built-in schema for local changes, or update the schema stored in the Repository.
|
|
Table Name |
Type in the name of the table from which you need to read the data. |
|
Die on error |
Select this check box to stop the execution of the Job when an error occurs. Clear the check box to skip any rows on error and complete the process for error-free rows. |
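Conceptually, clearing the check box turns a per-row failure into a skip instead of a Job failure. A hedged sketch (dieOnError and processRow are illustrative names, not part of the generated code):

```java
while (rs.next()) {
    try {
        processRow(rs);          // hand the row over to the next component
    } catch (Exception e) {
        if (dieOnError) {
            throw e;             // Die on error selected: stop the Job
        }
        // Die on error cleared: skip the faulty row and continue with the next one
    }
}
```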
|
Query type and Query |
Enter your database query, paying particular attention to the proper sequencing of the fields so that they match the schema definition. Warning
If using the dynamic schema feature, the SELECT query must include the * wildcard so that all of the columns of the selected table are retrieved. |
Usage |
In a Talend Map/Reduce Job, it is used as a start component and requires a transformation component as output link. The other components used along with it must be Map/Reduce components too; they generate native Map/Reduce code that can be executed directly in Hadoop. For further information about a Talend Map/Reduce Job, see the sections describing how to create, convert and configure a Talend Map/Reduce Job in Talend Big Data Getting Started Guide. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, and not Map/Reduce Jobs. |
|
Hadoop Connection |
You need to use the Hadoop Configuration tab in the Run view to define the connection to a given Hadoop distribution for the whole Job. This connection is effective on a per-Job basis. |
|
Limitation |
We recommend using the databases validated for the Map/Reduce version of this component. It may work with other databases as well, but these may not necessarily have been tested. |
Related topics: see the tDBInput and tMysqlInput scenarios.
Related topic in tContextLoad: see Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters.