
tJDBCInput

Reads any database using a JDBC API connection and extracts fields based on a
query.

tJDBCInput executes a database query with a strictly defined order of fields that must
correspond to the schema definition. It then passes the field list on to the next
component via a Main row link.

Depending on the Talend solution you
are using, this component can be used in one, some or all of the following Job
frameworks:

  • Standard: see tJDBCInput Standard properties.

    The component in this framework is generally available.

  • MapReduce: see tJDBCInput MapReduce properties.

    The component in this framework is available only if you have subscribed to one
    of the Talend solutions with Big Data.

  • Spark Batch: see tJDBCInput properties for Apache Spark Batch.

    This component also allows you to connect and read data from an RDS MariaDB, an
    RDS PostgreSQL, or an RDS SQLServer database.

    The component in this framework is available only if you have subscribed to one
    of the Talend solutions with Big Data.

tJDBCInput Standard properties

These properties are used to configure tJDBCInput running in the Standard Job framework.

The Standard
tJDBCInput component belongs to the Databases family.

The component in this framework is generally available.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Use an existing connection

Select this check box and, in the Component List, select the relevant connection
component to reuse the connection details you already defined.

Note:

When a Job contains a parent Job and a child Job, if you need to share an existing
connection between the two levels, for example, to share the connection created by the
parent Job with the child Job, you have to:

  1. In the parent level, register the database connection to be shared
    in the Basic settings view of the
    connection component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read
    that registered database connection.

For an example about how to share a database connection across Job levels, see the
Talend Studio User Guide.

Save icon

Click this icon to open a database connection wizard and store the database connection
parameters you set in the component Basic settings
view.

For more information about setting up and storing database connection parameters, see
the Talend Studio User Guide.

JDBC URL

Specify the JDBC URL of the database to be used. For example, the
JDBC URL for the Amazon Redshift database is
jdbc:redshift://endpoint:port/database.

Driver JAR

Complete this table to load the driver JARs needed. To do this, click the
[+] button under the table to add as many rows as needed, each row for a driver JAR,
then select the cell and click the […] button on the right side of the cell to open
the Select Module wizard, from which you can select the driver JAR of interest. For
example, the driver JAR RedshiftJDBC41-1.1.13.1013.jar for the Redshift database.

Class Name

Enter the class name for the specified driver between double quotation marks.
For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the
name to be entered is com.amazon.redshift.jdbc41.Driver.

Username and Password

Enter the authentication information to the database you need to connect
to.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.
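
Taken together, the JDBC URL, Driver JAR, Class Name, Username and Password settings
correspond to the ingredients of a plain JDBC connection. The following minimal sketch
shows how they fit together; the endpoint, credentials and driver version are
hypothetical, and the driver JAR is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcConnectSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical values mirroring the component's Basic settings.
            String url = "jdbc:redshift://endpoint:5439/database";    // JDBC URL
            String driverClass = "com.amazon.redshift.jdbc41.Driver"; // Class Name
            String user = "myuser";                                   // Username
            String password = "secret";                               // Password

            // The driver JAR, e.g. RedshiftJDBC41-1.1.13.1013.jar, must be on the classpath.
            Class.forName(driverClass);

            try (Connection conn = DriverManager.getConnection(url, user, password)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }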

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

This component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic
schemas, see the Talend Studio User Guide.

This dynamic schema feature is designed for retrieving unknown columns of a table and
is recommended for that purpose only; it is not recommended for creating tables.

 

Built-In: You create and store the schema locally for this component only. Related
topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository. You
can reuse it in various projects and Job designs. Related topic: see the Talend Studio
User Guide.

 

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Table Name

Type in the name of the table from which you need to read data.

Query type and Query

Specify the database query statement, paying particular attention to the proper
sequence of the fields, which must correspond to the schema definition.

If using the dynamic schema feature, the SELECT query must
include the * wildcard, to retrieve all of the
columns from the table selected.
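
As an illustration, assume a hypothetical schema with three columns, id, name and
hire_date, declared in that order. A plain JDBC sketch of a matching query would read
the fields in exactly that sequence; with a dynamic schema, the statement would be
SELECT * FROM employees instead.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryOrderSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; see the Basic settings above.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:redshift://endpoint:5439/database", "myuser", "secret");
                 Statement stmt = conn.createStatement();
                 // The SELECT list follows the schema column order exactly.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT id, name, hire_date FROM employees")) {
                while (rs.next()) {
                    System.out.printf("%d %s %s%n",
                            rs.getInt(1), rs.getString(2), rs.getDate(3));
                }
            }
        }
    }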

Specify a data source alias

Select this check box and specify the alias of a data source created on the Talend
Runtime side to use the shared connection pool defined in the data source
configuration. This option works only when you deploy and run your Job in Talend
Runtime. For a related use case, see Scenario: Deploying your Job on Talend Runtime to
retrieve data from a MySQL database.

If you use the component’s own DB configuration, your data source connection will be
closed at the end of the component. To prevent this from happening, use a shared DB
connection with the data source alias specified.

This check box is not available when the Use an existing
connection
check box is selected.

Advanced settings

Use cursor

Select this check box to specify the number of rows you want to work with at any given
time. This option optimises performance by avoiding loading the entire result set into
memory at once.
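
A reasonable assumption is that this option maps to the standard JDBC fetch-size hint,
which tells the driver to stream rows in batches rather than materialize the whole
result set. A plain JDBC sketch with hypothetical connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CursorSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:redshift://endpoint:5439/database", "myuser", "secret");
                 Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(1000); // work with 1000 rows at a time
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM employees")) {
                    while (rs.next()) {
                        // process one row at a time
                    }
                }
            }
        }
    }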

Trim all the String/Char columns

Select this check box to remove leading whitespace and trailing
whitespace from all String/Char columns.

Trim column

This table is filled automatically with the schema being used. Select the check
box(es) corresponding to the column(s) to be trimmed.

Enable Mapping File for Dynamic

Select this check box to use the specified metadata mapping file when
reading data from a dynamic type column. This check box is cleared by
default.

For more information about metadata mapping files, see the section on type conversion
in the Talend Studio User Guide.

Mapping File

Specify the metadata mapping file to use by selecting a type of
database from the list.

This list field appears only when the Enable
Mapping File for Dynamic
check box is selected.

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Global Variables

Global Variables

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

QUERY: the query statement being processed. This is a Flow
variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill a field or expression with a variable, press Ctrl + Space to access the
variable list and choose the variable to use from it.
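
For instance, in a tJava component executed after tJDBCInput (through an OnSubjobOk
trigger), the variables can be read from globalMap. The component name tJDBCInput_1
below is hypothetical; use the unique name of the component in your Job.

    // Code for a tJava component; globalMap is provided by the generated Job code.
    Integer nbLine = (Integer) globalMap.get("tJDBCInput_1_NB_LINE"); // After variable
    String query = (String) globalMap.get("tJDBCInput_1_QUERY");      // Flow variable
    System.out.println("Read " + nbLine + " row(s) with query: " + query);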

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component covers all possible SQL queries for any database using
a JDBC connection.

Dynamic settings

Click the [+] button to add a
row in the table and fill the Code field
with a context variable to choose your database connection dynamically from
multiple connections planned in your Job. This feature is useful when you
need to access database tables having the same data structure but in
different databases, especially when you are working in an environment where
you cannot change your Job settings, for example, when your Job has to be
deployed and executed independently of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Scenario: Reading data from databases
through context-based dynamic connections and Scenario: Reading data from different
MySQL databases using dynamically loaded connection parameters. For more information on
Dynamic settings and context variables, see the Talend Studio User Guide.

tJDBCInput MapReduce properties

These properties are used to configure tJDBCInput running in the MapReduce Job framework.

The MapReduce
tJDBCInput component belongs to the MapReduce family.

The component in this framework is available only if you have subscribed to one of the
Talend solutions with Big Data.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Save icon

Click this icon to open a database connection wizard and store the database connection
parameters you set in the component Basic settings
view.

For more information about setting up and storing database connection parameters, see
the Talend Studio User Guide.

JDBC URL

Specify the JDBC URL of the database to be used. For example, the
JDBC URL for the Amazon Redshift database is
jdbc:redshift://endpoint:port/database.

Driver JAR

Complete this table to load the driver JARs needed. To do this, click the
[+] button under the table to add as many rows as needed, each row for a driver JAR,
then select the cell and click the […] button on the right side of the cell to open
the Select Module wizard, from which you can select the driver JAR of interest. For
example, the driver JAR RedshiftJDBC41-1.1.13.1013.jar for the Redshift database.

Class Name

Enter the class name for the specified driver between double quotation marks.
For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the
name to be entered is com.amazon.redshift.jdbc41.Driver.

Username and Password

Enter the authentication information to the database you need to connect
to.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

 

Built-In: You create and store the schema locally for this component only. Related
topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository. You
can reuse it in various projects and Job designs. Related topic: see the Talend Studio
User Guide.

 

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Table Name

Type in the name of the table from which you need to read data.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Query type and Query

Specify the database query statement, paying particular attention to the proper
sequence of the fields, which must correspond to the schema definition.

If using the dynamic schema feature, the SELECT query must
include the * wildcard, to retrieve all of the
columns from the table selected.

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as a start component and requires a
transformation component as its output link. The other components used along with it
must be Map/Reduce components too. They generate native Map/Reduce code that can be
executed directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections describing how
to create, convert and configure a Talend Map/Reduce Job in the Talend Open Studio for
Big Data Getting Started Guide.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration Jobs,
not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Limitation

We recommend using the following databases with the Map/Reduce
version of this component: DB2, Informix, MSSQL, MySQL, Netezza,
Oracle, Postgres, Teradata and Vertica.

It may work with other databases as well, but these may not
necessarily have been tested.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tJDBCInput properties for Apache Spark Batch

These properties are used to configure tJDBCInput running in the Spark Batch Job framework.

The Spark Batch
tJDBCInput component belongs to the Databases family.

This component also allows you to connect and read data from an RDS MariaDB, an RDS
PostgreSQL, or an RDS SQLServer database.

The component in this framework is available only if you have subscribed to one of the
Talend solutions with Big Data.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

Use an existing connection

Select this check box and, in the Component List, select the relevant connection
component to reuse the connection details you already defined.

JDBC URL

Specify the JDBC URL of the database to be used. For example, the
JDBC URL for the Amazon Redshift database is
jdbc:redshift://endpoint:port/database.

If you are using Spark V1.3, this URL should contain the authentication information,
for example a URL of the form
jdbc:redshift://endpoint:port/database?user=username&password=password.

Driver JAR

Complete this table to load the driver JARs needed. To do this, click the
[+] button under the table to add as many rows as needed, each row for a driver JAR,
then select the cell and click the […] button on the right side of the cell to open
the Select Module wizard, from which you can select the driver JAR of interest. For
example, the driver JAR RedshiftJDBC41-1.1.13.1013.jar for the Redshift database.

Class Name

Enter the class name for the specified driver between double quotation marks.
For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the
name to be entered is com.amazon.redshift.jdbc41.Driver.

Username and Password

Enter the authentication information to the database you need to connect
to.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Available only for Spark V1.4 and onwards.

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

 

Built-In: You create and store the schema locally for this component only. Related
topic: see the Talend Studio User Guide.

 

Repository: You have already created the schema and stored it in the Repository. You
can reuse it in various projects and Job designs. Related topic: see the Talend Studio
User Guide.

 

Click Edit schema to make changes to the schema.
If the current schema is of the Repository type, three
options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

Table Name

Type in the name of the table from which you need to read data.

Query type and Query

Specify the database query statement, paying particular attention to the proper
sequence of the fields, which must correspond to the schema definition.

If you are using Spark V2.0 onwards, Spark SQL no longer recognizes the prefix of a
database table. This means that you must enter only the table name, without any prefix
that indicates, for example, the schema this table belongs to.

For example, if you need to perform a query in a table system.mytable, in which the
system prefix indicates the schema that the mytable table belongs to, you must enter
mytable only in the query.

Guess Query

Click the Guess Query button to
generate the query which corresponds to your table schema in the Query
field.

Guess schema

Click the Guess schema button to
retrieve the table schema.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating.
The properties are separated by semicolons and each property is a key-value pair, for
example, encryption=1;clientname=Talend.

This field is not available if the Use an existing
connection
check box is selected.
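
As a rough illustration of the general mechanism, key-value pairs like these are
typically handed to the JDBC driver as connection properties. The sketch below uses
plain JDBC with the illustrative pairs from the example above; whether a given driver
honours them depends on the driver.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ExtraParamsSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "myuser");     // hypothetical credentials
            props.setProperty("password", "secret");
            // Illustrative pairs split from "encryption=1;clientname=Talend":
            props.setProperty("encryption", "1");
            props.setProperty("clientname", "Talend");

            try (Connection conn = DriverManager.getConnection(
                     "jdbc:redshift://endpoint:5439/database", props)) {
                System.out.println("Connected with additional driver properties");
            }
        }
    }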

Spark SQL JDBC parameters

Add the JDBC properties supported by Spark SQL to this table. For a list of
the user configurable properties, see JDBC to other database.

This component automatically sets the url, dbtable and driver properties by using the
configuration from the Basic settings tab.
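
For reference, the underlying Spark read resembles the following sketch (Java Spark
API; the URL, table and extra fetchsize property are hypothetical). The url, dbtable
and driver options are the ones the component sets for you; rows you add to the table
become additional options.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SparkJdbcReadSketch {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("jdbc-read-sketch").getOrCreate();

            Dataset<Row> df = spark.read()
                    .format("jdbc")
                    .option("url", "jdbc:redshift://endpoint:5439/database") // from Basic settings
                    .option("dbtable", "employees")                          // from Basic settings
                    .option("driver", "com.amazon.redshift.jdbc41.Driver")   // from Basic settings
                    .option("fetchsize", "1000") // example Spark SQL JDBC property
                    .load();

            df.show();
        }
    }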

Use cursor

Select this check box to specify the number of rows you want to work with at any given
time. This option optimises performance by avoiding loading the entire result set into
memory at once.

Trim all the String/Char columns

Select this check box to remove leading whitespace and trailing
whitespace from all String/Char columns.

Trim column

This table is filled automatically with the schema being used. Select the check
box(es) corresponding to the column(s) to be trimmed.

Enable partitioning

Select this check box to read data in partitions.

Define, within double quotation marks, the following parameters to configure
the partitioning:

  • Partition column: the
    numeric column used as partition key.

  • Lower bound of the partition stride and Upper bound of the partition stride:
    enter the lower bound and the upper bound to determine the partition stride. These
    bounds do not filter the table rows. All rows in the table are partitioned and
    returned.

  • Number of partitions:
    the number of partitions into which the table rows are split. Each Spark worker
    handles only one of the partitions at a time.

The average size of the partitions is the difference between the upper bound and the
lower bound divided by the number of partitions, that is to say,
(upperBound - lowerBound)/partitionNumber, while the first and the last partitions also
include any rows that fall outside these bounds.

For example, to partition 1000 rows into 4 partitions, if you enter 0 for the
lower bound and 1000 for the upper bound, each partition will contain 250 rows and so
the partitioning is even. If you enter 250 for the lower bound and 750 for the upper
bound, the second and the third partition will each contain 125 rows and the first and
the last partitions each 375 rows. With this configuration, the partitioning is
skewed.
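
These parameters match Spark's standard options for partitioned JDBC reads, so the
skewed example above can be sketched as follows, reusing the SparkSession from the
previous sketch (the connection details and the id column are hypothetical, and a 1:1
mapping between the component fields and the Spark options is assumed):

    // Stride = (750 - 250) / 4 = 125 rows per inner partition; rows outside
    // the bounds fall into the first and last partitions.
    Dataset<Row> partitioned = spark.read()
            .format("jdbc")
            .option("url", "jdbc:redshift://endpoint:5439/database")
            .option("dbtable", "employees")
            .option("partitionColumn", "id") // numeric partition key
            .option("lowerBound", "250")
            .option("upperBound", "750")
            .option("numPartitions", "4")
            .load();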

Usage

Usage rule

This component is used as a start component and requires an output link.

This component should use a tJDBCConfiguration component present in the same Job to connect to a
database. You need to drop a tJDBCConfiguration
component alongside this component and configure the Basic
settings
of this component to use tJDBCConfiguration.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional
Talend
data integration Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google
    Dataproc, specify a bucket in the Google Storage staging
    bucket
    field in the Spark
    configuration
    tab; when using other distributions, use a
    tHDFSConfiguration
    component to specify the directory.

  • Standalone mode: you need to choose
    the configuration component depending on the file system you are using, such
    as tHDFSConfiguration
    or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

