tJDBCInput MapReduce properties (deprecated)

These properties are used to configure tJDBCInput running in the MapReduce Job
framework.

The MapReduce tJDBCInput component belongs to the MapReduce family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

 

Built-In: No property data stored centrally.

 

Repository: Select the repository file where the
properties are stored.

[Icon: database connection wizard]

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic
settings view.

For more information about setting up and storing database
connection parameters, see Talend Studio User Guide.

JDBC URL

The JDBC URL of the database to be used. For
example, the JDBC URL for the Amazon Redshift database is jdbc:redshift://endpoint:port/database.

Driver JAR

Complete this table to load the driver JARs needed. To do this,
click the [+] button under the table to add as many rows as needed,
one row per driver JAR. Then select a cell and click the
[…] button at the right side of the cell to open the
Module dialog box, from which you can select the driver JAR to be used,
for example the driver JAR RedshiftJDBC41-1.1.13.1013.jar for the Redshift database.

For more information, see Importing a database driver.

Class Name

Enter the class name for the specified driver between double
quotation marks. For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the name to be entered is
com.amazon.redshift.jdbc41.Driver.

Username and Password

The database user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.
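
Outside the Studio, the same three settings (JDBC URL, driver Class Name, and Username and Password) can be sanity-checked with a few lines of plain JDBC. The sketch below is only an illustration, not Talend-generated code; the endpoint, port, database name, and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Class Name: the driver class shipped in the Driver JAR.
            Class.forName("com.amazon.redshift.jdbc41.Driver");

            // JDBC URL: same format as in the component (placeholder endpoint/port/database).
            String url = "jdbc:redshift://endpoint:port/database";

            // Username and Password: placeholder credentials.
            try (Connection conn = DriverManager.getConnection(url, "myUser", "myPassword")) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }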

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a
Job, avoid the reserved word line when naming the
fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

 

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.
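
As a hedged illustration of the row description defined at the top of this section, the sketch below maps a hypothetical three-field schema (id, name, city; placeholder names, not from this component) onto rows read from a JDBC result set.

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class SchemaSketch {
        // One object per row; its fields mirror a Built-In schema of id, name, city.
        static class Row {
            int id;
            String name;
            String city;
        }

        // Each schema field (column) is read from the result set and passed on
        // to the next component.
        static List<Row> readRows(ResultSet rs) throws SQLException {
            List<Row> rows = new ArrayList<>();
            while (rs.next()) {
                Row row = new Row();
                row.id = rs.getInt("id");
                row.name = rs.getString("name");
                row.city = rs.getString("city");
                rows.add(row);
            }
            return rows;
        }
    }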

Table Name

Type in the name of the table from which you need to read
data.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.
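
The two behaviors can be pictured with a small plain-JDBC sketch (an assumption for illustration, not the component's generated code): dieOnError mirrors the check box, and the rejects list stands in for a Row > Reject link.

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class DieOnErrorSketch {
        static List<String> process(ResultSet rs, boolean dieOnError) throws SQLException {
            List<String> rejects = new ArrayList<>();
            while (rs.next()) {
                try {
                    int id = rs.getInt("id");        // hypothetical column
                    System.out.println("Processed row " + id);
                } catch (SQLException e) {
                    if (dieOnError) {
                        throw e;                     // check box selected: stop the Job
                    }
                    rejects.add(e.getMessage());     // check box cleared: skip the row
                }
            }
            return rejects;                          // rows to send to the Reject link
        }
    }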

Query type and Query

Specify the database query statement, paying particular attention to the
proper sequence of the fields, which must correspond to the schema definition.

If using the dynamic schema feature, the SELECT query must
include the * wildcard to retrieve all of the
columns from the selected table.
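
For example, assuming the hypothetical customers table and the id, name, city schema used above (an illustrative sketch only), the two cases would look as follows:

    public class QuerySketch {
        // Fields listed in the same order as the schema definition (id, name, city).
        static final String QUERY = "SELECT id, name, city FROM customers";

        // Dynamic schema: the * wildcard retrieves all columns of the selected table.
        static final String DYNAMIC_QUERY = "SELECT * FROM customers";
    }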

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as a start component and requires
a transformation component as an output link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

For further information about a Talend Map/Reduce Job, see the sections
describing how to create, convert and configure a Talend Map/Reduce Job in the
Talend Open Studio for Big Data Getting Started Guide.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say, traditional Talend data integration Jobs, and not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Limitation

We recommend using the following databases
with the Map/Reduce version of this component: DB2, Informix, MSSQL, MySQL,
Netezza, Oracle, Postgres, Teradata and Vertica.

It may work with other databases as well, but
these may not necessarily have been tested.


Document retrieved from Talend: https://help.talend.com