
tOracleInput

Reads an Oracle database and extracts fields based on a query.

tOracleInput executes a database query with a strictly defined order, which must correspond to the schema definition. It then passes the field list on to the next component via a Main row link.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks: Standard, MapReduce (deprecated), and Spark Batch.

tOracleInput Standard properties

These properties are used to configure tOracleInput running in
the Standard Job framework.

The Standard
tOracleInput component belongs to the Databases family.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tOracleInput_1.png

Click this icon to open a database connection wizard and
store the database connection parameters you set in the component
Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains the parent Job and the child Job, if you
need to share an existing connection between the two levels, for example, to share the
connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example of how to share a database connection across Job levels, see the Talend Studio User Guide.

Connection type

Drop-down list of available drivers:

Oracle OCI: Select this
connection type to use Oracle Call Interface with a set of C-language
software APIs that provide an interface to the Oracle database.

Oracle Custom: Select this
connection type to access a clustered database.

Oracle Service Name: Select this
connection type to use the TNS alias that you give when you connect to
the remote database.

WALLET: Select this connection
type to store credentials in an Oracle wallet.

Oracle SID: Select this
connection type to uniquely identify a particular database on a
system.
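As a point of reference, these connection types correspond to different Oracle JDBC URL forms. The following is a hedged illustration with placeholder host, port, and name values; in practice the component builds the URL from the fields below, so you do not type it yourself:

  Oracle SID:           jdbc:oracle:thin:@myhost:1521:ORCL
  Oracle Service Name:  jdbc:oracle:thin:@//myhost:1521/myservice
  Oracle OCI:           jdbc:oracle:oci:@mytnsalias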

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Oracle schema

Oracle schema name.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

This
component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic schemas,
see the Talend Studio User Guide.

This dynamic schema feature is designed for retrieving unknown columns of a table and should be used for that purpose only; it is not recommended for creating tables.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Database table name.

Query type and Query

Enter your DB query, paying particular attention to sequencing the fields properly so that they match the schema definition.

If you use the dynamic schema feature, the SELECT query must include the * wildcard to retrieve all of the columns from the selected table.
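For instance, with the dynamic schema feature the Query field might contain the following, where context.TABLE is a hypothetical context variable holding the table name:

  "SELECT * FROM " + context.TABLE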

Specify a data source alias

Select this check box and specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. This option works only when you deploy and run your Job in Talend Runtime.

If you use the component’s own DB configuration, your data source
connection will be closed at the end of the component. To prevent this from
happening, use a shared DB connection with the data source alias specified.

This check box is not available when the Use an existing
connection
check box is selected.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are
creating. The properties are separated by semicolon and each property is a key-value
pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing
connection
check box is selected.

tStatCatcher Statistics

Select this check box to collect log data at the
component level.

Use fetch size

Select this check box and in the
Fetch size
field displayed, specify the number of rows to fetch in one go from the
database. The performance can be improved by tuning this fetch size to
an appropriate value.
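Under the hood, this corresponds to the standard JDBC fetch-size mechanism. The following is a minimal plain-JDBC sketch of the same idea, assuming the Oracle JDBC driver is on the classpath; the URL, credentials, and table are placeholders:

  import java.sql.*;

  public class FetchSizeSketch {
      public static void main(String[] args) throws SQLException {
          try (Connection conn = DriverManager.getConnection(
                   "jdbc:oracle:thin:@//myhost:1521/myservice", "user", "password");
               Statement stmt = conn.createStatement()) {
              stmt.setFetchSize(1000); // ask the driver to fetch 1000 rows per round trip
              try (ResultSet rs = stmt.executeQuery("SELECT * FROM PERSON")) {
                  while (rs.next()) {
                      // process each row here
                  }
              }
          }
      }
  }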

Trim all the String/Char columns

Select this check box to remove leading and trailing
whitespace from all the String/Char columns.

Trim column

Remove leading and trailing whitespace from defined
columns.

No null values

Select this check box to improve performance if the data contains no null values.

Global Variables

Global Variables 

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

QUERY: the query statement being processed. This is a Flow
variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.
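For example, in a tJava component placed after tOracleInput, the variables can be read from the globalMap as follows (tOracleInput_1 is an assumed component name; adjust it to match your Job):

  // The After variables become available once tOracleInput_1 has finished executing.
  Integer nbLine = (Integer) globalMap.get("tOracleInput_1_NB_LINE");
  String query = (String) globalMap.get("tOracleInput_1_QUERY");
  System.out.println("Rows read: " + nbLine);
  System.out.println("Query executed: " + query);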

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

This component covers all possible SQL queries for Oracle
databases.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independent of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide.

Limitation

Due to license incompatibility, one or more JARs required to use
this component are not provided. You can install the missing JARs for this particular
component by clicking the Install button
on the Component tab view. You can also
find out and add all missing JARs easily on the Modules tab in the Integration perspective of your studio. You can find more details about how to install external modules in the Talend Help Center (https://help.talend.com).

Using context parameters when reading a table from an Oracle database

In this scenario, we will read a table from an Oracle database using a context parameter to
refer to the table name.

Dropping and linking the components

  1. Create a new Job and add the following components by typing their names in the design
    workspace or dropping them from the Palette: a tOracleInput
    component and a tLogRow component.
  2. Connect tOracleInput to tLogRow using a Row > Main
    link.

    tOracleInput_2.png

Configuring the components

  1. Double-click tOracleInput to open its
    Basic Settings view in the Component tab.

    tOracleInput_3.png

  2. Select a connection type from the Connection
    Type
    drop-down list. In this example, it is Oracle SID.

    Select the version of the Oracle database to be used from the DB Version drop-down list. In this example, it is Oracle 12.
    In the Host field, enter the Oracle
    database server’s IP address. In this example, it is 192.168.31.32.
    In the Database field, enter the database
    name. In this example, it is TALEND.
    In the Oracle schema field, enter the
    Oracle schema name. In this example, it is TALEND.
    In the Username and Password fields, enter the authentication details.
  3. Click the […] button next to Edit schema to open the schema editor.

    tOracleInput_4.png

  4. Click the [+] button to add four
    columns: ID and AGE of the integer type, NAME and SEX of the
    string type.

    Click OK to close the schema editor and
    accept the propagation prompted by the pop-up dialog box.
  5. Put the cursor in the Table Name field
    and press F5 for context parameter setting.
    The dialog box New Context Parameter pops
    up.

    tOracleInput_5.png

    For more information about context settings, see the Talend Studio User Guide.
  6. In the Name field, enter the context
    parameter name. In this example, it is TABLE.

    In the Default value field, enter the
    name of the Oracle database table to be queried. In this example, it is
    PERSON.
  7. Click Finish to validate the setting.

    The context parameter context.TABLE
    automatically appears in the Table Name
    field.
  8. In the Query Type list, select Built-In. Then, click Guess
    Query
    to get the query statement.
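
    With the context parameter in the Table Name field, the guessed statement concatenates the parameter into the SQL, similar to the following sketch (the exact column list depends on your schema):

      "SELECT " + context.TABLE + ".ID, " + context.TABLE + ".NAME, " + context.TABLE + ".SEX, " + context.TABLE + ".AGE FROM " + context.TABLE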

  9. Double-click tLogRow to open its
    Basic settings view in the Component tab.

    tOracleInput_6.png

  10. In the Mode area, select Table (print values in cells of a table) for a
    better display of the results.

Saving and executing the Job

  1. Press Ctrl + S to save the Job.
  2. Press F6 to run the Job.

    tOracleInput_7.png

    As shown above, the data in the Oracle database table PERSON is displayed on the console.

tOracleInput MapReduce properties (deprecated)

These properties are used to configure tOracleInput running in
the MapReduce Job framework.

The MapReduce
tOracleInput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tOracleInput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Connection type

Drop-down list of available drivers:

Oracle OCI: Select this connection
type to use Oracle Call Interface with a set of C-language software
APIs that provide an interface to the Oracle database.

Oracle Custom: Select this
connection type to access a clustered database.

Oracle Service Name: Select this
connection type to use the TNS alias that you give when you connect
to the remote database.

WALLET: Select this connection type
to store credentials in an Oracle wallet.

Oracle SID: Select this connection
type to uniquely identify a particular database on a system.

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Oracle schema

Oracle schema name.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Schema and Edit Schema

A schema is a row description. It defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table name

Database table name.

Query type and Query

Enter your DB query, paying particular attention to sequencing the fields properly so that they match the schema definition.

Die on error

Select the check box to stop the execution of the Job when an error
occurs.

Clear the check box to skip any rows on error and complete the process for
error-free rows. When errors are skipped, you can collect the rows on error using a Row > Reject link.

Usage

Usage rule

In a Talend Map/Reduce Job, this component is used as a start component and requires a transformation component as its output link. The other components used along with it must be Map/Reduce components too. They generate native Map/Reduce code that can be executed directly in Hadoop.

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tOracleInput properties for Apache Spark Batch

These properties are used to configure tOracleInput running in
the Spark Batch Job framework.

The Spark Batch
tOracleInput component belongs to the Databases family.

This component also allows you to connect to and read data from an RDS Oracle database.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

tOracleInput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains the parent Job and the child Job, if you
need to share an existing connection between the two levels, for example, to share the
connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example of how to share a database connection across Job levels, see the Talend Studio User Guide.

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type
    to use Oracle Call Interface with a set of C-language software APIs that provide
    an interface to the Oracle database.

  • Oracle Custom: Select this connection
    type to access a clustered database. With this type of connection, the
    Username and the Password fields are deactivated and you need to
    enter the connection URL in the URL field
    that is displayed.

    For further information about the valid form of this URL, see JDBC Connection
    strings
    from the Oracle documentation.

  • Oracle Service Name: Select this
    connection type to use the TNS alias that you give when you connect to the
    remote database.

  • WALLET: Select this connection type to
    store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type
    to uniquely identify a particular database on a system.

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Oracle schema

Oracle schema name.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Schema and Edit Schema

A schema is a row description. It defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

Table Name

Type in the name of the table from which you need to read
data.

Query type and Query

Specify the database query statement, paying particular attention to the proper sequencing of the fields, which must correspond to the schema definition.

From Spark V2.0 onwards, Spark SQL no longer recognizes the prefix of a database table. This means that you must enter only the table name, without any prefix indicating, for example, the schema the table belongs to.

For example, if you need to query the table system.mytable, where the system prefix indicates the schema that the mytable table belongs to, you must enter mytable only.
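For instance, the Query field might read as follows (hypothetical columns):

  "SELECT ID, NAME FROM mytable"        // works with Spark V2.0 onwards
  "SELECT ID, NAME FROM system.mytable" // fails: the schema prefix is no longer recognized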

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are
creating. The properties are separated by semicolon and each property is a key-value
pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing
connection
check box is selected.

Spark SQL JDBC parameters

Add the JDBC properties supported by Spark SQL to this table.
For a list of the user configurable properties, see JDBC to other database.

This component automatically sets the url, dbtable and driver properties by using the configuration from the Basic settings tab.
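
Conceptually, the component issues a Spark JDBC read along the following lines. This is a hedged sketch rather than the component's actual generated code; the URL, table, and fetchsize values are placeholders:

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;

  public class JdbcReadSketch {
      public static void main(String[] args) {
          SparkSession spark = SparkSession.builder().appName("sketch").getOrCreate();
          Dataset<Row> df = spark.read()
              .format("jdbc")
              .option("url", "jdbc:oracle:thin:@//myhost:1521/myservice") // from Basic settings
              .option("dbtable", "PERSON")                                // from Basic settings
              .option("driver", "oracle.jdbc.OracleDriver")               // set automatically
              .option("fetchsize", "1000")  // an example user-configurable Spark SQL JDBC property
              .load();
          df.show();
      }
  }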

Trim all the String/Char columns

Select this check box to remove leading and trailing whitespace from
all the String/Char columns.

Trim column

Remove leading and trailing whitespace from defined columns.

Enable partitioning

Select this check box to read data in partitions.

Define, within double quotation marks, the following parameters to
configure the partitioning:

  • Partition column: the numeric
    column used as partition key.

  • Lower bound of the partition stride and Upper bound of the partition stride: enter the upper bound and the lower bound to determine the partition stride. These bounds do not filter the table rows. All rows in the table are partitioned and returned.

  • Number of partitions: the
    number of partitions into which the table rows are split. Each Spark
    worker handles only one of the partitions at a time.

The average size of the partitions is the result of the difference between the
upper bound and the lower bound divided by the number of partitions, that is to say,
(upperBound – lowerBound)/partitionNumber, while the first and the last partitions
also include all the other rows that are not contained in the other partitions.

For example, to partition 1000 rows into 4 partitions, if you enter 0 for
the lower bound and 1000 for the upper bound, each partition will contain 250 rows
and so the partitioning is even. If you enter 250 for the lower bound and 750 for
the upper bound, the second and the third partition will each contain 125 rows and
the first and the last partitions each 375 rows. With this configuration, the
partitioning is skewed.
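The following short sketch reproduces that arithmetic with the values from the skewed example, showing how the partition ranges fall out of the formula (the first and last ranges stay open to catch the remaining rows):

  public class PartitionStrideSketch {
      public static void main(String[] args) {
          long lowerBound = 250, upperBound = 750;
          int numPartitions = 4;
          long stride = (upperBound - lowerBound) / numPartitions; // (750 - 250) / 4 = 125
          for (int i = 0; i < numPartitions; i++) {
              String lower = (i == 0) ? "(open)" : String.valueOf(lowerBound + i * stride);
              String upper = (i == numPartitions - 1) ? "(open)" : String.valueOf(lowerBound + (i + 1) * stride);
              System.out.println("partition " + i + ": " + lower + " .. " + upper);
          }
          // Prints: partition 0: (open) .. 375, partition 1: 375 .. 500,
          //         partition 2: 500 .. 625, partition 3: 625 .. (open)
      }
  }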

Usage

Usage rule

This component is used as a start component and requires an output link.

This component should use a tOracleConfiguration component present in the same Job to connect to
Oracle. You need to select the Use an existing
connection
check box and then select the tOracleConfiguration component to be used.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

