
tOracleOutput – Docs for ESB 7.x

tOracleOutput

Writes, updates, modifies, or deletes entries in an Oracle database.

tOracleOutput executes the action
defined on the table and/or on the data contained in the table, based on the flow incoming
from the preceding component in the Job.

Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks: Standard, MapReduce (deprecated), Spark Batch, and Spark Streaming.

tOracleOutput Standard properties

These properties are used to configure tOracleOutput running in the Standard Job
framework.

The Standard
tOracleOutput component belongs to the Databases family.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property type

Either Built-in or Repository.

 

Built-in: No property
data stored centrally.

 

Repository: Select the
repository file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

[Database connection wizard icon]

Click this icon to open a database connection
wizard and store the database connection parameters you set in the component
Basic settings
view.

For more information about setting up and
storing database connection parameters, see
Talend Studio
User Guide
.

Use an existing
connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains the parent Job and the child Job, if you
need to share an existing connection between the two levels, for example, to share the
connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example about how to share a database connection
across Job levels, see

Talend Studio
User Guide
.

Connection type

Drop-down list of available drivers (typical URL forms are sketched after this list):

Oracle OCI: Select this
connection type to use Oracle Call Interface with a set of C-language software
APIs that provide an interface to the Oracle database.

Oracle Custom: Select
this connection type to access a clustered database.

Oracle Service Name:
Select this connection type to use the TNS alias that you give when you connect
to the remote database.

WALLET: Select this
connection type to store credentials in an Oracle wallet.

Oracle SID: Select this
connection type to uniquely identify a particular database on a system.
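For orientation, the connection types above map to the standard Oracle JDBC URL forms sketched below. This is a hypothetical illustration using standard Oracle driver syntax; the host, port, and names are placeholders, and the component builds the actual URL from the fields that follow.

```java
// Typical Oracle JDBC URL forms for the connection types above
// (standard Oracle driver syntax; host, port, and names are hypothetical).
class OracleUrlExamples {
    static final String SID_URL     = "jdbc:oracle:thin:@dbhost:1521:ORCL";       // Oracle SID
    static final String SERVICE_URL = "jdbc:oracle:thin:@//dbhost:1521/orclsvc";  // Oracle Service Name
    static final String OCI_URL     = "jdbc:oracle:oci:@MY_TNS_ALIAS";            // Oracle OCI (TNS alias)
}
```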

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only
one table can be written at a time.

Action on table

Note:

The Action on table list will not be available if
you select the Enable parallel
execution
check box in the Advanced settings view.

On the table defined, you can perform one of the following operations (rough SQL equivalents are sketched after the warning below):

Default: No operation
is carried out.

Drop and create table:
The table is removed and created again.

Create table: The table
does not exist and gets created.

Create table if does not
exist
: The table is created if it does not exist.

Drop table if exists and
create
: The table is removed if it already exists and created
again.

Clear table: The table
content is deleted.

Truncate table: The table content is deleted. This operation cannot be rolled back.

Truncate table with reuse storage: The table content is deleted and the operation cannot be rolled back. However, the storage already allocated to the table is kept for reuse, even though it is treated as empty.

Warning:

If you select the Use an existing connection
check box and select an option other than Default from the Action on table list, a commit statement
will be generated automatically before the data insert/update/delete
operation.
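As a rough guide, the table actions correspond to SQL statements along the lines of the following Java/JDBC sketch. The table name and columns are hypothetical, and the statements the component actually generates may differ; in a real Job, only the selected action is performed.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Rough SQL equivalents of the table actions above (illustrative sketch only).
class ActionOnTableSketch {
    static void dropIfExistsAndCreate(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            try {
                stmt.execute("DROP TABLE PERSON");
            } catch (SQLException ignored) {
                // ORA-00942 (table or view does not exist) is tolerated in this mode.
            }
            stmt.execute("CREATE TABLE PERSON (ID NUMBER PRIMARY KEY, NAME VARCHAR2(100))");
        }
    }

    static void clearTable(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("DELETE FROM PERSON");          // row-by-row, can be rolled back
        }
    }

    static void truncateTable(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("TRUNCATE TABLE PERSON");       // cannot be rolled back
            // Variant that keeps the allocated extents for reuse:
            // stmt.execute("TRUNCATE TABLE PERSON REUSE STORAGE");
        }
    }
}
```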

Action on data

On the data of the table defined, you can perform the following operations (an upsert sketch follows the notes below):

Insert: Add new entries to the table. If duplicates are found, the Job stops.

Update: Make changes to existing entries.

Insert or update: Insert a new record. If the record with the given reference already exists, an update is made.

Update or insert: Update the record with the given reference. If the record does not exist, a new record is inserted.

Delete: Remove entries
corresponding to the input flow.

Warning:

It is necessary to specify at least one column as a primary key on which the Update and Delete operations are based. You can do that by clicking Edit Schema and selecting the check box(es) next to the column(s) you want to set as primary key(s). For advanced use, go to the Advanced settings view, where you can define primary keys for the Update and Delete operations simultaneously: select the Use field options check box, then in the Key in update column select the check boxes next to the column names you want to use as a base for the Update operation, and do the same in the Key in delete column for the Delete operation.

Note:

The dynamic schema feature can be used in
the following modes: Insert; Update; Insert
or update
; Update
or insert
; Delete.
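To picture the Insert or update logic, here is a minimal Java/JDBC sketch using a hypothetical PERSON table and a single Oracle MERGE statement. This is an assumption made for illustration; the component's generated code may instead issue separate INSERT and UPDATE statements.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative "insert or update" on a hypothetical PERSON(ID primary key, NAME) table.
class UpsertSketch {
    static void insertOrUpdate(Connection conn, int id, String name) throws SQLException {
        String merge =
            "MERGE INTO PERSON t USING (SELECT ? AS ID, ? AS NAME FROM dual) s "
          + "ON (t.ID = s.ID) "
          + "WHEN MATCHED THEN UPDATE SET t.NAME = s.NAME "
          + "WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (s.ID, s.NAME)";
        try (PreparedStatement ps = conn.prepareStatement(merge)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }
}
```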

Schema and Edit schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

This
component offers the advantage of the dynamic schema feature. This allows you to
retrieve unknown columns from source files or to copy batches of columns from a source
without mapping each column individually. For further information about dynamic schemas,
see
Talend Studio

User Guide.

This dynamic schema feature is designed for the purpose of retrieving unknown columns of a table and should be used for this purpose only; it is not recommended for creating tables.

When writing data of the Oracle Long type using the dynamic schema feature, to avoid data overflow during data type conversion, change the default length in the length property of the mapping file for the Oracle Long type, or configure the length with dynamic metadata.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to verify default values in a retrieved schema in Talend Help Center (https://help.talend.com).

Die on error

This check box is selected by default. Clear
the check box to skip the row on error and complete the process for error-free
rows. If needed, you can retrieve the rows on error via a Row > Rejects link.

Specify a data source
alias

Select this check box and specify the alias of a data source created on the
Talend Runtime
side to use the shared connection pool defined in the data source configuration. This
option works only when you deploy and run your Job in
Talend Runtime
.

If you use the component’s own DB configuration, your data source
connection will be closed at the end of the component. To prevent this from
happening, use a shared DB connection with the data source alias specified.

This check box is not available when the Use an existing
connection
check box is selected.

Advanced settings

Use alternate
schema

Select this option to use a schema other than
the one specified by the component that establishes the database connection (that is,
the component selected from the Component list
drop-down list in Basic settings view). After
selecting this option, provide the name of the desired schema in the Schema field.

This option is available when Use an
existing connection
is selected in Basic
settings
view.

Additional JDBC
parameters

Specify additional connection properties for the DB connection you are creating, for example, encryption=1;clientname=Talend. This option is not available if you have selected the Use an existing connection check box in the Basic settings view.

Note:

You can press Ctrl+Space to access a list of predefined
global variables.

Commit every

Enter the number of rows to process before committing the batch to the database. This option ensures transaction quality (but no rollback) and, above all, better performance at execution.
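A minimal Java/JDBC sketch of this behavior, assuming a hypothetical PERSON table: auto-commit is disabled and a commit is issued every N rows.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Minimal sketch of "Commit every": commit once per commitEvery inserted rows
// (table and column are hypothetical).
class CommitEverySketch {
    static void insertAll(Connection conn, List<String> names, int commitEvery) throws SQLException {
        conn.setAutoCommit(false);
        int pending = 0;
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO PERSON (NAME) VALUES (?)")) {
            for (String name : names) {
                ps.setString(1, name);
                ps.executeUpdate();
                if (++pending == commitEvery) {
                    conn.commit();   // commit this group of rows in one transaction
                    pending = 0;
                }
            }
        }
        conn.commit();               // commit any remaining rows
    }
}
```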

tStatCatcher Statistics

Select this check box to collect log data at
the component level.

Additional Columns

This option is not offered if you create (with or without drop) the DB table. It allows you to call SQL functions to perform actions on columns, provided these are not insert, update, or delete actions, or actions that require particular preprocessing (an example is sketched after this list of fields).

 

Name: Type in the name
of the schema column to be altered or inserted as new column.

 

SQL expression: Type in
the SQL statement to be executed in order to alter or insert the relevant
column data.

 

Position: Select Before, Replace, or After, depending on the action to be performed on the reference column.

 

Reference column: Type in a reference column that the component can use to place or replace the new or altered column.
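As a hypothetical illustration: with Name set to LOAD_DATE, SQL expression set to SYSDATE, Position set to After, and Reference column set to NAME, the generated insert could take roughly this shape.

```java
// Hypothetical result of the Additional Columns settings described above:
// the LOAD_DATE column is filled by the SQL function SYSDATE rather than
// by a value from the input flow (table and columns are illustrative).
class AdditionalColumnsSketch {
    static final String INSERT_WITH_EXTRA_COLUMN =
        "INSERT INTO PERSON (ID, NAME, LOAD_DATE) VALUES (?, ?, SYSDATE)";
}
```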

Use field options

Select this check box to customize a request, especially when there is a double action on the data.

Use Hint Options

Select this check box to activate the hint configuration area, which helps you optimize a query's execution (see the sketch after this list). In this area, the parameters are:

  • HINT: specify the
    hint you need, using the syntax /*+
    */.
  • POSITION: specify
    where you put the hint in a SQL statement.
  • SQL STMT: select the SQL statement you need to use.
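As a hypothetical illustration, a HINT of /*+ APPEND */ (a real Oracle direct-path insert hint) positioned right after the INSERT keyword would yield a statement of roughly this shape.

```java
// Hypothetical hinted statement produced by the settings described above
// (table and columns are illustrative; /*+ ... */ is Oracle's hint syntax).
class HintSketch {
    static final String HINTED_INSERT =
        "INSERT /*+ APPEND */ INTO PERSON (ID, NAME) VALUES (?, ?)";
}
```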

Convert columns and table to
uppercase

Select this check box to set the names of
columns and table in upper case.

Debug query mode

Select this check box to display each step during processing entries
in a database.

Use Batch Size

Select this check box to activate the batch mode for data processing.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch Size check box is selected.
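A minimal Java/JDBC sketch of batch mode, assuming a hypothetical PERSON table: rows are buffered with addBatch() and sent to the server in groups of Batch Size rows. The component's generated code may differ.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Minimal sketch of batch mode: one server round trip per batchSize rows.
class BatchModeSketch {
    static void insertAll(Connection conn, List<String> names, int batchSize) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("INSERT INTO PERSON (NAME) VALUES (?)")) {
            int buffered = 0;
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();
                if (++buffered == batchSize) {
                    ps.executeBatch();   // send the whole batch in one round trip
                    buffered = 0;
                }
            }
            ps.executeBatch();           // flush the last, possibly partial batch
        }
    }
}
```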

Support null in “SQL WHERE”
statement

Select this check box to treat null as a valid value in the "SQL WHERE" statement.

Enable parallel
execution

Select this check box to perform high-speed data processing by treating multiple data flows simultaneously. Note that this feature depends on the ability of the database or the application to handle multiple inserts in parallel, as well as on the number of CPUs allocated. In the Number of parallel executions field, either:

  • Enter the number of parallel executions desired.
  • Press Ctrl + Space and select the
    appropriate context variable from the list. For further information, see
    Talend Studio User Guide
    .

Note that when parallel execution is enabled:

  • The Action on table field is not available with the parallelization function. Therefore, you must use a tCreateTable component if you want to create a table.
  • It is not possible to use global variables to retrieve return values in a subjob.

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

NB_LINE_UPDATED: the number of rows updated. This is an
After variable and it returns an integer.

NB_LINE_INSERTED: the number of rows inserted. This is an
After variable and it returns an integer.

NB_LINE_DELETED: the number of rows deleted. This is an
After variable and it returns an integer.

NB_LINE_REJECTED: the number of rows rejected. This is an
After variable and it returns an integer.

QUERY: the query statement processed. This is an After
variable and it returns a string.

To fill a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.

For further information about variables, see
Talend Studio

User Guide.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.
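For example, the After variables can be read from the globalMap available in Talend-generated Java code, typically in a tJava component placed after the subjob. The component name tOracleOutput_1 below is an example; use the actual name shown in your Job.

```java
// Reading tOracleOutput After variables from the globalMap (for example,
// in a tJava component linked with OnSubjobOk after the output subjob).
Integer inserted = (Integer) globalMap.get("tOracleOutput_1_NB_LINE_INSERTED");
Integer rejected = (Integer) globalMap.get("tOracleOutput_1_NB_LINE_REJECTED");
String  query    = (String)  globalMap.get("tOracleOutput_1_QUERY");
System.out.println("inserted=" + inserted + ", rejected=" + rejected);
```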

Usage

Usage rule

This component offers the flexibility of the DB query and covers all possible SQL queries.

This component must be used as an output component. It allows you to carry out actions on a table or on the data of a table in an Oracle database. It also allows you to create a reject flow, using a Row > Rejects link to filter data in error. For an example of tMysqlOutput in use, see Retrieving data in error with a Reject link.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independently of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic
settings
and context variables, see Talend Studio
User Guide.

Limitation

Due to license incompatibility, one or more JARs required to use
this component are not provided. You can install the missing JARs for this particular
component by clicking the Install button
on the Component tab view. You can also
find out and add all missing JARs easily on the Modules tab in the
Integration
perspective of your studio. You can find more details about how to install external modules in
Talend Help Center (https://help.talend.com)
.

tOracleOutput MapReduce properties (deprecated)

These properties are used to configure tOracleOutput running in the MapReduce Job framework.

The MapReduce
tOracleOutput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-in or Repository.

 

Built-in: No property data stored
centrally.

 

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

[Database connection wizard icon]

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection
parameters, see
Talend Studio User
Guide
.

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type
    to use Oracle Call Interface with a set of C-language software APIs that provide
    an interface to the Oracle database.

  • Oracle Custom: Select this connection
    type to access a clustered database. With this type of connection, the
    Username and the Password fields are deactivated and you need to
    enter the connection URL in the URL field
    that is displayed.

    For further information about the valid form of this URL, see JDBC Connection
    strings
    from the Oracle documentation.

  • Oracle Service Name: Select this
    connection type to use the TNS alias that you give when you connect to the
    remote database.

  • WALLET: Select this connection type to
    store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type
    to uniquely identify a particular database on a system.

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Oracle schema

Oracle schema name.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Schema and Edit
schema

A schema is a row description; it defines the number of fields to be processed and passed on to the next component. The schema is either Built-in or stored remotely in the Repository.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to verify default values in a retrieved schema in Talend Help Center (https://help.talend.com).

Die on error

This check box is selected by default. Clear the check box to skip the
row on error and complete the process for error-free rows. If needed,
you can retrieve the rows on error via a Row > Rejects
link.

Advanced settings

Use Batch

Select this check box to activate the batch mode for data processing.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch check box is selected.

Usage

Usage rule

In a Talend Map/Reduce Job, it is used as an end component and requires a transformation component as its input link. The other components used along with it must be Map/Reduce components, too. They generate native Map/Reduce code that can be executed directly in Hadoop.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs, not Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tOracleOutput properties for Apache Spark Batch

These properties are used to configure tOracleOutput running in the Spark Batch Job framework.

The Spark Batch
tOracleOutput component belongs to the Databases family.

This component can also be used to write data to an RDS Oracle database.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Property type

Either Built-in or Repository.

 

Built-in: No property data stored
centrally.

 

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

[Database connection wizard icon]

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection
parameters, see
Talend Studio User
Guide
.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type
    to use Oracle Call Interface with a set of C-language software APIs that provide
    an interface to the Oracle database.

  • Oracle Custom: Select this connection
    type to access a clustered database. With this type of connection, the
    Username and the Password fields are deactivated and you need to
    enter the connection URL in the URL field
    that is displayed.

    For further information about the valid form of this URL, see JDBC Connection
    strings
    from the Oracle documentation.

  • Oracle Service Name: Select this
    connection type to use the TNS alias that you give when you connect to the
    remote database.

  • WALLET: Select this connection type to
    store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type
    to uniquely identify a particular database on a system.

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Oracle schema

Oracle schema name.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Action on table

On the table defined, you can perform one of the following operations:

Default: No operation is carried out.

Drop and create table: The table is
removed and created again.

Create table: The table does not exist
and gets created.

Create table if not exists: The table
is created if it does not exist.

Drop table if exists and create: The
table is removed if it already exists and created again.

Clear table: The table content is
deleted.

Truncate table: The table content is
deleted. You do not have the possibility to rollback the operation.

Truncate table with reuse storage: The
table content is deleted. You do not have the possibility to rollback
the operation. However, it is allowed to reuse the existing storage
allocated to the table though the storage is considered empty.

Warning:

If you select the Use an existing
connection
check box and select an option other than
Default from the Action on table list, a commit statement
will be generated automatically before the data insert/update/delete
operation.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to verify default values in a retrieved schema in Talend Help Center (https://help.talend.com).

Die on error

This check box is selected by default. Clear the check box to skip the
row on error and complete the process for error-free rows. If needed,
you can retrieve the rows on error via a Row > Rejects
link.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing
connection
check box is selected.

Use Batch

Select this check box to activate the batch mode for data processing.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch check box is selected.

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control
the number of connections that stay open simultaneously. The default values given to the
following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open connections at the same time.

  • Max waiting time (ms): enter the maximum amount of time the connection pool may take to return a connection when one is requested. By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The following fields are displayed once you have selected it (a configuration sketch follows this list).

  • Time between two eviction runs: enter the time interval
    (in milliseconds) at the end of which the component checks the status of the connections and
    destroys the idle ones.

  • Min idle time for a connection to be eligible to
    eviction
    : enter the time interval (in milliseconds) at the end of which the idle
    connections are destroyed.

  • Soft min idle time for a connection to be eligible to
    eviction
    : this parameter works the same way as Min idle
    time for a connection to be eligible to eviction
    but it keeps the minimum number
    of idle connections, the number you define in the Min number of idle
    connections
    field.
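These pool parameters mirror Apache Commons DBCP/Pool settings. The sketch below assumes a DBCP2 BasicDataSource, which is an assumption made for illustration; this page does not state which pool implementation is used internally.

```java
import org.apache.commons.dbcp2.BasicDataSource;

// Sketch mapping the pool parameters above onto DBCP2 settings
// (an assumption for illustration; URL and values are hypothetical).
class PoolConfigSketch {
    static BasicDataSource configure() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/orclsvc");  // hypothetical URL
        ds.setMaxTotal(8);                           // Max total number of connections
        ds.setMaxWaitMillis(-1);                     // Max waiting time (ms), -1 = infinite
        ds.setMinIdle(0);                            // Min number of idle connections
        ds.setMaxIdle(8);                            // Max number of idle connections
        // Evict connections:
        ds.setTimeBetweenEvictionRunsMillis(30000);  // Time between two eviction runs
        ds.setMinEvictableIdleTimeMillis(60000);     // Min idle time before eviction
        ds.setSoftMinEvictableIdleTimeMillis(60000); // Soft variant, keeps minIdle connections
        return ds;
    }
}
```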

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tOracleConfiguration component present in the same Job to connect to
Oracle. You need to select the Use an existing
connection
check box and then select the tOracleConfiguration component to be used.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tOracleOutput properties for Apache Spark Streaming

These properties are used to configure tOracleOutput running in the Spark Streaming Job framework.

The Spark Streaming
tOracleOutput component belongs to the Databases family.

This component can also be used to write data to an RDS Oracle database.

This component is available in Talend Real-Time Big Data Platform and Talend Data Fabric.

Basic settings

Property type

Either Built-in or Repository.

 

Built-in: No property data stored
centrally.

 

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

[Database connection wizard icon]

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection
parameters, see
Talend Studio User
Guide
.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Connection type

The available drivers are:

  • Oracle OCI: Select this connection type
    to use Oracle Call Interface with a set of C-language software APIs that provide
    an interface to the Oracle database.

  • Oracle Custom: Select this connection
    type to access a clustered database. With this type of connection, the
    Username and the Password fields are deactivated and you need to
    enter the connection URL in the URL field
    that is displayed.

    For further information about the valid form of this URL, see JDBC Connection
    strings
    from the Oracle documentation.

  • Oracle Service Name: Select this
    connection type to use the TNS alias that you give when you connect to the
    remote database.

  • WALLET: Select this connection type to
    store credentials in an Oracle wallet.

  • Oracle SID: Select this connection type
    to uniquely identify a particular database on a system.

DB Version

Select the Oracle version in use.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Oracle schema

Oracle schema name.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Action on table

On the table defined, you can perform one of the following operations:

Default: No operation is carried out.

Drop and create table: The table is
removed and created again.

Create table: The table does not exist
and gets created.

Create table if not exists: The table
is created if it does not exist.

Drop table if exists and create: The
table is removed if it already exists and created again.

Clear table: The table content is
deleted.

Truncate table: The table content is
deleted. You do not have the possibility to rollback the operation.

Truncate table with reuse storage: The
table content is deleted. You do not have the possibility to rollback
the operation. However, it is allowed to reuse the existing storage
allocated to the table though the storage is considered empty.

Warning:

If you select the Use an existing
connection
check box and select an option other than
Default from the Action on table list, a commit statement
will be generated automatically before the data insert/update/delete
operation.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to verify default values in a retrieved schema in Talend Help Center (https://help.talend.com).

Die on error

This check box is selected by default. Clear the check box to skip the
row on error and complete the process for error-free rows. If needed,
you can retrieve the rows on error via a Row > Rejects
link.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database connection you are creating. The properties are separated by semicolons and each property is a key-value pair, for example, encryption=1;clientname=Talend.

This field is not available if the Use an existing
connection
check box is selected.

Use Batch

Select this check box to activate the batch mode for data processing.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch check box is selected.

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control
the number of connections that stay open simultaneously. The default values given to the
following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open connections at the same time.

  • Max waiting time (ms): enter the maximum amount of time the connection pool may take to return a connection when one is requested. By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The
following fields are displayed once you have selected it.

  • Time between two eviction runs: enter the time interval
    (in milliseconds) at the end of which the component checks the status of the connections and
    destroys the idle ones.

  • Min idle time for a connection to be eligible to
    eviction
    : enter the time interval (in milliseconds) at the end of which the idle
    connections are destroyed.

  • Soft min idle time for a connection to be eligible to
    eviction
    : this parameter works the same way as Min idle
    time for a connection to be eligible to eviction
    but it keeps the minimum number
    of idle connections, the number you define in the Min number of idle
    connections
    field.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tOracleConfiguration component present in the same Job to connect to
Oracle. You need to select the Use an existing
connection
check box and then select the tOracleConfiguration component to be used.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

