July 30, 2023

tPostgresPlusOutputBulkExec – Docs for ESB 7.x

tPostgresPlusOutputBulkExec

Improves performance during Insert operations to a PostgresPlus
database.

The tPostgresPlusOutputBulkExec executes the Insert
action on the data provided.

The tPostgresPlusOutputBulk and tPostgresPlusBulkExec components are generally used together as part of
a two-step process. In the first step, an output file is generated. In the second step,
this file is used in the INSERT operation that feeds a database. These two steps are
fused together in the tPostgresPlusOutputBulkExec component.
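
By way of illustration, here is a minimal sketch of that same two-step pattern in plain JDBC. This is not the code the component generates; it assumes the standard PostgreSQL JDBC driver (which also serves PostgresPlus), a hypothetical customers table, and hypothetical file and connection details.

import java.io.FileReader;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class BulkOutputExecSketch {
    public static void main(String[] args) throws Exception {
        // Step 1 - generate the delimited output file
        // (the tPostgresPlusOutputBulk part of the component).
        String file = "/tmp/customers.csv";   // hypothetical path
        try (FileWriter out = new FileWriter(file)) {
            out.write("1;Alice;Paris\n");
            out.write("2;Bob;Berlin\n");
        }

        // Step 2 - feed that file to the database in one bulk COPY
        // (the tPostgresPlusBulkExec part of the component).
        String url = "jdbc:postgresql://localhost:5444/mydb";   // hypothetical server and port
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            try (FileReader in = new FileReader(file)) {
                long rows = copy.copyIn(
                        "COPY customers FROM STDIN WITH (FORMAT csv, DELIMITER ';')", in);
                System.out.println(rows + " rows inserted");
            }
        }
    }
}

The format choices shown here (CSV, semicolon separator) are illustrative only; the component derives them from the Basic and Advanced settings described below.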

tPostgresPlusOutputBulkExec Standard properties

These properties are used to configure tPostgresPlusOutputBulkExec running in the Standard Job framework.

The Standard
tPostgresPlusOutputBulkExec component belongs to the Databases family.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property type

Either Built-in or Repository.

 

Built-in: No property data stored
centrally.

 

Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.

DB Version

List of database versions.

Host

Database server IP address.

Currently, only localhost,
127.0.0.1 or the exact IP
address of the local machine is allowed for proper functioning. In
other words, the database server must be installed on the same
machine where the Studio is installed or where the Job using
tPostgresPlusOutputBulkExec is
deployed.

Port

Listening port number of DB server.

Database

Name of the database.

Schema

Exact name of the schema.

Username and
Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be
written at a time and that the table must exist for the insert
operation to succeed.

Action on table

On the table defined, you can perform one of the following
operations:

None: No operation is carried
out.

Drop and create a table: The table
is removed and created again.

Create a table: The table does not
exist and gets created.

Create a table if not exists: The
table is created if it does not exist.

Clear a table: The table content is
deleted.
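
As a rough, hedged illustration only (this document does not show the DDL the component actually issues, and it may differ), these options map to SQL statements along the following lines; the table name and column definitions are placeholders.

public class ActionOnTableSketch {
    // Illustrative mapping of the "Action on table" options to SQL.
    static String sqlFor(String action, String table, String columnDefs) {
        switch (action) {
            case "Drop and create a table":
                return "DROP TABLE " + table + "; CREATE TABLE " + table + " (" + columnDefs + ")";
            case "Create a table":
                return "CREATE TABLE " + table + " (" + columnDefs + ")";
            case "Create a table if not exists":
                return "CREATE TABLE IF NOT EXISTS " + table + " (" + columnDefs + ")";
            case "Clear a table":
                return "DELETE FROM " + table;
            default:  // "None": no statement is issued
                return null;
        }
    }
}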

File Name

Name of the file to be generated and loaded.

Warning:

This file is generated on the machine specified by the URI in
the Host field, so it should be located
on the same machine as the database server.

Schema and Edit Schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

Click Edit schema to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

Advanced settings

Action

Select the action to be carried out:

  • Bulk insert

  • Bulk update

Depending on the action selected, the required information varies.

File type

Select the type of file being handled.

Null string

String displayed to indicate that the value is null.

Row separator

String (ex: "\n" on Unix) to distinguish rows.

Field terminated by

Character, string or regular expression to separate fields.

Text enclosure

Character used to enclose text.
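
Taken together, Null string, Row separator, Field terminated by and Text enclosure describe the layout of the generated file that the bulk load must parse. As an illustrative sketch only (not the statement the component actually builds), they map naturally onto the options of a PostgreSQL COPY command:

public class CopyFormatSketch {
    // Builds a COPY command from hypothetical values of the settings above.
    static String copyCommand(String table, String nullString,
                              String fieldSeparator, String textEnclosure) {
        // With FORMAT csv, COPY expects newline-terminated rows, which matches
        // a Row separator of "\n" on Unix.
        return "COPY " + table + " FROM STDIN WITH ("
                + "FORMAT csv"
                + ", DELIMITER '" + fieldSeparator + "'"
                + ", NULL '" + nullString + "'"
                + ", QUOTE '" + textEnclosure + "')";
    }

    public static void main(String[] args) {
        // Prints: COPY customers FROM STDIN WITH (FORMAT csv, DELIMITER ';', NULL '', QUOTE '"')
        System.out.println(copyCommand("customers", "", ";", "\""));
    }
}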

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Usage

Usage rule

This component is mainly used when no particular transformation is
required on the data to be loaded into the database.

Limitation

The database server must be installed on the same machine where
the Studio is installed or where the Job using tPostgresPlusOutputBulkExec is deployed, so that the
component functions properly.

Related scenarios

For use cases related to tPostgresPlusOutputBulkExec, see the following scenarios:


Document retrieved from Talend Help Center: https://help.talend.com