
tVerticaBulkExec – Docs for ESB 7.x

tVerticaBulkExec

Loads data into a Vertica database table from a local file using the Vertica COPY
SQL statement.

For more information about the Vertica COPY SQL
statement, see COPY.

The tVerticaOutputBulk
component and the tVerticaBulkExec component are generally used together as
parts of a two-step process. In the first step, an output file is generated. In the second
step, the file is used in a bulk load operation to feed a database. These two steps are fused
together in the tVerticaOutputBulkExec component. The advantage of using
two separate components is that the data can be transformed before it is loaded into the
database.
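
Under the hood, the bulk load step issues a Vertica COPY statement against the generated file. The following is a minimal, hypothetical sketch (table name, file path, and delimiter are placeholders; the actual statement is built from the properties described below):

    COPY my_schema.my_table
    FROM LOCAL '/tmp/my_table.csv'
    DELIMITER ';'
    NULL AS '';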

tVerticaBulkExec Standard properties

These properties are used to configure tVerticaBulkExec running in the Standard Job framework.

The Standard
tVerticaBulkExec component belongs to the Databases family.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property Type

Select the way the connection details
will be set.

  • Built-In: The connection details will be set
    locally for this component. You need to specify the values for all
    related connection properties manually.

  • Repository: The connection details stored
    centrally in Repository > Metadata will be reused by this component. Click
    the […] button next to this field and, in the pop-up
    Repository Content dialog box, select the
    connection details to be reused; all related connection
    properties are then filled in automatically.

DB Version

Select the version of the database.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

When a Job contains a parent Job and a child Job and you need to
share an existing connection between the two levels (for example, to share the
connection created by the parent Job with the child Job), you have to:

  1. In the parent level, register the database connection to be shared
    in the Basic settings view of the connection
    component which creates that very database connection.

  2. In the child level, use a dedicated connection component to read
    that registered database connection.

For an example of how to share a database connection across Job
levels, see the Talend Studio User Guide.

Host

The IP address or hostname of the database.

Port

The listening port number of the database.

Database

The name of the database.

Schema

The schema of the database.

Username and Password

The database user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

The name of the table into
which data will be written.

Action on table

Select an operation to be performed on the table defined.

  • Default: No operation is carried out.

  • Drop and create table: The table is removed
    and created again.

  • Create table: The table is created; it must
    not already exist.

  • Create table if does not exist: The table is
    created if it does not exist.

  • Drop table if exists and create: The table is
    removed if it already exists and created again.

  • Clear table: The table content is
    deleted. You can roll back this operation.
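
In SQL terms, these actions correspond to DDL executed before the COPY statement. For example, Drop table if exists and create roughly amounts to the following, assuming a hypothetical table definition:

    DROP TABLE IF EXISTS my_schema.my_table;
    CREATE TABLE my_schema.my_table (
        id INT,
        name VARCHAR(100)
    );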

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

Click Edit schema to make changes to the schema.

Note: If you make changes, the schema automatically becomes built-in.

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Use schema columns for Copy

Select this check box to use the column option in the COPY statement so that you
can restrict the load to one or more specified columns in the table. For more
information, see the Vertica COPY SQL Statement.
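
When this check box is selected, the schema column names are emitted as a column list in the COPY statement, so only those columns are loaded. A sketch with hypothetical column names:

    COPY my_schema.my_table (id, name, created_at)
    FROM LOCAL '/tmp/my_table.csv'
    DELIMITER ';';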

File Name

The path to the file from which data will be loaded.

The file should be located on the same machine where the Studio
is installed or where the Job using this component is deployed.

This property is available only when there is no input flow.

Compression mode

Select the compression mode for the file from which data will be loaded.

This property is available only when
you are using Vertica 6.0 and later.
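
With a compression mode selected, the corresponding input filter is added to the COPY source clause. For example, assuming a gzip-compressed file at a hypothetical path:

    COPY my_schema.my_table
    FROM LOCAL '/tmp/my_table.csv.gz' GZIP
    DELIMITER ';';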

Advanced settings

Additional JDBC Parameters

Specify additional JDBC parameters for the
database connection created.

This property is not available when the Use an existing connection
check box in the Basic settings view is selected.

Action on data

Select an action that will be performed on the data of the table defined.

  • Bulk insert: Insert multiple rows into the table at once
    instead of doing single row inserts. If duplicates are found, the Job stops.

  • Bulk update: Make simultaneous updates to multiple rows.

Stream name

The stream name of a load, which helps identify a particular load.

This property is available only when
you are using Vertica 6.0 and later.

Write to ROS (Read Optimized Store)

Select this check box to store the data directly in ROS, a physical storage area that is
optimized for reading because the data in it is compressed and pre-sorted.
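
Stream name and Write to ROS map to the STREAM NAME and DIRECT clauses of the COPY statement, respectively. A sketch with a hypothetical stream name (the exact clause order generated by the component may differ):

    COPY my_schema.my_table
    FROM LOCAL '/tmp/my_table.csv'
    DELIMITER ';'
    DIRECT
    STREAM NAME 'daily_load';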

Exit Job on no rows loaded

Select this check box to stop the Job automatically if no rows have been loaded.

Missing columns as null

Select this check box to insert NULL values for the missing columns when there is
insufficient data to match the columns specified in the schema.

This property is available only when
you are using Vertica 6.0 and later.

Skip Header

Select this check box and in the field displayed next to it, specify the number of
records to skip in the file.

This property is available only when
you are using Vertica 6.0 and later.

Record terminator

Select this check box and in the field displayed next to it, specify the literal
character string used to indicate the end of each record in the file.

This property is available only when
you are using Vertica 6.0 and later.

Enclosed by character

Select this check box to set the character within which data is enclosed.

This property is available only when
you are using Vertica 6.0 and later.

Escape char

Select this check box and in the field displayed specify the escape character used
when loading data into Vertica. By default, the check box is selected and the default
escape character is the backslash (\).

Fields terminated by

The character, string, or regular expression used to separate fields.

Null String

The string used in the data file to indicate that a value is null.
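
Taken together, the file-format properties above (Missing columns as null, Skip Header, Record terminator, Enclosed by character, Escape char, Fields terminated by, and Null String) correspond to COPY parser options along these lines; the values shown are hypothetical:

    COPY my_schema.my_table
    FROM LOCAL '/tmp/my_table.csv'
    DELIMITER ';'
    ENCLOSED BY '"'
    ESCAPE AS '\'
    NULL AS 'NULL'
    RECORD TERMINATOR E'\n'
    SKIP 1
    TRAILING NULLCOLS;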

Reject not fitted values

Select this check box to reject data rows containing char, varchar, binary, or varbinary
values that do not fit the target table columns, that is, values longer than the declared
column length.

This property is available only when
you are using Vertica 6.0 and later.

Maximum number of rejected records

Select this check box and in the field displayed next to it, specify the maximum number
of records that can be rejected before a load fails.

This property is available only when
you are using Vertica 6.0 and later.

Stop and rollback if any row is rejected

Select this check box to stop and roll back a load without loading any data if any row
is rejected.

This property is available only when
you are using Vertica 6.0 and later.

Don’t commit

Select this check box to perform a bulk load transaction without committing the results
automatically. This is useful if you want to execute multiple bulk loads in a single
transaction.

This property is available only when
you are using Vertica 6.0 and later.

Rejected data file

Specify the file into which rejected rows will be written.

This property is available only when
Bulk insert is selected from the Action on
data
drop-down list.

Exception log file

Specify the file into which the exception log will be written. This log explains why
each rejected row was rejected.

This property is available only when
Bulk insert is selected from the Action on
data
drop-down list.
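
The rejection and transaction controls above map to COPY options in a similar way. A hedged sketch combining them, with hypothetical paths and limits (only the options matching your selections are actually used):

    COPY my_schema.my_table
    FROM LOCAL '/tmp/my_table.csv'
    DELIMITER ';'
    ENFORCELENGTH                                -- Reject not fitted values
    REJECTMAX 100                                -- Maximum number of rejected records
    REJECTED DATA '/tmp/my_table_rejected.txt'   -- Rejected data file
    EXCEPTIONS '/tmp/my_table_exceptions.log'    -- Exception log file
    NO COMMIT;                                   -- Don't commit
    -- Selecting "Stop and rollback if any row is rejected" corresponds to ABORT ON ERROR
    -- instead of REJECTMAX.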

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at the Job level
as well as at each component level.

Global Variables

ACCEPTED_ROW_NUMBER

The number of rows loaded into the database. This is an After variable and it returns
an integer.

REJECTED_ROW_NUMBER

The number of rows rejected. This is an After variable and it returns an integer.

ERROR_MESSAGE

The error message generated by the component when an error occurs. This is an After
variable and it returns a string.

Usage

Usage rule

Talend Studio and the Vertica database together make for very fast and affordable data
warehouse and data mart applications. For more information about how to configure
Talend Studio to connect to Vertica, see Talend and HP Vertica Tips and Techniques.

You can use this component in either of the following two ways to write data
into Vertica.

  • It can be used as a standalone component in a subJob to
    write data into Vertica from a file generated by a tVerticaOutputBulk component.

  • You can link a tFileInputRaw component to it via a Row > Main connection to feed data into Vertica. In this case, the
    tFileInputRaw component should be in the
    Stream the file mode and there should be only
    one column of the Object type defined in its schema.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independent of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic
settings and context variables, see the Talend Studio User Guide.

