August 17, 2023

tGreenplumGPLoad – Docs for ESB 5.x



This component invokes Greenplum’s gpload utility to insert records into a Greenplum
database. This component can be used either in standalone mode, loading from an existing
data file, or connected to an input flow to load data from the connected component.

tGreenplumGPLoad properties

Component family



Function

tGreenplumGPLoad inserts data
into a Greenplum database table using Greenplum's gpload utility.


Purpose

This component is used to bulk load data into a Greenplum table
either from an existing data file, an input flow, or directly from a
data flow in streaming mode through a named-pipe.

Basic settings

Property type

Either Built-in or Repository.
Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.



Built-in: No property data stored centrally.



Repository: Select the repository
file in which the properties are stored. The fields that follow are
completed automatically using the data retrieved.



Host

Database server IP address.


Port

Listening port number of the DB server.


Database

Name of the Greenplum database.


Schema

Exact name of the schema.


Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.


Table

Name of the table into which the data is to be inserted.


Action on table

On the table defined, you can perform one of the following
operations before loading the data:

None: No operation is carried out.

Clear table: The table content is
deleted before the data is loaded.

Create table: The table does not
exist and gets created.

Create table if not exists: The
table is created if it does not exist.

Drop and create table: The table is
removed and created again.

Drop table if exists and create:
The table is removed if it already exists and created again.

Truncate table: The table content
is deleted. You do not have the possibility to rollback the operation.


Action on data

On the data of the table defined, you can perform:

Insert: Add new entries to the
table. If duplicates are found, the Job stops.

Update: Make changes to existing
entries.

Merge: Update existing entries or add new entries to the table.


It is necessary to specify at least one
column as a primary key on which the Update and Merge operations are based. You can do that
by clicking Edit Schema and
selecting the check box(es) next to the column(s) you want
to set as primary key(s). To define the Update/Merge options, select in the Match Column column the check
boxes corresponding to the column names that you want to use
as a base for the Update
and Merge operations, and
select in the Update Column
column the check boxes corresponding to the column names
that you want to update. To define the Update condition, type in the condition that
will be used to update the data.
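
As an illustration, the Match Column and Update Column choices described above correspond to gpload's control-file OUTPUT section, roughly as follows (table name, column names, and the condition are placeholders):

```yaml
OUTPUT:
  - TABLE: public.customers        # target table (placeholder name)
  - MODE: merge                    # insert new rows, update matching ones
  - MATCH_COLUMNS:                 # the "Match Column" check boxes
      - customer_id
  - UPDATE_COLUMNS:                # the "Update Column" check boxes
      - email
      - updated_at
  - UPDATE_CONDITION: "status <> 'archived'"   # the "Update condition" field
```

With MODE set to update, only matching rows are changed; with merge, non-matching input rows are inserted as well.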


Schema and Edit schema

A schema is a row description. It defines the number of fields to be processed and passed on
to the next component. The schema is either Built-In or
stored remotely in the Repository.

Since version 5.6, both the Built-In mode and the Repository mode are
available in any of the Talend solutions.



Built-In: You create and store the schema locally for this
component only. Related topic: see Talend Studio
User Guide.



Repository: You have already created the schema and
stored it in the Repository. You can reuse it in various projects and Job designs. Related
topic: see Talend Studio User Guide.


Click Edit schema to make changes to the schema. If the
current schema is of the Repository type, three options are available:

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this option
    to change the schema to Built-in for local changes.

  • Update repository connection: choose this option to change
    the schema stored in the repository and decide whether to propagate the changes to
    all the Jobs upon completion. If you just want to propagate the changes to the
    current Job, you can select No upon completion and
    choose this schema metadata again in the [Repository Content] window.

Data file

Full path to the data file to be used. If this component is used
in standalone mode, this is the name of an existing data file to be
loaded into the database. If this component is connected with an
input flow, this is the name of the file to be generated and written
with the incoming data to later be used with gpload to load into the
database. This field is hidden when the Use
named-pipe check box is selected.


Use named-pipe

Select this check box to use a named-pipe. This option is only
applicable when the component is connected with an input flow. When
this check box is selected, no data file is generated and the data
is transferred to gpload through a named-pipe. This option greatly
improves performance in both Linux and Windows.


This component in named-pipe mode uses a JNI interface to
create and write to a named-pipe on any Windows platform.
Therefore the path to the associated JNI DLL must be
configured inside the java library path. The component comes
with two DLLs, for 32-bit and 64-bit operating systems, which
are provided automatically in the Studio.

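
The named-pipe mechanism itself can be sketched with standard shell tools on Linux (the component does the equivalent internally, with gpload playing the reader's role; the pipe path and sample rows below are illustrative):

```shell
# A writer and a reader attached to the same FIFO: rows flow through the
# kernel buffer without ever being written to a regular file on disk.
pipe=$(mktemp -u)                       # pick an unused path for the pipe
mkfifo "$pipe"                          # create the named pipe
printf '1|alice\n2|bob\n' > "$pipe" &   # writer (the component's role), in background
wc -l < "$pipe"                         # reader (gpload's role) consumes the rows; prints 2
rm "$pipe"
```

Because the reader consumes rows as the writer produces them, the load starts immediately and no intermediate data file has to be created or cleaned up.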

Named-pipe name

Specify a name for the named-pipe to be used. Ensure that the name
entered is valid.


Die on error

This check box is selected by default. Clear the check box to skip
the row on error and complete the process for error-free rows. If
needed, you can retrieve the rows on error via a Row > Rejects link.

Advanced settings

Use existing control file (YAML formatted)

Select this check box to provide a control file to be used with
the gpload utility instead of specifying all the options explicitly
in the component. When this check box is selected, Data file and the other gpload-related
options no longer apply. Refer to Greenplum’s gpload manual for
details on creating a control file.
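
For reference, a minimal control file for a plain insert load might look like the following (database, user, host, file, and table names are placeholders; see the gpload manual for the full grammar):

```yaml
VERSION: 1.0.0.1
DATABASE: mydb
USER: gpadmin
HOST: mdw.example.com
PORT: 5432
GPLOAD:
  INPUT:
    - SOURCE:
        FILE:
          - /data/customers.dat
    - FORMAT: text
    - DELIMITER: '|'
    - ERROR_LIMIT: 25
  OUTPUT:
    - TABLE: public.customers
    - MODE: insert
```

The component passes such a file to the utility through the -f argument, as described under Control file below.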


Control file

Enter the path to the control file to be used, between double
quotation marks, or click […] and
browse to the control file. This option is passed on to the gpload
utility via the -f argument.


CSV mode

Select this check box to include CSV specific parameters such as
Escape char and Text enclosure.


Field separator

Character, string, or regular expression used to separate fields.


This is gpload’s delimiter argument. The
default value is | (pipe). To improve performance, use the default value.


Escape char

Character used to escape special characters in the rows.


Text enclosure

Character used to enclose text.


Header (skips the first row of data file)

Select this check box to skip the first row of the data file.


Additional options

Set the gpload arguments in the corresponding table. Click
[+] as many times as required
to add arguments to the table. Click the Parameter field and choose among the arguments from
the list. Then click the corresponding Value field and enter a value between quotation marks.



LOCAL_HOSTNAME: The host name or
IP address of the local machine on which gpload is running. If this
machine is configured with multiple network interface cards (NICs),
you can specify the host name or IP of each individual NIC to allow
network traffic to use all NICs simultaneously. By default, the
local machine’s primary host name or IP is used.



PORT (gpfdist port): The specific
port number that the gpfdist file distribution program should use.
You can also specify a PORT_RANGE
to select an available port from the specified range. If both
PORT and PORT_RANGE are defined, then PORT takes precedence. If neither PORT nor PORT_RANGE is defined, an available port between
8000 and 9000 is selected by default. If multiple host names are
declared in LOCAL_HOSTNAME, this
port number is used for all hosts. This configuration is useful if
you want to use all NICs to load the same file or set of files in a
given directory location.



PORT_RANGE: Can be used instead
of PORT (gpfdist port) to specify a
range of port numbers from which gpload can choose an available port
for this instance of the gpfdist file distribution program.



NULL_AS: The string that
represents a null value. The default is \N
(backslash-N) in TEXT mode, and an empty value with no quotation
marks in CSV mode. Any source data item that matches this string
will be considered a null value.



FORCE_NOT_NULL: In CSV mode,
processes each specified column as though it were quoted and hence
not a NULL value. For the default null string in CSV mode (nothing
between two delimiters), this causes missing values to be evaluated
as zero-length strings.



ERROR_LIMIT (2 or higher):
Enables single row error isolation mode for this load operation.
When enabled and the error limit count is not reached on any
Greenplum segment instance during input processing, all good rows
will be loaded and input rows that have format errors will be
discarded or logged to the table specified in ERROR_TABLE if available. When the error limit is
reached, input rows that have format errors will cause the load
operation to abort. Note that single row error isolation only
applies to data rows with format errors, for example, extra or
missing attributes, attributes of a wrong data type, or invalid
client encoding sequences. Constraint errors, such as primary key
violations, will still cause the load operation to abort if
encountered. When this option is not enabled, the load operation
will abort on the first error encountered.



ERROR_TABLE: When ERROR_LIMIT is declared, specifies an
error table where rows with formatting errors will be logged when
running in single row error isolation mode. You can then examine
this error table to see error rows that were not loaded (if any).

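
In control-file terms, these two options combine under the INPUT section as follows (the limit value and table name are illustrative):

```yaml
INPUT:
  - ERROR_LIMIT: 25                       # abort only after 25 malformed rows
  - ERROR_TABLE: public.err_customers     # malformed rows are logged here
```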

Log file

Browse to or enter the access path to the log file in your system.



Encoding

Define the encoding type manually in the field.


Specify gpload path

Select this check box to specify the full path to the gpload
executable. You must check this option if the gpload path is not
specified in the PATH environment variable.


Full path to gpload executable

Full path to the gpload executable on the machine in use. It is
advisable to specify the gpload path in the PATH environment
variable instead of selecting this option.


tStatCatcher Statistics

Select this check box to collect log data at the component level.

Global Variables 

NB_LINE: the number of rows processed. This is an After
variable and it returns an integer.

GPLOAD_OUTPUT: the output information returned when the gpload
utility is executed. This is an After variable and it returns a string.

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable functions only if the Die on error check box is
cleared, if the component has this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill up a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.

For further information about variables, see Talend Studio
User Guide.


This component is mainly used when no particular transformation is
required on the data to be loaded into the database.

This component can be used as a standalone component or as an output component.

The activity of this component can be logged using the log4j feature. For more information on this feature, see Talend Studio User Guide.

For more information on the log4j logging levels, see the Apache Log4j documentation.

Due to license incompatibility, one or more JARs required to use this component are not
provided. You can install the missing JARs for this particular component by clicking the
Install button on the Component tab view. You can also find out and add all missing JARs easily on
the Modules tab in the Integration perspective
of your studio. For details, see the section describing how to
configure the Studio in the Talend Installation and Upgrade Guide.

Related scenario

For a related use case, see Scenario: Inserting data in MySQL database.
