
tMysqlOutput

Writes, updates, modifies, or deletes entries in a database.

tMysqlOutput executes the action defined on the table
and/or on the data contained in the table, based on the flow incoming from the preceding
component in the Job.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tMysqlOutput Standard properties

These properties are used to configure tMysqlOutput running in the Standard Job framework.

The Standard
tMysqlOutput component belongs to the Databases family.

The component in this framework is available in all Talend products.

Note: This component is a specific version of a dynamic database
connector. The properties related to database settings vary depending on your database
type selection. For more information about dynamic database connectors, see Dynamic database components.

Basic settings

Database

Select a type of database from the list and click
Apply.

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

DB Version

Select the MySQL version you are using.

tMysqlOutput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains a parent Job and a child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example about how to share a database connection across Job levels, see the Talend Studio User Guide.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Action on table

On the table defined, you can perform one of the following
operations:

Default: No operation is carried
out.

Drop and create a table: The table is
removed and created again.

Create a table: The table does not
exist and gets created.

Create a table if not exists: The table
is created if it does not exist.

Drop a table if exists and create: The
table is removed if it already exists and created again.

Clear a table: The table content is
deleted.

Truncate table: The table content is
quickly deleted. However, you will not be able to rollback the
operation.
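
The practical difference between Clear a table and Truncate table is the SQL issued: DELETE is logged row by row and can be rolled back, while TRUNCATE is DDL in MySQL and commits implicitly. A minimal JDBC sketch of the two statements (connection details and table name are placeholders, not part of this documentation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ClearVsTruncate {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pass");
             Statement stmt = conn.createStatement()) {
            // "Clear a table": row-by-row DELETE; transactional, can be rolled back.
            stmt.executeUpdate("DELETE FROM my_table");
            // "Truncate table": DDL; fast, but commits implicitly in MySQL,
            // so it cannot be rolled back.
            stmt.executeUpdate("TRUNCATE TABLE my_table");
        }
    }
}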

Action on data

On the data of the table defined, you can perform:

Insert: Add new entries to the table.
If duplicates are found, the Job stops.

Update: Make changes to existing
entries.

Insert or update: Insert a new record. If
a record with the given reference already exists, an update is made.

Update or insert: Update the record with the
given reference. If the record does not exist, a new record is inserted.

Delete: Remove entries corresponding to
the input flow.

Replace: Add new entries to the table.
If an old row in the table has the same value as a new row for a PRIMARY
KEY or a UNIQUE index, the old row is deleted before the new row is
inserted.

Insert or update on duplicate key or unique
index
: Add entries if the inserted value does not exist
or update entries if the inserted value already exists and there is a
risk of violating a unique index or primary key.

Insert Ignore: Add only new rows to
prevent duplicate key errors.

Warning:

You must specify at least one
column as a primary key on which the Update and Delete operations are based. You can do that by
clicking Edit Schema and
selecting the check box(es) next to the column(s) you want to set as
primary key(s). For an advanced use, click the Advanced settings view where you can
simultaneously define primary keys for the update and delete
operations. To do that: Select the Use
field options
check box and then in the Key in update column, select the check
boxes next to the column name on which you want to base the update
operation. Do the same in the Key in
delete
column for the deletion operation.

Note:

The dynamic schema feature can be used in the
following modes: Insert;
Update; Insert or update; Update or insert; Delete.
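
For reference, the insert-type actions listed above correspond to distinct MySQL statements. A hedged sketch for a hypothetical person(id, name) table with id as the primary key (the exact SQL the component generates may differ):

public class ActionOnDataSql {
    // Insert: plain INSERT; a duplicate key makes the statement fail.
    static final String INSERT =
        "INSERT INTO person (id, name) VALUES (?, ?)";
    // Replace: the conflicting old row is deleted before the new row is inserted.
    static final String REPLACE =
        "REPLACE INTO person (id, name) VALUES (?, ?)";
    // Insert or update on duplicate key or unique index: single-statement upsert.
    static final String UPSERT =
        "INSERT INTO person (id, name) VALUES (?, ?) "
      + "ON DUPLICATE KEY UPDATE name = VALUES(name)";
    // Insert Ignore: rows that would violate a key are silently skipped.
    static final String INSERT_IGNORE =
        "INSERT IGNORE INTO person (id, name) VALUES (?, ?)";
}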

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

This component offers the advantage of the dynamic schema feature. This allows you to retrieve unknown columns from source files or to copy batches of columns from a source without mapping each column individually. For further information about dynamic schemas, see the Talend Studio User Guide.

This dynamic schema feature is designed for retrieving unknown columns of a table and is recommended for this purpose only; it is not recommended for creating tables.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Die on error

This check box is selected by default. Clear the check box to skip the
row in error and complete the process for error-free rows. If needed,
you can retrieve the rows in error via a Row > Rejects
link.

Specify a data source alias

Select this check box and specify the alias of a data source created on the Talend Runtime side to use the shared connection pool defined in the data source configuration. This option works only when you deploy and run your Job in Talend Runtime.

Warning:

If you use the component’s own DB configuration, your data source connection will be
closed at the end of the component. To prevent this from happening, use a shared DB
connection with the data source alias specified.

This check box is not available when the Use an existing
connection
check box is selected.

Advanced settings

Use alternate
schema

Select this option to use a schema other than
the one specified by the component that establishes the database connection (that is,
the component selected from the Component list
drop-down list in Basic settings view). After
selecting this option, provide the name of the desired schema in the Schema field.

This option is available when Use an
existing connection
is selected in Basic
settings
view.

Additional JDBC parameters

Specify additional connection properties for the DB connection you are
creating. This option is not available if you have selected the
Use an existing connection check
box in the Basic settings.

Note:

You can press Ctrl+Space to
access a list of predefined global variables.

Extend Insert

Select this check box to carry out a bulk insert of a defined set of
lines instead of inserting lines one by one. The gain in system
performance is considerable.

Number of rows per insert: enter the
number of rows to be inserted per operation. Note that the higher the
value specified, the lower the performance due to the increase in
memory demands.

Note:

This option is not compatible with the Reject link. You should therefore clear the check
box if you are using a Row >
Rejects
link with this component.

Warning:

If you are using this component with
tMysqlLastInsertID, ensure
that the Extend Insert check
box in Advanced Settings is not
selected. Extend Insert allows
for batch loading; however, if the check box is selected, only
the ID of the last line of the last batch will be returned.
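
To illustrate what Extend Insert changes, the sketch below contrasts row-by-row statements with a single multi-row statement (hypothetical table, Number of rows per insert set to 3):

public class ExtendInsertSketch {
    // Extend Insert cleared: one round trip to the server per row.
    static final String ROW_BY_ROW =
        "INSERT INTO person (id, name) VALUES (?, ?)";
    // Extend Insert selected, Number of rows per insert = 3:
    // three rows travel in a single statement, reducing round trips.
    static final String EXTENDED =
        "INSERT INTO person (id, name) VALUES (?, ?), (?, ?), (?, ?)";
}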

Use Batch

Select this check box to activate the batch mode for data processing.

Note:

This check box is available only when you have selected the
Update or the Delete option in the Action on data field.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch
check box is selected.

Commit every

Number of rows to be included in the batch before it is committed to
the DB. This option ensures transaction quality (but not rollback) and,
above all, a higher performance level.
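
The relationship between Use Batch, Batch Size and Commit every can be sketched in plain JDBC. This is a simplified illustration with placeholder connection details, not the code the component actually generates:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchCommitSketch {
    public static void main(String[] args) throws Exception {
        final int batchSize = 100;    // Batch Size
        final int commitEvery = 1000; // Commit every
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE person SET name = ? WHERE id = ?")) {
                for (int i = 1; i <= 10_000; i++) {
                    ps.setString(1, "name_" + i);
                    ps.setInt(2, i);
                    ps.addBatch();
                    if (i % batchSize == 0) ps.executeBatch();  // flush one batch
                    if (i % commitEvery == 0) conn.commit();    // end one transaction
                }
                ps.executeBatch(); // flush the remaining rows
                conn.commit();
            }
        }
    }
}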

Additional Columns

This option is not available if you have just created the DB table
(even if you delete it beforehand). This option allows you to call SQL
functions to perform actions on columns, provided that these are not
insert, update or delete actions, or actions that require
pre-processing.

 

Name: Type in the name of the schema
column to be altered or inserted.

 

SQL expression: Type in the SQL
statement to be executed in order to alter or insert the data in the
corresponding column.

 

Position: Select Before, Replace or After,
depending on the action to be performed on the reference column.

 

Reference column: Type in a reference
column that tMysqlOutput can use to
locate or replace the new column, or the column to be modified.
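
Conceptually, an Additional Column set to Replace swaps a bound parameter for a SQL expression in the generated statement. A hedged sketch, reusing the adddate example from the scenario later in this page (table and column names are illustrative):

public class AdditionalColumnSketch {
    // Without Additional Columns: every schema column is bound as a parameter.
    static final String PLAIN =
        "INSERT INTO dates (name, random_date, random_date1) VALUES (?, ?, ?)";
    // With Name = One_Month_Later, SQL expression =
    // "adddate(Random_date, interval 1 month)", Position = Replace,
    // Reference column = random_date1:
    static final String WITH_ADDITIONAL_COLUMN =
        "INSERT INTO dates (name, random_date, One_Month_Later) "
      + "VALUES (?, ?, adddate(Random_date, interval 1 month))";
}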

Use field options

Select this check box to customize a request, particularly if multiple
actions are being carried out on the data.

Use Hint Options

Select this check box to activate the hint configuration area which
helps you optimize a query’s execution. In this area, parameters
are:

HINT: specify the hint you need,
using the syntax /*+ */.

POSITION: specify where you put
the hint in a SQL statement.

SQL STMT: select the SQL statement
you need to use.
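
As an illustration, with HINT set to a MySQL optimizer hint and POSITION placing it right after the keyword of the selected SQL STMT, the resulting query might look like the following (the specific hint is a hypothetical choice, not prescribed by this documentation):

public class HintSketch {
    // HINT: "/*+ MAX_EXECUTION_TIME(1000) */" (a MySQL 5.7+ optimizer hint)
    // POSITION: immediately after the SELECT keyword
    // SQL STMT: SELECT
    static final String HINTED_QUERY =
        "SELECT /*+ MAX_EXECUTION_TIME(1000) */ id, name FROM person";
}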

Debug query mode

Select this check box to display each step during the processing of entries
in a database.

Use duplicate key update mode insert

Updates the values of the specified columns in the event of duplicate
primary keys:

Column: Between double quotation marks,
enter the name of the column to be updated.

Value: Enter the action you want to
carry out on the column.

Note:

To use this option, you must first select the Insert mode in the Action on data list found in the Basic settings view.
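
In SQL terms, each Column/Value pair populates the ON DUPLICATE KEY UPDATE clause of the generated INSERT. A hedged sketch with hypothetical pairs:

public class DuplicateKeyUpdateSketch {
    // Column "name" with Value "VALUES(name)", and Column "hits"
    // with Value "hits + 1" (both hypothetical), produce:
    static final String SQL =
        "INSERT INTO person (id, name, hits) VALUES (?, ?, ?) "
      + "ON DUPLICATE KEY UPDATE name = VALUES(name), hits = hits + 1";
}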

tStatCatcher Statistics

Select this check box to collect log data at the component
level.

Enable parallel execution

Select this check box to perform high-speed data processing by treating
multiple data flows simultaneously. Note that this feature depends on the ability of
the database or the application to handle multiple inserts in parallel, as well as on the
number of CPUs allocated. In the Number of parallel executions
field, either:

  • Enter the number of parallel executions desired.
  • Press Ctrl + Space and select the appropriate context variable from the list. For further information, see the Talend Studio User Guide.


Warning:

  • The Action on
    table
    field is not available with the
    parallelization function. Therefore, you must use a tCreateTable component if you
    want to create a table.
  • When parallel execution is enabled, it is not
    possible to use global variables to retrieve return values in a
    subjob.

Global Variables

NB_LINE: the number of rows read by an input component or
transferred to an output component. This is an After variable and it returns an
integer.

NB_LINE_UPDATED: the number of rows updated. This is an
After variable and it returns an integer.

NB_LINE_INSERTED: the number of rows inserted. This is an
After variable and it returns an integer.

NB_LINE_DELETED: the number of rows deleted. This is an
After variable and it returns an integer.

NB_LINE_REJECTED: the number of rows rejected. This is an
After variable and it returns an integer.

QUERY: the query statement processed. This is an After
variable and it returns a string.

To fill in a field or expression with a variable, press Ctrl +
Space
to access the variable list and choose the variable to use from it.

For further information about variables, see the Talend Studio User Guide.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.
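
For example, you can read these variables in a tJava component triggered after the subjob. A minimal sketch, assuming the output component is labeled tMysqlOutput_1 (the actual key prefix depends on your component's name):

// Body of a tJava component linked with an OnSubjobOk trigger.
// globalMap is provided by the Talend-generated code.
Integer inserted = (Integer) globalMap.get("tMysqlOutput_1_NB_LINE_INSERTED");
Integer rejected = (Integer) globalMap.get("tMysqlOutput_1_NB_LINE_REJECTED");
String lastQuery = (String) globalMap.get("tMysqlOutput_1_QUERY");
System.out.println("Inserted: " + inserted + ", rejected: " + rejected);
System.out.println("Query: " + lastQuery);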

Usage

Usage rule

This component offers the flexibility of the DB query and
covers all possible SQL queries.

This component must be used as an output component. It allows you to
carry out actions on a table or on the data of a table in a MySQL
database. It also allows you to create a reject flow using a Row > Rejects link to filter data in
error. For an example of tMysqlOutput
in use, see Retrieving data in error with a Reject link.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independently of Talend Studio.

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide.

Inserting a column and altering data using tMysqlOutput

This Java scenario is a three-component Job that creates random data using a
tRowGenerator, duplicates a column to be altered
using the tMap component, and finally alters the
data to be inserted based on an SQL expression using the tMysqlOutput component.

  • Drop the following components from the Palette onto the design workspace: tRowGenerator, tMap and tMysqlOutput.

  • Connect tRowGenerator, tMap, and tMysqlOutput using
    the Row Main link.

tMysqlOutput_2.png
  • In the design workspace, select tRowGenerator to display its Basic
    settings
    view.

tMysqlOutput_3.png
  • Click the Edit schema three-dot button to
    define the data to pass on to the tMap
    component, two columns in this scenario, name and
    random_date.

tMysqlOutput_4.png
  • Click OK to close the dialog box.

  • Click the RowGenerator Editor three-dot
    button to open the editor and define the data to be generated.

tMysqlOutput_5.png
  • Click in the corresponding Functions fields and select a function for each of the two columns, getFirstName for the first column and getRandomDate for the second column.

  • In the Number of Rows for Rowgenerator field, enter 10 to generate ten first name rows and click OK to close the editor.

  • Double-click the tMap component to open the
    Map editor. The Map editor opens displaying the input metadata of the tRowGenerator component.

tMysqlOutput_6.png
  • In the Schema editor panel of the Map
    editor, click the plus button of the output table to add two rows and define the
    first as random_date and the second as
    random_date1.

tMysqlOutput_7.png

In this scenario, we want to duplicate the random_date column and
adapt the schema in order to alter the data in the output component.

  • In the Map editor, drag the random_date row from the
    input table to the random_date and random_date1
    rows in the output table.

tMysqlOutput_8.png
  • Click OK to close the editor.

  • In the design workspace, double-click the tMysqlOutput component to display its Basic settings view and set its parameters.

tMysqlOutput_9.png
  • Set Property Type to Repository and then click the three-dot button to open the
    Repository content dialog box and select
    the correct DB connection. The connection details display automatically in the
    corresponding fields.

    Note:

    If you have not stored the DB connection details in the Metadata entry in the Repository, select Built-in
    on the property type list and set the connection details manually.

  • Click the three-dot button next to the Table
    field and select the table to be altered, Dates in this
    scenario.

  • On the Action on table list, select
    Drop table if exists and create. On the
    Action on data list, select
    Insert.

  • If needed, click Sync columns to synchronize
    with the columns coming from the tMap
    component.

  • Click the Advanced settings tab to display
    the corresponding view and set the advanced parameters.

tMysqlOutput_10.png
  • In the Additional Columns area, set the
    alteration to be performed on columns.

    In this scenario, the One_month_later column replaces
    random_date_1. Also, the data itself gets altered using
    an SQL expression that adds one month to the randomly picked date of the
    random_date_1 column. For example, 2007-08-12 becomes
    2007-09-12.

    - Enter One_Month_Later in the Name cell.

    - In the SQL expression cell, enter the relevant addition script to be performed, "adddate(Random_date, interval 1 month)" in this scenario.

    - Select Replace on the Position list.

    - Enter Random_date1 on the Reference column list.

Note:

For this Job, we duplicated the random_date_1 column in the DB
table before replacing one instance of it with the
One_Month_Later column. The aim of this workaround was to
be able to view the modification upfront.

  • Save your Job and press F6 to execute it.

The new One_month_later column replaces the
random_date1 column in the DB table and adds one month to each
of the randomly generated dates.

tMysqlOutput_11.png

Updating data using tMysqlOutput

This Java scenario describes a two-component Job that updates data in a MySQL table
according to that in a delimited file.

  • Drop tFileInputDelimited and tMysqlOutput from the Palette onto the design workspace.

  • Connect the two components together using a Row
    Main
    link.

tMysqlOutput_12.png
  • Double-click tFileInputDelimited to display
    its Basic settings view and define the
    component properties.

  • From the Property Type list, select Repository if you have already stored the metadata of
    the delimited file in the Metadata node in the
    Repository tree view. Otherwise, select
    Built-In to define manually the metadata of
    the delimited file.

    For more information about storing metadata, see the Talend Studio User Guide.

tMysqlOutput_13.png
  • In the File Name field, click the three-dot
    button and browse to the source delimited file that contains the modifications
    to propagate in the MySQL table.

    In this example, we use the customer_update file that
    holds four columns: id, CustomerName,
    CustomerAddress and idState. Some
    of the data in these four columns is different from that in the MySQL
    table.

tMysqlOutput_14.png
  • Define the row and field separators used in the source file in the
    corresponding fields.

  • If needed, set Header, Footer and Limit.

    In this example, Header is set to 1 since the
    first row holds the names of columns, therefore it should be ignored. Also, the
    number of processed lines is limited to 2000.

  • Click
    the three-dot button next to Edit Schema to
    open a dialog box where you can describe the data structure of the source
    delimited file that you want to pass to the component that follows.

tMysqlOutput_15.png
  • Select the Key check box(es) next to the
    column name(s) you want to define as key column(s).

Note:

It is necessary to define at least one column as a key column for the Job to be
executed correctly. Otherwise, the Job is automatically interrupted and an error
message displays on the console.

  • In the design workspace, double-click tMysqlOutput to open its Basic
    settings
    view where you can define its properties.

tMysqlOutput_16.png
  • Click Sync columns to retrieve the schema of
    the preceding component. If needed, click the three-dot button next to Edit schema to open a dialog box where you can check
    the retrieved schema.

  • From the Property Type list, select Repository if you have already stored the connection
    metadata in the Metadata node in the Repository tree view. Otherwise, select Built-In to define manually the connection
    information.

    For more information about storing metadata, see the Talend Studio User Guide.

  • Fill in the database connection information in the corresponding
    fields.

  • In the Table field, enter the name of the
    table to update.

  • From the Action on table list, select the
    operation you want to perform, Default in this
    example since the table already exists.

  • From the Action on data list, select the
    operation you want to perform on the data, Update in this example.

  • Save your Job and press F6 to execute it.

tMysqlOutput_17.png

Using your DB browser, you can verify that the MySQL table, customers,
has been modified according to the delimited file.

In the above example, the database table still has the four columns
id, CustomerName,
CustomerAddress and idState, but certain
fields have been modified according to the data in the delimited file used.
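
Under the hood, with Update selected and id defined as the key column, the statements issued are along these lines (a hedged sketch; the actual generated SQL may differ):

public class UpdateByKeySketch {
    // id is the schema key column; the other columns are updated from the flow.
    static final String SQL =
        "UPDATE customers SET CustomerName = ?, CustomerAddress = ?, idState = ? "
      + "WHERE id = ?";
}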

Retrieving data in error with a Reject link

This scenario describes a four-component Job that carries out migration
from a customer file to a MySQL database table and redirects data in error towards a CSV
file using a Reject link.

tMysqlOutput_18.png
  • In the Repository, select
    the customer file metadata that you want to migrate and drop it onto the
    workspace. In the Components dialog box,
    select tFileInputDelimited and click
    OK. The component properties will be
    filled in automatically.

  • If you have not stored the information about your customer
    file under the Metadata node in the
    Repository, drop a tFileInputDelimited component from the
    File > Input family in the Palette,
    and fill in its properties manually in the Component tab.

  • From the Palette, drop a
    tMap from the Processing family onto the workspace.

  • In the Repository, expand
    the Metadata node, followed by the
    Db Connections node and select the
    connection required to migrate your data to the appropriate database. Drop it
    onto the workspace. In the Components
    dialog box, select tMysqlOutput and click
    OK. The database connection
    properties will be automatically filled in.

  • If you have not stored the database connection details under the
    Db Connections node in the Repository, drop a tMysqlOutput from the Databases family in the Palette and fill in its properties manually in the Component tab.

For more information, see the Talend Studio User Guide.

  • From the Palette, select
    a tFileOutputDelimited from the
    File > Output family, and drop it onto the workspace.

  • Link the customers
    component to the tMap component, and
    tMap to the Localhost component, with a Row
    Main
    link. Name this second link out.

  • Link the Localhost to the
    tFileOutputDelimited using a
    Row > Reject link.

  • Double-click the customers component to display the Component view.

tMysqlOutput_19.png
  • In the Property Type
    list, select Repository and
    click the […] button in order to
    select the metadata containing the connection to your file. You can also select
    the Built-in mode and fill in the fields
    manually.

  • Click the […] button
    next to the File Name field, and fill in
    the path and the name of the file you want to use.

  • In the Row and Field Separator fields, type in between inverted
    commas the row and field separator used in the file.

  • In the Header, Footer and Limit fields, type in the number of headers and footers to
    ignore, and the number of rows to which processing should be limited.

  • In the Schema list, select
    Repository and click the […] button in order to select the schema of
    your file, if it is stored under the Metadata node in the Repository. You can also click the […] button next to the Edit
    schema
    field, and set the schema manually.

The schema is as follows:

tMysqlOutput_20.png
  • Double-click the tMap
    component to open its editor.

tMysqlOutput_21.png
  • Select the id, CustomerName, CustomerAddress, idState, id2, RegTime and RegisterTime columns on the table on
    the left and drop them on the out table,
    on the right.

tMysqlOutput_22.png
  • In the Schema editor
    area, at the bottom of the tMap editor,
    in the right table, change the length of the CustomerName column to 28 to create an
    error. Thus, any data for which the length is greater than 28 will create
    errors, retrieved with the Reject
    link.

  • Click OK.

  • In the workspace, double-click the output Localhost component to display its Component view.

tMysqlOutput_23.png
  • In the Property Type list,
    select Repository and click the
    […] button to select the connection
    to the database metadata. The connection details will be automatically filled
    in. You can also select the Built-in mode
    and set the fields manually.

  • In the Table field, type
    in the name of the table to be created. In this scenario, we call it customers_data.

  • In the Action on table
    list, select the Create table
    option.

  • Click the Sync columns
    button to retrieve the schema from the previous component.

  • Make sure the Die on
    error
    check box isn’t selected, so that the Job can be executed
    despite the error you just created.

  • Click the Advanced
    settings
    tab of the Component view to set the advanced parameters of the
    component.

tMysqlOutput_24.png
  • Clear the Extend
    Insert
    check box, which enables you to insert rows in batch,
    because this option is not compatible with the Reject link.

  • Double-click the tFileOutputDelimited
    component to set its properties in the Component view.

tMysqlOutput_25.png
  • Click the […] button
    next to the File Name field to fill in
    the path and name of the output file.

  • Click the Sync columns
    button to retrieve the schema of the previous component.

  • Save your Job and press F6 to execute it.

tMysqlOutput_26.png

The data in error, as well as the type of error encountered, are sent
to the delimited file. Here, we have: Data truncation.
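
The Rejects link carries the original columns plus two extra columns, errorCode and errorMessage. Instead of writing them to a file, you could inspect them in a tJavaRow placed on the Rejects link; a minimal sketch (input_row is provided by Talend, and the id column comes from the schema above):

// Body of a tJavaRow component placed on the Row > Rejects link.
System.out.println("Rejected customer " + input_row.id
        + ": [" + input_row.errorCode + "] " + input_row.errorMessage);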

Writing dynamic columns from a source file to a database

This scenario applies only to subscription-based Talend products.

In this scenario, MySQL is used for demonstration purposes. You will read
dynamic columns from a source file, map them and then write them to a table in a MySQL database. By defining a dynamic column alongside known
column names, we can retrieve all of the columns from the source file, including the
unknown columns.

  • Drop a tFileInputDelimited, a
    tMap and a tMysqlOutput component onto the workspace.
tMysqlOutput_27.png
  • Link tFileInputDelimited to
    tMap using a Row
    > Main
    connection.
  • Link tMap to tMysqlOutput using a Row > New Output (Main) connection.
  • Double-click tFileInputDelimited to open its Basic
    Settings
    view in the Component
    tab.
tMysqlOutput_28.png
Warning:
The dynamic schema feature is only supported in Built-In mode.
  • Select Built-In from the
    Property Type list.
  • Click the […] button next
    to the File name/Stream field and browse to
    the input file.
  • Enter the characters you want to use as separators next to the
    Row Separator and Field Separator fields.
  • Click Edit Schema to define
    the source file schema.

    The Edit Schema dialog box
    opens.

tMysqlOutput_29.png
  • Add as many rows as required or delete rows using the tMysqlOutput_30.png and tMysqlOutput_31.png
    buttons.
  • Modify the order of the columns using the tMysqlOutput_32.png and
    tMysqlOutput_33.png
    buttons.
  • Under Column, enter the names
    of each known column on separate rows.
  • In the last row, under Column, enter a name for the dynamic column.
  • Under Type, click the field
    to define the type of data in the corresponding column.

    Click the arrow to select the correct data type.

Warning:
Under Type, the dynamic column
type must be set as Dynamic.
Warning:
The dynamic column must be defined in the last row of the schema.
  • Click OK to close the dialog
    box when you have finished defining the source schema.
  • Click tMap to open its
    Basic Settings view in the Component tab.
tMysqlOutput_34.png
  • Click […] next to
    Map Editor to map the columns from the
    source file.
tMysqlOutput_35.png
  • On the toolbar on top of the Output
    Panel
    on the top right of the window, click the tMysqlOutput_30.png button.

    The Add an Output schema
    dialog box appears.

tMysqlOutput_37.png
  • Next to New output, enter a
    name for the output schema.
  • Click OK to close the dialog
    box.
  • Using the Ctrl + click technique, highlight all of the column names in the input schema on the left and drop them onto the output schema.

    The columns dropped on the output columns retain their original
    values and they are automatically mapped on a one-to-one basis.

tMysqlOutput_38.png
  • In the output schema, click the relevant row under Expression if you want to use the Expression Builder to set advanced parameters for the
    corresponding column in the output.
  • Click the […] button which
    appears to open the Expression Builder and set
    the parameters as required.

For further information about using the Expression Builder, see the Talend Studio User Guide.

Warning:
The dynamic column must be mapped on a one-to-one basis and cannot
undergo any transformations. It cannot be used in a filter expression or in a
variables section. It cannot be renamed in the output table and cannot be used as a
join condition.
  • Click OK to close the
    Map Editor.
  • Double click tMysqlOutput to
    set its Basic Settings in the Component tab.
tMysqlOutput_39.png
  • Select Built-in as the
    Property Type.
  • Select the DB Version from
    the corresponding list.
  • Next to Host, enter the
    database server IP address.
  • Next to Port, enter the
    listening port number of the database server.
  • Enter your authentication data in the Username and Password
    fields.
  • Next to Action on table,
    select the required action.
  • Next to Action on data,
    select the required action.
  • Set the Schema type
    as Built-in and click Edit schema to modify the schema if required.
  • Press F6 to run the Job.

    The table is written to the MySQL database along with the data and
    the column names of the previously unknown columns:

tMysqlOutput_40.png
Note: The Job can also be run in the Traces
Debug
mode, which allows you to view the rows as they are written to the
output file, in the workspace.

For further information about defining and mapping dynamic schemas, see the Talend Studio User Guide.

For an example of how to write dynamic columns to an output file, see Writing dynamic columns from a database to an output file.

tMysqlOutput MapReduce properties (deprecated)

These properties are used to configure tMysqlOutput running in the MapReduce Job framework.

The MapReduce
tMysqlOutput component belongs to the Databases family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

The MapReduce framework is deprecated from Talend 7.3 onwards. Use Talend Jobs for Apache Spark to accomplish your integration tasks.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

DB Version

Select the version of the database to be used.

tMysqlOutput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Die on error

This check box is selected by default. Clear the check box to skip the
row in error and complete the process for error-free rows. If needed,
you can retrieve the rows in error via a Row > Rejects
link.

Usage

Usage rule

In a Talend Map/Reduce Job, it is used as an end component and requires
a transformation component as input link. The other components used along with it must be
Map/Reduce components, too. They generate native Map/Reduce code that can be executed
directly in Hadoop.

Note that in this documentation, unless otherwise
explicitly stated, a scenario presents only Standard Jobs,
that is to say traditional Talend data integration Jobs, and non-Map/Reduce Jobs.

Hadoop Connection

You need to use the Hadoop Configuration tab in the
Run view to define the connection to a given Hadoop
distribution for the whole Job.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Map/Reduce version of this component yet.

tMysqlOutput properties for Apache Spark Batch

These properties are used to configure tMysqlOutput running in the Spark Batch Job framework.

The Spark Batch
tMysqlOutput component belongs to the Databases family.

This component can also be used to write data to an RDS Aurora or an RDS MySQL
database.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

DB Version

Select the MySQL version you are using.

When the database to be used is RDS Aurora, you need to select Mysql 5.

tMysqlOutput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the database
connection you are creating.

Use Batch per partition

Select this check box to activate the batch mode for data processing.

Note:

This check box is available only when you have selected the
Update or the Delete option in the Action on data field.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch per partition
check box is selected.

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control
the number of connections that stay open simultaneously. The default values given to the
following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open connections at the same
    time.

  • Max waiting time (ms): enter the maximum amount of time
    the connection pool waits before returning a response to a demand for a connection.
    By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The
following fields are displayed once you have selected it.

  • Time between two eviction runs: enter the time interval
    (in milliseconds) at the end of which the component checks the status of the connections and
    destroys the idle ones.

  • Min idle time for a connection to be eligible to
    eviction
    : enter the time interval (in milliseconds) at the end of which the idle
    connections are destroyed.

  • Soft min idle time for a connection to be eligible to
    eviction
    : this parameter works the same way as Min idle
    time for a connection to be eligible to eviction
    but it keeps the minimum number
    of idle connections, the number you define in the Min number of idle
    connections
    field.
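
These parameters have the same semantics as those of a standard JDBC connection pool. As an analogy only (the pooling library the component uses internally is not specified here), the following Apache Commons DBCP2 sketch maps each field to a setter; connection details are placeholders:

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolSettingsSketch {
    public static BasicDataSource configure() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/test"); // placeholder
        ds.setUsername("user");
        ds.setPassword("pass");
        ds.setMaxTotal(8);       // Max total number of connections (-1 = unlimited)
        ds.setMaxWaitMillis(-1); // Max waiting time (ms); -1 = infinite
        ds.setMinIdle(0);        // Min number of idle connections
        ds.setMaxIdle(8);        // Max number of idle connections
        // Evict connections:
        ds.setTimeBetweenEvictionRunsMillis(60_000);  // Time between two eviction runs
        ds.setMinEvictableIdleTimeMillis(120_000);    // Min idle time before eviction
        ds.setSoftMinEvictableIdleTimeMillis(60_000); // Soft min idle time; respects Min idle
        return ds;
    }
}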

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tMysqlConfiguration component
present in the same Job to connect to MySQL. You need to select the Use an existing connection check box and then select the tMysqlConfiguration component to be used.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Batch Job, see Writing and reading data from MongoDB using a Spark Batch Job.

tMysqlOutput properties for Apache Spark Streaming

These properties are used to configure tMysqlOutput running in the Spark Streaming Job framework.

The Spark Streaming
tMysqlOutput component belongs to the Databases family.

This component can also be used to write data to an RDS Aurora or an RDS MySQL
database.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Property type

Either Built-In or Repository.

Built-In: No property data stored centrally.

Repository: Select the repository file where the
properties are stored.

DB Version

Select the version of the database to be used.

When the database to be used is RDS Aurora, you need to select Mysql 5.

tMysqlOutput_1.png

Click this icon to open a database connection wizard and store the
database connection parameters you set in the component Basic settings view.

For more information about setting up and storing database connection parameters, see the Talend Studio User Guide.

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Host

Database server IP address.

Port

Listening port number of DB server.

Database

Name of the database.

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Table

Name of the table to be written. Note that only one table can be
written at a time.

Action on table

On the table defined, you can perform one of the following
operations:

Default: No operation is carried
out.

Drop and create a table: The table is
removed and created again.

Create a table: The table does not
exist and gets created.

Create a table if not exists: The table
is created if it does not exist.

Drop a table if exists and create: The
table is removed if it already exists and created again.

Clear a table: The table content is
deleted.

Truncate table: The table content is
quickly deleted. However, you will not be able to rollback the
operation.

Schema and Edit
schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

 

Built-In: You create and store the schema locally for this component
only.

 

Repository: You have already created the schema and stored it in the
Repository. You can reuse it in various projects and Job designs.

When the schema to be reused has default values that are
integers or functions, ensure that these default values are not enclosed within
quotation marks. If they are, you must remove the quotation marks manually.

You can find more details about how to
verify default values in retrieved schema in Talend Help Center (https://help.talend.com).

 

Click Edit
schema
to make changes to the schema. If the current schema is of the Repository type, three options are available:

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

Die on error

This check box is selected by default. Clear the check box to skip the
row in error and complete the process for error-free rows. If needed,
you can retrieve the rows in error via a Row > Rejects
link.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the DB connection you are
creating. This option is not available if you have selected the
Use an existing connection check
box in the Basic settings.

Note:

You can press Ctrl+Space to
access a list of predefined global variables.

Use Batch

Select this check box to activate the batch mode for data processing.

Note:

This check box is available only when you have selected the
Update or the Delete option in the Action on data field.

Batch Size

Specify the number of records to be processed in each batch.

This field appears only when the Use Batch
check box is selected.

Connection pool

In this area, you configure, for each Spark executor, the connection pool used to control
the number of connections that stay open simultaneously. The default values given to the
following connection pool parameters are good enough for most use cases.

  • Max total number of connections: enter the maximum number
    of connections (idle or active) that are allowed to stay open simultaneously.

    The default number is 8. If you enter -1, you allow an unlimited number of open connections at the same
    time.

  • Max waiting time (ms): enter the maximum amount of time
    the connection pool waits before returning a response to a demand for a connection.
    By default, it is -1, that is to say, infinite.

  • Min number of idle connections: enter the minimum number
    of idle connections (connections not used) maintained in the connection pool.

  • Max number of idle connections: enter the maximum number
    of idle connections (connections not used) maintained in the connection pool.

Evict connections

Select this check box to define criteria to destroy connections in the connection pool. The
following fields are displayed once you have selected it.

  • Time between two eviction runs: enter the time interval
    (in milliseconds) at the end of which the component checks the status of the connections and
    destroys the idle ones.

  • Min idle time for a connection to be eligible to
    eviction
    : enter the time interval (in milliseconds) at the end of which the idle
    connections are destroyed.

  • Soft min idle time for a connection to be eligible to
    eviction
    : this parameter works the same way as Min idle
    time for a connection to be eligible to eviction
    but it keeps the minimum number
    of idle connections, the number you define in the Min number of idle
    connections
    field.

Usage

Usage rule

This component is used as an end component and requires an input link.

This component should use a tMysqlConfiguration component
present in the same Job to connect to MySQL. You need to select the Use an existing connection check box and then select the tMysqlConfiguration component to be used.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premise
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

For a scenario about how to use the same type of component in a Spark Streaming Job, see
Reading and writing data in MongoDB using a Spark Streaming Job.

Updating a database table using tMysqlOutput in a Big Data Streaming Job

This scenario describes a Big Data Streaming Job that updates data in specific
rows of a MySQL table once every five seconds.

Establishing the Job

  1. Create a Big Data Streaming Job.
  2. Drop tRowGenerator and
    tMysqlOutput from the Palette onto the design
    workspace.
  3. Connect the two components using a Row > Main connection.

    tMysqlOutput_44.png

    tHDFSConfiguration is used in this scenario by Spark to connect
    to the HDFS system where the jar files dependent on the Job are transferred.

    In the Spark
    Configuration
    tab in the Run
    view, define the connection to a given Spark cluster for the whole Job. In
    addition, since the Job expects its dependent jar files for execution, you must
    specify the directory in the file system to which these jar files are
    transferred so that Spark can access these files:

    • Yarn mode (Yarn client or Yarn cluster):

      • When using Google Dataproc, specify a bucket in the
        Google Storage staging bucket
        field in the Spark configuration
        tab.

      • When using HDInsight, specify the blob to be used for Job
        deployment in the Windows Azure Storage
        configuration
        area in the Spark
        configuration
        tab.

      • When using Altus, specify the S3 bucket or the Azure
        Data Lake Storage for Job deployment in the Spark
        configuration
        tab.
      • When using Qubole, add a
        tS3Configuration to your Job to write
        your actual business data in the S3 system with Qubole. Without
        tS3Configuration, this business data is
        written in the Qubole HDFS system and destroyed once you shut
        down your cluster.
      • When using on-premise
        distributions, use the configuration component corresponding
        to the file system your cluster is using. Typically, this
        system is HDFS and so use tHDFSConfiguration.

    • Standalone mode: use the
      configuration component corresponding to the file system your cluster is
      using, such as tHDFSConfiguration or
      tS3Configuration.

      If you are using Databricks without any configuration component present
      in your Job, your business data is written directly in DBFS (Databricks
      Filesystem).

    Prerequisite: ensure that the Spark cluster has been
    properly installed and is running.

Selecting the Spark mode

Depending on the Spark cluster to be used, select a Spark mode for your Job.

The Spark documentation provides an exhaustive list of Spark properties and
their default values at Spark Configuration. A Spark Job designed in the Studio uses
this default configuration except for the properties you explicitly defined in the
Spark Configuration tab or the components
used in your Job.

  1. Click Run to open its view and then click the
    Spark Configuration tab to display its view
    for configuring the Spark connection.
  2. Select the Use local mode check box to test your Job locally.

    In the local mode, the Studio builds the Spark environment within
    itself on the fly in order to run the Job. Each processor of the local
    machine is used as a Spark worker to perform the computations.

    In this mode, your local file system is used; therefore, deactivate
    configuration components such as tS3Configuration or
    tHDFSConfiguration that provide connection information to a remote
    file system, if you have placed any of these components in your Job.

    You can launch
    your Job without any further configuration.

  3. Clear the Use local mode check box to display the
    list of the available Hadoop distributions and from this list, select
    the distribution corresponding to your Spark cluster to be used.

    This distribution could be:

    • Databricks

    • Qubole

    • Amazon EMR

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • Cloudera

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Google Cloud
      Dataproc

      For this distribution, Talend supports:

      • Yarn client

    • Hortonworks

      For this distribution, Talend supports:

      • Yarn client

      • Yarn cluster

    • MapR

      For this distribution, Talend supports:

      • Standalone

      • Yarn client

      • Yarn cluster

    • Microsoft HDInsight

      For this distribution, Talend supports:

      • Yarn cluster

    • Cloudera Altus

      For this distribution, Talend supports:

      • Yarn cluster

        Your Altus cluster should run on the following Cloud
        providers:

        • Azure

          The support for Altus on Azure is a technical
          preview feature.

        • AWS

    As a Job relies on Avro to move data among its components, it is
    recommended to set your cluster to use Kryo to handle the Avro types.
    This not only helps avoid this known Avro issue but also brings
    inherent performance gains. The Spark property to set in your cluster
    is spark.serializer, with the value
    org.apache.spark.serializer.KryoSerializer (a configuration example
    follows this procedure).

    If you cannot find the distribution corresponding to yours from this
    drop-down list, this means the distribution you want to connect to is
    not officially supported by Talend. In this situation, you can select
    Custom, then select the Spark version of the cluster to be
    connected and click the [+] button to display the dialog box in
    which you can alternatively:

    1. Select Import from existing
      version
      to import an officially supported distribution as base
      and then add other required jar files which the base distribution does not
      provide.

    2. Select Import from zip to
      import the configuration zip for the custom distribution to be used. This zip
      file should contain the libraries of the different Hadoop/Spark elements and the
      index file of these libraries.

      In Talend Exchange, members of the Talend community have shared some
      ready-for-use configuration zip files which you can download from
      this Hadoop configuration list and directly use in your connection.
      However, because of the ongoing evolution of the different
      Hadoop-related projects, you might not be able to find the
      configuration zip corresponding to your distribution in this list;
      it is then recommended to use the Import from existing
      version option to take an existing distribution as base and
      add the jars required by your distribution.

      Note that custom versions are not officially supported by Talend.
      Talend and its community provide you with the opportunity to connect
      to custom versions from the Studio but cannot guarantee that the
      configuration of whichever version you choose will be easy. As such,
      you should only attempt to set up such a connection if you have
      sufficient Hadoop and Spark experience to handle any issues on your
      own.

    For a step-by-step example about how to connect to a custom
    distribution and share this connection, see Hortonworks.
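
For example, the standard Kryo serializer setting looks like the following;
where exactly you define it (in your cluster's Spark defaults file or, as an
assumption depending on your setup, in the Advanced properties table of the
Spark configuration tab) varies with your environment:

    spark.serializer=org.apache.spark.serializer.KryoSerializer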

Configuring a Spark stream for your Apache Spark streaming Job

Define how often your Spark Job creates and processes micro batches.
  1. In the Batch size
    field, enter the time interval at the end of which the Job reviews the source
    data to identify changes and processes the new micro batches.
  2. If need be, select the Define a streaming
    timeout check box and, in the field that is displayed, enter
    the time frame at the end of which the streaming Job automatically
    stops running.
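
For reference, here is a minimal hand-written Java sketch of what these two
settings correspond to in the Spark Streaming API; the class name, the
trivial in-memory source and the 60000 ms timeout are illustrative
assumptions, while the 5000 ms batch interval matches this scenario:

    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.Queue;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class BatchSizeSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf()
                    .setAppName("BatchSizeSketch").setMaster("local[*]");
            // Batch size: a new micro batch is created and processed every 5000 ms.
            JavaStreamingContext jssc =
                    new JavaStreamingContext(conf, Durations.milliseconds(5000));

            // A trivial in-memory source so the sketch actually runs.
            Queue<JavaRDD<Integer>> queue = new LinkedList<>();
            queue.add(jssc.sparkContext().parallelize(Arrays.asList(1, 2, 3)));
            jssc.queueStream(queue).print();

            jssc.start();
            // Streaming timeout: stop automatically after 60000 ms
            // instead of running until killed.
            jssc.awaitTerminationOrTimeout(60000);
            jssc.stop(true, true);
        }
    }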

Configuring the connection to the file system to be used by Spark

Skip this section if you are using Google Dataproc or HDInsight, as for these
two distributions, this connection is configured in the Spark configuration
tab.

  1. Double-click tHDFSConfiguration to open its Component view.

    Spark uses this component to connect to the HDFS system to which the jar
    files dependent on the Job are transferred.

  2. If you have defined the HDFS connection metadata under the Hadoop
    cluster node in Repository, select Repository
    from the Property type drop-down list and then click the
    […] button to select the HDFS connection you have
    defined from the Repository content wizard.

    For further information about setting up a reusable
    HDFS connection, search for centralizing HDFS metadata on Talend Help Center
    (https://help.talend.com).

    If you complete this step, you can skip the following steps about configuring
    tHDFSConfiguration because all the required fields
    should have been filled automatically.

  3. In the Version area, select
    the Hadoop distribution you need to connect to and its version.
  4. In the NameNode URI field,
    enter the location of the machine hosting the NameNode service of the
    cluster. If you are using WebHDFS, the location should be
    webhdfs://masternode:portnumber; WebHDFS with SSL is not
    supported yet.
  5. In the Username field, enter
    the authentication information used to connect to the HDFS system to be
    used. Note that the user name must be the same as the one you entered in
    the Spark configuration tab. A plain Hadoop client sketch of these
    settings follows this list.
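
For reference, a minimal sketch of what these tHDFSConfiguration settings
amount to in plain Hadoop client code; the host, port and user name below
are assumptions:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HdfsConnectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode URI field; host and port are assumptions for illustration.
            URI nameNode = URI.create("hdfs://masternode:8020");
            // Username field; must match the user name set in the
            // Spark configuration tab.
            FileSystem fs = FileSystem.get(nameNode, conf, "hdfs");
            System.out.println("Connected to " + fs.getUri());
            fs.close();
        }
    }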

Configuring the Job

  1. Open the Basic settings view of
    tRowGenerator.
  2. Define the schema for the table.

    In this case, two columns are added:

    • Num, type integer, set as the key column
    • Cities, type string and length 20
  3. Propagate the schema to the next component.
  4. In the RowGenerator Editor dialog box, set the functions as
    follows (the Java sketch after this procedure shows what these settings
    amount to).

    • Num: Numeric.sequence, with both start
      value and step set to
      1
    • Cities: TalendDataGenerator.getUsCity()
    • Number of Rows for RowGenerator:
      10
    Note: These settings generate 10 rows of data at a time, with the
    Num values from 1 to 10.

  5. Set Input repetition interval to
    5000 (milliseconds), so that a new batch of rows is
    generated every five seconds.
  6. Open the Basic settings view of
    tMysqlOutput.
  7. Set the database table parameters. In this example, the database is
    installed locally.

    Note: To make sure the table is updated properly, it is recommended to
    select None for the Action on
    table option and Update for the
    Action on data option.
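
As mentioned in step 4, here is a standalone Java sketch approximating what
tRowGenerator produces with these settings; pickCity is a hypothetical
stand-in for the Talend routine TalendDataGenerator.getUsCity():

    import java.util.Random;

    public class RowGeneratorSketch {
        private static final String[] US_CITIES = {
            "New York", "Chicago", "Houston", "Phoenix", "Dallas"
        };
        private static final Random RANDOM = new Random();

        // Hypothetical equivalent of TalendDataGenerator.getUsCity().
        static String pickCity() {
            return US_CITIES[RANDOM.nextInt(US_CITIES.length)];
        }

        public static void main(String[] args) {
            // Number of Rows for RowGenerator: 10 rows per batch, with the
            // Num sequence starting at 1 and stepping by 1.
            for (int num = 1; num <= 10; num++) {
                // Schema: Num (integer, key) and Cities (string, length 20).
                System.out.printf("Num=%d, Cities=%s%n", num, pickCity());
            }
        }
    }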

Executing the Job

  1. Save the Job.
  2. Press F6 to execute it.

    The data in rows 1 through 10 of the database table is updated once
    every five seconds.
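
With Action on data set to Update, the component
makes key-based updates to existing rows. A hedged JDBC sketch of the kind of
statement involved; the table name cities, the connection URL and the
credentials are assumptions for this scenario:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpdateSketch {
        public static void main(String[] args) throws Exception {
            // Connection details are assumptions; the database is local
            // in this scenario.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password")) {
                // The update is keyed on the schema key column, Num.
                String sql = "UPDATE cities SET Cities = ? WHERE Num = ?";
                try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setString(1, "Chicago");
                    stmt.setInt(2, 1);
                    stmt.executeUpdate();
                }
            }
        }
    }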
