July 30, 2023

tELTMap – Docs for ESB 7.x

tELTMap

Uses the tables provided as input to feed the parameters of the SQL statement
being built. The statement can include inner or outer joins between tables or
between a table and its aliases.

The three ELT components are closely related in terms of
their operating conditions. They are used to handle DB schemas and to
generate Insert statements, including clauses, to be executed in the
defined DB output table.
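
As an illustration, the statement such a map produces is an INSERT ... SELECT whose FROM clause carries the joins. Below is a minimal sketch of how a statement of this shape is assembled; all table and column names are hypothetical, and this is not Talend-generated code.

```java
public class EltStatementSketch {
    // Builds the kind of INSERT ... SELECT statement an ELT map produces.
    // Identifiers are illustrative only.
    static String buildInsertSelect(String target, String cols,
                                    String from, String join, String on) {
        return "INSERT INTO " + target + " (" + cols + ") "
             + "SELECT " + cols + " FROM " + from
             + " INNER JOIN " + join + " ON " + on;
    }

    public static void main(String[] args) {
        System.out.println(buildInsertSelect(
            "ORDERS_OUT", "ORDER_ID, CUSTOMER_NAME",
            "ORDERS", "CUSTOMERS", "ORDERS.CUST_ID = CUSTOMERS.CUST_ID"));
    }
}
```

The point is that no row data moves through the Job: only the statement text is built and then executed inside the database.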

Note that it is highly recommended to use the database-specific ELT
components (if available) instead of the generic ELT components. For example, for Teradata, it is recommended to use the tELTTeradataInput, tELTTeradataMap, and tELTTeradataOutput components instead.

tELTMap Standard properties

These properties are used to configure tELTMap running in the
Standard Job framework.

The Standard
tELTMap component belongs to the ELT family.

The component in this framework is available in all Talend products.

Basic settings

Use an existing connection

Select this check box and in the Component List click the relevant connection component to
reuse the connection details you already defined.

Note: When a Job contains a parent Job and a child Job, if you
need to share an existing connection between the two levels, for example, to share the
connection created by the parent Job with the child Job, you have to:

  1. In the parent level, register the database connection
    to be shared in the Basic
    settings
    view of the connection component which creates that very database
    connection.

  2. In the child level, use a dedicated connection
    component to read that registered database connection.

For an example of how to share a database connection
across Job levels, see the Talend Studio User Guide.

ELT Map Editor

The ELT Map editor allows you to define the output schema and
build the SQL statement to be executed graphically. The column names of the
schema can differ from the column names in the database.

Style link

Select the way in which links are displayed.

Auto: By default, the links between the input and output schemas are displayed as curves.

Bezier curve: Links between the schemas are displayed as curves.

Line: Links between the schemas are displayed as straight lines. This option slightly optimizes performance.

Property Type

Either Built-In or Repository.

  • Built-In: No property data stored centrally.

  • Repository: Select the repository file where the
    properties are stored.

JDBC URL

The JDBC URL of the database to be used. For
example, the JDBC URL for the Amazon Redshift database is jdbc:redshift://endpoint:port/database.

Driver JAR

Complete this table to load the driver JARs needed. To do
this, click the [+] button under the table to add
as many rows as needed, each row for a driver JAR, then select a cell and click the
[…] button at the right side of the cell to
open the Module dialog box, from which you can select the driver JAR
to be used. For example, the driver JAR for the Redshift database is RedshiftJDBC41-1.1.13.1013.jar.

Class name

Enter the class name for the specified driver between double
quotation marks. For example, for the RedshiftJDBC41-1.1.13.1013.jar driver, the name to be entered is
com.amazon.redshift.jdbc41.Driver.
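
Taken together, the JDBC URL, driver JAR, and class name are what a Job needs to open the connection. The sketch below shows how the Redshift URL parts combine; the endpoint, port, and database values are hypothetical, and the actual connection calls are shown only as comments.

```java
public class RedshiftUrlSketch {
    // Assembles a Redshift JDBC URL from its parts; values are hypothetical.
    static String buildUrl(String endpoint, int port, String database) {
        return "jdbc:redshift://" + endpoint + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        String url = buildUrl("examplecluster.abc123.us-west-2.redshift.amazonaws.com",
                              5439, "dev");
        // With RedshiftJDBC41-1.1.13.1013.jar on the classpath, a Job would then
        // register the class named in Basic settings and open the connection:
        //   Class.forName("com.amazon.redshift.jdbc41.Driver");
        //   Connection conn = DriverManager.getConnection(url, user, password);
        System.out.println(url);
    }
}
```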

Username and Password

DB user authentication data.

To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.

Mapping

Specify the metadata mapping file
for the database to be used. The metadata mapping file is used for the data
type conversion between database and Java. For more information about the
metadata mapping, see the related documentation for Type mapping.

Advanced settings

Additional JDBC parameters

Specify additional connection properties for the DB connection you are
creating. This option is not available if you have selected the
Use an existing connection check
box in the Basic settings.

tStatCatcher Statistics

Select this check box to gather the Job processing metadata at a Job
level as well as at each component level.

Global Variables

Global Variables

ERROR_MESSAGE: the error message generated by the
component when an error occurs. This is an After variable and it returns a string. This
variable takes effect only if the Die on error check box is
cleared, in components that have this check box.

A Flow variable functions during the execution of a component while an After variable
functions after the execution of the component.

To fill in a field or expression with a variable, press Ctrl + Space
to access the variable list and choose the variable to use from it.
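
In generated Job code, such a variable is read from the globalMap, the map a Talend Job uses to expose component variables. A minimal sketch simulating that lookup (the component name tELTMap_1 and the message text are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorMessageSketch {
    // Reads a component's After variable from a globalMap-style map.
    // The "<component>_ERROR_MESSAGE" key convention mirrors the variable name.
    static String readErrorMessage(Map<String, Object> globalMap, String component) {
        return (String) globalMap.get(component + "_ERROR_MESSAGE");
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        // After a failing run, the map would be populated along these lines:
        globalMap.put("tELTMap_1_ERROR_MESSAGE", "example error text");
        System.out.println(readErrorMessage(globalMap, "tELTMap_1"));
    }
}
```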

For further information about variables, see the Talend Studio User Guide.

Usage

Usage rule

tELTMap is used along with tELTInput and tELTOutput. Note that the Output link to be used with these
components must correspond strictly to the syntax of the table name.

Note: The ELT components do not handle actual data flow but
only schema information.

Dynamic settings

Click the [+] button to add a row in the table
and fill the Code field with a context
variable to choose your database connection dynamically from multiple
connections planned in your Job. This feature is useful when you need to
access database tables having the same data structure but in different
databases, especially when you are working in an environment where you
cannot change your Job settings, for example, when your Job has to be
deployed and executed independent of Talend Studio.
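
Conceptually this is a lookup: the context variable's runtime value selects which registered connection the component uses. A rough sketch of the mechanism (variable and component names are hypothetical):

```java
import java.util.Map;

public class DynamicConnectionSketch {
    // The Code field in Dynamic settings holds a context variable; its runtime
    // value names the connection component to use. Names are illustrative only.
    static String resolveConnection(Map<String, String> context, String codeVariable) {
        return context.get(codeVariable);
    }

    public static void main(String[] args) {
        Map<String, String> context = Map.of("connName", "tSnowflakeConnection_1");
        System.out.println(resolveConnection(context, "connName"));
    }
}
```

Changing the context value at deployment time is then enough to point the same Job at a different database.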

The Dynamic settings table is
available only when the Use an existing
connection
check box is selected in the Basic settings view. Once a dynamic parameter is
defined, the Component List box in the
Basic settings view becomes unusable.

For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide.

Aggregating Snowflake data using context variables as table and connection names

This scenario shows an example of aggregating Snowflake data from
two source tables, STUDENT and TEACHER, into one target table, FULLINFO,
using the ELT components. In this example, all input and output table names and connection
names are set to context variables.

Creating the Job

tELTMap_1.png

  • A new Job has been created and the context variables
    SourceTableS with the value
    STUDENT, SourceTableT with
    the value TEACHER, and
    TargetTable with the value
    FULLINFO have been added to the Job. For more
    information about how to use context variables, see the related
    documentation about using contexts and variables.

  • The source table STUDENT with three columns,
    SID and TID of
    NUMBER(38,0) type and SNAME of VARCHAR(50) type, has
    been created in Snowflake, and the following data has been written into the
    table.

  • The source table TEACHER with three columns,
    TID of NUMBER(38,0) type and
    TNAME and TPHONE of
    VARCHAR(50) type, has been created in Snowflake, and the following data has
    been written into the table.
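
Based on the column descriptions above, the two source tables can be created with DDL along these lines (a sketch built from the stated schemas; the sample data itself is not reproduced here):

```java
public class SourceTableDdlSketch {
    // DDL matching the schemas described above; a sketch, not Talend output.
    static final String STUDENT_DDL =
        "CREATE TABLE STUDENT (SID NUMBER(38,0), TID NUMBER(38,0), SNAME VARCHAR(50))";
    static final String TEACHER_DDL =
        "CREATE TABLE TEACHER (TID NUMBER(38,0), TNAME VARCHAR(50), TPHONE VARCHAR(50))";

    public static void main(String[] args) {
        System.out.println(STUDENT_DDL);
        System.out.println(TEACHER_DDL);
    }
}
```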

  1. Add a tSnowflakeConnection component, a
    tSnowflakeClose component, two
    tELTInput components, a
    tELTMap component, and a
    tELTOutput component to your Job.
  2. On the Basic settings view of the first
    tELTInput component, enter the name of the first
    source table in the Default Table Name field. In this
    example, it is the context variable
    context.SourceTableS.

    tELTMap_2.png

  3. Repeat step 2 to set the value of the default table name for
    the second tELTInput component and the
    tELTOutput component to context.SourceTableT and context.TargetTable respectively.
  4. Link the first tELTInput component to the
    tELTMap component using the Link > context.SourceTableS (Table) connection.
  5. Link the second tELTInput component to the
    tELTMap component using the Link > context.SourceTableT (Table) connection.
  6. Link the tELTMap component to the
    tELTOutput component using the Link > *New Output* (Table) connection. The link will be renamed automatically to
    context.TargetTable (Table).
  7. Link the tSnowflakeConnection component to the
    tELTMap component using a Trigger > On Subjob Ok connection.
  8. Link the tELTMap
    component to the tSnowflakeClose
    component.

Connecting to Snowflake

Configure the tSnowflakeConnection component to connect to
Snowflake.

  1. Double-click the tSnowflakeConnection component to open its Basic settings view.
  2. In the Account field, enter
    the account name assigned by Snowflake.
  3. In the Snowflake Region field, select the region where the
    Snowflake database is located.
  4. In the User Id and the
    Password fields, enter the authentication
    information accordingly.

    Note that this user ID is your user login name. If you do not know your user login
    name yet, ask the administrator of your Snowflake system for details.

  5. In the Warehouse field,
    enter the name of the data warehouse to be used in Snowflake.
  6. In the Schema field, enter
    the name of the database schema to be used.
  7. In the Database field, enter
    the name of the database to be used.
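
Under the hood, these settings end up in a Snowflake JDBC URL. The sketch below shows one plausible way the pieces combine; the account, warehouse, database, and schema values are hypothetical, and the exact URL Talend builds may differ.

```java
public class SnowflakeUrlSketch {
    // Combines the Basic settings fields into a Snowflake JDBC URL.
    // An assumption-level sketch, not the URL Talend actually emits.
    static String buildUrl(String account, String warehouse, String db, String schema) {
        return "jdbc:snowflake://" + account + ".snowflakecomputing.com/"
             + "?warehouse=" + warehouse + "&db=" + db + "&schema=" + schema;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("myaccount", "mywh", "mydb", "public"));
    }
}
```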

Configuring the input components

  1. Double-click the first tELTInput component to open its Basic
    settings
    view.
  2. Click the […] button
    next to Edit schema and in the schema
    dialog box displayed, define the schema by adding three columns, SID and TID of INT type and SNAME of VARCHAR type.
  3. Select Mapping Snowflake
    from the Mapping drop-down list.
  4. Repeat the previous steps to configure the second tELTInput component, and define its schema by
    adding three columns, TID of INT type and
    TNAME and TPHONE of VARCHAR type.

Configuring the output component

  1. Double-click the tELTOutput component to open the Basic settings view.
  2. Select Create table from the Action on
    table
    drop-down list to create the target table.
  3. Select the Table name from connection name is variable
    check box.
  4. Select Mapping Snowflake from the
    Mapping drop-down list.

Configuring the map component for aggregating Snowflake data

  1. Click the tELTMap component to open its Basic
    settings
    view.

    tELTMap_3.png

  2. Select the Use an existing connection check box and from
    the Component List displayed, select the connection
    component you have configured to open the Snowflake connection.
  3. Select Mapping Snowflake from the
    Mapping drop-down list.
  4. Click the […] button next to ELT Map
    Editor
    to open its map editor.
  5. Add the first input table context.SourceTableS by
    clicking the [+] button in the upper left corner of the
    map editor and then selecting the relevant table name from the drop-down list in
    the pop-up dialog box.
  6. Do the same to add the second input table
    context.SourceTableT.
  7. Drag the column TID from the first input table
    context.SourceTableS and drop it onto the
    corresponding column TID in the second input table
    context.SourceTableT.
  8. Drag all columns from the input table
    context.SourceTableS and drop them onto the output
    table context.TargetTable in the upper right panel.
  9. Do the same to drag two columns TNAME and
    TPHONE from the input table
    context.SourceTableT and drop them onto the bottom of
    the output table. When done, click OK to close the map
    editor.
  10. Click the Sync columns button on the Basic
    settings
    view of the tELTOutput component
    to set its schema.
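
The mapping configured above corresponds roughly to a statement of the following shape (a sketch with the context variables resolved to their table names; the exact SQL Talend generates, including quoting and join type, may differ):

```java
public class FullInfoStatementSketch {
    // Approximates the INSERT ... SELECT the map above builds: STUDENT joined
    // to TEACHER on TID, all five columns written to FULLINFO.
    static String buildStatement() {
        return "INSERT INTO FULLINFO (SID, TID, SNAME, TNAME, TPHONE) "
             + "SELECT STUDENT.SID, STUDENT.TID, STUDENT.SNAME, "
             + "TEACHER.TNAME, TEACHER.TPHONE "
             + "FROM STUDENT INNER JOIN TEACHER "
             + "ON STUDENT.TID = TEACHER.TID";
    }

    public static void main(String[] args) {
        System.out.println(buildStatement());
    }
}
```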

Closing the Snowflake connection

Configure the tSnowflakeClose component to close the
connection to Snowflake.

  1. Double-click the tSnowflakeClose
    component to open the Component
    tab.
  2. From the Connection Component drop-down list, select the
    component that opens the connection you need to close,
    tSnowflakeConnection_1 in this example.

Executing the Job

  1. Press Ctrl + S to save the Job.
  2. Press F6 to execute the Job.

    tELTMap_4.png

    As shown above, Talend Studio
    executes the Job successfully and inserts eight rows into the target
    table.

    You can then create and run another Job to retrieve data from
    the target table by using the tSnowflakeInput component and the tLogRow component. You will find that the
    aggregated data is displayed on the console as shown in the screenshot
    below.

    tELTMap_5.png

    For more information about how to retrieve data from
    Snowflake, see Writing data into and reading data from a Snowflake table.


Source: Talend documentation, https://help.talend.com