
tSqlRow – Docs for ESB 6.x

tSqlRow

Performs SQL queries over input datasets.

tSqlRow registers
input RDDs (Resilient Distributed Datasets) as tables and applies queries on these
tables.
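Conceptually, this is similar to the following standalone Spark sketch, written
against the Spark 1.x Java API. The table name row1, the schema and the query are
illustrative, and this is not the code the Studio actually generates:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public class SqlRowSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("SqlRowSketch").setMaster("local[*]");
            JavaSparkContext sc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(sc);

            // A small dataset standing in for the rows arriving on the input link.
            JavaRDD<Row> rdd = sc.parallelize(Arrays.asList(
                    RowFactory.create(1, "Alice"),
                    RowFactory.create(2, "Bob")));
            StructType schema = DataTypes.createStructType(new StructField[] {
                    DataTypes.createStructField("id", DataTypes.IntegerType, false),
                    DataTypes.createStructField("name", DataTypes.StringType, false) });
            DataFrame input = sqlContext.createDataFrame(rdd, schema);

            // tSqlRow registers each input dataset under the label of its input link...
            input.registerTempTable("row1");

            // ...and runs the user-supplied query against that table.
            DataFrame result = sqlContext.sql("SELECT id, UPPER(name) AS name FROM row1");
            result.show();

            sc.stop();
        }
    }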

Depending on the Talend solution you are using, this component can be used in one,
some or all of the following Job frameworks:

  • Spark Batch: see tSqlRow properties for Apache Spark Batch.

  • Spark Streaming: see tSqlRow properties for Apache Spark Streaming.

tSqlRow properties for Apache Spark Batch

These properties are used to configure tSqlRow running in the Spark Batch Job framework.

The Spark Batch
tSqlRow component belongs to the Processing family.

The component in this framework is available only if you have subscribed to one of
the Talend solutions with Big Data.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

  • Built-In: You create and store the
    schema locally for this component only. Related topic: see the
    Talend Studio User Guide.

  • Repository: You have already created
    the schema and stored it in the Repository. You can reuse it in various projects
    and Job designs. Related topic: see the
    Talend Studio User Guide.

Click Edit schema to make changes to the
schema. Note that if you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

SQL context

Select the query language you want tSqlRow to use.

  • SQL Spark Context: the Spark native query language.

  • SQL Hive Context: the Hive query language
    supported by Spark (see the sample query after this list).

    In SQL Hive Context, tSqlRow does not allow you to use the Hive metastore. If
    you need to read data from or write data to the Hive metastore, use tHiveInput
    or tHiveOutput instead, which requires a different Job design.

    For further information about the Hive query statements supported by Spark, see
    Supported Hive features.
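For example, with SQL Hive Context selected, the Query field accepts Hive query
constructs such as window functions, which in Spark 1.x are available only through
the Hive context. Assuming an input link row1 with illustrative columns name and
salary:

    SELECT name, salary,
           ROW_NUMBER() OVER (ORDER BY salary DESC) AS salary_rank
    FROM row1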

Query

Enter your query, paying particular attention to the sequence of the fields, which
must match the schema definition.

The tSqlRow component uses the label of its input link to name the registered table
that stores the datasets from that link. For example, if an input link is labeled
row1, then row1 is automatically the name of the table on which you can run
queries.
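Assuming, for instance, that row1 carries the columns id, name and age
(illustrative names), the Query field could contain:

    SELECT row1.id, row1.name FROM row1 WHERE row1.age > 18

The selected fields id and name must appear in the same order as the corresponding
columns of the output schema.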

Advanced settings

Register UDF jars

Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to
use. If you do not want to call your UDF using its FQCN (Fully-Qualified Class
Name), you must define a function alias for this UDF in the Temporary UDF
functions table and use this alias. The alias approach is recommended, because an
alias is usually more practical for calling a UDF from the query.

Once you add a row to this table, click it to display the […] button, then click
this button to open the jar import wizard. Use this wizard to import the UDF jar
files you want to use.
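For example, a Spark SQL UDF packaged into such a jar is a class implementing one
of Spark's UDF interfaces. The following minimal sketch (the class name ToUpperUdf
is illustrative) upper-cases a string; compile it into a jar and import the jar
through the wizard:

    import org.apache.spark.sql.api.java.UDF1;

    // A one-argument Spark SQL UDF: receives a String, returns a String.
    public class ToUpperUdf implements UDF1<String, String> {
        @Override
        public String call(String value) throws Exception {
            return value == null ? null : value.toUpperCase();
        }
    }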

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function name to be used
in the query in tSqlRow.

If you have selected SQL Spark Context from the SQL context list, the
UDF output type column is displayed. In this column, you need to select the data
type of the output of the Spark SQL UDF to be used.
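For example, if you import a jar containing the ToUpperUdf class sketched above and
enter toUpper (an illustrative alias) as its temporary function name in this table,
the query can call the UDF by that alias:

    SELECT toUpper(row1.name) AS name FROM row1

With SQL Spark Context selected, you would also set the UDF output type column to
String for this function, since the UDF returns a string.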

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to, appears only
when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

tSqlRow properties for Apache Spark Streaming

These properties are used to configure tSqlRow running in the Spark Streaming Job framework.

The Spark Streaming
tSqlRow component belongs to the Processing family.

The component in this framework is available only if you have subscribed to Talend
Real-time Big Data Platform or Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields (columns) to
be processed and passed on to the next component. The schema is either Built-In or stored remotely in the Repository.

  • Built-In: You create and store the
    schema locally for this component only. Related topic: see the
    Talend Studio User Guide.

  • Repository: You have already created
    the schema and stored it in the Repository. You can reuse it in various projects
    and Job designs. Related topic: see the
    Talend Studio User Guide.

Click Edit schema to make changes to the
schema. Note that if you make changes, the schema automatically becomes built-in.

  • View schema: choose this option to view the
    schema only.

  • Change to built-in property: choose this
    option to change the schema to Built-in for
    local changes.

  • Update repository connection: choose this
    option to change the schema stored in the repository and decide whether to propagate
    the changes to all the Jobs upon completion. If you just want to propagate the
    changes to the current Job, you can select No
    upon completion and choose this schema metadata again in the [Repository Content] window.

SQL context

Select the query language you want tSqlRow to use.

  • SQL Spark Context: the Spark native query language.

  • SQL Hive Context: the Hive query language
    supported by Spark.

    In SQL Hive Context, tSqlRow does not allow you to use the Hive metastore. If
    you need to read data from or write data to the Hive metastore, use tHiveInput
    or tHiveOutput instead, which requires a different Job design.

    For further information about the Hive query statements supported by Spark, see
    Supported Hive features.

Query

Enter your query, paying particular attention to the sequence of the fields, which
must match the schema definition.

The tSqlRow component uses the label of its input link to name the registered table
that stores the datasets from that link. For example, if an input link is labeled
row1, then row1 is automatically the name of the table on which you can run
queries.
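In a Spark Streaming Job, the query runs against each micro-batch of the stream.
Assuming, for instance, that row1 carries a browser column (an illustrative name),
the following query counts the records per browser within each micro-batch:

    SELECT row1.browser, COUNT(*) AS hits FROM row1 GROUP BY row1.browser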

Advanced settings

Register UDF jars

Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to
use. If you do not want to call your UDF using its FQCN (Fully-Qualified Class
Name), you must define a function alias for this UDF in the Temporary UDF
functions table and use this alias. The alias approach is recommended, because an
alias is usually more practical for calling a UDF from the query.

Once you add a row to this table, click it to display the […] button, then click
this button to open the jar import wizard. Use this wizard to import the UDF jar
files you want to use.

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function name to be used
in the query in tSqlRow.

If you have selected SQL Spark Context from the SQL context list, the
UDF output type column is displayed. In this column, you need to select the data
type of the output of the Spark SQL UDF to be used.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario
presents only Standard Jobs, that is to say traditional Talend data integration
Jobs.

Spark Connection

You need to use the Spark Configuration tab in
the Run view to define the connection to a given
Spark cluster for the whole Job. In addition, since the Job expects its dependent jar
files for execution, you must specify the directory in the file system to which these
jar files are transferred so that Spark can access these files:

  • Yarn mode: when using Google Dataproc, specify a bucket in the
    Google Storage staging bucket field in the Spark configuration tab;
    when using other distributions, use a tHDFSConfiguration component
    to specify the directory.

  • Standalone mode: you need to choose the configuration component
    depending on the file system you are using, such as
    tHDFSConfiguration or tS3Configuration.

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

