
tSqlRow

Performs SQL queries over input datasets.

tSqlRow registers
input RDDs (Resilient Distributed Datasets) as tables and applies queries on these
tables.

Depending on the Talend
product you are using, this component can be used in one, some or all of the following
Job frameworks:

tSqlRow properties for Apache Spark Batch

These properties are used to configure tSqlRow running in the Spark Batch Job framework.

The Spark Batch
tSqlRow component belongs to the Processing family.

The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema.

Note: If you make changes, the schema automatically becomes built-in.

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

SQL context

Select the query languages you want tSqlRow to use.

  • SQL Spark Context: the Spark native query language.

  • SQL Hive Context: the Hive query language
    supported by Spark.

    In SQL Hive Context, tSqlRow does not allow you to use the Hive metastore. If you need to
    read data from or write data to the Hive metastore, use tHiveInput or tHiveOutput instead;
    in that situation, you need to design your Job differently.

    For further information about the Spark supported Hive query statements, see Supported Hive features.

Query

Enter your query, paying particular attention to sequencing
the fields so that they match the schema definition.

The tSqlRow
component uses the label of its input link to name the registered table that stores the
datasets from that input link. For example, if an input link is labeled row1, then row1 automatically
becomes the name of the table against which you can run your queries.
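
For example, a minimal query against an input link labeled row1 could look like the
following sketch; the column names id, name and age are hypothetical and must match
the schema of your input link:

  -- select two columns from the table registered for the input link row1
  -- and keep only the rows whose age value is greater than 18
  SELECT row1.id, row1.name
  FROM row1
  WHERE row1.age > 18

Make sure the order of the selected columns matches the schema defined in tSqlRow.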

Advanced settings

Register UDF jars

Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to use. If you do not want to call your UDF using its
FQCN (Fully-Qualified Class Name), you must define a function alias for this UDF in the
Temporary UDF
functions
table and use this alias. The alias approach is recommended, as an alias is usually
more practical for calling a UDF from the query.

Once you add a row to this table, click it to display the […] button, then click this
button to display the jar import wizard. Through this wizard, import the UDF jar files you
want to use.

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function name to be used
in the query in tSqlRow.

If you have selected SQL Spark Context from the SQL context list, the UDF output
type
column is displayed. In this column, you need to select the data type of the
output of the Spark SQL UDF to be used.
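
For example, if you register a UDF jar and map its class to the alias my_upper in the
Temporary UDF functions table (both the class and the alias here are hypothetical), you
can call the function through that alias in the Query field:

  -- call the imported UDF through its alias instead of its fully-qualified class name
  SELECT my_upper(row1.name) AS upper_name
  FROM row1

With SQL Spark Context, you would also set the UDF output type column to String for
this hypothetical UDF, since it returns a string value.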

Use Timestamp format for Date type

Select this check box to output the dates, hours, minutes and seconds contained in your
Date-type data. If you clear this check box, only the years, months and days are
output.

The format used by Delta Lake is yyyy-MM-dd HH:mm:ss.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Batch component Palette it belongs to,
appears only when you are creating a Spark Batch Job.

Note that in this documentation, unless otherwise explicitly stated, a
scenario presents only Standard Jobs, that is to
say traditional
Talend
data integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premises
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

tSqlRow properties for Apache Spark Streaming

These properties are used to configure tSqlRow running in the Spark Streaming Job framework.

The Spark Streaming
tSqlRow component belongs to the Processing family.

This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.

Basic settings

Schema and Edit schema

A schema is a row description. It defines the number of fields
(columns) to be processed and passed on to the next component. When you create a Spark
Job, avoid the reserved word line when naming the
fields.

  • Built-In: You create and store the schema locally for this component
    only.

  • Repository: You have already created the schema and stored it in the
    Repository. You can reuse it in various projects and Job designs.

Click Edit schema to make changes to the schema.

Note: If you make changes, the schema automatically becomes built-in.

  • View schema: choose this
    option to view the schema only.

  • Change to built-in property:
    choose this option to change the schema to Built-in for local changes.

  • Update repository connection:
    choose this option to change the schema stored in the repository and decide whether
    to propagate the changes to all the Jobs upon completion. If you just want to
    propagate the changes to the current Job, you can select No upon completion and choose this schema metadata
    again in the Repository Content
    window.

SQL context

Select the query languages you want tSqlRow to use.

  • SQL Spark Context: the Spark native query language.

  • SQL Hive Context: the Hive query language
    supported by Spark.

    In SQL Hive Context, tSqlRow does not allow you to use the Hive metastore. If you need to
    read data from or write data to the Hive metastore, use tHiveInput or tHiveOutput instead;
    in that situation, you need to design your Job differently.

    For further information about the Spark supported Hive query statements, see Supported Hive features.

Query

Enter your query, paying particular attention to sequencing
the fields so that they match the schema definition.

The tSqlRow
component uses the label of its input link to name the registered table that stores the
datasets from that input link. For example, if an input link is labeled row1, then row1 automatically
becomes the name of the table against which you can run your queries.
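
As in the Spark Batch version, a minimal query against an input link labeled row1 could
look like the following sketch; the column names id, name and age are hypothetical and
must match the schema of your input link:

  -- select two columns from the table registered for the input link row1
  SELECT row1.id, row1.name
  FROM row1
  WHERE row1.age > 18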

Advanced settings

Register UDF jars

Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to use. If you do not want to call your UDF using its
FQCN (Fully-Qualified Class Name), you must define a function alias for this UDF in the
Temporary UDF
functions
table and use this alias. The alias approach is recommended, as an alias is usually
more practical for calling a UDF from the query.

Once you add a row to this table, click it to display the […] button, then click this
button to display the jar import wizard. Through this wizard, import the UDF jar files you
want to use.

Temporary UDF functions

Complete this table to give each imported UDF class a temporary function name to be used
in the query in tSqlRow.

If you have selected SQL Spark Context from the SQL context list, the UDF output
type
column is displayed. In this column, you need to select the data type of the
output of the Spark SQL UDF to be used.
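
For example, if you register a UDF jar and map its class to the alias my_upper in the
Temporary UDF functions table (both the class and the alias here are hypothetical), you
can call the function through that alias in the Query field:

  -- call the imported UDF through its alias instead of its fully-qualified class name
  SELECT my_upper(row1.name) AS upper_name
  FROM row1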

Use Timestamp format for Date type

Select this check box to output the dates, hours, minutes and seconds contained in your
Date-type data. If you clear this check box, only the years, months and days are
output.

The format used by Delta Lake is yyyy-MM-dd HH:mm:ss.

Usage

Usage rule

This component is used as an intermediate step.

This component, along with the Spark Streaming component Palette it belongs to, appears
only when you are creating a Spark Streaming Job.

Note that in this documentation, unless otherwise explicitly stated, a scenario presents
only Standard Jobs, that is to say traditional
Talend
data
integration Jobs.

Spark Connection

In the Spark
Configuration
tab in the Run
view, define the connection to a given Spark cluster for the whole Job. In
addition, since the Job expects its dependent jar files for execution, you must
specify the directory in the file system to which these jar files are
transferred so that Spark can access these files:

  • Yarn mode (Yarn client or Yarn cluster):

    • When using Google Dataproc, specify a bucket in the
      Google Storage staging bucket
      field in the Spark configuration
      tab.

    • When using HDInsight, specify the blob to be used for Job
      deployment in the Windows Azure Storage
      configuration
      area in the Spark
      configuration
      tab.

    • When using Altus, specify the S3 bucket or the Azure
      Data Lake Storage for Job deployment in the Spark
      configuration
      tab.
    • When using Qubole, add a
      tS3Configuration to your Job to write
      your actual business data in the S3 system with Qubole. Without
      tS3Configuration, this business data is
      written in the Qubole HDFS system and destroyed once you shut
      down your cluster.
    • When using on-premises
      distributions, use the configuration component corresponding
      to the file system your cluster is using. Typically, this
      system is HDFS and so use tHDFSConfiguration.

  • Standalone mode: use the
    configuration component corresponding to the file system your cluster is
    using, such as tHDFSConfiguration or
    tS3Configuration.

    If you are using Databricks without any configuration component present
    in your Job, your business data is written directly in DBFS (Databricks
    Filesystem).

This connection is effective on a per-Job basis.

Related scenarios

No scenario is available for the Spark Streaming version of this component
yet.

