tSqlRow
Performs SQL queries over input datasets. tSqlRow registers input RDDs (Resilient Distributed Datasets) as tables and applies queries to these tables.
Depending on the Talend product you are using, this component can be used in one, some or all of the following Job frameworks:
- Spark Batch: see tSqlRow properties for Apache Spark Batch. The component in this framework is available in all subscription-based Talend products with Big Data and Talend Data Fabric.
- Spark Streaming: see tSqlRow properties for Apache Spark Streaming. This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
tSqlRow properties for Apache Spark Batch
These properties are used to configure tSqlRow running in the Spark Batch Job framework.
The Spark Batch tSqlRow component belongs to the Processing family.
The component in this framework is available in all subscription-based Talend products with Big Data
and Talend Data Fabric.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Note: If you make changes, the schema automatically becomes built-in.
SQL context
Select the query language you want tSqlRow to use.
Query
Enter your query, paying particular attention to sequencing the fields so that they match the schema definition. The tSqlRow component uses the label of each of its input links as the name of the table registered for the datasets coming from that link.
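For example, assuming the input link to tSqlRow is labeled row1 and carries the columns id, name and age (hypothetical names used here only for illustration), a query filtering that registered table could read:

SELECT row1.id, row1.name FROM row1 WHERE row1.age > 18

The columns returned by the query must match the fields defined in the output schema.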
Advanced settings
Register UDF jars
Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to use. If you do not want to call a UDF using its fully qualified class name, define a function alias for it in the Temporary UDF functions table and use that alias in your query. Once you add one row to this table, click it to display the […] button, and then click this button to import the jar file of the UDF to be used.
Temporary UDF functions
Complete this table to give each imported UDF class a temporary function name to be used in the query in tSqlRow. If you have selected SQL Spark Context from the SQL context list, the UDF output type column is displayed; in this column, select the data type returned by the UDF.
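For example, if a UDF class has been given the temporary function name my_upper in this table (a hypothetical alias used here only for illustration) and the input link is labeled row1, the query in the Query field could call it as:

SELECT my_upper(row1.name) AS name_upper FROM row1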
Use Timestamp format for Date type
Select this check box to output dates, hours, minutes and seconds contained in your Date-type data. If you clear this check box, only years, months and days are output. The format used by Deltalake is yyyy-MM-dd HH:mm:ss.
Usage
Usage rule |
This component is used as an intermediate step. This component, along with the Spark Batch component Palette it belongs to, Note that in this documentation, unless otherwise explicitly stated, a |
Spark Connection
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis.
Related scenarios
For a related scenario, see Performing download analysis using a Spark Batch Job.
tSqlRow properties for Apache Spark Streaming
These properties are used to configure tSqlRow running in the Spark Streaming Job framework.
The Spark Streaming tSqlRow component belongs to the Processing family.
This component is available in Talend Real Time Big Data Platform and Talend Data Fabric.
Basic settings
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema. Note: If you make changes, the schema automatically becomes built-in.
SQL context
Select the query language you want tSqlRow to use.
Query
Enter your query, paying particular attention to sequencing the fields so that they match the schema definition. The tSqlRow component uses the label of each of its input links as the name of the table registered for the datasets coming from that link.
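For example, assuming two input links labeled row1 and row2 that both carry a user_id column, with row2 also carrying a country column (hypothetical names used here only for illustration), a query joining the two registered tables could read:

SELECT row1.user_id, row2.country FROM row1 JOIN row2 ON row1.user_id = row2.user_id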
Advanced settings
Register UDF jars
Add the Spark SQL or Hive SQL UDF (user-defined function) jars you want tSqlRow to use. If you do not want to call a UDF using its fully qualified class name, define a function alias for it in the Temporary UDF functions table and use that alias in your query. Once you add one row to this table, click it to display the […] button, and then click this button to import the jar file of the UDF to be used.
Temporary UDF functions
Complete this table to give each imported UDF class a temporary function name to be used in the query in tSqlRow. If you have selected SQL Spark Context from the SQL context list, the UDF output type column is displayed; in this column, select the data type returned by the UDF.
Use Timestamp format for Date type
Select this check box to output dates, hours, minutes and seconds contained in your Date-type data. If you clear this check box, only years, months and days are output. The format used by Deltalake is yyyy-MM-dd HH:mm:ss.
Usage
Usage rule
This component is used as an intermediate step. This component, along with the Spark Streaming component Palette it belongs to, appears only when you are creating a Spark Streaming Job. Note that in this documentation, unless otherwise explicitly stated, a scenario presents only Standard Jobs, that is to say traditional Talend data integration Jobs.
Spark Connection
In the Spark Configuration tab in the Run view, define the connection to a given Spark cluster for the whole Job. In addition, since the Job expects its dependent jar files for execution, you must specify the directory in the file system to which these jar files are transferred so that Spark can access them. This connection is effective on a per-Job basis.
Related scenarios
No scenario is available for the Spark Streaming version of this component yet.