tImpalaRow
tImpalaRow acts on the actual DB structure or on the data (although it does not handle the data itself).
The SQLBuilder tool helps you write your Impala SQL statements easily.
tImpalaRow is the dedicated component for this database. It executes the stated Impala SQL query against the specified database. The Row suffix means the
component implements a flow in the Job design although it does not provide output.
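At run time, the component simply runs the stated statement on an Impala connection. The Java sketch below is not the code generated by the Studio; it only illustrates the equivalent JDBC call. The driver class, JDBC URL, host, port, database, credentials and statements are assumptions to adapt to your own cluster.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ImpalaRowSketch {
    public static void main(String[] args) throws Exception {
        // The Hive JDBC driver is commonly used to reach Impala over the
        // HiveServer2 protocol (Impala listens on port 21050 by default).
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://impala-host:21050/my_database;auth=noSasl";

        try (Connection conn = DriverManager.getConnection(url, "my_user", "");
             Statement stmt = conn.createStatement()) {
            // The statement is executed as-is; like tImpalaRow, this produces
            // no output flow unless you read the result set yourself.
            stmt.execute("DROP TABLE IF EXISTS my_table");
            stmt.execute("CREATE TABLE my_table (id INT, name STRING)");
        }
    }
}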
tImpalaRow Standard properties
These properties are used to configure tImpalaRow running in the Standard Job framework.
The Standard tImpalaRow component belongs to the Big Data family.
The component in this framework is available in all Talend products with Big Data and in Talend Data Fabric.
Basic settings
Property type | Either Built-in or Repository.
Built-in: No property data is stored centrally.
Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved.
Use an existing connection | Select this check box and, in the Component List, click the relevant connection component to reuse the connection details you have already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:
1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that database connection.
2. In the child level, use a dedicated connection component to read that registered database connection.
For an example about how to share a database connection across Job levels, see Talend Studio User Guide.
Distribution | Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some require specific configuration.
Impala version | Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using.
Host | Database server IP address.
Port | Listening port number of the DB server.
Database | Fill this field with the name of the database.
Username | DB user authentication data.
Use kerberos authentication | If you are accessing an Impala system running with Kerberos security, select this check box and then enter the Kerberos principal of this Impala system.
This check box is available depending on the Hadoop distribution you are connecting to.
Schema and Edit schema | A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-in: The schema is created and stored locally for this component only.
Repository: The schema already exists and is stored in the Repository, hence can be reused.
Table Name | Name of the table to be processed.
Query type | Either Built-in or Repository.
Built-in: Fill in manually the query statement or build it graphically using SQLBuilder.
Repository: Select the relevant query stored in the Repository. The Query field gets filled in accordingly.
Guess Query | Click the Guess Query button to generate the query which corresponds to your table schema in the Query field.
Query | Enter your DB query, paying particular attention to properly sequence the fields in order to match the schema definition (see the hypothetical example after this table).
Die on error | This check box is selected by default. Clear the check box to skip the row on error and complete the process for error-free rows.
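As a point of reference, the Query field holds a Java string expression, so it can concatenate context variables and must escape any double quotes inside the statement. A hypothetical example (the table, column and context variable names are placeholders, not part of this documentation):

"INSERT INTO sales_agg SELECT client_id, SUM(amount) FROM " + context.source_table + " GROUP BY client_id"

Guess Query would instead generate a SELECT listing every column of the defined schema for the given Table Name; the example above shows a statement written manually.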
Advanced settings
Propagate QUERY's recordset | Select this check box to insert the result of the query into a COLUMN of the current flow. Select this column from the use column list.
Note: This option allows the component to have a different schema from that of the preceding component. Moreover, the column that holds the QUERY's recordset should be set to the type of Object, and this component is usually followed by tParseRecordSet.
tStatCatcher Statistics | Select this check box to collect log data at the component level.
Global Variables
Global Variables | QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
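In a Job, these variables are read from the globalMap after the component has executed, typically in a tJava component. A minimal sketch, assuming the component is labeled tImpalaRow_1 (the label is an assumption):

// In a tJava placed after tImpalaRow_1 in the Job (the label is an assumption).
String lastQuery = (String) globalMap.get("tImpalaRow_1_QUERY");
String lastError = (String) globalMap.get("tImpalaRow_1_ERROR_MESSAGE");
System.out.println("Executed query: " + lastQuery);
if (lastError != null) {
    System.out.println("Error message: " + lastError);
}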
Usage
Usage rule | This component offers the benefit of flexible DB queries and covers all of the SQL queries possible.
Dynamic settings | Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job.
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.
For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Prerequisites | The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Related scenarios
For related topics, see: