tImpalaInput
Executes a SELECT query to extract the corresponding data and sends the data to the component that follows.
tImpalaInput is the component dedicated to the Impala database (the Impala data warehouse system). It executes the given Impala SQL query in order to extract the data of interest from Impala. It provides the SQLBuilder tool to help you write your Impala SQL statements easily.
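For orientation, the extraction that tImpalaInput performs is conceptually similar to the following standalone JDBC read. This is a minimal sketch, not the code the Studio generates: the Hive2-protocol connection URL, port 21050, and the table and column names are assumptions for illustration only.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class ImpalaReadSketch {
      public static void main(String[] args) throws Exception {
          // Impala commonly accepts Hive2-protocol JDBC connections on
          // port 21050; host, database and auth settings are placeholders.
          String url = "jdbc:hive2://impala-host.example.com:21050/default;auth=noSasl";
          try (Connection conn = DriverManager.getConnection(url);
               Statement stmt = conn.createStatement();
               // Hypothetical query; in a Job, this is what you would put
               // in the component's Query field.
               ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
              while (rs.next()) {
                  // In a Job, each row is passed to the following component;
                  // here we simply print it.
                  System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
              }
          }
      }
  }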
tImpalaInput Standard properties
These properties are used to configure tImpalaInput running in the Standard Job framework.
The Standard tImpalaInput component belongs to the Big Data family.
The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.
Basic settings
Property type |
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file in which the properties are stored. The fields that follow are completed automatically using the data retrieved. |
Use an existing connection |
Select this check box and in the Component List, select the relevant connection component to reuse the connection details you already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:
1. In the parent level, register the database connection to be shared in the Basic settings of the connection component which creates that very database connection.
2. In the child level, use a dedicated connection component to read that registered database connection.
For an example about how to share a database connection across Job levels, see the Talend Studio documentation. |
Distribution |
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some require specific configuration. |
Impala version |
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using. |
Host |
Database server IP address. |
Port |
Listening port number of DB server. |
Database |
Fill this field with the name of the database. |
Username |
DB user authentication data. |
Use kerberos authentication |
If you are accessing an Impala system running with Kerberos security, select this check box and then enter the Kerberos principal of this Impala system.
This check box is available depending on the Hadoop distribution you are using. |
Schema and Edit schema |
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: The schema is created and stored locally for this component only.
Repository: The schema already exists and is stored in the Repository; hence it can be reused in various projects and Job designs. |
Table Name |
Name of the table to be processed. |
Query type |
Either Built-In or Repository.
Built-In: Fill in manually the query statement or build it graphically using SQLBuilder.
Repository: Select the relevant query stored in the Repository. The Query field gets accordingly filled in. |
Guess Query |
Click the Guess Query button to generate the query which corresponds to your table schema in the Query field. |
Guess schema |
Click this button to retrieve the schema from the table. |
Query |
Enter your DB query, paying particular attention to properly sequence the fields in order to match the schema definition (see the example after this table). |
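As an illustration of the Query field described above: the field holds a Java string, and the order of the selected columns must match the order of the columns in the schema. A hypothetical example, assuming a three-column schema (id, name, city) defined on a customers table:

  // Content of the Query field (a Java string expression in the Studio).
  // The SELECT list order matches the schema definition: id, name, city.
  "SELECT id, name, city FROM customers WHERE city = 'Boston'"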
Advanced settings
Trim all the String/Char columns |
Select this check box to remove leading and trailing whitespace from all the String/Char columns. |
Trim column |
Remove leading and trailing whitespace from the defined columns.
Note: Clear the Trim all the String/Char columns check box to enable this table. |
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
Global Variables
Global Variables |
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide. |
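For example, the ERROR_MESSAGE variable can be read from a subsequent component such as tJava through the globalMap provided by the generated Job code. A minimal sketch, assuming the component is named tImpalaInput_1 (the actual name depends on your Job):

  // Code of a tJava component placed after tImpalaInput_1, for example
  // connected with an OnComponentOk or OnComponentError trigger.
  String err = (String) globalMap.get("tImpalaInput_1_ERROR_MESSAGE");
  if (err != null) {
      System.err.println("tImpalaInput reported: " + err);
  }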
Usage
Usage rule |
This component offers the benefit of flexible DB queries and covers all possible Impala SQL queries. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job (see the sketch after this table).
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.
For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see the Talend Studio User Guide. |
Prerequisites |
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using. |
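As mentioned in the Dynamic settings row above, the Code field takes a Java expression, typically a context variable, that resolves at runtime to the name of the connection component to reuse. A sketch, assuming a hypothetical context variable named impalaConnection and two connection components defined in the Job:

  // Value placed in the Code column of the Dynamic settings table.
  // At runtime, context.impalaConnection holds the name of the connection
  // component to reuse, e.g. "tImpalaConnection_1" or "tImpalaConnection_2",
  // typically supplied per environment through context groups.
  context.impalaConnection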
Related scenarios
For a scenario about how an input component is used in a Job, see Writing columns from a MySQL database to an output file using tMysqlInput.