tImpalaLoad
Writes data of different formats into a given Impala table or exports data from
an Impala table to a directory.
tImpalaLoad connects to a given Impala database and
copies or moves data into an existing Impala table or a directory you specify.
tImpalaLoad Standard properties
These properties are used to configure tImpalaLoad running in the Standard Job framework.
The Standard
tImpalaLoad component belongs to the Big Data family.
The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.
Basic settings
Property type |
Either Built-in or Repository.
Built-in: No property data stored centrally.
Repository: Select the repository file where the properties are stored. |
Use an existing connection |
Select this check box and in the Component List drop-down list, select the relevant connection component to reuse the connection details you already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:
1. In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.
2. In the child level, use a dedicated connection component to read that registered database connection.
For an example about how to share a database connection across Job levels, see Talend Studio User Guide. |
Distribution |
Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using. Among these options, some require distribution-specific configuration. |
Impala version |
Select the version of the Hadoop distribution you are using. The available options vary depending on the distribution you have selected from the Distribution list. |
Host |
Database server IP address. |
Port |
Listening port number of DB server. |
Database |
Fill this field with the name of the database. |
Username |
DB user authentication data. |
Use kerberos authentication |
If you are accessing an Impala system running with Kerberos security,
select this check box and then enter the Kerberos principal of this Impala system.
This check box is available depending on the Hadoop distribution you are connecting to. |
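The exact principal value depends on how Kerberos is set up for your cluster; a typical value follows a pattern such as impala/_HOST@EXAMPLE.COM, where EXAMPLE.COM stands for your own Kerberos realm (both names here are placeholders, not values from the product).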
Load action |
Select the action you need to carry out for writing data into the specified destination: LOAD to copy or move a file into a table, or INSERT to write the result of a query into a table or a directory. |
Target type |
This drop-down list appears only when you have selected INSERT from the Load action list. Select from this list the type of the location you need to write data in: a table or a directory. |
Action |
Select whether you want to OVERWRITE the old data already existing in the destination or APPEND the incoming data to it. |
Table name |
Enter the name of the Impala table you need to write data in. Note that with the INSERT action, this table must already exist. |
File path |
Enter the directory you need to read data from. |
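As an illustration of how these settings combine, selecting the LOAD action together with OVERWRITE, a table name, and a file path is expected to produce an Impala statement of the following shape. The table name sales and the path /user/talend/incoming/sales are hypothetical; this is a sketch of the generated statement, not output copied from the component:

LOAD DATA INPATH '/user/talend/incoming/sales'
OVERWRITE INTO TABLE sales;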
Query |
This field appears when you have selected INSERT from the Load action list. Enter the appropriate query for selecting the data to be exported to the specified destination. |
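For example, assuming hypothetical raw_sales and sales_summary tables, you could enter a query such as the following in this field:

SELECT region, SUM(amount) AS total
FROM raw_sales
GROUP BY region

With TABLE selected as the Target type and OVERWRITE as the Action, the component is then expected to wrap this query into a statement along the lines of INSERT OVERWRITE TABLE sales_summary SELECT ... (a sketch under these assumptions, not output taken from the product).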
Set partitions |
Select this check box to use the Impala PARTITION clause when writing data into a partitioned table, then enter the partition keys and values to be used in the field that appears. For example, enter country='US', state='CA'. Also, it is recommended to select the Create partition if not exist check box that appears, to prevent the Job from failing when the partition does not exist yet. |
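For instance, combining the LOAD action with the partition specification country='US', state='CA' on a hypothetical sales table should yield a statement along these lines (valid Impala partition syntax, shown here as a sketch):

LOAD DATA INPATH '/user/talend/us_ca_sales'
INTO TABLE sales
PARTITION (country='US', state='CA');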
Die on error |
Select this check box to kill the Job when an error occurs. |
Advanced settings
tStatCatcher Statistics |
Select this check box to collect log data at the component level. |
Global Variables
Global Variables |
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string. This variable functions only if the Die on error check box is cleared, if the component has this check box.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide. |
Usage
Usage rule |
This component works standalone. |
Dynamic settings |
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job. The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view; once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable. For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide. |
Prerequisites |
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using. |
Related scenario
This component is used in a similar way to tHiveLoad. For further information, see Creating a partitioned Hive table.