tHiveRow
Acts on the actual DB structure or on the data (without handling the data itself), depending on the nature of the query and the database.
tHiveRow executes the HiveQL query stated in the specified database. The
row suffix means the component implements a flow in the Job design although it does not
provide output.
The SQLBuilder tool helps you write your HiveQL statements easily.
This component can also perform queries in an HBase database when the Store by HBase check box is available and you have selected it.
tHiveRow Standard properties
These properties are used to configure tHiveRow running in the Standard Job
framework.
The Standard
tHiveRow component belongs to the Big Data and the Databases families.
The component in this framework is available in all Talend
products.
Basic settings
- When you use this component with Qubole on AWS:
API Token
Click the … button next to the API Token field to enter the authentication token generated for the Qubole user account to be used. For further information about how to obtain this token, see Manage Qubole account from the Qubole documentation. This token allows you to specify the user account you want to use to access Qubole. Your Job automatically uses the rights and permissions granted to this user account in Qubole.
Cluster label
Select the Cluster label check box and enter the name of the Qubole cluster to be used. If you leave this check box clear, the default cluster is used. If you need details about your default cluster, ask the administrator of your Qubole service. You can also read this article from the Qubole documentation to find more information about configuring a default Qubole cluster.
Change API endpoint
Select the Change API endpoint check box and select the region to be used. If you leave this check box clear, the default region is used. For further information about the Qubole Endpoints supported on QDS-on-AWS, see Supported Qubole Endpoints on Different Cloud Providers.
- When you use this component with Google Dataproc:
Project identifier
Enter the ID of your Google Cloud Platform project. If you are not certain about your project ID, check it in the Manage Resources page of your Google Cloud Platform services.
Cluster identifier
Enter the ID of the Dataproc cluster to be used.
Region
From this drop-down list, select the Google Cloud region to be used.
Google Storage staging bucket
As a Talend Job expects its dependent jar files for execution, specify the Google Storage directory to which these jar files are transferred so that your Job can access these files at execution. The directory to be entered must end with a slash (/). If it does not exist, the directory is created on the fly, but the bucket to be used must already exist.
Database
Fill this field with the name of the database.
Provide Google Credentials in file
Leave this check box clear when you launch your Job from a given machine in which Google Cloud SDK has been installed and authorized to use your user account credentials to access Google Cloud Platform. In this situation, this machine is often your local machine. When you launch your Job from a remote machine, such as a Talend JobServer, select this check box and, in the Path to Google Credentials file field that is displayed, enter the directory in which this JSON file is stored on the JobServer machine. For further information about this Google Credentials file, see the administrator of your Google Cloud Platform or visit Google Cloud Platform Auth Guide.
- When you use this component with HDInsight:
WebHCat configuration
Enter the address and the authentication information of the Microsoft HDInsight cluster to be used. For example, the address could be your_hdinsight_cluster_name.azurehdinsight.net and the authentication information is your Azure account name: ychen. The Studio uses this service to submit the Job to the HDInsight cluster.
In the Job result folder field, enter the location in which you want to store the execution result of a Job in the Azure Storage to be used.
HDInsight configuration
- The Username is the one defined when creating your cluster. You can find it in the SSH + Cluster login blade of your cluster.
- The Password is defined when creating your HDInsight cluster for authentication to this cluster.
Windows Azure Storage configuration
Enter the address and the authentication information of the Azure Storage account to be used. In this configuration, you do not define where to read or write your business data but only where to deploy your Job. Therefore always use the Azure Storage system for this configuration.
In the Container field, enter the name of the container to be used. You can find the available containers in the Blob blade of the Azure Storage account to be used.
In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account.
In the Hostname field, enter the Primary Blob Service Endpoint of your Azure Storage account without the https:// part. You can find this endpoint in the Properties blade of this storage account.
In the Username field, enter the name of the Azure Storage account to be used.
In the Password field, enter the access key of the Azure Storage account to be used. This key can be found in the Access keys blade of this storage account.
Database
Fill this field with the name of the database.
- When you use the other distributions:
Connection mode
Select a connection mode from the list. The options vary depending on the distribution you are using.
Hive server
Select the Hive server through which you want the Job using this component to execute queries on Hive.
This Hive server list is available only when the Hadoop distribution to be used, such as HortonWorks Data Platform V1.2.0 (Bimota), supports HiveServer2. It allows you to select HiveServer2 (Hive 2), the server that better supports concurrent connections of multiple clients than HiveServer (Hive 1).
For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.
Host
Database server IP address.
Port
Listening port number of the DB server.
Database
Fill this field with the name of the database.
Note: This field is not available when you select Embedded from the Connection mode list.
Username and Password
DB user authentication data.
To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
Use kerberos authentication
If you are accessing a Hive Metastore running with Kerberos security, select this check box and then enter the relevant parameters in the fields that appear.
- If this cluster is a MapR cluster of version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.
Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job at each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear, and then MapR should be able to automatically find that ticket on the fly.
The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.
- Hive principal uses the value of hive.metastore.kerberos.principal. This is the service principal of the Hive Metastore.
- HiveServer2 local user principal uses the value of hive.server2.authentication.kerberos.principal.
- HiveServer2 local user keytab uses the value of hive.server2.authentication.kerberos.keytab.
- Metastore URL uses the value of javax.jdo.option.ConnectionURL. This is the JDBC connection string to the Hive Metastore.
- Driver class uses the value of javax.jdo.option.ConnectionDriverName. This is the name of the driver for the JDBC connection.
- Username uses the value of javax.jdo.option.ConnectionUserName. This, as well as the Password parameter, is the user credential for connecting to the Hive Metastore.
- Password uses the value of javax.jdo.option.ConnectionPassword.
For the other parameters that are displayed, please consult the Hadoop configuration files they belong to. For example, the Namenode principal can be found in the hdfs-site.xml file or the hdfs-default.xml file of the distribution you are using.
This check box is available depending on the Hadoop distribution you are connecting to.
Use a keytab to authenticate
Select the Use a keytab to authenticate
check box to log into a Kerberos-enabled system using a given keytab file. A keytab
file contains pairs of Kerberos principals and encrypted keys. You need to enter the
principal to be used in the Principal field and
the access path to the keytab file itself in the Keytab field. This keytab file must be stored in the machine in
which your Job actually runs, for example, on a Talend JobServer.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
Use SSL encryption
Select this check box to enable the SSL or TLS encrypted connection.
Then in the fields that are displayed, provide the authentication
information:
- In the Trust store path field, enter the path to, or browse to, the TrustStore file to be used. By default, the supported TrustStore types are JKS and PKCS 12.
- To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
This feature is available only to HiveServer2 in the Standalone mode of the following distributions:
- Hortonworks Data Platform 2.0 +
- Cloudera CDH4 +
- Pivotal HD 2.0 +
- Amazon EMR 4.0.0 +
Set Resource Manager
Select this check box and, in the displayed field, enter the location of the ResourceManager of your distribution. For example, tal-qa114.talend.lan:8050.
Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime, the configuration of this parameter in the Hadoop cluster to be used will be ignored):
-
Select the Set resourcemanager
scheduler address check box and enter the Scheduler address in
the field that appears. -
Select the Set jobhistory
address check box and enter the location of the JobHistory
server of the Hadoop cluster to be used. This allows the metrics information of
the current Job to be stored in that JobHistory server. -
Select the Set staging
directory check box and enter this directory defined in your
Hadoop cluster for temporary files created by running programs. Typically, this
directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files
such as yarn-site.xml or mapred-site.xml of your distribution. -
Allocate proper memory volumes to the Map and the Reduce
computations and the ApplicationMaster
of YARN by selecting the Set memory
check box in the Advanced settings
view. -
Select the Set Hadoop
user check box and enter the user name under which you
want to execute the Job. Since a file or a directory in Hadoop has its
specific owner with appropriate read or write rights, this field allows
you to execute the Job directly under the user name that has the
appropriate rights to access the file or directory to be processed. -
Select the Use datanode hostname check box to allow the
Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname
property to true. When connecting to an S3N filesystem, you must select this check box.
For further information about these parameters, see the documentation or
contact the administrator of the Hadoop cluster to be used.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache’s Hadoop documentation on http://hadoop.apache.org.
Set NameNode URI
Select this check box and in the displayed field, enter the URI of the
Hadoop NameNode, the master node of a Hadoop system. For example, assuming that you
have chosen a machine called masternode as the NameNode, then
the location is hdfs://masternode:portnumber. If you are using WebHDFS, the location should be
webhdfs://masternode:portnumber; WebHDFS with SSL is not
supported yet.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache’s Hadoop documentation on http://hadoop.apache.org.
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
Use an existing connection
Select this check box and in the Component List click the relevant connection component to reuse the connection details you already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to register the database connection to be shared in the Basic settings of the connection component that creates it in the parent Job, and then use a dedicated connection component in the child Job to read that registered connection.
For an example about how to share a database connection across Job levels, see the related documentation.
Distribution
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some require specific configuration.
Hive version
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using.
Execution engine
Select this check box and, from the drop-down list, select the framework you need to use to run the Job.
This list is available only when you are using the Embedded mode for the Hive connection and the distribution you are working with supports it.
Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You will need to configure the access to the relevant Tez libraries via the Advanced settings view of this component. For further information about Hive on Tez, see Apache’s related documentation in https://cwiki.apache.org/confluence/display/Hive/Hive+on+Tez. Some examples are presented there to show how Tez can be used to gain performance over MapReduce.
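For background only: outside the Studio, Hive selects its execution engine through the hive.execution.engine property. A minimal HiveQL sketch, assuming a hypothetical table my_table, of what the Tez option corresponds to on the Hive side:
SET hive.execution.engine=tez;
SELECT COUNT(*) FROM my_table;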
Schema and Edit Schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component. Click Edit schema to make changes to the schema.
Built-In: The schema is created and stored locally for this component only.
Repository: The schema already exists and is stored in the Repository; hence it can be reused.
Table Name
Name of the table to be processed.
Query type
Either Built-In or Repository.
Built-In: Fill in manually the query statement or build it graphically using SQLBuilder.
Repository: Select the relevant query stored in the Repository. The Query field gets filled in accordingly.
Guess Query
Click the Guess Query button to generate the query which corresponds to your table schema in the Query field.
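As an illustration, assuming a hypothetical table employees whose schema holds the columns id and name, the generated query typically takes a form such as:
SELECT employees.id, employees.name FROM employees;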
This query uses Parquet
When available, select this check box to indicate that the table to be handled uses the PARQUET format.
Note that when the file format to be used is PARQUET, you might be prompted to find the specific PARQUET jar file and install it into the Studio. This jar file can be downloaded from Apache’s site.
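For background, a Hive table stored in the PARQUET format is declared with the STORED AS PARQUET clause; a minimal HiveQL sketch with hypothetical table and column names:
CREATE TABLE IF NOT EXISTS sales_parquet (id INT, amount DOUBLE)
STORED AS PARQUET;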
Query
Enter your DB query, paying particular attention to properly sequence the fields in order to match the schema definition.
For further information about the Hive query language, see https://cwiki.apache.org/confluence/display/Hive/LanguageManual.
Note: Compressed data in the form of Gzip or Bzip2 can be processed through the query statements. For details, see https://cwiki.apache.org/confluence/display/Hive/CompressedStorage. Hadoop provides different compression formats that help reduce the space needed for storing files and speed up data transfer.
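A minimal HiveQL sketch of the kind of statements this field can hold (the table name, columns, and HDFS path are hypothetical):
CREATE TABLE IF NOT EXISTS employees (id INT, name STRING, salary DOUBLE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
LOAD DATA INPATH '/user/talend/input/employees.csv.gz' INTO TABLE employees;
Because the table is stored as plain text, Hive reads the Gzip-compressed file transparently, which is what the compression note above refers to.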
Die on error
This check box is selected by default. Clear the check box to skip the row on error and to complete the process for error-free rows.
Store by HBase
Select this check box to display the parameters to be set to allow the Hive components to access HBase tables.
For further information about this access involving Hive and HBase, see Apache’s Hive documentation about Hive/HBase integration.
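For background, the Hive/HBase access that this option relies on maps a Hive table onto an HBase table through the HBase storage handler; a minimal HiveQL sketch following Apache’s Hive/HBase integration pattern (table, column family, and column names are hypothetical):
CREATE TABLE hbase_backed (key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");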
Zookeeper quorum
Type in the name or the URL of the Zookeeper service you use to coordinate the transaction between the Studio and your database.
Zookeeper client port
Type in the number of the client listening port of the Zookeeper service you are using.
Define the jars to register for HBase
Select this check box to display the Register jar for HBase table, in which you can register any missing jar file required by HBase.
Register jar for HBase
Click the [+] button to add rows to this table, then, in the Jar name column, select the jar file(s) to be registered and, in the Jar path column, enter the path(s) pointing to that or those jar file(s).
Advanced settings
Tez lib
Select how the Tez libraries are accessed.
Temporary path
If you do not want to set the Jobtracker and the NameNode when you execute the query select * from your_table_name, you need to set this temporary path.
Propagate QUERY’s recordset
Select this check box to insert the result of the query into a column of the current flow, and select this column from the use column list.
Note: This option allows the component to have a different schema from that of the preceding component.
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones.
For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using or see Apache’s Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
Hive properties
Talend Studio uses a default configuration for its engine to perform operations in a Hive database. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones. For further information about Hive dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.
Mapred job map memory mb and Mapred job reduce memory mb
You can tune the map and reduce computations by selecting the Set memory check box. In that situation, enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively.
Path separator in server
Leave the default value of the Path separator in server as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable.
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Global Variables
Global Variables
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
Usage
Usage rule
This component offers the benefit of flexible DB queries.
tHiveRow can capture the Application_ID values and write them in the Job logs once you have activated Log4j and set the Log4j output level to Info for your Job involving tHiveRow.
If the Studio used to connect to a Hive database is operated on Windows, you must manually create a folder called tmp in the root of the disk where this Studio is installed.
Dynamic settings
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job.
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view.
For examples on using dynamic parameters, see Reading data from databases through context-based dynamic connections and Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction with the Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Connecting to a security-enabled MapR
When designing a Job, set up the authentication configuration in the component you are using depending on how your MapR cluster is secured.
MapR supports the following two methods of authenticating a user and
generating a MapR security ticket for this user: a username/password pair and
Kerberos.
For further information about the MapR security mechanism, see MapR security
architecture.
For a scenario about how to secure a MapR cluster, see Getting
started with MapR security.
-
When your MapR cluster is secured with Kerberos only, you only need
to set up the typical Hadoop Kerberos configuration for your Job in the
Studio. -
When your MapR cluster is secured with both the Kerberos mechanism
and the MapR ticket security mechanism, you need to accordingly set up the
configuration for both of them in your Job in the Studio.For details about how to configure the MapR ticket security
mechanism in the Studio, see Setting up the MapR ticket authentication. -
When your MapR cluster is secured with the MapR ticket security
mechanism only, proceed as explained in Setting up the MapR ticket authentication to
set up the MapR authentication configuration for your Job in the Studio.
For an example of how to configure Kerberos
authentication for a Talend Job, see How to use Kerberos in Talend Studio with Big Data.
Although this example uses Cloudera for demonstration, the operations it describes are
generic and thus applicable to MapR as well.
Setting up the MapR ticket authentication
-
The MapR distribution you are using is from version 4.0.1
onwards and you have selected it as the cluster to connect to in the
component to be configured. -
The MapR cluster has been properly installed and is
running. -
Ensure that you have installed the MapR client in the machine where the Studio is,
and added the MapR client library to the PATH variable of that machine. According
to MapR’s documentation, the library or libraries of a MapR client corresponding to
each OS version can be found under MAPR_INSTALL\hadoop\hadoop-VERSION\lib\native. For example, the library for Windows is lib\native\MapRClient.dll in the MapR client jar file. For further information, see the following link from MapR: http://www.mapr.com/blog/basic-notes-on-configuring-eclipse-as-a-hadoop-development-environment-for-mapr.
Without adding the specified library or libraries, you may encounter the following error: no MapRClient in java.library.path.
-
This section explains only the authentication parameters to be
used to connect to MapR. You still need to define the other parameters
required by your Job. For further information, see the documentation about each component you are using.
In a Standard Job, you need to set up this configuration in the Basic settings tab of a Hadoop-related component to be used by
your Job.
If you are designing a Spark Job, you need
to do this configuration in the Spark configuration tab of the
Job.
In the tab, you need to proceed as follows:
-
Select the Force MapR ticket authentication check box to
display the related parameters to be defined. -
In the Username field, enter the username to be authenticated
and in the Password field, specify the password
used by this user.To enter the password, click the […] button next to the
password field, and then in the pop-up dialog box enter the password between double quotes
and click OK to save the settings.
A MapR security ticket is generated for this user by MapR and stored in the machine where the
Job you are configuring is executed. -
If the Group field is available in this tab, you need to
enter the name of the group to which the user to be authenticated
belongs. -
In the Cluster name field, enter the name of the MapR cluster
you want to use this username to connect to. This cluster name can be found in the mapr-clusters.conf file located in /opt/mapr/conf of the cluster.
-
In the Ticket duration field, enter the length of time (in
seconds) during which the ticket is valid.
Setting the environment variable for a custom MapR ticket location (optional)
If the default MapR ticket location, /tmp/maprticket_<uid>,
has been changed, set the MAPR_TICKETFILE_LOCATION environment variable
accordingly in the machine in which your Job is executed.
As MapR does not provide any API to specify a MapR ticket, setting the environment
variable is the only way to use a custom MapR ticket location in your Job. For further
information about this issue, see this post from the MapR forum.
This procedure is necessary only when you are storing the MapR tickets in
a custom location. If you use the default MapR ticket location, skip this procedure.
Setting the environment variable for a custom MapR ticket location on Mac
(optional)
-
In the machine in which your Job is executed, add these lines to ~/.bashrc:
export MAPR_TICKETFILE_LOCATION=/Users/$USER/maprticket_$UID
launchctl setenv MAPR_TICKETFILE_LOCATION /Users/$USER/maprticket_$UID
-
Shut down your Studio if it is open. Each time you boot your Mac workstation, open a terminal session before starting the Studio.
Setting the environment variable for a custom MapR ticket location on other operating
systems (optional)
This procedure is necessary only when you are storing the MapR tickets in a custom location and you are not using Mac to run your Studio. If you use the default MapR ticket location, skip this procedure.
-
In the machine in which your Job is executed, run the following command in a
command-line terminal to set the MAPR_TICKETFILE_LOCATION variable in memory:
set MAPR_TICKETFILE_LOCATION=<your_custom_location>
-
Shut down your Studio if it is open and use the same terminal to restart your Studio.
If you use a Talend JobServer to run your Job, use
the same terminal to restart this JobServer. This way, your Job retrieves this custom location from memory.
Using a custom MapR security configuration in the mapr.login.conf
file (optional)
If the default security configuration of your MapR cluster has been
changed, you need to configure the Job to be executed to take this custom security
configuration into account.
MapR specifies its security configuration in the mapr.login.conf file located in /opt/mapr/conf of the
cluster. For further information about this configuration file and the Java service behind it, see mapr.login.conf and JAAS.
If no change has been made in the mapr.login.conf file, skip this procedure.
To configure your Job, you need to define the related
parameters in the Basic settings tab and the
Advanced settings tab of the Component view of the component you want your Job to use
to connect to MapR.
If you are using a MapReduce Job, you need to define the related parameters in the Hadoop configuration
tab of the Job.
If you are designing a Spark
Job, you need to do the related configuration in the Spark
configuration tab of the Job.
Proceed as follows to do the configuration:
-
Verify what has been changed about this mapr.login.conf
file. You should be able to obtain the related information from the administrator or the developer of your MapR cluster.
-
If the location of the MapR configuration files has been changed to somewhere else in the
cluster, that is to say, the MapR Home directory has been changed, select
the Set the MapR Home directory check box
and enter the new Home directory. Otherwise, leave this check box clear and
the default Home directory is used. -
If the login module to be used in the mapr.login.conf file
has been changed, select the Specify the Hadoop
login configuration check box and enter the module to be called
from the mapr.login.conf file. Otherwise,
leave this check box clear and the default login module is used.For example, enter kerberos to call the hadoop_kerberos module or hybrid to call the hadoop_hybrid module.
Related scenarios
For related topics, see:
You need to keep in mind the parameters required by Hadoop, such as NameNode and
Jobtracker, when configuring this component since the component needs to connect to a
Hadoop distribution.