tHiveConnection
Establishes a Hive connection to be reused by other Hive components in your
Job.
tHiveConnection opens a
connection to a Hive database.
tHiveConnection Standard properties
These properties are used to configure tHiveConnection running in the Standard Job framework.
The Standard
tHiveConnection component belongs to the Big Data, the Databases and the ELT families.
The component in this framework is available in all Talend
products.
Basic settings
- When you use this component with Qubole on AWS:

API Token
Click the … button next to the API Token field to enter the authentication token generated for the Qubole user account to be used. For further information about how to obtain this token, see Manage Qubole account from the Qubole documentation.
This token allows you to specify the user account you want to use to access Qubole. Your Job automatically uses the rights and permissions granted to this user account in Qubole.

Cluster label
Select the Cluster label check box and enter the name of the Qubole cluster to be used. If you leave this check box clear, the default cluster is used.
If you need details about your default cluster, ask the administrator of your Qubole service. You can also read this article from the Qubole documentation to find more information about configuring a default Qubole cluster.

Change API endpoint
Select the Change API endpoint check box and select the region to be used. If you leave this check box clear, the default region is used.
For further information about the Qubole endpoints supported on QDS-on-AWS, see Supported Qubole Endpoints on Different Cloud Providers.
- When you use this component with Google Dataproc:

Project identifier
Enter the ID of your Google Cloud Platform project.
If you are not certain about your project ID, check it in the Manage Resources page of your Google Cloud Platform services.

Cluster identifier
Enter the ID of the Dataproc cluster to be used.

Region
From this drop-down list, select the Google Cloud region to be used.

Google Storage staging bucket
As a Talend Job expects its dependent jar files for execution, specify the Google Storage directory to which these jar files are transferred so that your Job can access them at execution time.
The directory to be entered must end with a slash (/). If the directory does not exist, it is created on the fly, but the bucket to be used must already exist.

Database
Fill this field with the name of the database.

Provide Google Credentials in file
Leave this check box clear when you launch your Job from a given machine in which Google Cloud SDK has been installed and authorized to use your user account credentials to access Google Cloud Platform. In this situation, this machine is often your local machine.
When you launch your Job from a remote machine, such as a Jobserver, select this check box and, in the Path to Google Credentials file field that is displayed, enter the directory in which this JSON file is stored in the Jobserver machine.
For further information about this Google Credentials file, see the administrator of your Google Cloud Platform or visit Google Cloud Platform Auth Guide.
- When you use this component with HDInsight:

WebHCat configuration
Enter the address and the authentication information of the Microsoft HDInsight cluster to be used. For example, the address could be your_hdinsight_cluster_name.azurehdinsight.net and the authentication information is your Azure account name: ychen. The Studio uses this service to submit the Job to the HDInsight cluster.
In the Job result folder field, enter the location in which you want to store the execution result of a Job in the Azure Storage to be used.

HDInsight configuration
- The Username is the one defined when creating your cluster. You can find it in the SSH + Cluster login blade of your cluster.
- The Password is defined when creating your HDInsight cluster for authentication to this cluster.

Windows Azure Storage configuration
Enter the address and the authentication information of the Azure Storage account to be used. In this configuration, you do not define where to read or write your business data but only where to deploy your Job. Therefore, always use the Azure Storage system for this configuration.
In the Container field, enter the name of the container to be used. You can find the available containers in the Blob blade of the Azure Storage account to be used.
In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account.
In the Hostname field, enter the Primary Blob Service Endpoint of your Azure Storage account without the https:// part. You can find this endpoint in the Properties blade of this storage account.
In the Username field, enter the name of the Azure Storage account to be used.
In the Password field, enter the access key of the Azure Storage account to be used. This key can be found in the Access keys blade of this storage account.

Database
Fill this field with the name of the database.
- When you use the other distributions:

Connection mode
Select a connection mode from the list. The options vary depending on the distribution you are using.

Hive server
Select the Hive server through which you want the Job using this component to execute queries on Hive.
This Hive server list is available only when the Hadoop distribution to be used, such as HortonWorks Data Platform V1.2.0 (Bimota), supports HiveServer2. It allows you to select HiveServer2 (Hive 2), the server that better supports concurrent connections of multiple clients than HiveServer (Hive 1).
For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.

Host
Database server IP address.

Port
Listening port number of the DB server.

Database
Fill this field with the name of the database.
Note: This field is not available when you select Embedded from the Connection mode list.

Username and Password
DB user authentication data.
To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.

Use Kerberos authentication
If you are accessing a Hive Metastore running with Kerberos security, select this check box and then enter the relevant parameters in the fields that appear.
If this cluster is a MapR cluster of the version 5.0.0 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.
Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job at each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear; MapR should then be able to automatically find that ticket on the fly.
The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.
- Hive principal uses the value of hive.metastore.kerberos.principal. This is the service principal of the Hive Metastore.
- HiveServer2 local user principal uses the value of hive.server2.authentication.kerberos.principal.
- HiveServer2 local user keytab uses the value of hive.server2.authentication.kerberos.keytab.
- Metastore URL uses the value of javax.jdo.option.ConnectionURL. This is the JDBC connection string to the Hive Metastore.
- Driver class uses the value of javax.jdo.option.ConnectionDriverName. This is the name of the driver for the JDBC connection.
- Username uses the value of javax.jdo.option.ConnectionUserName. This, as well as the Password parameter, is the user credential for connecting to the Hive Metastore.
- Password uses the value of javax.jdo.option.ConnectionPassword.
For the other parameters that are displayed, consult the Hadoop configuration files they belong to. For example, the Namenode principal can be found in the hdfs-site.xml file or the hdfs-default.xml file of the distribution you are using.
This check box is available depending on the Hadoop distribution you are connecting to.

Use a keytab to authenticate
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored in the machine in which your Job actually runs, for example, on a Talend Jobserver.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
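As an illustration, the following minimal Java sketch shows this kind of keytab login performed with the standard Hadoop UserGroupInformation API; the principal and keytab path are placeholder values, not values taken from this documentation:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Tell the Hadoop client that the cluster is Kerberos-secured.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Equivalent of the Principal and Keytab fields; the OS user running
        // this code only needs read access to the keytab file, it does not
        // have to match the principal.
        UserGroupInformation.loginUserFromKeytab(
                "guest@EXAMPLE.COM",                    // hypothetical principal
                "/etc/security/keytabs/guest.keytab");  // hypothetical keytab path
        System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
    }
}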
Use SSL encryption
Select this check box to enable the SSL or TLS encrypted connection. Then in the fields that are displayed, provide the authentication information:
- In the Trust store path field, enter the path, or browse to the TrustStore file to be used. By default, the supported TrustStore types are JKS and PKCS 12.
- To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
This feature is available only to the HiveServer2 in the Standalone mode of the following distributions:
- Hortonworks Data Platform 2.0 +
- Cloudera CDH4 +
- Pivotal HD 2.0 +
- Amazon EMR 4.0.0 +
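As a hedged illustration only, the same TrustStore settings map onto standard HiveServer2 JDBC URL parameters; the host, port, TrustStore path and credentials below are placeholders, and the Hive JDBC driver must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HiveSslConnectionSketch {
    public static void main(String[] args) throws SQLException {
        // ssl, sslTrustStore and trustStorePassword are standard HiveServer2
        // JDBC URL parameters; all concrete values here are placeholders.
        String url = "jdbc:hive2://hive-host:10000/default"
                + ";ssl=true"
                + ";sslTrustStore=/opt/security/truststore.jks" // JKS or PKCS 12
                + ";trustStorePassword=changeit";
        try (Connection conn = DriverManager.getConnection(url, "hiveuser", "hivepass")) {
            System.out.println("SSL connection open: " + !conn.isClosed());
        }
    }
}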
Set Resource Manager
Select this check box and in the displayed field, enter the location of the ResourceManager of your distribution. For example, tal-qa114.talend.lan:8050.
Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime, the configuration of this parameter in the Hadoop cluster to be used is ignored):
- Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.
- Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.
- Select the Set staging directory check box and enter this directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.
- Allocate proper memory volumes to the Map and the Reduce computations and the ApplicationMaster of YARN by selecting the Set memory check box in the Advanced settings view.
- Select the Set Hadoop user check box and enter the user name under which you want to execute the Job. Since a file or a directory in Hadoop has its specific owner with appropriate read or write rights, this field allows you to execute the Job directly under the user name that has the appropriate rights to access the file or directory to be processed.
- Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to a S3N filesystem, you must select this check box.
For further information about these parameters, see the documentation or contact the administrator of the Hadoop cluster to be used.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.

Set NameNode URI
Select this check box and in the displayed field, enter the URI of the Hadoop NameNode, the master node of a Hadoop system. For example, assuming that you have chosen a machine called masternode as the NameNode, the location is hdfs://masternode:portnumber. If you are using WebHDFS, the location should be webhdfs://masternode:portnumber; WebHDFS with SSL is not supported yet.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.
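For orientation only, here is a minimal sketch of the standard Hadoop client properties that the NameNode and ResourceManager fields above correspond to; the host names and ports are placeholders for your own cluster:

import org.apache.hadoop.conf.Configuration;

public class ClusterEndpointsSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Set NameNode URI:
        conf.set("fs.defaultFS", "hdfs://masternode:8020");
        // Set Resource Manager and its scheduler address:
        conf.set("yarn.resourcemanager.address", "tal-qa114.talend.lan:8050");
        conf.set("yarn.resourcemanager.scheduler.address", "tal-qa114.talend.lan:8030");
        // Set jobhistory address:
        conf.set("mapreduce.jobhistory.address", "tal-qa114.talend.lan:10020");
        // Set staging directory:
        conf.set("yarn.app.mapreduce.am.staging-dir", "/user");
        // Use datanode hostname (required for S3N-style access):
        conf.setBoolean("dfs.client.use.datanode.hostname", true);
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
    }
}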
Property type
Either Built-In or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
Distribution
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some, such as the Custom option used to connect to a distribution not officially supported in the Studio, require specific configuration.
Hive version
Select the version of the Hadoop distribution you are using. The available options vary depending on the distribution you selected in the Distribution list.
Inspect the classpath for configurations
Select this check box to allow the component to check the configuration files in the directory you have set with the $HADOOP_CONF_DIR variable and to read parameters directly from these files. In this situation, the fields or options used to configure the Hadoop connection are hidden.
If you want to use certain parameters such as the Kerberos parameters but these parameters are not included in these Hadoop configuration files, you need to create a file called talend-site.xml and put this file into the same directory defined with $HADOOP_CONF_DIR. This talend-site.xml file should read as follows:
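The file uses the standard Hadoop site-file format; the property names and values below are an illustrative sketch for the Kerberos case, not an exhaustive reference:

<!-- talend-site.xml: properties not found in the Hadoop configuration files -->
<configuration>
    <property>
        <name>talend.kerberos.authentication</name>
        <value>kinit</value>
        <description>Set the Kerberos authentication mode to use kinit or keytab</description>
    </property>
    <property>
        <name>talend.kerberos.keytab.principal</name>
        <value>user@BIGDATA.COM</value>
        <description>Set the keytab's principal name</description>
    </property>
    <property>
        <name>talend.kerberos.keytab.path</name>
        <value>/kdc/user.keytab</value>
        <description>Set the keytab's path</description>
    </property>
</configuration>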
The parameters read from these configuration files override the default ones used by the Studio. Note that this option is available only in certain Hive connection modes.
Use or register a shared DB Connection
Select this check box to share your database connection or fetch a database connection shared by a parent or child Job, so that several database connection components from different Job levels can use one single connection.
This option is incompatible with the Use dynamic job and Use an independent process to run subJob options of the tRunJob component.
Execution engine
Select this check box and, from the drop-down list, select the framework you need to use to run the Job.
This list is available only when you are using the Embedded mode for the Hive connection and the distribution you are working with supports Tez.
Before using Tez, ensure that the Hadoop cluster you are using supports Tez. You will need to configure the access to the relevant Tez libraries via the Advanced settings view of this component.
For further information about Hive on Tez, see Apache's related documentation in https://cwiki.apache.org/confluence/display/Hive/Hive+on+Tez. Some examples are presented there to show how Tez can be used to gain performance over MapReduce.
Store by HBase
Select this check box to display the parameters to be set to allow the Hive components to access HBase tables. These parameters are described in the following rows.
For further information about this access involving Hive and HBase, see Apache's Hive documentation about Hive/HBase integration.
Zookeeper quorum
Type in the name or the URL of the Zookeeper service you use to coordinate the transactions between the Studio and your database.
Zookeeper client port
Type in the number of the client listening port of the Zookeeper service you are using.
Define the jars to register for HBase
Select this check box to display the Register jar for HBase table, in which you can register any jar file missing from the HBase configuration.
Register jar for HBase
Click the [+] button to add rows to this table, then, in the Jar name column, select the jar file(s) to be registered and, in the Jar path column, enter the path(s) pointing to the jar file(s).
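For reference, a minimal sketch of the standard HBase client properties that the Zookeeper quorum and Zookeeper client port fields above typically correspond to; the quorum hosts and port are placeholders:

import org.apache.hadoop.conf.Configuration;

public class HBaseZookeeperSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Zookeeper quorum: comma-separated hosts coordinating HBase access.
        conf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2,zk-host3");
        // Zookeeper client port: the listening port of the Zookeeper service.
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        System.out.println("Quorum: " + conf.get("hbase.zookeeper.quorum"));
    }
}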
Advanced settings
Tez lib
Select how the Tez libraries are accessed.
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones.
For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
Hive properties
Talend Studio uses a default configuration for its engine to perform operations in a Hive database. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones. For further information about Hive dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.
Mapred job map memory mb and Mapred job reduce memory mb
You can tune the map and reduce computations by selecting the Set memory check box. In that situation, you need to enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively.
Path separator in server
Leave the default value of the Path separator in server as it is, unless you have changed the separator used by your Hadoop distribution's host machine for its PATH variable.
tStatCatcher Statistics
Select this check box to collect the log data at a component level.
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component, while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
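As an illustration, inside a tJava component you would typically read this After variable from the Job's globalMap; the component name tHiveConnection_1 below is a hypothetical example:

// Code for a tJava component placed after tHiveConnection_1 in the Job;
// globalMap is provided by the Talend Job runtime.
String error = (String) globalMap.get("tHiveConnection_1_ERROR_MESSAGE");
if (error != null) {
    System.err.println("Hive connection failed: " + error);
}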
Usage
Usage rule
This component is generally used with other Hive components, particularly tHiveClose.
If the Studio used to connect to a Hive database is operated on Windows, you must manually create a folder called tmp at the root of the disk where the Studio is installed.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction with the Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the distribution you are using.
Connecting to a custom Hadoop distribution
As explained in the properties table, when you select the Custom option from the Distribution
drop-down list, you are connecting to a Hadoop distribution different from any of the
Hadoop distributions provided on that Distribution list
in the Studio.
After selecting this Custom option, click the […] button to display the Import custom definition dialog box and proceed as follows:
- Depending on your situation, select Import from existing version or Import from zip to configure the custom Hadoop distribution to be connected to.
- If you have the zip file of the custom Hadoop distribution you need to connect to, select Import from zip. The Talend community provides this kind of zip file that you can download from http://www.talendforge.org/exchange/index.php.
- Otherwise, select Import from existing version to import an officially supported Hadoop distribution as base so as to customize it by following the wizard.
Note that the check boxes in the wizard allow you to select the Hadoop element(s) you need to import. Not all the check boxes are always displayed in your wizard, depending on the context in which you are creating the connection. For example, if you are creating this connection for a Hive component, then only the Hive check box appears.
- Whether you have selected Import from existing version or Import from zip, verify that each check box next to the Hadoop element you need to import has been selected.
- Click OK and then, in the pop-up warning, click Yes to accept overwriting any custom setup of jar files previously implemented. Once done, the Custom Hadoop version definition dialog box becomes active. This dialog box lists the Hadoop elements and their jar files you are importing.
- If you have selected Import from zip, click OK to validate the imported configuration. If you have selected Import from existing version as base, you may still need to add more jar files to customize that version. Then, from the tab of the Hadoop element you need to customize, for example, the HDFS/HCatalog tab, click the [+] button to open the Select libraries dialog box.
- Select the External libraries option to open its view.
- Browse to and select any jar file you need to import.
- Click OK to validate the changes and to close the Select libraries dialog box. Once done, the selected jar file appears on the list in the tab of the Hadoop element being configured.
Note that if you need to share the custom Hadoop setup with another Studio, you can export this custom connection from the Custom Hadoop version definition window using the […] button.
-
In the Custom Hadoop version definition
dialog box, click OK to validate the customized
configuration. This brings you back to the Distribution list in the Basic
settings view of the component.
Now that the configuration of the custom Hadoop version has been set up and you are
back to the Distribution list, you are able to continue
to enter other parameters required by the connection.
If the custom Hadoop version you need to connect to contains YARN and you want to use
it, select the Use YARN check box next to the Distribution list.
A video is available at the following link to demonstrate, by taking HDFS as an example, how to set up the connection to a custom Hadoop cluster, also referred to as an unsupported Hadoop distribution: How to add an unsupported Hadoop distribution to the Studio.
Creating a partitioned Hive table
This scenario illustrates how to use tHiveConnection,
tHiveCreateTable and tHiveLoad to create a partitioned Hive table and write data in
it.
Note that tHiveCreateTable and tHiveLoad are available only when you are using one of the
Talend
solutions with Big Data.
The sample data used in this scenario reads as follows:

1;Lyndon;Fillmore;21-05-2008;US
2;Ronald;McKinley;15-08-2008
3;Ulysses;Roosevelt;05-10-2008
4;Harry;Harrison;23-11-2007
5;Lyndon;Garfield;19-07-2007
6;James;Quincy;15-07-2008
7;Chester;Jackson;26-02-2008
8;Dwight;McKinley;16-07-2008
9;Jimmy;Johnson;23-12-2007
10;Herbert;Fillmore;03-04-2008
The information contains some employees’ names and the dates when they were registered in an HR system. Since these employees work for the US subsidiary of the company, you will create a US partition for this sample data.
Before starting to replicate this scenario, ensure that you have appropriate rights
and permissions to access the Hive database to be used.
Note that if you are using the Windows operating system, you have to create a
tmp folder at the root of the disk where the
Studio is installed.
Then proceed as follows:
Linking the components
-
In the
Integration
perspective
of the Studio, create an empty Job from the Job
Designs node in the Repository tree view. For further information about how to create a Job, see the chapter describing how to design a Job in the Talend Studio User Guide.
- Drop tHiveConnection, tHiveCreateTable and tHiveLoad onto the workspace.
-
Connect them using the Trigger > On Subjob
OK link.
Configuring the connection to Hive
-
Double-click tHiveConnection to open its
Component view. -
From the Property type list, select
Built-in. If you have created the
connection to be used in Repository, then
select Repository, click the […] button to open the Repository content dialog box and select that connection. This way, the Studio will reuse that set of connection information for this Job. For further information about how to create a Hadoop connection in Repository, see the chapter describing the Hadoop cluster node of the Talend Open Studio for Big Data Getting Started Guide.
-
In the Version area, select the Hadoop
distribution to be used and its version. If you cannot find from the list
the distribution corresponding to yours, select Custom so as to connect to a Hadoop distribution not
officially supported in the Studio. For a step-by-step example about how to use this Custom option, see Connecting to a custom Hadoop distribution.
-
In the Connection area, enter the
connection parameters to the Hive database to be used. -
In the Name node field,
enter the location of the master node, the NameNode, of the distribution to be
used. For example, talend-hdp-all:50300. If you are using WebHDFS, the location should be
webhdfs://masternode:portnumber; WebHDFS with SSL is not
supported yet.
-
In the Job tracker field, enter the location of the JobTracker of your distribution. For example, hdfs://talend-hdp-all:8020.
Note that the notion Job in the term JobTracker designates the MR or MapReduce jobs described in Apache's documentation on http://hadoop.apache.org/.
Creating the Hive table
Defining the schema
-
Double-click tHiveCreateTable to open its
Component view. -
Select the Use an existing connection check box and, from the Component list, select the connection configured in the tHiveConnection component you are using for this Job.
-
Click the […] button next to Edit schema to open the schema editor.
- Click the [+] button four times to add four rows and, in the Column column, rename them to Id, FirstName, LastName and Reg_date, respectively.
Note that you cannot use Hive reserved keywords to name the columns, such as location or date.
-
In the Type column, select the type of
the data in each column. In this scenario, Id is of the Integer type,
Reg_date is of the Date type and the others are of the String type. -
In the DB type column, select the Hive type of each column corresponding to the data types you have defined. For example, Id is of INT and Reg_date is of TIMESTAMP.
-
In the Data pattern column, define the
pattern corresponding to that of the raw data. In this example, use the
default one. -
Click OK to validate these
changes.
Defining the table settings
-
In the Table name field, enter the name of the Hive table to be created. In this scenario, it is employees.
-
From the Action on table list, select
Create table if not exists. -
From the Format list, select the data format this Hive table is created for. In this scenario, it is TEXTFILE.
- Select the Set partitions check box to add the US partition as explained at the beginning of this scenario. To define this partition, click the […] button next to the Edit schema that appears.
-
Leave the Set file location check box clear to use the default path for the Hive table.
-
Select the Set Delimited row format check
box to display the available options of row format. -
Select the Field check box and enter a
semicolon (;) as field separator in the
field that appears. -
Select the Line check box and leave the
default value as line separator.
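To make the outcome concrete, here is a hedged sketch of the kind of HiveQL statement this configuration corresponds to, issued here through the standard Hive JDBC driver; the connection URL and credentials are placeholders, and the exact statement tHiveCreateTable generates may differ:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CreatePartitionedTableSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:hive2://hive-host:10000/talend"; // placeholder URL
        try (Connection conn = DriverManager.getConnection(url, "hiveuser", "hivepass");
             Statement stmt = conn.createStatement()) {
            // Schema, partition, row format and file format mirror the settings above.
            stmt.execute("CREATE TABLE IF NOT EXISTS employees ("
                    + " id INT, firstname STRING, lastname STRING, reg_date TIMESTAMP)"
                    + " PARTITIONED BY (country STRING)"
                    + " ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'"
                    + " STORED AS TEXTFILE");
        }
    }
}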
Writing data to the table
-
Double-click tHiveLoad to open its
Component view. -
Select the Use an existing connection check box and, from the Component list, select the connection configured in the tHiveConnection component you are using for this Job.
-
From the Load action field, select
LOAD to write data from the file
holding the sample data that is presented at the beginning of this
scenario. -
In the File path field, enter the
directory where the sample data is stored. In this example, the data is
stored in the HDFS system to be used. In real-world practice, you can use tHDFSOutput to write data into the HDFS system, and you need to ensure that the Hive application has the appropriate rights and permissions to read or even move the data. For further information about tHDFSOutput, see tHDFSOutput. For further information about the related rights and permissions, see the documentation or contact the administrator of the Hadoop cluster to be used.
Note that if you need to read data from a local file system other than the HDFS system, ensure that the data to be read is stored in the local file system of the machine in which the Job is run and then select the Local check box in this Basic settings view. For example, when the connection mode
to Hive is Standalone, the Job is run in
the machine where the Hive application is installed and thus the data should
be stored in that machine. -
In the Table name field, enter the name
of the target table you need to load data in. In this scenario, it is
employees. -
From the Action on file list, select
APPEND. -
Select the Set partitions check box and
in the field that appears, enter the partition you need to add data to. In
this scenario, this partition is country=’US’.
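Again for illustration only, a hedged sketch of the kind of HiveQL statement that this tHiveLoad configuration corresponds to; the file path, URL and credentials are hypothetical placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class LoadPartitionSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:hive2://hive-host:10000/talend"; // placeholder URL
        try (Connection conn = DriverManager.getConnection(url, "hiveuser", "hivepass");
             Statement stmt = conn.createStatement()) {
            // LOAD ... INTO TABLE appends the file content to the US partition,
            // matching the APPEND action and the country='US' partition above.
            stmt.execute("LOAD DATA INPATH '/user/ychen/employees.csv'"
                    + " INTO TABLE employees PARTITION (country='US')");
        }
    }
}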
Executing the Job
Then you can press F6 to run this Job.
Once done, the Run view opens automatically, where you can check the execution process.
You can also verify the results in the web console of the Hadoop distribution used.
If you need to obtain more details about the Job, it is recommended to use the web console of the JobTracker provided by the Hadoop distribution you are using.
Creating a JDBC Connection to Azure HDInsight Hive
This scenario illustrates how to use tHiveConnection, tHiveInput and
tHiveClose to create a JDBC Connection to
HDInsight Hive.
Prerequisites
Before starting to replicate this scenario, ensure that you have appropriate rights and permissions to access the Hive database to be used.
Configuring a database connection to Hive on HDInsight
- In the Repository view, expand the Metadata node.
- Right-click Db Connections, and then click Create Connection.
- Give a name to your connection.
- Click Next.
- Set up the connection configuration as shown in the following table:

DB Type
Select Hive.

Hadoop Cluster
Select None.

Distribution
Select Hortonworks.
HDInsight leverages the Hortonworks distribution on the backend, which allows you to use the Hortonworks libraries to connect to HDInsight.

Version
Select Hortonworks Data Platform V2.6.0.3-8 [Built in].

Hive Model
Select Standalone.

Login, Password, Server
Fill in the fields as required.

Port
Input 443.
You will be able to communicate through the proxy port, since the HDInsight cluster sits behind a proxy by default.

DataBase
Leave the default.

Additional JDBC Setting
Input transportMode=http;ssl=true;httpPath=/hive2, where:
- transportMode=http sets the transport mode to HTTP instead of the default Hive JDBC transport mode.
- ssl=true enables SSL.
- httpPath=/hive2 sets the HTTP endpoint.
The sketch after this procedure shows the full JDBC URL these settings translate to.
- Click Test Connection to ensure that the Talend Studio connects successfully to the cluster.
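For reference, a hedged sketch of the Hive JDBC URL that the above settings assemble, using the standard Hive JDBC driver; the cluster name and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class HdInsightHiveJdbcSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:hive2://your_cluster.azurehdinsight.net:443/default"
                + ";transportMode=http" // HTTP transport instead of the default binary mode
                + ";ssl=true"           // SSL is required behind the HDInsight gateway
                + ";httpPath=/hive2";   // the HTTP endpoint set above
        try (Connection conn = DriverManager.getConnection(url, "admin", "your_password")) {
            System.out.println("Connected to HDInsight Hive: " + !conn.isClosed());
        }
    }
}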
Building the Job
- From the Repository view of the Talend Studio, right-click Job Designs, and then click Create Standard Job.
- Give a name to your Job.
-
Click Finish.
-
Add a tPreJob component
to your workspace. -
Add a tHiveConnection
component to your workspace. -
Double-click the tHiveConnection component, choose Repository as the Property Type, and select the database connection created above.
-
Right-click the tPreJob
component. -
Select Trigger > On Component Ok and connect the tPreJob to the tHiveConnection.
-
Add a tHiveInput
component to your workspace. -
Select it and check the box Use an
existing connection, then select the tHiveConnection component in the Component List drop-down menu. -
In the Query field,
input show tables to run a query
displaying the available tables in the database. -
Add a tLogRow component
to your workspace. - Right-click the tHiveInput component and select Row > Main.
-
Click the tLogRow component to connect both components; the tLogRow component will display the information returned by the query above.
-
From the Component tab of the tLogRow, select Table (print values in cells of a table).
-
Add a tPostJob component
to your workspace. -
Add a tHiveClose
component to your workspace. -
Connect the tPostJob component to the tHiveClose component using an On Component Ok connection, to close the connection opened by tHiveConnection.
-
From the Run tab, click Run to run the Job and verify that the connection to Hive on HDInsight succeeds and that the table data can be read.