tHiveCreateTable
Creates Hive tables that fit a wide range of Hive data
formats.
A proper Hive data format, such as RCFile or ORC, helps you obtain better
performance when processing data with Hive.
tHiveCreateTable connects to the Hive database to be used
and creates a Hive table that is dedicated to data of the format you specify.
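In practice, the component results in a Hive CREATE TABLE statement built from the settings you define. As a rough, illustrative sketch of such a statement for an ORC table (the table and column names below are hypothetical):
  CREATE TABLE IF NOT EXISTS customers (
    id INT,
    name STRING
  )
  COMMENT 'customer reference data'
  STORED AS ORC;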
tHiveCreateTable Standard properties
These properties are used to configure tHiveCreateTable running in the Standard Job framework.
The Standard
tHiveCreateTable component belongs to the Big Data and the Databases families.
The component in this framework is available when you are using one of the Talend solutions with Big Data.
Basic settings
-
When you use this component with Google Dataproc:
Project identifier
Enter the ID of your Google Cloud Platform project.
If you are not certain about your project ID, check it in the Manage Resources page of your Google Cloud Platform services.
Cluster identifier
Enter the ID of the Dataproc cluster to be used.
Region
Enter the geographic zone in which the computing resources are used and your data is stored and processed. If you do not need to specify a particular region, leave the default value global.
For further information about the available regions and the zones each region groups, see Regions and Zones.
Google Storage staging bucket
As a Talend Job expects its dependent jar files for execution, specify the Google Storage directory to which these jar files are transferred so that your Job can access these files at execution. The directory to be entered must end with a slash (/). If it does not exist, the directory is created on the fly, but the bucket to be used must already exist.
Database
Fill this field with the name of the database.
Provide Google Credentials in file
Leave this check box clear when you launch your Job from a machine on which the Google Cloud SDK has been installed and authorized to use your user account credentials to access Google Cloud Platform. In this situation, this machine is often your local machine.
When you launch your Job from a remote machine, such as a JobServer, select this check box and, in the Path to Google Credentials file field that is displayed, enter the directory in which this JSON credentials file is stored on the JobServer machine.
For further information about this Google Credentials file, see the administrator of your Google Cloud Platform or visit Google Cloud Platform Auth Guide.
-
When you use this component with HDInsight:
WebHCat configuration
Enter the address and the authentication information of the WebHCat service of the Microsoft HDInsight cluster to be used. The Studio uses this service to submit the Job to the HDInsight cluster.
In the Job result folder field, enter the location in which you want to store the execution result of a Job in the Azure Storage to be used.
HDInsight configuration
Enter the authentication information of the HDInsight cluster to be used.
Windows Azure Storage configuration
Enter the address and the authentication information of the Azure Storage account to be used.
In the Container field, enter the name of the container to be used.
In the Deployment Blob field, enter the location in which you want to store the current Job and its dependent libraries in this Azure Storage account.
Database
Fill this field with the name of the database.
-
When you use the other distributions:
Connection mode
Select a connection mode from the list. The options vary depending on the distribution you are using.
Hive server
Select the Hive server through which you want the Job using this component to execute queries on Hive.
This Hive server list is available only when the Hadoop distribution to be used, such as HortonWorks Data Platform V1.2.0 (Bimota), supports HiveServer2. It allows you to select HiveServer2 (Hive 2), the server that better supports concurrent connections from multiple clients than HiveServer (Hive 1).
For further information about HiveServer2, see https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2.
Host
Database server IP address.
Port
Listening port number of the database server.
Database
Fill this field with the name of the database.
Note: This field is not available when you select Embedded from the Connection mode list.
Username and Password
Database user authentication data.
To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
Use kerberos authentication
If you are accessing a Hive Metastore running with Kerberos security, select this check box and then enter the relevant parameters in the fields that appear.
- If this cluster is a MapR cluster of the version 4.0.1 or later, you can set the MapR ticket authentication configuration in addition or as an alternative by following the explanation in Connecting to a security-enabled MapR.
Keep in mind that this configuration generates a new MapR security ticket for the username defined in the Job in each execution. If you need to reuse an existing ticket issued for the same username, leave both the Force MapR ticket authentication check box and the Use Kerberos authentication check box clear, and then MapR should be able to automatically find that ticket on the fly.
The values of the following parameters can be found in the hive-site.xml file of the Hive system to be used.
- Hive principal uses the value of hive.metastore.kerberos.principal. This is the service principal of the Hive Metastore.
- HiveServer2 local user principal uses the value of hive.server2.authentication.kerberos.principal.
- HiveServer2 local user keytab uses the value of hive.server2.authentication.kerberos.keytab.
- Metastore URL uses the value of javax.jdo.option.ConnectionURL. This is the JDBC connection string to the Hive Metastore.
- Driver class uses the value of javax.jdo.option.ConnectionDriverName. This is the name of the driver for the JDBC connection.
- Username uses the value of javax.jdo.option.ConnectionUserName. This, as well as the Password parameter, is the user credential for connecting to the Hive Metastore.
- Password uses the value of javax.jdo.option.ConnectionPassword.
For the other parameters that are displayed, consult the Hadoop configuration files they belong to. For example, the Namenode principal can be found in the hdfs-site.xml file or the hdfs-default.xml file of the distribution you are using.
This check box is available depending on the Hadoop distribution you are connecting to.
Use a keytab to authenticate
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. A keytab file contains pairs of Kerberos principals and encrypted keys. You need to enter the principal to be used in the Principal field and the access path to the keytab file itself in the Keytab field. This keytab file must be stored in the machine in which your Job actually runs, for example, on a Talend JobServer.
Note that the user that executes a keytab-enabled Job is not necessarily the one a principal designates but must have the right to read the keytab file being used. For example, the user name you are using to execute a Job is user1 and the principal to be used is guest; in this situation, ensure that user1 has the right to read the keytab file to be used.
Use SSL encryption
Select this check box to enable the SSL or TLS encrypted connection. Then in the fields that are displayed, provide the authentication information:
- In the Trust store path field, enter the path, or browse to the TrustStore file to be used. By default, the supported TrustStore types are JKS and PKCS 12.
- To enter the password, click the […] button next to the password field, and then in the pop-up dialog box enter the password between double quotes and click OK to save the settings.
This feature is available only to the HiveServer2 in the Standalone mode of the following distributions:
- Hortonworks Data Platform 2.0 +
- Cloudera CDH4 +
- Pivotal HD 2.0 +
- Amazon EMR 4.0.0 +
Set Resource Manager
Select this check box and in the displayed field, enter the location of the ResourceManager of your distribution. For example, tal-qa114.talend.lan:8050.
Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime, the configuration of this parameter in the Hadoop cluster to be used will be ignored):
- Select the Set resourcemanager scheduler address check box and enter the Scheduler address in the field that appears.
- Select the Set jobhistory address check box and enter the location of the JobHistory server of the Hadoop cluster to be used. This allows the metrics information of the current Job to be stored in that JobHistory server.
- Select the Set staging directory check box and enter this directory defined in your Hadoop cluster for temporary files created by running programs. Typically, this directory can be found under the yarn.app.mapreduce.am.staging-dir property in the configuration files such as yarn-site.xml or mapred-site.xml of your distribution.
- Allocate proper memory volumes to the Map and the Reduce computations and the ApplicationMaster of YARN by selecting the Set memory check box in the Advanced settings view.
- Select the Set Hadoop user check box and enter the user name under which you want to execute the Job. Since a file or a directory in Hadoop has its specific owner with appropriate read or write rights, this field allows you to execute the Job directly under the user name that has the appropriate rights to access the file or directory to be processed.
- Select the Use datanode hostname check box to allow the Job to access datanodes via their hostnames. This actually sets the dfs.client.use.datanode.hostname property to true. When connecting to an S3N filesystem, you must select this check box.
For further information about these parameters, see the documentation or contact the administrator of the Hadoop cluster to be used.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.
Set NameNode URI
Select this check box and in the displayed field, enter the URI of the Hadoop
NameNode, the master node of a Hadoop system. For example, assuming that you have
chosen a machine called masternode as the NameNode, then the
location is hdfs://masternode:portnumber. If you are using WebHDFS, the location should be
webhdfs://masternode:portnumber; if this WebHDFS is secured
with SSL, the scheme should be swebhdfs and you need to use
a tLibraryLoad in the Job to load the library required by
the secured WebHDFS.
For further information about the Hadoop Map/Reduce framework, see the Map/Reduce tutorial in Apache's Hadoop documentation on http://hadoop.apache.org.
-
The other properties:
Property type
Either Built-in or Repository.
Built-In: No property data stored centrally.
Repository: Select the repository file where the properties are stored.
Use an existing connection
Select this check box and in the Component List drop-down list, select the relevant connection component to reuse the connection details you already defined.
Note: When a Job contains the parent Job and the child Job, if you need to share an existing connection between the two levels, for example, to share the connection created by the parent Job with the child Job, you have to:
- In the parent level, register the database connection to be shared in the Basic settings view of the connection component which creates that very database connection.
- In the child level, use a dedicated connection component to read that registered database connection.
For an example about how to share a database connection across Job levels, see Talend Studio User Guide.
Distribution
Select the cluster you are using from the drop-down list. The options in the list vary depending on the component you are using. Among these options, some require specific configuration.
Hive version
Select the version of the Hadoop distribution you are using. The available options vary depending on the component you are using and evolve along with Hadoop itself.
Schema and Edit schema
A schema is a row description. It defines the number of fields (columns) to be processed and passed on to the next component.
Click Edit schema to make changes to the schema.
Built-In: You create and store the schema locally for this component only.
Repository: You have already created the schema and stored it in the Repository. You can reuse it in various projects and Job designs.
When the schema to be reused has default values that are integers or functions, ensure that these default values are not enclosed within quotation marks. If they are, you must remove the quotation marks manually. You can find more details about how to verify default values in retrieved schemas in Talend Help Center (https://help.talend.com).
Table Name
Name of the table to be created.
Action on table
Select the action to be carried out for creating a table.
Format
Select the data format to which the table to be created is dedicated. The available data formats vary depending on the version of the Hadoop distribution you are using.
Note that when the file format to be used is PARQUET, you might be prompted to find the specific Parquet jar file and install it into the Studio. This jar file can be downloaded from Apache's site. You can find more details about how to install external modules in Talend Help Center (https://help.talend.com).
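For illustration, the Format choice corresponds to the STORED AS clause of the generated DDL. A minimal HiveQL sketch, assuming your Hive version supports the PARQUET shorthand (the table and column names are hypothetical):
  CREATE TABLE events (
    event_id INT,
    payload STRING
  )
  STORED AS PARQUET;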
Inputformat class and Outputformat class
These fields appear only when you have selected INPUTFORMAT and OUTPUTFORMAT from the Format list. These fields allow you to enter the name of the jar files to be used for the input format and the output format.
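As an illustration of what these settings map to in HiveQL, below is a sketch using Hive's standard text input and output format classes; this is a plausible example only, not necessarily the classes your table needs:
  CREATE TABLE raw_logs (line STRING)
  STORED AS
    INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';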
Storage class
Enter the name of the storage handler to be used for creating the table.
This field is available only when you have selected STORAGE from the Format list. For further information about a storage handler, see https://cwiki.apache.org/confluence/display/Hive/StorageHandlers.
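In HiveQL terms, a storage handler is declared with the STORED BY clause. A hedged sketch using the HBase storage handler, assuming the HBase handler libraries are available on your cluster (names are illustrative):
  CREATE TABLE hbase_backed (key INT, value STRING)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:val');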
Set partitions
Select this check box to add partition columns to the table to be created.
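Partition columns correspond to the PARTITIONED BY clause and must not repeat the regular columns of the table. A hypothetical sketch:
  CREATE TABLE sales (
    sale_id INT,
    amount DOUBLE
  )
  PARTITIONED BY (country STRING, sale_date STRING)
  STORED AS ORC;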
Set file location
If you want to create a Hive table in a directory other than the default one, select this check box and enter the directory to be used to hold the table content.
This is typically useful when you need to create an external Hive table by selecting the Create an external table check box in the Advanced settings view.
Use S3 endpoint
The Use S3 endpoint check box is available only when the Set file location check box is selected. Once this Use S3 endpoint check box is selected, you need to enter the required access parameters in the fields that appear.
Note that the format of the S3 file is S3N (S3 Native Filesystem).
Since a Hive table created in S3 is actually an external table, the Create an external table check box in the Advanced settings view must be selected as well.
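To give a rough idea of the resulting HiveQL, a table whose data lives in S3 through the S3N scheme would look something like the sketch below; the bucket and path are hypothetical, and the S3N credentials must be configured separately:
  CREATE EXTERNAL TABLE clickstream (
    user_id INT,
    url STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION 's3n://my-bucket/data/clickstream/';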
Advanced settings
Like table
Select this check box and enter the name of the Hive table you want to copy. This allows you to copy the definition of an existing table without copying its data.
For further information about the Like parameter, see Apache's documentation about the Hive Data Definition Language.
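In HiveQL, this corresponds to the LIKE clause, which reproduces only the table definition, for instance (hypothetical names):
  CREATE TABLE sales_archive LIKE sales;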
Create an external table
Select this check box to make the table to be created an external Hive table. This kind of Hive table leaves the raw data where it is if you decide to drop the table.
An external table is usually the better choice for accessing shared data existing in a file system.
For further information about an external Hive table, see Apache's Hive documentation.
Table comment
Enter the description you want to use for the table to be created.
As select
Select this check box and enter the As select statement for creating the table.
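This maps to a CREATE TABLE ... AS SELECT (CTAS) statement; note that Hive does not allow a CTAS target to be an external or partitioned table. A hypothetical sketch:
  CREATE TABLE recent_sales
  STORED AS ORC
  AS SELECT sale_id, amount FROM sales WHERE sale_date >= '2017-01-01';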
Set clustered_by or skewed_by
Enter the clustered_by clause or the skewed_by clause to be used for the table to be created.
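For reference, these settings correspond to Hive's bucketing and skew clauses; two hypothetical sketches:
  CREATE TABLE page_views (user_id INT, page STRING)
  CLUSTERED BY (user_id) INTO 32 BUCKETS;

  CREATE TABLE lookup_values (k STRING, v STRING)
  SKEWED BY (k) ON ('k1', 'k2');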
SerDe properties
If you are using the SerDe row format, you can add any custom SerDe properties you need.
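As an illustration of what SerDe properties look like in the generated HiveQL, below is a sketch using the OpenCSV SerDe shipped with recent Hive versions; this is an assumption, and your SerDe class and properties may differ:
  CREATE TABLE csv_import (id INT, name STRING)
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
  WITH SERDEPROPERTIES ('separatorChar' = ',', 'quoteChar' = '"');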
Table properties
Add any custom Hive table properties you want to override the default ones.
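These map to the TBLPROPERTIES clause; for example, a sketch setting ORC compression on a hypothetical table:
  CREATE TABLE audit_log (message STRING)
  STORED AS ORC
  TBLPROPERTIES ('orc.compress' = 'SNAPPY');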
Temporary path
If you do not want to set the Jobtracker and the NameNode when you execute the query select * from your_table_name, you need to set this temporary path.
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones.
For further information about the properties required by Hadoop and its related systems such as HDFS and Hive, see the documentation of the Hadoop distribution you are using or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
Hive properties
Talend Studio uses a default configuration for its engine to perform operations in a Hive database. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override those default ones. For further information about Hive dedicated properties, see https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration.
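To give a feel for what such overrides mean on the Hive side, the same kind of properties can be inspected or set in a Hive session with SET statements, assuming the property is not restricted by your administrator; the values below are hypothetical:
  SET hive.exec.compress.output=true;
  SET mapreduce.job.reduces=10;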
Mapred job map memory mb and Mapred job reduce memory mb
If the Hadoop distribution to be used is Hortonworks Data Platform V1.2 or Hortonworks Data Platform V1.3, you need to set proper memory allocations for the map and reduce computations to be performed by the Hadoop system. In that situation, you need to enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively.
If the distribution is YARN, then the memory parameters to be set become Map (in Mb), Reduce (in Mb) and ApplicationMaster (in Mb), which allow you to allocate memory to the map and reduce computations and the ApplicationMaster of YARN.
Path separator in server
Leave the default value of the Path separator in server field as it is, unless you have changed the separator used by the host machine of your Hadoop distribution for its PATH variable; in that case, change this value to the separator actually used on that host.
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Global Variables
Global Variables
QUERY: the query statement being processed. This is a Flow variable and it returns a string.
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see Talend Studio User Guide.
Usage
Usage rule
This component works standalone.
If the Studio used to connect to a Hive database is operated on Windows, you must manually create a folder called tmp in the root of the disk where this Studio is installed.
Row format
Set Delimited row format
Set SerDe row format
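For illustration, these two options correspond to the ROW FORMAT DELIMITED and ROW FORMAT SERDE clauses of the generated DDL; a minimal delimited sketch with hypothetical names and delimiters:
  CREATE TABLE plain_text_data (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
  STORED AS TEXTFILE;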
Die on error
Dynamic settings
Click the [+] button to add a row in the table and fill the Code field with a context variable to choose your database connection dynamically from multiple connections planned in your Job.
The Dynamic settings table is available only when the Use an existing connection check box is selected in the Basic settings view. Once a dynamic parameter is defined, the Component List box in the Basic settings view becomes unusable.
For examples on using dynamic parameters, see Scenario: Reading data from databases through context-based dynamic connections and Scenario: Reading data from different MySQL databases using dynamically loaded connection parameters. For more information on Dynamic settings and context variables, see Talend Studio User Guide.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Related scenario
For a related scenario, see Scenario: creating a partitioned Hive table.