tSqoopImportAllTables
Defines the arguments required by Sqoop for writing all of the tables of a database
into HDFS.
tSqoopImportAllTables calls Sqoop to
transfer all of the tables of a relational database management system (RDBMS) such as
MySQL or Oracle into the Hadoop Distributed File System (HDFS).
Sqoop is typically installed in every Hadoop distribution. However, if the Hadoop
distribution you need to use does not include Sqoop, you have to install it on your
own and make sure to add the Sqoop command line to the PATH variable of that distribution.
For further information about how to install Sqoop, see the documentation of Sqoop.
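For reference, the following is a minimal, hypothetical sketch of the kind of Sqoop command this component generates; the JDBC URL, user name, and target directory are placeholder values:

    # Minimal sketch of the Sqoop call this component wraps; the JDBC URL,
    # user name, and target directory below are placeholder values.
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --warehouse-dir /user/talend/sales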
tSqoopImportAllTables Standard properties
These properties are used to configure tSqoopImportAllTables running in the Standard Job framework.
The Standard
tSqoopImportAllTables component belongs to the Big Data and the File families.
The component in this framework is available in all Talend products with Big Data
and in Talend Data Fabric.
Basic settings
Mode
Select the mode in which Sqoop is called in a Job execution.
Use Commandline: the Sqoop shell is used to call Sqoop.
Use Java API: the Java API is used to call Sqoop.
Hadoop properties
Either Built-in or Repository:
Built-in: no property data is stored centrally.
Repository: select the repository file in which the properties are stored.
Distribution
Select the cluster you are using from the drop-down list. The options in the
list vary depending on the component you are using; among these options, some require specific configuration.
Hadoop Version
Select the version of the Hadoop distribution you are using. The available versions vary depending on the distribution you have selected.
NameNode URI
Type in the URI of the Hadoop NameNode, the master node of a Hadoop cluster. For example, if you have chosen a machine called masternode as the NameNode, the location is hdfs://masternode:portnumber.
JobTracker Host
Select this check box and, in the displayed field, enter the location of the JobTracker of your distribution.
Then you can continue to set the following parameters depending on the configuration of the Hadoop cluster to be used (if you leave the check box of a parameter clear, then at runtime, the configuration of that parameter in the Hadoop cluster to be used will be ignored).
For further information about these parameters, see the documentation of the Hadoop distribution you are using. For further information about the Hadoop Map/Reduce framework, see the Apache Hadoop Map/Reduce documentation.
Use kerberos authentication
If you are accessing a Hadoop cluster running
with Kerberos security, select this check box, then enter the Kerberos principal name for the NameNode in the field displayed. This enables you to use your user name to authenticate against the credentials stored in Kerberos.
In addition, since this component performs Map/Reduce computations, you also need to authenticate the related services, such as the JobTracker or the ResourceManager, in the corresponding fields.
This check box is available depending on the Hadoop distribution you are using.
Use a keytab to authenticate
Select the Use a keytab to authenticate check box to log into a Kerberos-enabled system using a given keytab file. In the fields that are displayed, enter the principal to be used and the path to the keytab file.
Note that the user that executes a keytab-enabled Job is not necessarily the one the principal designates, but that user must have the right to read the keytab file being used.
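When running such a Job from a shell in the Use Commandline mode, you may also need a valid Kerberos ticket on the execution host. The following is a hedged sketch; the keytab path and principal are placeholder values:

    # Obtain a Kerberos ticket from a keytab before running the Job;
    # the keytab path and principal name are placeholder values.
    kinit -kt /etc/security/keytabs/talend.keytab talend@EXAMPLE.COM
    klist    # verify that the ticket was granted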
Hadoop user name
Enter the user name under which you want to execute the Job. Since a file or a directory in Hadoop can be accessed only by a user with the appropriate rights, this field allows you to execute the Job under a user name that has the rights to the data to be used.
JDBC property
Either Built-in or Repository:
Built-in: no property data is stored centrally.
Repository: select the repository file in which the properties are stored.
Connection
Enter the JDBC URL used to connect to the database where the source data is stored.
User name and Password
Enter the authentication information used to connect to the source database.
To enter the password, click the [...] button next to the Password field, and then in the dialog box that opens, enter the password between double quotes and click OK.
If your password is stored in a file, select The password is stored in a file check box and enter the path to that file in the File path field that is displayed.
Note that this feature is available depending on the Sqoop version you are using.
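On the Sqoop side, a password file corresponds to the --password-file option. A hedged sketch, with placeholder paths and credentials:

    # Store the database password in an HDFS file with restricted
    # permissions, then refer to it with --password-file; all paths
    # and credentials are placeholder values.
    echo -n "db_password" | hdfs dfs -put - /user/talend/.sqoop.pwd
    hdfs dfs -chmod 400 /user/talend/.sqoop.pwd
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend \
      --password-file /user/talend/.sqoop.pwd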
Driver JAR
In either the Use Commandline mode or the Use Java API mode, add the JAR file of the JDBC driver required by the database to be used to this table, so that Sqoop can access that database.
Driver class name
Enter the class name for the specified driver between double quotation marks, for example, com.mysql.jdbc.Driver for the MySQL Connector/J driver.
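In Sqoop's command-line terms, this corresponds to the --driver option. A hedged sketch with placeholder connection details:

    # Point Sqoop at an explicit JDBC driver class with --driver;
    # the connection details are placeholder values.
    sqoop import-all-tables \
      --driver com.mysql.jdbc.Driver \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P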
File format
Select a file format for the data to be transferred.
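Sqoop itself exposes these formats through options such as --as-textfile, --as-sequencefile, and --as-avrodatafile. A hedged sketch, with placeholder connection details:

    # Import all tables as Avro data files instead of the default
    # text files; connection details and paths are placeholder values.
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --as-avrodatafile \
      --warehouse-dir /user/talend/sales_avro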
Specify Number of Mappers
Select this check box to indicate the number of map tasks (parallel processes) used to perform the data transfer.
If you do not want Sqoop to work in parallel, enter 1 in the displayed field.
Compress
Select this check box to enable compression.
Direct
Select this check box to use the import fast path.
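The three options above map to Sqoop's --num-mappers, --compress (or -z), and --direct options. A hedged sketch combining them, with placeholder values:

    # Four parallel mappers, compressed output, and the database-specific
    # fast path (for example, mysqldump for MySQL); values are placeholders.
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --num-mappers 4 \
      --compress \
      --direct \
      --warehouse-dir /user/talend/sales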
Exclude table
Select this check box and enter the name of the table(s) to be excluded from the transfer, separating multiple table names with a comma.
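On the command line, this is the --exclude-tables option; the table names below are placeholder values:

    # Skip temporary and audit tables during the import;
    # the table names are placeholder values.
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --exclude-tables tmp_staging,audit_log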
Print Log
Select this check box to activate the Verbose check box.
Verbose
Select this check box to print more information while working, for example, the debugging information.
Advanced settings
Define Hive mapping
Sqoop provides default configuration that maps most SQL types to appropriate Hive types. If you need to define custom mappings, select this check box and complete the mapping table that is displayed.
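In Sqoop's command-line vocabulary, such overrides are expressed with --map-column-hive. A hedged sketch using the single-table import tool, with placeholder table, column, and type names (check the Sqoop documentation for how this option interacts with import-all-tables):

    # Override the default SQL-to-Hive type mapping for two columns;
    # the table, column, and type names are placeholder values.
    sqoop import \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --table orders \
      --hive-import \
      --map-column-hive id=STRING,created_at=TIMESTAMP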
Use MySQL default delimiters
Select this check box to use MySQL's default delimiter set. This check box is available only in the Use Commandline mode.
Additional arguments
Complete this table to use additional arguments if need be.
By adding additional arguments, you are able to perform multiple operations in a single transaction, for example, importing the transferred data directly into Hive.
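For example, in the Use Commandline mode the extra arguments are appended to the generated Sqoop command. A hedged sketch with placeholder names:

    # Extra arguments appended to the import, here loading every table
    # straight into Hive; all connection details are placeholder values.
    sqoop import-all-tables \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P \
      --hive-import \
      --hive-overwrite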
Hadoop properties
Talend Studio uses a default configuration for its engine to perform operations in a Hadoop distribution. If you need to use a custom configuration in a specific situation, complete this table with the property or properties to be customized. Then at runtime, the customized property or properties will override the default ones.
For further information about the properties required by Hadoop and its related systems, such
as HDFS and Hive, see the documentation of the Hadoop distribution you are using, or see Apache's Hadoop documentation on http://hadoop.apache.org/docs and then select the version of the documentation you want.
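When invoking Sqoop directly, the same kind of override is passed as a generic -D option, which must precede the tool-specific arguments. A hedged sketch with a placeholder queue name:

    # Generic Hadoop properties (-D) must come before the tool-specific
    # options; the queue name is a placeholder value.
    sqoop import-all-tables \
      -D mapreduce.job.queuename=etl \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P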
Mapred job map memory mb and Mapred job reduce memory mb
You can tune the map and reduce computations by setting their memory allocations. In that situation, enter the values you need in the Mapred job map memory mb and Mapred job reduce memory mb fields, respectively. The memory parameters to be set are Map (in Mb), Reduce (in Mb), and ApplicationMaster (in Mb).
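Expressed as Hadoop properties on a direct Sqoop call, the equivalent settings might look as follows; the values are placeholders:

    # Equivalent memory settings expressed as Hadoop properties;
    # the values shown are placeholders to adjust to your cluster.
    sqoop import-all-tables \
      -D mapreduce.map.memory.mb=4096 \
      -D mapreduce.reduce.memory.mb=8192 \
      -D yarn.app.mapreduce.am.resource.mb=2048 \
      --connect jdbc:mysql://dbhost:3306/sales \
      --username talend -P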
Path separator in server
Leave the default value of the Path separator in server field as it is, unless you have changed the separator used by the host machine of your Hadoop distribution for its PATH variable.
tStatCatcher Statistics
Select this check box to collect log data at the component level.
Global Variables
Global Variables
ERROR_MESSAGE: the error message generated by the component when an error occurs. This is an After variable and it returns a string.
EXIT_CODE: the exit code of the remote command. This is an After variable and it returns an integer.
A Flow variable functions during the execution of a component while an After variable functions after the execution of the component.
To fill up a field or expression with a variable, press Ctrl + Space to access the variable list and choose the variable to use from it.
For further information about variables, see the Talend Studio User Guide.
Usage
Usage rule
This component is used standalone. It respects the Sqoop prerequisites, and you need the necessary knowledge of Sqoop to use it.
We recommend using Sqoop version 1.4 onwards in order to benefit from the full functions of this component.
For further information about Sqoop, see the Sqoop manual on http://sqoop.apache.org/docs/.
Prerequisites
The Hadoop distribution must be properly installed, so as to guarantee the interaction
with Talend Studio.
For further information about how to install a Hadoop distribution, see the manuals corresponding to the Hadoop distribution you are using.
Limitation
If you have selected the Use Commandline mode, you need to use the host where Sqoop is installed as the execution host of the Job.
The preconditions required by Sqoop for using its import-all-tables tool must be met; for example, each table must have a single-column primary key unless the number of mappers is set to 1, and all columns of each table are imported. For further information, see the Sqoop documentation.
Connections
Outgoing links (from this component to another):
Trigger: Run if; On Subjob Ok; On Subjob Error; On Component Ok; On Component Error.
Incoming links (from one component to this one):
Row: Iterate;
Trigger: Run if; On Subjob Ok; On Subjob Error; On Component Ok; On Component Error.
For further information regarding connections, see the Talend Studio User Guide.
Related scenarios
No scenario is available for the Standard version of this component yet.