Sqoop Online Quiz
The following quiz provides Multiple Choice Questions (MCQs) related to Sqoop. You will have to read all the given answers and click on the correct one. If you are not sure about an answer, you can check it using the Show Answer button. You can use the Next Quiz button to load a new set of questions.
Q 1 - The parameter in Sqoop that specifies the output directory when importing data is
Answer : D
Explanation
The --target-dir and --warehouse-dir are the two parameters used to specify the directory where the imported data will be stored.
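As a sketch of the difference between the two (the connection string, database, and table names below are hypothetical):

```shell
# Hypothetical connection details -- replace with your own.
# --target-dir places the imported files directly into the named directory:
sqoop import \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --target-dir /user/hadoop/emp_data

# --warehouse-dir names a parent directory; Sqoop creates a subdirectory
# under it named after the table (here /user/hadoop/warehouse/emp):
sqoop import \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --warehouse-dir /user/hadoop/warehouse
```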
Answer : C
Explanation
You can import both full and partial data from tables, but not a subset of columns from a table.
Q 3 - The --options-file parameter is used to
B - specify the name of the data files to be created after import
C - store all the sqoop variables
D - store the parameters and their values in a file to be used by various sqoop commands.
Answer : D
Explanation
Command-line options (the names and values of parameters) that do not change from run to run can be saved into a file and reused again and again. Such a file is called an options file.
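A minimal sketch of this, assuming a hypothetical file name and connection details:

```shell
# Hypothetical options file, saved as import-opts.txt,
# with one option or value per line:
#   import
#   --connect
#   jdbc:mysql://dbserver/userdb
#   --username
#   sqoopuser

# Reuse it across commands, supplying only the parts that change:
sqoop --options-file import-opts.txt --table emp --target-dir /user/hadoop/emp
```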
Q 4 - Data Transfer using sqoop can be
A - only imported into the Hadoop system
B - both imported into and exported from the Hadoop system
Answer : B
Explanation
The data can be both imported into and exported from the Hadoop system using Sqoop.
Q 5 - When using the --staging-table parameter while loading data into relational tables, the creation of the staging table is done
Answer : C
Explanation
The user has to ensure that the staging table is created and accessible by Sqoop.
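A hedged sketch of such an export (the table and directory names are hypothetical; the staging table must be created beforehand with the same schema as the target table):

```shell
# emp_stage must already exist -- Sqoop will not create it.
# --clear-staging-table empties it before the export begins.
sqoop export \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --staging-table emp_stage \
  --clear-staging-table \
  --export-dir /user/hadoop/emp_data
```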
Q 6 - The --update-key parameter is used to
A - Update the primary key field present in the Hadoop data to be exported
B - Update the primary key field in the table to which data is already exported
C - Update the database connectivity parameters like username, password etc
D - Update the already exported rows based on a primary key field
Answer : D
Explanation
The --update-key parameter uses the given primary key column to update the entire record in the relational table.
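For illustration (names are hypothetical), an export in update mode might look like this:

```shell
# Rows in /user/hadoop/emp_data whose 'id' matches an existing row
# in the emp table are updated rather than inserted.
sqoop export \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --update-key id \
  --export-dir /user/hadoop/emp_data
```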
Q 7 - If the table to which data is being exported has more columns than the data present in the HDFS file, then
B - The load can be done only for the relevant columns present in HDFS file
Answer : B
Explanation
The load can still be done by using the --columns parameter to populate a subset of columns in the relational table.
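A sketch of such a partial-column export, under the assumption that the HDFS file holds only the id and name fields (all names hypothetical):

```shell
# Only id and name are populated; the remaining columns of the emp
# table are left to their defaults, so they must be nullable or
# have default values defined.
sqoop export \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --columns "id,name" \
  --export-dir /user/hadoop/emp_data
```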
Q 8 - During import into Hive using Sqoop, the data is
A - directly loaded into the existing Hive table
B - first moved into a Hive directory as an HDFS file
Answer : B
Explanation
The data is first staged in a temporary location as an HDFS file and then loaded into the Hive table.
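A minimal sketch of a Hive import, with hypothetical connection details:

```shell
# Sqoop first writes the data to a temporary HDFS location, then
# loads it into the Hive table (creating the table if it does not exist).
sqoop import \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --hive-import \
  --hive-table emp
```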
Q 9 - If the HBase table to which Sqoop is importing data does not exist, then
B - the Sqoop job fails
C - Sqoop waits for user input on the HBase table details to proceed with the import
D - Sqoop imports the data to a temporary location under HBase
Answer : B
Explanation
Unlike Hive, where Sqoop creates the table if it does not exist, in HBase the job fails.
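A sketch of an HBase import (table, column-family, and connection names are hypothetical):

```shell
# By default, the HBase table 'emp' and column family 'cf' must already
# exist, or the job fails. (Passing --hbase-create-table instead asks
# Sqoop to create the table itself.)
sqoop import \
  --connect jdbc:mysql://dbserver/userdb \
  --username sqoopuser -P \
  --table emp \
  --hbase-table emp \
  --column-family cf
```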
Q 10 - After importing a table into HBase, you find that the number of rows inserted is fewer than in the source. The possible reason is −
A - Sqoop is yet to have mature code for HBase
B - Sqoop version and HBase version conflict
C - HBase does not allow rows with all NULL values to be inserted
D - HBase has very limited capabilities to handle numeric data types, so some rows got rejected
Answer : C
Explanation
As HBase does not allow rows with all NULL values to be inserted, those rows were skipped during import, resulting in a lower row count.