Sqoop Mock Test
This section presents various sets of mock tests related to Sqoop. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test is supplied with an answer key to let you verify your final score and grade yourself.
Sqoop Mock Test III
Q 1 - Sqoop can automatically clear the staging table before loading by using the parameter
Answer : B
Explanation
The --clear-staging-table parameter automatically clears data from the staging table before the load.
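As a sketch, this is how the flag is typically combined with a staging table in an export (the JDBC URL, credentials, table names, and HDFS path are all hypothetical):

```shell
# Export via a staging table; --clear-staging-table empties the staging
# table before the load so a previously failed run cannot leave stale rows.
sqoop export \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table EMPLOYEE \
  --staging-table EMPLOYEE_STG \
  --clear-staging-table \
  --export-dir /user/hadoop/employee
```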
Q 2 - Can Sqoop use the TRUNCATE option in the database while clearing data from a table?
Answer : C
Explanation
If available through the database driver, Sqoop can clear the data quickly using the TRUNCATE option.
Q 3 - The --update-key parameter is used to
A - Update the primary key field present in the Hadoop data to be exported
B - Update the primary key field in the table to which data is already exported
C - Update the database connectivity parameters like username, password etc
D - Update the already exported rows based on a primary key field
Answer : D
Explanation
The --update-key parameter uses the primary key column to update the entire record in the relational table.
Q 4 - The --update-key parameter can take
A - Only one column name as key field
B - Two column names as the key field
Answer : C
Explanation
A comma-separated list of column names which together identify a unique record can be used in the --update-key parameter.
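A minimal sketch of an update-mode export keyed on two columns (connection details, table, and column names are hypothetical):

```shell
# Rows in HDFS whose C2,C4 pair matches an existing row in SALES
# are turned into UPDATE statements; non-matching rows are skipped.
sqoop export \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table SALES \
  --update-key C2,C4 \
  --export-dir /user/hadoop/sales
```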
Q 5 - A table contains 4 columns (C1,C2,C3,C4). With --update-key C2,C4, the Sqoop-generated query will look like
A - UPDATE table SET C1 = 'newval', C3 = 'newval' WHERE C2 = 'oldval' AND C4 = 'oldval'
B - UPDATE table SET C2 = 'newval', C4 = 'newval' WHERE C2 = 'oldval' AND C4 = 'oldval'
Answer : A
Explanation
Only the columns not listed in the --update-key parameter will appear in the SET clause.
Q 6 - The --update-key parameter can
A - Not insert new rows to the already exported table
B - Insert new rows to an already exported table
C - Insert new rows into the exported table only if it has a primary key
Answer : A
Explanation
The --update-key parameter cannot export new rows which do not have a matching key in the already exported table.
Q 7 - Sqoop can insert new rows and update existing changed rows into an already exported table by using the parameter
Answer : D
Explanation
The --update-mode allowinsert parameter can be used to update existing rows as well as insert new rows into the exported table.
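A sketch of an upsert-style export combining both flags (connection details and names are hypothetical):

```shell
# With --update-mode allowinsert, rows matching on the key are updated
# and non-matching rows are inserted (upsert semantics).
sqoop export \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table EMPLOYEE \
  --update-key emp_id \
  --update-mode allowinsert \
  --export-dir /user/hadoop/employee
```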
Q 8 - When using the --update-mode allowinsert parameter with an Oracle database, the Oracle feature used by Sqoop is
Answer : B
Explanation
The MERGE statement of Oracle is used to achieve the update-else-insert behavior.
Q 9 - With MySQL, the feature used by Sqoop to update or insert data into an exported table is
Answer : A
Explanation
The ON DUPLICATE KEY UPDATE feature of MySQL is used for update-else-insert with Sqoop.
Q 10 - Can the upsert feature of Sqoop delete some data from the exported table?
Answer : A
Explanation
Sqoop will never delete data as part of an upsert statement.
Q 11 - To sync an HDFS file containing some deleted rows with a previously exported table, the option is to
B - Export the data again to a new database table and rename it
Answer : B
Explanation
You can drop the existing table and export the data again from Hadoop into a new table, then rename it to the dropped table's name.
Q 12 - The parameter which can be used in place of --table parameter to insert data into table is
Answer : A
Explanation
The --call parameter invokes a database stored procedure, which in turn can insert data into the table.
Q 13 - The disadvantage of using a stored procedure to load data is
B - Parallel loads in the database table
C - The stored procedure cannot load multiple tables at a time
Answer : D
Explanation
As Sqoop calls the stored procedure from parallel jobs, a heavy load is induced on the database.
Q 14 - If the table to which data is being exported has more columns than the data present in the HDFS file, then
B - The load can be done only for the relevant columns present in HDFS file
Answer : B
Explanation
The load can still be done by specifying the --columns parameter to populate a subset of columns in the relational table.
Q 15 - The parameter to specify only a selected number of columns to be exported to a table is
Answer : A
Explanation
The --columns parameter takes a comma-separated list of column names which will be part of the export.
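A sketch of exporting only a subset of the target table's columns (connection details, table, and column names are hypothetical):

```shell
# Only C1 and C3 are populated from the HDFS data; the remaining
# columns must be nullable or have default values in the database.
sqoop export \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table SALES \
  --columns "C1,C3" \
  --export-dir /user/hadoop/sales
```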
Q 16 - Load all or load nothing semantics is implemented by using the parameter
Answer : D
Explanation
The --staging-table parameter is used to load all the required data into an intermediate table before finally loading it into the real table.
Q 17 - How do we decide the order of columns in which data is loaded to the target table?
A - By using an --order-by parameter
B - By using a new mapreduce job after submitting the sqoop export command
C - By using a database stored procedure
D - By using the --columns parameter with comma-separated column names in the required order
Answer : D
Explanation
We can use the --columns parameter and specify the required columns in the required order.
Q 18 - What is the disadvantage of using the --columns parameter to insert a subset of columns into the relational table?
A - The relational table may have NOT NULL columns not covered in the --columns parameter.
B - The relational table may store the data from HDFS in wrong columns.
Answer : A
Explanation
If there are columns whose values are mandatory and the HDFS file does not include them in the subset, the load will fail.
Q 19 - The parameter used to override NULL values to be inserted into relational targets is
Answer : B
Explanation
The --input-null-string parameter is used to override the NULL values when exporting to relational tables.
Q 20 - For Text based columns the parameter used for substituting null values is
Answer : A
Explanation
The --input-null-string parameter is used to substitute null values for text-based columns.
Q 21 - For a column of data type numeric, the parameter used for substituting null values is
Answer : B
Explanation
The --input-null-non-string parameter is used to substitute null values for non-text (e.g., numeric) columns.
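A sketch showing both null-substitution parameters in one export (connection details and names are hypothetical; `\\N` is a common placeholder written into HDFS files for nulls):

```shell
# Fields containing the literal \N in the HDFS file are exported as
# SQL NULL: --input-null-string covers text columns,
# --input-null-non-string covers numeric and other non-text columns.
sqoop export \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table EMPLOYEE \
  --input-null-string '\\N' \
  --input-null-non-string '\\N' \
  --export-dir /user/hadoop/employee
```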
Q 22 - When a column value has a different data type in the HDFS system than expected in the relational table to which data will be exported −
C - Sqoop loads the remaining rows by halting and asking whether to continue the load
D - Sqoop automatically changes the data type to a compatible data type and loads the data.
Answer : B
Explanation
The job fails and Sqoop produces a log showing the reason for the failure.
Q 23 - The parameter used in sqoop to import data directly into hive is
Answer : C
Explanation
The parameter used is --hive-import, which places the data directly in Hive without needing any connectors, as in the case of relational systems.
Q 24 - While importing directly to hive using sqoop, if the table meta data does not exist in hive then
B - sqoop creates the meta data in hive
C - sqoop waits for user to input the meta data
D - sqoop imports the data as a file without creating any meta data
Answer : B
Explanation
As both Sqoop and Hive are part of the Hadoop ecosystem, Sqoop is able to create the metadata in Hive.
Q 25 - To ensure that the columns created in hive by sqoop have the correct data types the parameter used by sqoop is
Answer : A
Explanation
The correct column mapping is handled by the parameter --map-column-hive.
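A sketch of a direct-to-Hive import that overrides a column's Hive type (connection details, table, and column names are hypothetical):

```shell
# --hive-import creates the Hive table and loads the data;
# --map-column-hive forces the salary column to DECIMAL instead of
# the default type Sqoop would infer from the JDBC metadata.
sqoop import \
  --connect jdbc:mysql://localhost/salesdb \
  --username dbuser --password dbpass \
  --table EMPLOYEE \
  --hive-import \
  --map-column-hive salary=DECIMAL
```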
Answer Sheet
| Question Number | Answer Key |
| --- | --- |
| 1 | B |
| 2 | C |
| 3 | D |
| 4 | C |
| 5 | A |
| 6 | A |
| 7 | D |
| 8 | B |
| 9 | A |
| 10 | A |
| 11 | B |
| 12 | A |
| 13 | D |
| 14 | B |
| 15 | A |
| 16 | D |
| 17 | D |
| 18 | A |
| 19 | B |
| 20 | A |
| 21 | B |
| 22 | B |
| 23 | C |
| 24 | B |
| 25 | A |