Lab 003 - jazzwang/hive_labs GitHub Wiki

Lab 3

Sqoop (3): Importing an MS SQL Table into Hive

Creating the Hive Table

  1. First, delete the HDFS directory MSSQL_DATA created in Sqoop (1):
user@master ~ $ hadoop fs -rmr MSSQL_DATA

Otherwise, when you run sqoop create-hive-table or sqoop import later, you will see the following message:

13/12/22 14:14:55 INFO hive.HiveImport: Hive import complete.
13/12/22 14:14:55 INFO hive.HiveImport: Export directory is not empty, keeping it.
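
Note: on Hadoop 2.x and later, the hadoop fs -rmr form is deprecated. If your cluster prints a deprecation warning, the equivalent command is:

user@master ~ $ hadoop fs -rm -r MSSQL_DATA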
  2. Enter the following commands, replacing the 帳號 placeholder with your own account name:
user@master ~ $ export DBID=帳號
user@master ~ $ sqoop create-hive-table --connect "jdbc:sqlserver://sql.3du.me;database=$DBID" --table MSSQL_DATA --username $DBID -P
Enter password: (type your password)
  3. If the table schema conversion ran successfully, you will see messages similar to the following:
13/12/22 12:15:58 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
13/12/22 12:15:58 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
13/12/22 12:15:58 INFO manager.SqlManager: Using default fetchSize of 1000
13/12/22 12:15:59 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [MSSQL_DATA] AS t WHERE 1=0
... (output omitted) ...
  4. Check whether the table directory was created under Hive's warehouse directory on HDFS:
user@master ~ $ hadoop fs -ls /hive/warehouse
Found 2 items
drwx------   - jazz supergroup          0 2013-12-17 00:24 /hive/warehouse/dummy
drwx------   - user supergroup          0 2013-12-22 12:08 /hive/warehouse/mssql_data
  5. Check whether the Hive mssql_data table has any contents yet (it should still be empty, since only the schema has been created):
user@master ~ $ hadoop fs -ls /hive/warehouse/mssql_data
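
You can also confirm the generated schema from the Hive CLI. A quick check (the columns listed will depend on how your MSSQL_DATA table was defined in SQL Server):

user@master ~ $ hive -e "DESCRIBE mssql_data;"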

Importing the Records

  1. Enter the following commands (the difference from before is the added --hive-import --hive-table mssql_data options; replace 帳號 with your account name as before):
user@master ~ $ export DBID=帳號
user@master ~ $ sqoop import --connect "jdbc:sqlserver://sql.3du.me;database=$DBID" --table MSSQL_DATA --username $DBID -P --hive-import --hive-table mssql_data
Enter password: (type your password)
  2. If the table import ran successfully, you will see messages similar to the following:
13/12/22 13:19:44 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
13/12/22 13:19:44 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
13/12/22 13:19:44 INFO manager.SqlManager: Using default fetchSize of 1000
13/12/22 13:19:44 INFO tool.CodeGenTool: Beginning code generation
13/12/22 13:19:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [MSSQL_DATA] AS t WHERE 1=0
13/12/22 13:19:46 INFO orm.CompilationManager: HADOOP_HOME is /opt/hadoopmr
Note: /tmp/sqoop-user/compile/1ef4e360efbe3064927511c7be7022e8/MSSQL_DATA.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/12/22 13:19:49 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-user/compile/1ef4e360efbe3064927511c7be7022e8/MSSQL_DATA.jar
13/12/22 13:19:49 INFO mapreduce.ImportJobBase: Beginning import of MSSQL_DATA
... (output omitted) ...
13/12/22 13:20:44 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 53.758 seconds (0 bytes/sec)
13/12/22 13:20:44 INFO mapreduce.ImportJobBase: Retrieved 2 records.
13/12/22 13:20:44 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [MSSQL_DATA] AS t WHERE 1=0
13/12/22 13:20:44 INFO hive.HiveImport: Removing temporary files from import process: hdfs://master.etusolution.com:9000/user/user/MSSQL_DATA/_logs
13/12/22 13:20:44 INFO hive.HiveImport: Loading uploaded data into Hive
13/12/22 13:20:47 INFO hive.HiveImport: WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
13/12/22 13:20:48 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.8.1-cdh4.0.1.jar!/hive-log4j.properties
13/12/22 13:20:48 INFO hive.HiveImport: Hive history file=/tmp/user/hive_job_log_user_201312221320_2139920476.txt
... (output omitted) ...
13/12/22 13:20:55 INFO hive.HiveImport: OK
13/12/22 13:20:55 INFO hive.HiveImport: Time taken: 5.645 seconds
13/12/22 13:20:56 INFO hive.HiveImport:                 No encryption was performed by peer.
13/12/22 13:20:56 INFO hive.HiveImport: Loading data to table default.mssql_data
13/12/22 13:20:56 INFO hive.HiveImport: OK
13/12/22 13:20:56 INFO hive.HiveImport: Time taken: 1.354 seconds
13/12/22 13:20:56 INFO hive.HiveImport: Hive import complete.
13/12/22 13:20:56 INFO hive.HiveImport: Export directory is empty, removing it.
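
A note in case you need to re-run this step: --hive-import appends to an existing Hive table by default, while adding the --hive-overwrite option replaces the table contents instead, for example:

user@master ~ $ sqoop import --connect "jdbc:sqlserver://sql.3du.me;database=$DBID" --table MSSQL_DATA --username $DBID -P --hive-import --hive-overwrite --hive-table mssql_data

Either way, remember that the staging directory MSSQL_DATA must not already exist in your HDFS home directory, as noted at the top of this lab.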

Checking the Import Results

  1. The imported data can be found under the path where Hive stores its data on HDFS:
user@master ~ $ hadoop fs -ls /hive/warehouse/mssql_data
Found 2 items
-rw-------   3 user user          0 2013-12-22 13:20 /hive/warehouse/mssql_data/_SUCCESS
-rw-------   3 user user         16 2013-12-22 13:20 /hive/warehouse/mssql_data/part-m-00000
  2. View the contents of the part-m-***** output file:
user@master ~ $ hadoop fs -cat /hive/warehouse/mssql_data/part-m-00000
No encryption was performed by peer.
13/12/22 13:33:57 INFO util.NativeCodeLoader: Loaded the native-hadoop library
1Hello
2World
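
To double-check that Hive itself parses the two columns correctly, query the table through the Hive CLI, which prints fields separated by tabs. You should see output similar to:

user@master ~ $ hive -e "SELECT * FROM mssql_data;"
1	Hello
2	World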

Observations

  • If you recall, when we imported the table straight into HDFS back in Sqoop (1), the result was:
user@master ~ $ hadoop fs -cat MSSQL_DATA/part-m-00000
1,Hello
2,World
  • But when we dump the Hive table's file, there is no visible delimiter. How, then, does Hive tell the fields apart?
user@master ~ $ hadoop fs -cat /hive/warehouse/mssql_data/part-m-00000
No encryption was performed by peer.
13/12/22 13:33:57 INFO util.NativeCodeLoader: Loaded the native-hadoop library
1Hello
2World
  • It turns out that Hive's default field delimiter is not the TAB character but the non-printing CTRL-A character, which is why no separator shows up in the output.
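
You can make the hidden delimiter visible by piping the file through od -c, which prints non-printing bytes in octal (CTRL-A appears as 001). With the two rows above, the 16-byte file should look roughly like this:

user@master ~ $ hadoop fs -cat /hive/warehouse/mssql_data/part-m-00000 | od -c
0000000   1 001   H   e   l   l   o  \n   2 001   W   o   r   l   d  \n
0000020

If you would rather store a visible delimiter, the log messages above already hint at the override: pass --fields-terminated-by ',' to sqoop import.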
