ERROR in MAP SIDE JOIN in HIVE
I found that to run a map-only join (i.e. with zero reducers), the following properties must be set. When I set the property to false, the MapReduce job runs and the join succeeds, but when I set it to true I get the error below.
hive> set hive.auto.convert.join=true;
hive> set hive.mapjoin.smalltable.filesize=(default it will be 25MB);
Query returned non-zero code: 1, cause: 'SET hive.mapjoin.smalltable.filesize=(default it will be 25MB)' FAILED because hive.mapjoin.smalltable.filesize expects LONG type value.
hive> SELECT /*+ MAPJOIN(expense) */ c.ID, c.NAME, o.AMOUNT, o.DATE FROM emp c CROSS JOIN expense o ON (c.ID = o.emp_ID);
Query ID = acadgild_20161226234949_6ede202c-7f91-42ac-a0c9-3b2617fad0ae
Total jobs = 1
java.io.IOException: Cannot run program "/home/acadgild/hadoop-2.6.0/bin/hadoop" (in directory "/home/acadgild"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at java.lang.Runtime.exec(Runtime.java:450)
at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInChildVM(MapredLocalTask.java:289)
at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.execute(MapredLocalTask.java:137)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:247)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 23 more
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
hive> set hive.auto.convert.join=false;
hive>
A map-side join in Hive can be triggered in two ways:

By specifying the hint /*+ MAPJOIN(b) */ in the join statement.

By setting the following property to true:
hive.auto.convert.join = true

I think you are trying to combine both approaches. Please check this blog for more information: link.
Hope this helps.
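The two approaches above can be sketched as separate Hive sessions. This is a minimal sketch using the table and column names from the question; note that hive.mapjoin.smalltable.filesize expects a plain numeric byte count (a LONG), not a description such as "(default it will be 25MB)":

```sql
-- Option 1: let Hive auto-convert joins against small tables
-- into map joins; no hint in the query.
set hive.auto.convert.join=true;
-- Threshold in bytes (LONG); roughly 25 MB is the default.
set hive.mapjoin.smalltable.filesize=25000000;
SELECT c.ID, c.NAME, o.AMOUNT, o.DATE
FROM emp c JOIN expense o ON (c.ID = o.emp_ID);

-- Option 2: force a map join explicitly with the MAPJOIN hint,
-- leaving auto-conversion off.
set hive.auto.convert.join=false;
SELECT /*+ MAPJOIN(o) */ c.ID, c.NAME, o.AMOUNT, o.DATE
FROM emp c JOIN expense o ON (c.ID = o.emp_ID);
```

Using both the hint and auto-conversion at the same time, as in the question, mixes the two mechanisms.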
As the error suggests, the program cannot find the hadoop executable at /home/acadgild/hadoop-2.6.0/bin/hadoop. What you can do is try running the following command:
cp -r /usr/local/hadoop-2.6.0/ /home/acadgild/
Now go back into Hive and try again; it should work.
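An alternative to copying the whole Hadoop directory is to point Hive at the existing install. This is a sketch, assuming Hadoop actually lives at /usr/local/hadoop-2.6.0 (the source path used by the cp command above); Hive's local map-join task shells out to the hadoop script, so that binary must be resolvable before starting the CLI:

```shell
# Assumption: Hadoop is installed at /usr/local/hadoop-2.6.0.
# Point Hive at that install instead of copying it into $HOME.
export HADOOP_HOME=/usr/local/hadoop-2.6.0
# Make $HADOOP_HOME/bin/hadoop resolvable for child processes.
export PATH="$HADOOP_HOME/bin:$PATH"
```

Put these lines in the shell profile of the user that launches Hive so they survive new sessions.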