

Hive can't create a map/reduce job

I have used Hive 0.11.0, Hadoop 2.0.3, and MySQL 5.6 for the metastore.

I can successfully run a statement like SELECT * FROM records, which does not create a map/reduce task.

But when I try to run SELECT * FROM records WHERE year='1949', the map/reduce task always fails with an error.
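For reference, a minimal sketch of the two cases from the shell (the records table and year column are the ones from the statements above; hive -e just runs a single query string):

    # Sketch only: table and column names are taken from the question.
    hive -e "SELECT * FROM records"                    # served as a plain fetch, no MR job launched
    hive -e "SELECT * FROM records WHERE year='1949'"  # compiles to a map/reduce job, which fails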

Hadoop gives me these diagnostics:
Application application_1382680612829_0136 failed 1 times due to AM Container for appattempt_1382680612829_0136_000001 exited with exitCode: -1000 due to: java.io.FileNotFoundException: File /tmp/hadoop-hadoop/nm-local-dir/filecache does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:492)
    at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:996)
    at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:150)
    at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:187)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:730)
    at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:726)
    at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2379)
    at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:726)
    at org.apache.hadoop.yarn.util.FSDownload.createDir(FSDownload.java:88)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:274)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:51)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
.Failing this attempt.. Failing the application.

What should I do? Thanks.

In summary, as a solution they recommend creating the parent folders; there is also a patch that is supposed to be fixed in 2.0.3 [1]:

Tom White added a comment - 30/Nov/12 21:36: This fixes the problem by creating parent directories if they don't already exist. Without the fix the test would fail about 4 times in 10; with the fix I didn't see a failure.
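Until you pick up that fix, a minimal workaround sketch is to pre-create the missing directories by hand on each NodeManager host. The path below is taken from your stack trace; the hadoop:hadoop owner is an assumption based on the /tmp/hadoop-hadoop prefix, so adjust both to your setup:

    # Assumed local dir from the stack trace; check it against your
    # yarn.nodemanager.local-dirs setting and the user the NodeManager runs as.
    mkdir -p /tmp/hadoop-hadoop/nm-local-dir/filecache
    chown -R hadoop:hadoop /tmp/hadoop-hadoop/nm-local-dir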

That issue [1] looks like the most similar one I could find in the Hadoop bugs database.

It's also related to [2] and [3], if you want to take a look.
