
Error while trying to run HBase map reduce

I am really having a hard time getting HBase MapReduce jobs to run with Hadoop.

I am using the Hortonworks Hadoop 2 distribution, and the HBase version I use is 0.96.1-hadoop2. Now, when I try to run my MapReduce job like this:

hadoop jar target/invoice-aggregation-0.1.jar  start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10

Hadoop tells me that it cannot find invoice-aggregation-0.1.jar in its file system?! I am wondering why it needs to be there.

Here is the error I get:

14/02/05 10:31:48 ERROR security.UserGroupInformation: PriviledgedActionException as:adio (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
java.io.FileNotFoundException: File does not exist: hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/invoice-aggregation-0.1.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:264)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:300)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:387)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
    at com.company.invoice.MapReduceStarter.main(MapReduceStarter.java:244)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I would appreciate any suggestion, help, or even a guess as to why I am getting this error.

The error occurs because Hadoop cannot find the jars in the expected place.

Put the jars in place and re-run the job; this will resolve the problem.

OK, even though I am not sure this is the best solution, I solved my problem by adding my application jar and all the missing jars to HDFS, using hadoop fs -copyFromLocal 'myjarslocation' 'where_hdfs_needs_the_jars'. So whenever MapReduce throws an exception telling you that some jar is missing at some location on HDFS, add the jar to that place. This is what I did to solve my problem. If anyone has a better approach, I would be pleased to hear it.
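
For example, given the path in the stack trace above, this workaround amounts to roughly the following (a sketch only; the HDFS path is taken from the exception and the local path from the original command):

# Create the directory structure the job client expects on HDFS
hadoop fs -mkdir -p hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target
# Copy the application jar from the local file system to that location
hadoop fs -copyFromLocal target/invoice-aggregation-0.1.jar hdfs://localhost:8020/home/adio/workspace/projects/invoice-aggregation/target/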

Include the JAR in the "-libjars" command line option of the hadoop jar … command.
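
A rough sketch of that option, assuming the job's main class parses generic options via ToolRunner/GenericOptionsParser (otherwise -libjars is ignored), and with purely illustrative paths for the HBase client jars:

# -libjars ships the listed jars with the job; the jar locations below are assumptions
hadoop jar target/invoice-aggregation-0.1.jar \
    -libjars /usr/lib/hbase/lib/hbase-client-0.96.1-hadoop2.jar,/usr/lib/hbase/lib/hbase-common-0.96.1-hadoop2.jar \
    start="2014-02-01 01:00:00" end="2014-02-19 01:00:00" firstAccountId=0 lastAccountId=10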

Or check for other alternatives here.

In my case, the error was fixed by copying mapred-site.xml into the HADOOP_CONF_DIR directory.
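
For reference, that fix is roughly the command below; both paths are assumptions and depend on where your Hortonworks installation keeps its client configuration:

# Copy the cluster's MapReduce client config into the directory the hadoop command reads
cp /etc/hadoop/conf/mapred-site.xml "$HADOOP_CONF_DIR"/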
