Hadoop mapreduce Error: org.apache.hadoop.mapreduce.Counter

I am trying to use counters within my MapReduce program, but whenever I try to increment one I get the following error:

14/04/18 12:22:51 INFO mapred.JobClient: Task Id : attempt_201404172237_0052_m_000003_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.Counter

And then when I try to read the counter's value I get the following exception:

Exception in thread "main" java.lang.IncompatibleClassChangeError: org.apache.hadoop.mapreduce.Counter
at com.zikesjan.bigdata.TfIdfMain.main(TfIdfMain.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

This happens whether I use the counter from the Mapper or from the Reducer. My implementation looks as follows:

When I use the counter from the Mapper, it is just one line in the map method that looks like this:

context.getCounter(MyCounters.Documents).increment(1);
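
For context, this is roughly how that call sits in my Mapper (the class name and the key/value types here are simplified placeholders, not my exact ones):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DocumentMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // one input record counts as one document in this simplified sketch
        context.getCounter(MyCounters.Documents).increment(1);
        // ... rest of the map logic ...
    }
}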

When I tried it from the Reducer, the increment was in cleanup:

public void cleanup(Context context){
    context.getCounter(MyCounters.Documents).increment(numberOfRows);
}
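
Again simplified (the types are placeholders and numberOfRows is just a field I bump in reduce()), the surrounding Reducer looks roughly like this:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class DocumentReducer extends Reducer<Text, Text, Text, Text> {

    private long numberOfRows = 0;

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        numberOfRows++;
        // ... rest of the reduce logic ...
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        context.getCounter(MyCounters.Documents).increment(numberOfRows);
    }
}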

And I have implemented the enum for the counter like this:

public enum MyCounters {
    Documents
}

And in my main class I retrieve the counter's value as follows:

long documents = countLines.getCounters().findCounter(MyCounters.Documents).getValue();
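
Here countLines is the Job I run for this. A simplified sketch of my driver, with configuration details trimmed and class names taken from the placeholder sketches above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TfIdfMain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job countLines = new Job(conf, "count documents");
        countLines.setJarByClass(TfIdfMain.class);
        countLines.setMapperClass(DocumentMapper.class);
        countLines.setReducerClass(DocumentReducer.class);
        countLines.setOutputKeyClass(Text.class);
        countLines.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(countLines, new Path(args[0]));
        FileOutputFormat.setOutputPath(countLines, new Path(args[1]));

        countLines.waitForCompletion(true);

        // the counter is read only after the job has finished
        long documents = countLines.getCounters()
                .findCounter(MyCounters.Documents).getValue();
        System.out.println("documents = " + documents);
    }
}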

Unfortunately, no operations with the counters seem to work for me. Is there some other specific way the counters must be initialized, beyond what I have described above?

I am using Hadoop version 1.1.1 on an IBM BigInsights instance (in case this information is relevant to the problem). When I type hadoop version I get:

Hadoop 1.1.1
Subversion git://dasani.svl.ibm.com/ on branch (no branch) -r f0025c9fd25730e3c1bfebceeeeb50d930b4fbaa
Compiled by jenkins on Fri Aug  9 17:06:14 PDT 2013
From source with checksum 21fb4557d5057d18b673b3fd46176f95

Thank you in advance for any help.

EDIT: I have tried my MapReduce program on a toy single-node Cloudera Hadoop instance that I have in a virtual machine, and there it seems to work as I expected. The hadoop version command there gives:

Hadoop 2.0.0-cdh4.4.0
Subversion file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.4.0/src/hadoop-common-project/hadoop-common -r c0eba6cd38c984557e96a16ccd7356b7de835e79
Compiled by jenkins on Tue Sep  3 19:33:17 PDT 2013
From source with checksum ac7e170aa709b3ace13dc5f775487180

So my questions are:

1) Is the only reason the counters work for me on Cloudera that it is a single-node instance? Or are counters supposed to work on multi-node instances too, and the problem is therefore on the IBM BigInsights side?

No, the issue has nothing to do with it being a single-node instance. You need to upgrade the Hadoop version running on IBM BigInsights. It succeeds on Cloudera's sandbox because that sandbox is running Hadoop 2.

The Hadoop 2 API is incompatible with the Hadoop 1 API: org.apache.hadoop.mapreduce.Counter is a class in Hadoop 1 but an interface in Hadoop 2, which is exactly the change that java.lang.IncompatibleClassChangeError is complaining about.
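
A quick way to see which variant is actually on the classpath at runtime, as a small diagnostic sketch (not part of the fix itself):

public class CounterTypeCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Hadoop 1.x ships Counter as a class (prints isInterface=false);
        // Hadoop 2.x ships it as an interface (prints isInterface=true).
        Class<?> c = Class.forName("org.apache.hadoop.mapreduce.Counter");
        System.out.println(c.getName() + " isInterface=" + c.isInterface());
    }
}

Whichever it prints, the job jar has to be compiled against that same major version, otherwise you get exactly this IncompatibleClassChangeError.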
