
Running Hadoop map-reduce job remotely causes EOFException?

I have written a Hadoop map-reduce program, and now I would like to test it against the Cloudera Hadoop distribution running in VirtualBox on the same machine.

This is how I submit the map-reduce job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class AvgCounter extends Configured implements Tool {

    public int run(String[] args) throws Exception {
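        // Constructing the Cluster below opens an RPC connection to the JobTracker
        // configured in main(); this is the call that fails in the stack trace further down.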
        Job mrJob = Job.getInstance(new Cluster(getConf()), getConf()); 
        mrJob.setJobName("Average count");

        mrJob.setJarByClass(AvgCounter.class);
        mrJob.setOutputKeyClass(IntWritable.class);
        mrJob.setOutputValueClass(Text.class);
        mrJob.setMapperClass(AvgCounterMap.class);
        mrJob.setCombinerClass(AvgCounterReduce.class);
        mrJob.setReducerClass(AvgCounterReduce.class);
        mrJob.setInputFormatClass(TextInputFormat.class);
        mrJob.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(mrJob, new Path("/user/test/testdata.csv"));
        FileOutputFormat.setOutputPath(mrJob, new Path("/user/test/result.txt"));
        mrJob.setWorkingDirectory(new Path("/tmp"));
        return mrJob.waitForCompletion(true)? 1: 0;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
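        // Point the client at the NameNode and JobTracker running on the Cloudera VM.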
        conf.set("fs.defaultFS", "hdfs://192.168.5.50:9000");
        conf.set("mapreduce.jobtracker.address", "192.168.5.50:9001");
        System.exit(ToolRunner.run(conf, new AvgCounter(), args));
    }
}

AvgCounterMap has an empty map method that does nothing, and AvgCounterReduce has an empty reduce method that does nothing.
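A minimal sketch of what such no-op classes might look like (the LongWritable/Text input types are an assumption based on the TextInputFormat above; the output types follow the setOutputKeyClass/setOutputValueClass calls):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// No-op mapper: consumes input lines and emits nothing.
public class AvgCounterMap extends Mapper<LongWritable, Text, IntWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context) {
        // intentionally empty
    }
}

// No-op reducer (also used as the combiner above): emits nothing.
class AvgCounterReduce extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable key, Iterable<Text> values, Context context) {
        // intentionally empty
    }
}

When I try to run the main method, I get the following exception: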

Exception in thread "main" java.io.IOException: Call to /192.168.5.50:9001 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
    at org.apache.hadoop.ipc.Client.call(Client.java:1031)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
    at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:86)
    at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:98)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
    at eu.xxx.mapred.AvgCounter.run(AvgCounter.java:22)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
    at eu.xxx.mapred.AvgCounter.main(AvgCounter.java:53)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698) 

The virtual Cloudera machine running Hadoop has the following in /etc/hadoop/conf/core.site.xml:

<property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.5.50:9000</value>
</property> 

and the following in /etc/hadoop/conf/mapred.site.xml:

<property>
     <name>mapred.job.tracker</name>
     <value>192.168.5.50:9001</value>
</property>

I have also checked the connection to the virtual machine by typing 192.168.5.50:50030 into a web browser, which brought up the Hadoop Map/Reduce administration page. So what is causing this exception, and how can I get rid of it?

Thank you for any ideas.

The problem was that the client was using a different version of the Hadoop API (0.23.0) than the one used by the Hadoop installation.
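For reference: an EOFException while reading the RPC response (the readInt in the stack trace) is a common symptom of a client/server version mismatch. One quick way to see which version is on the client classpath is Hadoop's VersionInfo utility; compare its output with what `hadoop version` prints on the Cloudera VM and use a matching client dependency:

import org.apache.hadoop.util.VersionInfo;

// Prints the Hadoop version found on the client classpath. Run it with the same
// classpath as AvgCounter and compare against `hadoop version` on the virtual machine.
public class ClientVersionCheck {
    public static void main(String[] args) {
        System.out.println("Client Hadoop version: " + VersionInfo.getVersion());
        System.out.println("Built from revision:   " + VersionInfo.getRevision());
    }
}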


 