Null Pointer Exception - Hadoop Mapreduce job
I am a beginner with Hadoop and Java. I am writing Map and Reduce functions to cluster a set of latitudes and longitudes by proximity, and to emit for each cluster a magnitude (the number of lat/long pairs in the cluster) and a representative lat/long pair (so far, the first pair encountered).
Here is my code:
package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import util.hashing.*;

public class LatLong {

    public static class Map extends Mapper<Object, Text, Text, Text> {
        //private final static IntWritable one = new IntWritable(1);

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            String[] longLatArray = line.split(",");
            double longi = Double.parseDouble(longLatArray[0]);
            double lat = Double.parseDouble(longLatArray[1]);
            //List<Double> origLatLong = new ArrayList<Double>(2);
            //origLatLong.add(lat);
            //origLatLong.add(longi);
            Geohash inst = Geohash.getInstance();
            //encode is the library's encoding function
            String hash = inst.encode(lat, longi);
            //Using the first few characters just for testing purposes;
            //need to find the right precision later
            int accuracy = 4;
            //the hash is shortened to whatever I figure out
            //to be the right size for each tile
            Text shortenedHash = new Text(hash.substring(0, accuracy));
            Text origHash = new Text(hash);
            context.write(shortenedHash, origHash);
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        private IntWritable totalTileElementCount = new IntWritable();
        private Text latlongimag = new Text();
        private Text dataSeparator = new Text();

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            int elementCount = 0;
            boolean first = true;
            Iterator<Text> it = values.iterator();
            String lat = new String();
            String longi = new String();
            Geohash inst = Geohash.getInstance();
            while (it.hasNext()) {
                elementCount = elementCount + 1;
                if (first) {
                    lat = Double.toString((inst.decode(it.toString()))[0]);
                    longi = Double.toString((inst.decode(it.toString()))[1]);
                    first = false;
                }
                @SuppressWarnings("unused")
                String blah = it.next().toString();
            }
            totalTileElementCount.set(elementCount);
            //Geohash inst = Geohash.getInstance();
            String mag = totalTileElementCount.toString();
            latlongimag.set(lat + "," + longi + "," + mag + ",");
            dataSeparator.set("");
            context.write(latlongimag, dataSeparator);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setJarByClass(LatLong.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
I am getting an NPE. I don't know how to test for it, and I can't find the error in the code.
Hadoop error:
java.lang.NullPointerException
at util.hashing.Geohash.decode(Geohash.java:41)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:67)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:663)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:426)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
The decode function in the Geohash library returns an array of doubles. Any pointers would be appreciated! Thanks for your time!
EDIT 1 (after testing):
I have realized that the problem is that in the reduce function I need it.next().toString() rather than just it.toString(). But when I make that change and test it, I get the following error, and I can't figure out why, given that I check hasNext() in the while-loop condition.
java.util.NoSuchElementException: iterate past last value
at org.apache.hadoop.mapreduce.ReduceContext$ValueIterator.next(ReduceContext.java:159)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:69)
at org.myorg.LatLong$Reduce.reduce(LatLong.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:663)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:426)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
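The NoSuchElementException in this trace comes from advancing the iterator more times per loop pass than hasNext() was checked. A minimal, Hadoop-free sketch (plain java.util iterators, not the actual reducer) shows the pattern: consuming two elements per hasNext() check runs past the end whenever the value list has an odd length.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;

public class IteratorDemo {

    // Consuming two elements per hasNext() check fails on odd-length
    // input: the last pass's second next() has nothing left to return.
    static boolean doubleNextFails(Iterable<String> values) {
        Iterator<String> it = values.iterator();
        try {
            while (it.hasNext()) {
                it.next(); // first consume, guarded by hasNext()
                it.next(); // second consume, NOT guarded -- may overrun
            }
            return false;
        } catch (NoSuchElementException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Three values, like three geohashes arriving at one reducer key.
        System.out.println(doubleNextFails(Arrays.asList("a", "b", "c"))); // true
        System.out.println(doubleNextFails(Arrays.asList("a", "b")));      // false
    }
}
```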
EDIT 2 (further testing): solution
I was calling it.next() more than once per iteration, and since it is an iterator, that just makes it advance twice. On the last pass, the loop checks the condition and enters, but I then call it.next() twice, which causes the problem because only one element (the last one) is left.
You are still calling toString() on it, not on it.next(), so you should change

lat = Double.toString((inst.decode(it.toString()))[0]);
longi = Double.toString((inst.decode(it.toString()))[1]);

to

String cords = it.next().toString();
lat = Double.toString((inst.decode(cords))[0]);
longi = Double.toString((inst.decode(cords))[1]);

Do not write inst.decode(it.next().toString()) twice, because that would call it.next() twice within a single while iteration.

Also, do not call String blah = it.next().toString(); afterwards, because you will get java.util.NoSuchElementException: iterate past last value for the same reason.

And when you do remove String blah = it.next().toString();, remember that once first = false you never enter the if(first) block again, and therefore never call String cords = it.next().toString();, so it.hasNext() will always return true and you will never exit the while loop. Add the appropriate statements so the iterator advances on every iteration.
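Putting that advice together, the corrected loop consumes the iterator exactly once per pass and captures the first value on the first pass. The sketch below is a plain-Java stand-in with no Hadoop or Geohash dependency (the hash strings and the reduceTile helper are hypothetical placeholders); it demonstrates only the iteration discipline, not the real decoding:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReduceLoopSketch {

    // Mirrors the corrected reducer loop: exactly one it.next() per
    // iteration, first value remembered on the first pass. The raw
    // string stands in for the decoded lat/long, since the real
    // Geohash.decode lives in the asker's util.hashing library.
    static String reduceTile(List<String> values) {
        Iterator<String> it = values.iterator();
        int elementCount = 0;
        boolean first = true;
        String firstVal = "";
        while (it.hasNext()) {
            String cords = it.next(); // consumed exactly once per pass
            if (first) {
                firstVal = cords;     // keep the representative pair
                first = false;
            }
            elementCount++;
        }
        return firstVal + "," + elementCount;
    }

    public static void main(String[] args) {
        System.out.println(reduceTile(Arrays.asList("hashA", "hashB", "hashC")));
        // prints hashA,3
    }
}
```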
That means either your it is null, or decode is returning null. Do null checks on them.