Web Crawler Java
public static void download() {
    try {
        URL oracle = new URL("http://api.wunderground.com/api/54f05b23fd8fd4b0/geolookup/conditions/forecast/q/US/CO/Denver.json");
        BufferedReader in = new BufferedReader(new InputStreamReader(oracle.openStream()));
        File file = new File("C:\\Users\\User\\Desktop\\test2.json");
        if (!file.exists()) {
            file.createNewFile();
        }
        FileWriter fw = new FileWriter(file.getAbsoluteFile());
        BufferedWriter bw = new BufferedWriter(fw);
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            bw.write(inputLine + "\n");
        }
        fw.close();
        in.close();
        System.out.println("Finished...");
    }
    catch (MalformedURLException e) { e.printStackTrace(); }
    catch (IOException e) { e.printStackTrace(); }
}
I'm making a web crawler to retrieve weather updates from Wunderground. I did get this to work, but it doesn't write the entire document (it cuts off the last bit). What did I do wrong?
You are wrapping your FileWriter with a BufferedWriter:

FileWriter fw = new FileWriter(file.getAbsoluteFile());
BufferedWriter bw = new BufferedWriter(fw);

but only closing the FileWriter:

fw.close();

Since the FileWriter has no access to, and doesn't know about, the BufferedWriter, it can't flush any data remaining in the buffer. You could either call flush() on bw, or close bw instead of fw. Calling bw.close() will also take care of closing the wrapped FileWriter:
bw.close();
in.close();
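To see why this matters, here is a minimal, self-contained sketch (class and method names are mine, and it writes to temp files instead of the network) that reproduces the bug side by side with the fix. Because BufferedWriter holds output in an 8 KB buffer by default, a short line never reaches the file when only the inner FileWriter is closed:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {
    // Mimics the bug: writes through a BufferedWriter but closes
    // only the underlying FileWriter, so the buffer is never flushed.
    static void closeInnerOnly(Path p) throws IOException {
        FileWriter fw = new FileWriter(p.toFile());
        BufferedWriter bw = new BufferedWriter(fw);
        bw.write("hello\n");
        fw.close(); // data still sitting in bw's buffer is lost
    }

    // The fix: closing the BufferedWriter flushes its buffer and
    // closes the wrapped FileWriter for us. try-with-resources
    // (Java 7+) calls bw.close() automatically, even on exceptions.
    static void closeOuter(Path p) throws IOException {
        try (BufferedWriter bw = new BufferedWriter(new FileWriter(p.toFile()))) {
            bw.write("hello\n");
        }
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("inner", ".txt");
        Path b = Files.createTempFile("outer", ".txt");
        closeInnerOnly(a);
        closeOuter(b);
        System.out.println("inner-only close wrote " + Files.size(a) + " bytes"); // 0
        System.out.println("outer close wrote " + Files.size(b) + " bytes");      // 6
    }
}
```

In your download() method the same idea applies: putting both the BufferedReader and the BufferedWriter in one try-with-resources header removes the need for any manual close() calls.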