Spark streaming output not saved to HDFS file
I am trying to save the Spark streaming output to a file on HDFS. Right now, it is not saving any file.

Here is my code:
StreamingExamples.setStreamingLogLevels();

SparkConf sparkConf = new SparkConf().setAppName("MyTestCOunt");
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(1000));

JavaReceiverInputDStream<String> lines = ssc.socketTextStream(
        args[0], Integer.parseInt(args[1]), StorageLevels.MEMORY_AND_DISK_SER);

JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
    @Override
    public Iterable<String> call(String x) {
        return Lists.newArrayList(SPACE.split(x));
    }
});

JavaPairDStream<String, Integer> wordCounts = words.mapToPair(
        new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        }).reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                return i1 + i2;
            }
        });

wordCounts.print();
wordCounts.saveAsHadoopFiles("hdfs://mynamenode:8020/user/spark/mystream/", "abc");

ssc.start();
ssc.awaitTermination();
wordCounts.print() works, but wordCounts.saveAsHadoopFiles does not. Any ideas why?
I am running the following commands:

1) nc -lk 9999

2) ./bin/run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999
Thanks in advance!
I fixed the same problem by specifying the master as local[x] with x > 1. If you run the master as plain local, Spark cannot assign a slot to execute the task: the socket receiver occupies the only available core, leaving none for the processing. Like:

SparkConf conf = new SparkConf().setAppName("conveyor").setMaster("local[4]");
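The master can also be passed at submit time instead of hardcoded; a minimal sketch, where the application class and jar names are placeholders:

./bin/spark-submit --class com.example.NetworkWordCount --master "local[4]" my-streaming-app.jar localhost 9999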
Try:

wordCounts.dstream().saveAsTextFiles("hdfs://mynamenode:8020/user/spark/mystream/", "abc");

instead of:

wordCounts.saveAsHadoopFiles("hdfs://mynamenode:8020/user/spark/mystream/", "abc");
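If you do want the Hadoop output-format route, saveAsHadoopFiles also has an overload that takes the key, value, and output-format classes explicitly; a minimal sketch, assuming Hadoop's Text/IntWritable/TextOutputFormat classes (a common choice, not necessarily the original poster's setup):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.TextOutputFormat;

// Each batch is written to its own directory named <prefix>-<batchTimeMs>.<suffix>.
wordCounts.saveAsHadoopFiles(
        "hdfs://mynamenode:8020/user/spark/mystream/", "abc",
        Text.class, IntWritable.class, TextOutputFormat.class);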
JavaDStream<String> lines;

Initialize lines with our data, then save each batch yourself:

lines.foreachRDD(new VoidFunction<JavaRDD<String>>() {
    public void call(JavaRDD<String> rdd) throws Exception {
        Date today = new Date();
        String date = new SimpleDateFormat("dd-MM-yyyy").format(today);
        // saveAsTextFile refuses to write into an existing directory,
        // so give every batch its own subdirectory under the daily path.
        rdd.saveAsTextFile(OUTPUT_LOCATION + "/" + date + "/" + System.currentTimeMillis());
    }
});
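A small guard worth adding: a socket stream emits empty RDDs during idle batch intervals, which would create empty output directories. A minimal sketch of the check inside the same call method, assuming Spark 1.3+ for JavaRDD.isEmpty():

// Only write batches that actually contain records.
if (!rdd.isEmpty()) {
    rdd.saveAsTextFile(OUTPUT_LOCATION + "/" + date + "/" + System.currentTimeMillis());
}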
I fixed this by changing the Sandbox / Server timezone to my local timezone, as my Twitter account uses GMT while my Sandbox uses UTC. I used the following commands to change my Sandbox timezone:

ntpdate pool.ntp.org
chkconfig ntpd on
ntpdate pool.ntp.org
/etc/init.d/ntpd start
date

I haven't restarted my Hadoop services since the timezone change.