
Spark-Scala: writing the output to a text file

I am executing the word-count program in Spark and trying to store the result in a text file.

I have a Scala script, SparkWordCount.scala, that counts the words. I am trying to execute the script from the Spark console as below.

scala> :load /opt/spark-2.0.2-bin-hadoop2.7/bin/SparkWordCount.scala
Loading /opt/spark-2.0.2-bin-hadoop2.7/bin/SparkWordCount.scala...
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark._
defined object SparkWordCount

scala>

After the script is loaded I get the message "defined object SparkWordCount", but I am not able to see the output result in a text file.

My word-count program is below.

import org.apache.spark.SparkContext 
import org.apache.spark.SparkContext._ 
import org.apache.spark._  

object SparkWordCount { 
   def main(args: Array[String]) { 

      // Arguments: master URL, app name, Spark home, extra jars (a Seq of paths),
      // and environment variables for the executors.
      val sc = new SparkContext("local", "Word Count", "/opt/spark-2.0.2-bin-hadoop2.7",
        List("/opt/spark-2.0.2-bin-hadoop2.7/jars"), Map())



      val input = sc.textFile("demo.txt") 


      val count = input.flatMap(line ⇒ line.split(" ")).map(word ⇒ (word, 1)).reduceByKey(_ + _) 
      count.saveAsTextFile("outfile") 

   } 
}
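
Note that count.saveAsTextFile("outfile") creates a directory named outfile (relative to the working directory) containing one part-NNNNN file per partition plus a _SUCCESS marker, rather than a single text file. A minimal sketch for inspecting the result afterwards, assuming a live SparkContext named sc (spark-shell provides one by default):

// Read every line of the part files back and print it;
// each line is one (word,count) pair rendered as a string.
sc.textFile("outfile").collect().foreach(println)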

Can anyone please suggest a fix? Thanks.

Once the object is defined you can call its method to execute your code. The Spark shell won't execute the main method automatically. In your case you can use SparkWordCount.main(Array()) to execute your word-count program.
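
A sketch of what the shell session might look like (the sc.stop() step is my addition, not part of the answer above: spark-shell pre-creates a SparkContext named sc, and by default only one SparkContext may run per JVM, so the new SparkContext(...) inside main can otherwise fail):

scala> :load /opt/spark-2.0.2-bin-hadoop2.7/bin/SparkWordCount.scala
...
defined object SparkWordCount

scala> sc.stop()   // stop the shell's own context so main can create a new one

scala> SparkWordCount.main(Array())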
