Convert a Scala string to RDD[Seq[String]]

 // 4 workers
  val sc = new SparkContext("local[4]", "naivebayes")

  // Load documents (one per line).
  val documents: RDD[Seq[String]] = sc.textFile("/tmp/test.txt").map(_.split(" ").toSeq)

  documents.zipWithIndex.foreach {
    case (e, i) =>
      val collectedResult = Tokenizer.tokenize(e.mkString)
  }

  val hashingTF = new HashingTF()
  //pass collectedResult instead of document
  val tf: RDD[Vector] = hashingTF.transform(documents)

  tf.cache()
  val idf = new IDF().fit(tf)
  val tfidf: RDD[Vector] = idf.transform(tf)

In the above code snippet, I want to extract collectedResult so I can reuse it in hashingTF.transform. How can this be achieved, given that the signature of the tokenize function is:

 def tokenize(content: String): Seq[String] = {
...
}
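
The body of tokenize is elided above; to make the snippets below readable as runnable code, assume a simple hypothetical stand-in like the following (the asker's real Tokenizer.tokenize may do stemming, stop-word removal, and so on):

// Hypothetical stand-in for the elided tokenize body: lowercase the
// line and split on whitespace. The actual implementation may differ.
def tokenize(content: String): Seq[String] =
  content.toLowerCase.split("\\s+").filter(_.nonEmpty).toSeq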

Looks like you want map rather than foreach. I don't understand what you're using zipWithIndex for, nor why you're calling split on your lines only to join them straight back up again with mkString.

val lines: RDD[String] = sc.textFile("/tmp/test.txt")
val tokenizedLines: RDD[Seq[String]] = lines.map(tokenize)
// HashingTF.transform accepts an RDD of token sequences directly.
val hashingTF = new HashingTF()
val hashes: RDD[Vector] = hashingTF.transform(tokenizedLines)
hashes.cache()
...
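
To finish the TF-IDF pipeline the way the question's snippet does, a minimal sketch (assuming Spark MLlib's IDF from org.apache.spark.mllib.feature and the hashes RDD from the snippet above):

import org.apache.spark.mllib.feature.IDF
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// IDF().fit makes one pass over the term-frequency vectors and
// idf.transform makes another, which is why hashes is cached above.
val idf = new IDF().fit(hashes)
val tfidf: RDD[Vector] = idf.transform(hashes)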
