
Convert RDD of Array(Row) to RDD of Row?

I have the following data in a file and I'd like to do some statistics on it using Spark.

File content:

aaa|bbb|ccc
ddd|eee|fff|ggg

I need to assign each line an id, so I read the file as an RDD and use zipWithIndex().

Then the records should look like:

(0, aaa|bbb|ccc)
(1, ddd|eee|fff|ggg)

I need to associate each string with its line's id. I can get an RDD of Array(Row), but I can't get rid of the array nesting.

How should I modify my code?

import org.apache.spark.sql.{Row, SparkSession}

val fileRDD = spark.sparkContext.textFile(filePath)
val fileWithIdRDD = fileRDD.zipWithIndex()
// goal: split each line into records like (0, aaa), (0, bbb), (0, ccc)
// problem: each element of the resulting RDD is an Array of Row, not a Row
fileWithIdRDD.map(x => {
  val id = x._1
  val str = x._2
  val strArr = str.split("\\|")
  val rowArr = strArr.map(y => {
    Row(id, y)
  }) 
  rowArr 
})

Now the result looks like:

[(0, aaa), (0, bbb), (0, ccc)]
[(1, ddd), (1, eee), (1, fff), (1, ggg)]

But finally I want:

(0, aaa)
(0, bbb) 
(0, ccc)
(1, ddd)
(1, eee)
(1, fff)
(1, ggg)

You just need to flatten your RDD:

yourRDD.flatMap(array => array)
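What flatMap(array => array) does here can be seen on plain Scala collections, which have the same flatMap semantics as RDDs. A minimal sketch (no Spark needed; the object and value names are illustrative):

```scala
// Sketch of the map-then-flatten pipeline on plain Scala collections.
object FlattenSketch {
  // (line, id) pairs, as RDD.zipWithIndex would produce
  val fileWithId: Seq[(String, Long)] =
    Seq(("aaa|bbb|ccc", 0L), ("ddd|eee|fff|ggg", 1L))

  // map yields one Array of (id, field) pairs per line: a nested structure
  val nested: Seq[Array[(Long, String)]] =
    fileWithId.map { case (str, id) =>
      str.split("\\|").map(field => (id, field))
    }

  // flatMap(array => array) collapses the nesting into a flat sequence
  val flat: Seq[(Long, String)] = nested.flatMap(array => array)

  def main(args: Array[String]): Unit = flat.foreach(println)
}
```

Here `nested` plays the role of your RDD of Array(Row): flattening it yields one record per field instead of one array per line.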

Considering your code (with some errors fixed inside the inner map and in the assignment of id and str; note that zipWithIndex() returns (element, index) pairs, so the line comes first and the id second):

fileWithIdRDD.map(x => {
  val id = x._2   // zipWithIndex puts the index second
  val str = x._1
  val strArr = str.split("\\|")
  val rowArr = strArr.map(y => {
    Row(id, y)
  }) 
  rowArr 
}).flatMap(array => array)
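The map followed by flatMap(array => array) can also be fused into a single flatMap, since flatMap absorbs the intermediate Array level on its own. A sketch on a plain Scala Seq (RDD.flatMap behaves the same way; the data and names are illustrative):

```scala
// The same pipeline as a single flatMap, on plain Scala collections.
object SingleFlatMap {
  // (line, id) pairs, as RDD.zipWithIndex would produce
  val fileWithId: Seq[(String, Long)] =
    Seq(("aaa|bbb|ccc", 0L), ("ddd|eee|fff|ggg", 1L))

  // emit one (id, field) pair per field; no intermediate nesting survives
  val flat: Seq[(Long, String)] = fileWithId.flatMap { case (str, id) =>
    str.split("\\|").map(field => (id, field))
  }

  def main(args: Array[String]): Unit = flat.foreach(println)
}
```

Either form works; the fused version just avoids materializing the per-line arrays as a separate step.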

Quick example (note that the demo RDD here was created with the id as the first element of each pair, hence id = x._1 below):

INPUT

fileWithIdRDD.collect
res30: Array[(Int, String)] = Array((0,aaa|bbb|ccc), (1,ddd|eee|fff|ggg))

EXECUTION

scala> fileWithIdRDD.map(x => {
         val id = x._1
         val str = x._2
         val strArr = str.split("\\|")
         val rowArr = strArr.map(y => {
           Row(id, y)
         })
         rowArr
       }).flatMap(array => array)


res31: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[17] at flatMap at <console>:35

OUTPUT

scala> res31.collect
res32: Array[org.apache.spark.sql.Row] = Array([0,aaa], [0,bbb], [0,ccc], [1,ddd], [1,eee], [1,fff], [1,ggg])
