
Scala Nested Map to Spark RDD

I'm trying to convert a list of maps (Seq[Map[String, Map[String, String]]]) into an RDD of tuples, where each key -> value pair in the inner map is flat-mapped into a tuple together with the outer map's key. For example

Map(
 1 -> Map('k' -> 'v', 'k1' -> 'v1')
)  

becomes

(1, 'k', 'v')
(1, 'k1', 'v1')
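
In plain Scala (without Spark), the flattening I'm after is essentially this nested flatMap, sketched here over the example above:

  val nested = Map(1 -> Map("k" -> "v", "k1" -> "v1"))

  // flatten each (outerKey -> innerMap) entry into (outerKey, innerKey, value) triples
  val flattened: Seq[(Int, String, String)] =
    nested.toSeq.flatMap { case (outer, inner) =>
      inner.map { case (k, v) => (outer, k, v) }
    }
  // flattened == Seq((1, "k", "v"), (1, "k1", "v1"))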

I've tried the following approach, but it seems to fail on concurrency issues. I have two worker nodes, and each key -> value pair shows up twice in the output (which I assume is because I'm doing this wrong).

Let's assume I hold my map type in a case class 'Records'.

  val rdd = sc.parallelize(1 to records.length)
  val recordsIt = records.iterator
  val res: RDD[(String, String, String)] = rdd.flatMap(f => {
    val currItem = recordsIt.next()
    val x: immutable.Iterable[(String, String, String)] = currItem.mapData.map(v => {
      (currItem.identifier, v._1, v._2)
    })
    x
  }).sortBy(r => r)

Is there a way to parallelize this work without running into serious concurrency issues (as I suspect is happening)?

Example of the duplicated output:

(201905_001ac172c2751c1d4f4b4cb0affb42ef_gFF0dSg4iw,CID,B13131608623827542)
(201905_001ac172c2751c1d4f4b4cb0affb42ef_gFF0dSg4iw,CID,B13131608623827542)
(201905_001ac172c2751c1d4f4b4cb0affb42ef_gFF0dSg4iw,ROD,19190321)
(201905_001ac172c2751c1d4f4b4cb0affb42ef_gFF0dSg4iw,ROD,19190321)
(201905_001b3ba44f6d1f7505a99e2288108418_mSfAfo31f8,CID,339B4C3C03DDF96AAD)
(201905_001b3ba44f6d1f7505a99e2288108418_mSfAfo31f8,CID,339B4C3C03DDF96AAD)
(201905_001b3ba44f6d1f7505a99e2288108418_mSfAfo31f8,ROD,19860115)
(201905_001b3ba44f6d1f7505a99e2288108418_mSfAfo31f8,ROD,19860115)

Spark parallelize is not very efficient to begin with (since the data is already in memory, it is much less expensive to just iterate over it locally); nonetheless, a more idiomatic approach would be a simple flatMap:

sc.parallelize(records.toSeq)
  .flatMapValues(identity)
  .map { case (k1, (k2, v)) => (k1, k2, v) } 
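
If records is in fact the Seq of the 'Records' case class from the question rather than a single Map, the same idea applies; here is a rough sketch, assuming Records has the identifier: String and mapData: Map[String, String] fields used in the question's snippet:

import org.apache.spark.rdd.RDD

case class Records(identifier: String, mapData: Map[String, String])

// distribute the records themselves (rather than an index range) and flatten
// each record's inner map into (identifier, key, value) triples on the executors
val res: RDD[(String, String, String)] =
  sc.parallelize(records)
    .flatMap(r => r.mapData.map { case (k, v) => (r.identifier, k, v) })

Because the records are the RDD's own elements, no driver-side iterator is shared between tasks (sharing one is the likely cause of the duplicated rows above).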
