Converting a Spark Dataframe to a Scala Map collection
I'm trying to find the best solution to convert an entire Spark dataframe to a Scala Map collection. It is best illustrated as follows:
To go from this (in the Spark examples):
val df = sqlContext.read.json("examples/src/main/resources/people.json")
df.show
+----+-------+
| age| name|
+----+-------+
|null|Michael|
| 30| Andy|
| 19| Justin|
+----+-------+
To a Scala collection (Map of Maps) represented like this:
val people = Map(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
I don't think your question makes sense -- in your outermost Map I only see you trying to stuff values into it -- you need to have key/value pairs in your outermost Map. That being said:
val peopleArray = df.collect.map(r => Map(df.columns.zip(r.toSeq):_*))
Will give you:
Array(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
At that point you could do:
val people = Map(peopleArray.map(p => (p.getOrElse("name", null), p)):_*)
Which would give you:
Map(
("Michael" -> Map("age" -> null, "name" -> "Michael")),
("Andy" -> Map("age" -> 30, "name" -> "Andy")),
("Justin" -> Map("age" -> 19, "name" -> "Justin"))
)
I'm guessing this is really more what you want. If you wanted to key them on an arbitrary Long index, you can do:
val indexedPeople = Map(peopleArray.zipWithIndex.map(r => (r._2, r._1)):_*)
Which gives you:
Map(
(0 -> Map("age" -> null, "name" -> "Michael")),
(1 -> Map("age" -> 30, "name" -> "Andy")),
(2 -> Map("age" -> 19, "name" -> "Justin"))
)
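For reference, here is a minimal, self-contained sketch putting those steps together. It assumes a Spark 2.x+ SparkSession named spark (the question uses the older sqlContext, but the logic is the same) and the people.json file that ships with the Spark examples:
import org.apache.spark.sql.SparkSession

// sketch only: a local SparkSession named `spark` is an assumption here
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("df-to-map")
  .getOrCreate()

val df = spark.read.json("examples/src/main/resources/people.json")

// one Map per row: column name -> value
val peopleArray: Array[Map[String, Any]] =
  df.collect.map(r => Map(df.columns.zip(r.toSeq): _*))

// outer Map keyed by the "name" column
val people: Map[Any, Map[String, Any]] =
  Map(peopleArray.map(p => (p.getOrElse("name", null), p)): _*)

people.foreach(println)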
First, get the schema from the dataframe:
val schemaList = dataframe.schema.map(_.name).zipWithIndex // list of (columnName, index) pairs from the dataframe schema
Then get the RDD from the dataframe and map over it:
dataframe.rdd.map(row =>
  // here rec._1 is the column name and rec._2 its index
  schemaList.map(rec => (rec._1, row(rec._2))).toMap
).collect.foreach(println)
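If you want to keep the result rather than just print it, a small variation of the same idea collects the row maps into an array (a sketch; rowMaps is just an illustrative name, and dataframe is assumed to be the same DataFrame as above):
// same approach, but collecting the result instead of printing it
val schemaList = dataframe.schema.map(_.name).zipWithIndex

val rowMaps: Array[Map[String, Any]] =
  dataframe.rdd
    .map(row => schemaList.map { case (name, idx) => (name, row(idx)) }.toMap)
    .collect

rowMaps.foreach(println)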
val map = df.collect.map(a => a(0) -> a(1)).toMap.asInstanceOf[Map[String, String]]
Use this if the result is needed as a single Map instead of an Array of Maps.
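Note that this one-liner assumes a two-column dataframe whose first column holds the keys and second the values, and the asInstanceOf cast is unchecked because of type erasure. A slightly safer sketch for the people example above (nameToAge is just an illustrative name; the age value is kept as Any because it can be null):
// select the key column and the value column explicitly, in that order
val nameToAge: Map[String, Any] =
  df.select("name", "age")
    .collect
    .map(row => row.getString(0) -> row(1))
    .toMap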