
Spark: fetch data from complex dataframe schema with map

I have the following structure:

json.select($"comments").printSchema

 root
 |-- comments: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- comment: struct (nullable = true)
 |    |    |    |-- date: string (nullable = true)
 |    |    |    |-- score: string (nullable = true)
 |    |    |    |-- shouts: array (nullable = true)
 |    |    |    |    |-- element: string (containsNull = true)
 |    |    |    |-- tags: array (nullable = true)
 |    |    |    |    |-- element: string (containsNull = true)
 |    |    |    |-- text: string (nullable = true)
 |    |    |    |-- username: string (nullable = true)
 |    |    |-- subcomments: array (nullable = true)
 |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |-- date: string (nullable = true)
 |    |    |    |    |-- score: string (nullable = true)
 |    |    |    |    |-- shouts: array (nullable = true)
 |    |    |    |    |    |-- element: string (containsNull = true)
 |    |    |    |    |-- tags: array (nullable = true)
 |    |    |    |    |    |-- element: string (containsNull = true)
 |    |    |    |    |-- text: string (nullable = true)
 |    |    |    |    |-- username: string (nullable = true)
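
For context, a minimal setup sketch that would yield a schema like this (Spark 2.x API; the SparkSession, the local master and the file name comments.json are illustrative assumptions, not part of the original question):

 import org.apache.spark.sql.SparkSession

 // Assumed setup: a local SparkSession and a line-delimited JSON file whose
 // documents contain the nested "comments" array described above.
 val spark = SparkSession.builder()
   .appName("comments-example")
   .master("local[*]")
   .getOrCreate()
 import spark.implicits._                      // enables the $"..." column syntax

 val json = spark.read.json("comments.json")   // hypothetical path
 json.select($"comments").printSchema()        // prints the schema shown above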

I want to get an array/list of [username, score, text] for the comments. Normally, in pyspark I would do something like this:

from pyspark.sql import Row

comments = (json
    .select("comments")
    .rdd                                 # .rdd is needed on Spark 2.x, where DataFrame has no flatMap
    .flatMap(lambda element:
        map(lambda comment:
            Row(username=comment.username,
                score=comment.score,
                text=comment.text),
            element[0]))
    .toDF())

But when I try the same approach in Scala:

json.select($"comments").rdd.map{row: Row => row(0)}.take(3)

I get some strange output:

Array[Any] =
Array(
  WrappedArray([[string,string,WrappedArray(),WrappedArray(),,string] ...],  ...)

Is there a way to do this task in Scala as easily as in Python?

Also, how do I iterate over a WrappedArray like an array/list? I run into an error like this:

error: scala.collection.mutable.WrappedArray.type does not take parameters

How about using a statically typed Dataset?

case class Comment(
    date: String, score: String,
    shouts: Seq[String], tags: Seq[String],
    text: String, username: String
)

df
  .select(explode($"comments.comment").alias("comment"))
  .select("comment.*")
  .as[Comment]
  .map(c => (c.username, c.score, c.date))
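
As a self-contained sketch of the same pipeline (the imports, the val name flattened, and the use of text in place of date are illustrative additions, assuming a SparkSession named spark and that df holds the JSON above):

 import org.apache.spark.sql.functions.explode
 import spark.implicits._              // required for $"...", .as[Comment] and the tuple encoder

 val flattened = df
   .select(explode($"comments.comment").alias("comment"))
   .select("comment.*")
   .as[Comment]
   .map(c => (c.username, c.score, c.text))

 flattened.show(3)                     // inspect a few (username, score, text) rows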

If you don't depend on the REPL, it can be simplified even further:

df
  .select("comments.comment")
  .as[Seq[Comment]]
  .flatMap(_.map(c => (c.username, c.score, c.text)))

If you really want to work with Rows, use typed getters:

import org.apache.spark.sql.Row

// SR is assumed to be a type alias for a sequence of Rows:
type SR = Seq[Row]

df.rdd.flatMap(
  _.getAs[SR]("comments")
    .map(_.getAs[Row]("comment"))
    .map {
      // You could also use _.getAs[String]("score") or getString(1)
      case Row(_, score: String, _, _, text: String, username: String) =>
        (username, score, text)
    }
)
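
As for iterating a WrappedArray: the values Spark returns for array columns are WrappedArray instances, which are ordinary Seqs, so they can be iterated, mapped, and indexed directly; the "does not take parameters" error usually means the WrappedArray companion object was applied to arguments (it has no apply method) instead of indexing an actual instance. A short sketch, assuming the same df and the column names from the schema above (the val names are illustrative):

 val first: Row = df.select("comments").head()
 val comments: Seq[Row] = first.getAs[Seq[Row]]("comments")   // a WrappedArray at runtime, usable as a Seq

 comments.foreach { c =>
   val tags: Seq[String] = c.getAs[Row]("comment").getAs[Seq[String]]("tags")
   tags.foreach(println)                    // iterate like any other collection
   if (tags.nonEmpty) println(tags(0))      // index the instance with apply
 }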
