
Unable to convert an RDD[Row] to a DataFrame

The following code converts a DataFrame to an RDD[Row] and appends the data for a new column via mapPartitions:

// df is a DataFrame; factorsMap (a Map[String, Int]) and inColName (the
// name of the String column to look up) are defined in the enclosing scope
val dfRdd = df.rdd.mapPartitions {
  val bfMap = df.rdd.sparkContext.broadcast(factorsMap)
  iter =>
    val locMap = bfMap.value
    iter.map { r =>
      val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
      Row(newseq)
    }
}

For the new RDD[Row], the output looks correct:

println("**dfrdd\n" + dfRdd.take(5).mkString("\n"))

**dfrdd
[ArrayBuffer(0021BEC286CC, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 148818)]
[ArrayBuffer(0021BEE7C556, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 26908)]
[ArrayBuffer(8C7F3BFD4B82, 4, Series, series, bc514da3e0d534da8207e3aab231d1cb, livetv, 99942)]
[ArrayBuffer(0021BEC8F8B8, 1, Series, series, 0d2debc63efa3790a444c7959249712b, livetv, 53994)]
[ArrayBuffer(10EA59F10C8B, 1, Series, series, 0d2debc63efa3790a444c7959249712b, livetv, 1427)]

Let's try to convert the RDD[Row] back to a DataFrame. First, build the updated schema:

import org.apache.spark.sql.types.{IntegerType, StructField}

val newSchema = df.schema.add(StructField("userf", IntegerType))

Now let's create the updated DataFrame:

val df2 = df.sqlContext.createDataFrame(dfRdd, newSchema)

Does the new schema look correct?

newSchema.printTreeString()

root
 |-- user: string (nullable = true)
 |-- score: long (nullable = true)
 |-- programType: string (nullable = true)
 |-- source: string (nullable = true)
 |-- item: string (nullable = true)
 |-- playType: string (nullable = true)
 |-- userf: integer (nullable = true)

Note that we do see the new userf column.

However, it does not work (the rows are validated against the schema only when an action actually runs, so the mismatch surfaces here):

println("df2: " + df2.take(1))

Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, 
most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): java.lang.RuntimeException: Error while encoding: 

java.lang.RuntimeException: scala.collection.mutable.ArrayBuffer is not a valid external type for schema of string
if (assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, user), StringType), true) AS user#28
+- if (assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object), 0, user), StringType), true)
   :- assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object).isNullAt
   :  :- assertnotnull(input[0, org.apache.spark.sql.Row, true], top level row object)
   :  :  +- input[0, org.apache.spark.sql.Row, true]
   :  +- 0
   :- null

So: what detail is missing here?

Note: I am not interested in different approaches here (e.g. withColumn or Datasets). Let's consider only the following approach:

  • Convert to an RDD
  • Add the new data element to each Row
  • Update the schema for the new column
  • Convert the new RDD + schema back to a DataFrame

There seems to be a small mistake in the call to Row's constructor:

val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
Row(newseq)

The signature of this "constructor" (actually the apply method on Row's companion object) is:

def apply(values: Any*): Row

When you pass it a Seq[Any], it is treated as a single value of type Seq[Any]. What you want is to pass the elements of that sequence, so you should use:

val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
Row(newseq: _*)
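
To see the arity difference concretely, here is a quick check (the values are arbitrary):

import org.apache.spark.sql.Row

val seq = Seq("a", 1L, 2)
Row(seq).length      // 1 -- a single field holding the whole Seq
Row(seq: _*).length  // 3 -- one field per element, as the schema expects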

Once you fix that, the rows will match the schema you constructed, and you'll get the expected result.
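
For reference, here is a minimal, self-contained sketch of the whole round trip with the fix applied. The sample data, the contents of factorsMap, and the column names are illustrative stand-ins, not the original data:

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StructField}

object AddColumnViaRdd {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("add-column-via-rdd")
      .getOrCreate()
    import spark.implicits._

    // Illustrative stand-ins for the original df, factorsMap and inColName
    val df = Seq(("0021BEC286CC", 4L), ("0021BEE7C556", 4L)).toDF("user", "score")
    val factorsMap = Map("0021BEC286CC" -> 148818, "0021BEE7C556" -> 26908)
    val inColName = "user"

    // Broadcast the lookup map once, on the driver
    val bfMap = spark.sparkContext.broadcast(factorsMap)

    val dfRdd = df.rdd.mapPartitions { iter =>
      val locMap = bfMap.value
      iter.map { r =>
        val newseq = r.toSeq :+ locMap(r.getAs[String](inColName))
        Row(newseq: _*) // splat: each element becomes its own field
      }
    }

    val newSchema = df.schema.add(StructField("userf", IntegerType))
    val df2 = spark.createDataFrame(dfRdd, newSchema)
    df2.show()

    spark.stop()
  }
}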
