Converting a List of Lists or RDD to a DataFrame in Spark-Scala
So basically what I am trying to achieve is this: I have a table with (say) 4 columns and I expose it as a DataFrame, DF1. Now I want to store each row of DF1 in another Hive table (basically DF2, whose schema is Column1, Column2, Column3), where the Column3 value is the '-'-delimited row of DataFrame DF1.
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.Column

val df = hiveContext.sql("from hive_table SELECT *")
val writeToHiveDf = df.filter(new Column("id").isNotNull)

var builder : List[(String, String, String)] = Nil
var finalOne = new ListBuffer[List[(String, String, String)]]()
writeToHiveDf.rdd.collect().foreach { row =>
  val item = row.mkString("-@")
  builder = List(List("dummy", "NEVER_NULL_CONSTRAINT", "some alpha")).map { case List(a, b, c) => (a, b, c) }
  finalOne += builder
}
Now I have finalOne as a list of lists, which I want to convert to a DataFrame, either directly or via an RDD.
var listRDD = sc.parallelize(finalOne) //Converts to RDD - It works.
val dataFrameForHive : DataFrame = listRDD.toDF("table_name", "constraint_applied", "data") //Doesn't work
Error:
java.lang.ClassCastException: org.apache.spark.sql.types.ArrayType cannot be cast to org.apache.spark.sql.types.StructType
at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:414)
at org.apache.spark.sql.SQLImplicits.rddToDataFrameHolder(SQLImplicits.scala:94)
Can someone help me understand the right way to convert this to a DataFrame? Thanks a ton in advance for your support.
If you want 3 columns of type String in your DataFrame, you should flatten your List[List[(String, String, String)]] to a List[(String, String, String)]:
var listRDD = sc.parallelize(finalOne.flatten) // makes List[(String,String,String)]
val dataFrameForHive : DataFrame = listRDD.toDF("table_name", "constraint_applied", "data")
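The type transformation this relies on can be seen without Spark at all. In plain Scala (the sample data below is illustrative), flatten removes one level of nesting, so each element of the resulting list is a single 3-tuple — which is what toDF needs in order to infer three string columns (a StructType) instead of one array column (an ArrayType):

```scala
// A nested list shaped like finalOne: each inner list holds one row-tuple.
val nested: List[List[(String, String, String)]] = List(
  List(("dummy", "NEVER_NULL_CONSTRAINT", "a-@b-@c")),
  List(("dummy", "NEVER_NULL_CONSTRAINT", "d-@e-@f"))
)

// flatten collapses List[List[T]] to List[T].
val flat: List[(String, String, String)] = nested.flatten

// Each element is now a plain 3-tuple, so Spark's toDF can map it to
// three string columns instead of a single array column.
println(flat)
// List((dummy,NEVER_NULL_CONSTRAINT,a-@b-@c), (dummy,NEVER_NULL_CONSTRAINT,d-@e-@f))
```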
I believe that flattening "finalOne" before passing it to the sc.parallelize() function should produce the result you expect.
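Alternatively, you could avoid the extra nesting altogether by appending tuples instead of single-element lists inside the loop. A minimal Spark-free sketch of that accumulation (the "-@" separator and the constant first two columns mirror the question's code; the sample rows are stand-ins for the collected DataFrame rows):

```scala
import scala.collection.mutable.ListBuffer

// Stand-ins for writeToHiveDf.rdd.collect(): each inner Seq is one row.
val rows: Seq[Seq[Any]] = Seq(
  Seq("1", "x", "y", "z"),
  Seq("2", "p", "q", "r")
)

// Accumulate flat tuples directly -- no List[List[...]], so no flatten needed.
val finalOne = new ListBuffer[(String, String, String)]()
rows.foreach { row =>
  val item = row.mkString("-@")
  finalOne += (("dummy", "NEVER_NULL_CONSTRAINT", item))
}

// finalOne.toList is already a List[(String, String, String)], ready for
// sc.parallelize(finalOne.toList).toDF("table_name", "constraint_applied", "data")
println(finalOne.toList)
```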