
Spark DataFrame column names not passed to slave nodes?

I am applying a function, say f(), via the map method to the rows of a DataFrame (call it df), but I see a NullPointerException when calling collect on the resulting RDD if df.columns is passed as an argument to f().

The following Scala code can be pasted into a spark-shell and shows a minimal example of the issue (see the function prepRDD_buggy()). I have also posted my current workaround for this problem in the function prepRDD(), where the only difference is that the column names are passed in as a val rather than as df.columns.

Can some Spark expert please point out the exact reason why this happens, or confirm our hypothesis that the slave nodes do not get the DataFrame column names?

import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types._
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

// A Simple DataFrame
val dataRDD: RDD[Row] = sc.parallelize(Array(
  Row(1.0,2.1,3.3),
  Row(3.4,5.9,8.9),
  Row(3.1,2.3,4.1)))
val struct: StructType = StructType(
  StructField("y", DoubleType, false) ::
  StructField("x1", DoubleType, false) ::
  StructField("x2", DoubleType, false) :: Nil)
val df: DataFrame = sqlContext.createDataFrame(dataRDD, struct)

// Make a LabeledPoint from a Row (colnames is unused in the body;
// what matters is how the argument is evaluated at the call site)
def makeLP(row: Row, colnames: Array[String]) =
  LabeledPoint(row.getDouble(0),
    Vectors.dense((1 until row.length).toArray map (i => row.getDouble(i))))

// Make RDD[LabeledPoint] from DataFrame
def prepRDD_buggy(df: DataFrame): RDD[LabeledPoint] = {
  // df.columns is evaluated inside the closure, so the whole DataFrame is
  // captured and serialized; its transient internals are null on the workers
  df map (row => makeLP(row, df.columns))
}
val mat_buggy = prepRDD_buggy(df) 
mat_buggy.collect // throws NullPointerException !

// Make RDD[LabeledPoint] from DataFrame
def prepRDD(df: DataFrame): RDD[LabeledPoint] = {
  // Capturing the column names in a local val ships only the Array[String]
  val cnames = df.columns
  df map (row => makeLP(row, cnames))
}
val mat = prepRDD(df) 
mat.collect // Works fine

Here are the first few lines of the (very verbose) error message that I see when running mat_buggy.collect in my spark-shell.

15/12/24 18:09:28 INFO SparkContext: Starting job: collect at <console>:42
15/12/24 18:09:28 INFO DAGScheduler: Got job 0 (collect at <console>:42) with 2 output partitions
15/12/24 18:09:28 INFO DAGScheduler: Final stage: ResultStage 0(collect at <console>:42)
15/12/24 18:09:28 INFO DAGScheduler: Parents of final stage: List()
15/12/24 18:09:28 INFO DAGScheduler: Missing parents: List()
15/12/24 18:09:28 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at map at <console>:38), which has no missing parents
15/12/24 18:09:28 INFO MemoryStore: ensureFreeSpace(11600) called with curMem=0, maxMem=560993402
15/12/24 18:09:28 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 11.3 KB, free 535.0 MB)
15/12/24 18:09:28 INFO MemoryStore: ensureFreeSpace(4540) called with curMem=11600, maxMem=560993402
15/12/24 18:09:28 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.4 KB, free 535.0 MB)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.10.98:53386 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:28 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
15/12/24 18:09:28 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at map at <console>:38)
15/12/24 18:09:28 INFO YarnScheduler: Adding task set 0.0 with 2 tasks
15/12/24 18:09:28 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-10-10-217.ec2.internal, PROCESS_LOCAL, 2385 bytes)
15/12/24 18:09:28 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ip-10-10-10-213.ec2.internal, PROCESS_LOCAL, 2385 bytes)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-10-10-213.ec2.internal:56642 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-10-10-217.ec2.internal:56396 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:29 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-10-10-10-217.ec2.internal): java.lang.NullPointerException
    at org.apache.spark.sql.DataFrame.schema(DataFrame.scala:290)
    at org.apache.spark.sql.DataFrame.columns(DataFrame.scala:306)
    at $line34.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$prepRDD_buggy$1.apply(<console>:38)
    at $line34.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$prepRDD_buggy$1.apply(<console>:38)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)

Your assumption is correct. columns requires access to schema, and the schema depends on queryExecution, which is transient and therefore is not shipped to the workers. So what you do in prepRDD is correct, although the same information can be extracted directly from the rows:

scala> df.rdd.map(_.schema.fieldNames).first
res14: Array[String] = Array(y, x1, x2)
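
To see why this surfaces as a NullPointerException rather than a serialization failure, here is a self-contained sketch using plain Java serialization (the Holder class and roundTrip helper are made up for illustration): a @transient field is simply restored as null on the receiving end, just as DataFrame.queryExecution is on a worker.

import java.io._

// Hypothetical stand-in for DataFrame: the name list is marked transient,
// analogous to DataFrame.queryExecution
class Holder(@transient val names: Array[String]) extends Serializable

// Serialize and deserialize, roughly as Spark does when shipping a closure
def roundTrip[T](obj: T): T = {
  val bytes = new ByteArrayOutputStream()
  new ObjectOutputStream(bytes).writeObject(obj)
  new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    .readObject().asInstanceOf[T]
}

val h = roundTrip(new Holder(Array("y", "x1", "x2")))
h.names        // null: the transient field was dropped in transit
h.names.length // throws NullPointerException, like df.columns on a worker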

As a side note, VectorAssembler plus a simple map would be a better choice; a sketch follows.
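
A minimal sketch of that alternative, assuming the df defined above (y as the label, the remaining columns as features) and the Spark 1.x APIs; the select and the Row-to-LabeledPoint map are my own glue, not part of the original answer:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.regression.LabeledPoint

// Assemble the feature columns into a single vector column;
// df.columns is evaluated here, on the driver, so nothing transient leaks
val assembler = new VectorAssembler()
  .setInputCols(df.columns.tail)
  .setOutputCol("features")

val mat2 = assembler.transform(df)
  .select("y", "features")
  .map(row => LabeledPoint(row.getDouble(0), row.getAs[Vector](1)))

mat2.collect // works: the closure captures no DataFrame state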
