
Spark DataFrame column names not passed to slave nodes?

I am applying a function, say f(), via the map method to the rows of a DataFrame (call it df), but I see a NullPointerException when calling collect on the resulting RDD if df.columns is passed as an argument to f().

The following Scala code can be pasted into spark-shell and shows a minimal example of the problem (see the function prepRDD_buggy()). I have also posted my current workaround in the function prepRDD(), where the only difference is that the column names are passed as a val rather than as df.columns.

Can some Spark expert point out the exact reason why this happens, or confirm our hypothesis that the slave nodes do not get the DataFrame column names?

import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types._
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors

// A Simple DataFrame
val dataRDD: RDD[Row] = sc.parallelize(Array(
  Row(1.0,2.1,3.3),
  Row(3.4,5.9,8.9),
  Row(3.1,2.3,4.1)))
val struct: StructType = StructType(
  StructField("y", DoubleType, false) ::
  StructField("x1", DoubleType, false) ::
  StructField("x2", DoubleType, false) :: Nil)
val df: DataFrame = sqlContext.createDataFrame(dataRDD, struct)

// Make a LabeledPoint object from a Row object
// (colnames is accepted but never used in the body; in prepRDD_buggy
// below, merely evaluating df.columns inside the map closure is what throws)
def makeLP(row: Row, colnames: Array[String]) =
  LabeledPoint(row.getDouble(0),
    Vectors.dense((1 until row.length).toArray map (i => row.getDouble(i))))

// Make RDD[LabeledPoint] from DataFrame
def prepRDD_buggy(df: DataFrame): RDD[LabeledPoint] = {
  df map (row => makeLP(row, df.columns))
}
val mat_buggy = prepRDD_buggy(df) 
mat_buggy.collect // throws NullPointerException !

// Make RDD[LabeledPoint] from DataFrame
def prepRDD(df: DataFrame): RDD[LabeledPoint] = {
  val cnames = df.columns
  df map (row => makeLP(row, cnames))
}
val mat = prepRDD(df) 
mat.collect // Works fine
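
For what it's worth, the capture can also be made explicit with a broadcast variable. A minimal sketch under the same Spark 1.x API (prepRDD_bc is my name for the variant, not part of the original code):

// Variant of the workaround: read the column names on the driver and
// ship them to the executors explicitly as a broadcast variable.
def prepRDD_bc(df: DataFrame): RDD[LabeledPoint] = {
  val cnames = df.sqlContext.sparkContext.broadcast(df.columns) // driver side
  df map (row => makeLP(row, cnames.value)) // workers read the broadcast copy
}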

Here are the first few lines of the (very verbose) error message I see when running mat_buggy.collect in spark-shell.

15/12/24 18:09:28 INFO SparkContext: Starting job: collect at <console>:42
15/12/24 18:09:28 INFO DAGScheduler: Got job 0 (collect at <console>:42) with 2 output partitions
15/12/24 18:09:28 INFO DAGScheduler: Final stage: ResultStage 0(collect at <console>:42)
15/12/24 18:09:28 INFO DAGScheduler: Parents of final stage: List()
15/12/24 18:09:28 INFO DAGScheduler: Missing parents: List()
15/12/24 18:09:28 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at map at <console>:38), which has no missing parents
15/12/24 18:09:28 INFO MemoryStore: ensureFreeSpace(11600) called with curMem=0, maxMem=560993402
15/12/24 18:09:28 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 11.3 KB, free 535.0 MB)
15/12/24 18:09:28 INFO MemoryStore: ensureFreeSpace(4540) called with curMem=11600, maxMem=560993402
15/12/24 18:09:28 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.4 KB, free 535.0 MB)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.10.10.98:53386 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:28 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
15/12/24 18:09:28 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at map at <console>:38)
15/12/24 18:09:28 INFO YarnScheduler: Adding task set 0.0 with 2 tasks
15/12/24 18:09:28 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, ip-10-10-10-217.ec2.internal, PROCESS_LOCAL, 2385 bytes)
15/12/24 18:09:28 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, ip-10-10-10-213.ec2.internal, PROCESS_LOCAL, 2385 bytes)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-10-10-213.ec2.internal:56642 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:28 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on ip-10-10-10-217.ec2.internal:56396 (size: 4.4 KB, free: 535.0 MB)
15/12/24 18:09:29 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-10-10-10-217.ec2.internal): java.lang.NullPointerException
    at org.apache.spark.sql.DataFrame.schema(DataFrame.scala:290)
    at org.apache.spark.sql.DataFrame.columns(DataFrame.scala:306)
    at $line34.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$prepRDD_buggy$1.apply(<console>:38)
    at $line34.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$prepRDD_buggy$1.apply(<console>:38)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)

Your assumption is correct. columns requires access to schema, and schema depends on queryExecution, which is transient and therefore not shipped to the workers. So what you do in prepRDD is more or less correct, although the same information can be extracted directly from the rows:

scala> df.rdd.map(_.schema.fieldNames).first
res14: Array[String] = Array(y, x1, x2)
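
The mechanism can be reproduced in plain Scala: a @transient field is skipped by Java serialization and comes back as null on the other side, which is exactly what happens to the DataFrame's queryExecution inside a task. A self-contained sketch (my illustration, not from the answer; the Holder class is hypothetical):

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Hypothetical class whose field is dropped during serialization
class Holder(@transient val names: Array[String]) extends Serializable

val out = new ByteArrayOutputStream()
new ObjectOutputStream(out).writeObject(new Holder(Array("y", "x1", "x2")))

val in = new ObjectInputStream(new ByteArrayInputStream(out.toByteArray))
val copy = in.readObject().asInstanceOf[Holder]
copy.names // null -- analogous to calling df.columns inside a map closure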

On a side note, VectorAssembler plus a simple map would be a better choice here.
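
For reference, a minimal sketch of that alternative (my illustration under the Spark 1.x spark.ml API, not the answerer's exact code; in these versions VectorAssembler emits an mllib Vector column):

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.mllib.linalg.Vector

// Assemble the feature columns into a single vector column, then map
// each Row to a LabeledPoint; no column-name array needs to reach the workers.
val assembler = new VectorAssembler()
  .setInputCols(Array("x1", "x2"))
  .setOutputCol("features")

val mat2: RDD[LabeledPoint] = assembler.transform(df)
  .select("y", "features")
  .map(row => LabeledPoint(row.getDouble(0), row.getAs[Vector]("features")))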
