
Spark DeltaLake Upsert (merge) is throwing "org.apache.spark.sql.AnalysisException"

In the code below, I am trying to merge a dataframe into a delta table. I join the new dataframe with the delta table, then transform the joined data to match the delta table's schema, and then merge it into the delta table.

But I am getting an AnalysisException:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Resolved attribute(s) id#514 missing from _file_name_#872,age#516,id#879,name#636,age#881,name#880,city#882,id#631,_row_id_#866L,city#641 in operator !Join Inner, (id#514 = id#631). Attribute(s) with the same name appear in the operation: id. Please check if the right attribute(s) are used.;;
!Join Inner, (id#514 = id#631)
:- SubqueryAlias deltaData
:  +- Project [id#631, name#636, age#516, city#641]
:     +- Project [age#516, id#631, name#636, new_city#510 AS city#641]
:        +- Project [age#516, id#631, new_name#509 AS name#636, new_city#510]
:           +- Project [age#516, new_id#508 AS id#631, new_name#509, new_city#510]
:              +- Project [age#516, new_id#508, new_name#509, new_city#510]
:                 +- Join Inner, (id#514 = new_id#508)
:                    :- Relation[id#514,name#515,age#516,city#517] parquet
:                    +- LocalRelation [new_id#508, new_name#509, new_city#510]
+- Project [id#879, name#880, age#881, city#882, _row_id_#866L, input_file_name() AS _file_name_#872]
   +- Project [id#879, name#880, age#881, city#882, monotonically_increasing_id() AS _row_id_#866L]
      +- Project [id#854 AS id#879, name#855 AS name#880, age#856 AS age#881, city#857 AS city#882]
         +- Relation[id#854,name#855,age#856,city#857] parquet

My setup is Spark 3.0.0, Delta Lake 0.7.0, and Hadoop 2.7.4.

However, the code below runs fine on the Databricks 7.4 runtime, where the new dataframe is merged into the delta table.

Code snippet:

import io.delta.tables.DeltaTable
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.{SaveMode, SparkSession}

object CodePen extends App {
  val spark = SparkSession.builder().master("local[*]").getOrCreate()

  val deltaPath = "<delta-path>"
  val oldEmployee = Seq(
    Employee(10, "Django", 22, "Bangalore"),
    Employee(11, "Stephen", 30, "Bangalore"),
    Employee(12, "Calvin", 25, "Hyderabad"))

  val newEmployee = Seq(EmployeeNew(10, "Django", "Bangkok"))
  spark.createDataFrame(oldEmployee).write.format("delta").mode(SaveMode.Overwrite).save(deltaPath) // Saving the data to a delta table
  val newDf = spark.createDataFrame(newEmployee)

  val deltaTable = DeltaTable.forPath(deltaPath)
  val joinedDf = deltaTable.toDF.join(newDf, col("id") === col("new_id"), "inner")

  joinedDf.show()
  val cols = newDf.columns
  // Transform the joined Dataframe to match the schema of the delta table:
  // drop the delta table's original columns, then strip the "new_" prefix
  // from the incoming columns.
  var intDf = joinedDf.drop(cols.map(removePrefix): _*)
  for (column <- newDf.columns)
    intDf = intDf.withColumnRenamed(column, removePrefix(column))

  // Reorder the columns to match the delta table's column order.
  intDf = intDf.select(deltaTable.toDF.columns.map(col): _*)

  deltaTable.toDF.show()
  intDf.show()

  deltaTable.as("oldData")
    .merge(
      intDf.as("deltaData"),
      col("oldData.id") === col("deltaData.id"))
    .whenMatched()
    .updateAll()
    .execute()

  deltaTable.toDF.show()

  def removePrefix(column: String) = {
    column.replace("new_", "")
  }
}

case class Employee(id: Int, name: String, age: Int, city: String)

case class EmployeeNew(new_id: Int, new_name: String, new_city: String)

Below is the output of the dataframes.

New Dataframe:

+---+------+-------+
| id|  name|   city|
+---+------+-------+
| 10|Django|Bangkok|
+---+------+-------+

Joined Dataframe:

+---+------+---+---------+------+--------+--------+
| id|  name|age|     city|new_id|new_name|new_city|
+---+------+---+---------+------+--------+--------+
| 10|Django| 22|Bangalore|    10|  Django| Bangkok|
+---+------+---+---------+------+--------+--------+

Delta table data:

+---+-------+---+---------+
| id|   name|age|     city|
+---+-------+---+---------+
| 11|Stephen| 30|Bangalore|
| 12| Calvin| 25|Hyderabad|
| 10| Django| 22|Bangalore|
+---+-------+---+---------+

Transformed new Dataframe:

+---+------+---+-------+
| id|  name|age|   city|
+---+------+---+-------+
| 10|Django| 22|Bangkok|
+---+------+---+-------+

You are getting this AnalysisException because the schemas of deltaTable and intDf are slightly different:

deltaTable.toDF.printSchema()
root
 |-- id: integer (nullable = true)
 |-- name: string (nullable = true)
 |-- age: integer (nullable = true)
 |-- city: string (nullable = true)

intDf.printSchema()
root
 |-- id: integer (nullable = false)
 |-- name: string (nullable = true)
 |-- age: integer (nullable = true)
 |-- city: string (nullable = true)
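
If the difference is hard to spot by eye, a field-by-field comparison makes it explicit. This is a quick sketch; the check below is not part of the original answer:

deltaTable.toDF.schema.fields.zip(intDf.schema.fields).foreach {
  // Report any field whose nullability differs between target and source
  case (target, source) if target.nullable != source.nullable =>
    println(s"${target.name}: delta table nullable=${target.nullable}, intDf nullable=${source.nullable}")
  case _ => // name, type and nullability match
}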

Because intDf is the result of a join in which the id column is used as the key, the join forces your join-condition column to be non-nullable.
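
One common way to relax the nullability is to rebuild the DataFrame against a schema whose fields are all marked nullable. This is a minimal sketch; the setNullable helper is illustrative and not from the original answer:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType

// Copy the schema with every field forced to nullable = true, then
// rebuild the DataFrame on top of the same rows with that schema.
def setNullable(df: DataFrame): DataFrame = {
  val schema = StructType(df.schema.map(_.copy(nullable = true)))
  df.sparkSession.createDataFrame(df.rdd, schema)
}

val mergeSource = setNullable(intDf) // use this as the merge source instead of intDf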

If you change the nullable attribute (for example, as sketched above), you will get the desired output:

+---+-------+---+---------+
| id|   name|age|     city|
+---+-------+---+---------+
| 11|Stephen| 30|Bangalore|
| 12| Calvin| 25|Hyderabad|
| 10| Django| 22|  Bangkok|
+---+-------+---+---------+

Tested with Spark 3.0.1 and Delta 0.7.0.
