NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer

Can't write a DF as a Delta table in Spark 2.4.4 with Scala 2.12: java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala

I am reading a Parquet file as a DataFrame and trying to write it back out as a Delta table.
Code:
val dF = spark.read.load("path") // parquet file
dF.write.format("delta").mode("overwrite").save("path")
Error:
java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper.$init$(Lcom/fasterxml/jackson/module/scala/experimental/ScalaObjectMapper;)V
at org.apache.spark.sql.delta.util.JsonUtils$$anon$1.<init>(JsonUtils.scala:27)
at org.apache.spark.sql.delta.util.JsonUtils$.<init>(JsonUtils.scala:27)
at org.apache.spark.sql.delta.util.JsonUtils$.<clinit>(JsonUtils.scala)
at org.apache.spark.sql.delta.metering.DeltaLogging.recordDeltaEvent(DeltaLogging.scala:62)
at org.apache.spark.sql.delta.metering.DeltaLogging.recordDeltaEvent$(DeltaLogging.scala:56)
at org.apache.spark.sql.delta.DeltaOptions$.recordDeltaEvent(DeltaOptions.scala:133)
at org.apache.spark.sql.delta.DeltaOptions$.verifyOptions(DeltaOptions.scala:176)
at org.apache.spark.sql.delta.DeltaOptions.<init>(DeltaOptions.scala:128)
at org.apache.spark.sql.delta.DeltaOptions.<init>(DeltaOptions.scala:130)
at org.apache.spark.sql.delta.sources.DeltaDataSource.createRelation(DeltaDataSource.scala:130)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
... 49 elided
Please help me resolve this issue.
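A `NoSuchMethodError` like this usually means a different Jackson version is on the classpath than the one Delta Lake was compiled against. As a diagnostic sketch (run in `spark-shell`; the class name is the standard entry point of jackson-module-scala, but the printed values depend on your environment), you can check where the module is actually loaded from and which version it reports:

```scala
// Diagnostic sketch: locate the jackson-module-scala jar on the classpath
// and print its reported version, to spot a conflicting Jackson dependency.
val cls = Class.forName("com.fasterxml.jackson.module.scala.DefaultScalaModule")
println(cls.getProtectionDomain.getCodeSource.getLocation) // jar the class was loaded from
println(cls.getPackage.getImplementationVersion)           // version from the jar manifest
```

If the printed version differs from what Spark expects, some other dependency is pulling in a newer Jackson transitively.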
Spark 2.4.4 is built against Jackson 2.6.7. Check the Jackson version on your classpath. Here is the Spark pom for reference: https://github.com/apache/spark/blob/v2.4.4/pom.xml
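If your build pulls in a newer Jackson transitively, one way to resolve it is to pin Jackson to the versions Spark 2.4.4 itself declares. A minimal sketch for an sbt build (the exact coordinates are the ones listed in the Spark 2.4.4 pom; adjust if your build tool differs):

```scala
// build.sbt fragment: force the Jackson versions Spark 2.4.4 was built with,
// overriding any newer transitive versions from other dependencies.
dependencyOverrides ++= Seq(
  "com.fasterxml.jackson.core"   %  "jackson-core"         % "2.6.7",
  "com.fasterxml.jackson.core"   %  "jackson-databind"     % "2.6.7.1",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.7.1"
)
```

In Maven, the equivalent is a `<dependencyManagement>` section pinning the same artifacts.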