
Overwriting Table using spark dataframe fails when table already exists

I'm trying to completely overwrite a postgres table with a spark dataframe. For some reason, even though I specify mode("overwrite"), I get a relation already exists postgres error. Why doesn't my code overwrite the data in the database as expected? I've already checked the table with a client and it does exist (which shouldn't matter), and it has data in it as well. What's going on? Could this be a memory issue? Could it be queryTimeout?

    df.write.format('jdbc').options(
        url=PSQL_URL_SPARK,
        driver=SPARK_ENV['PSQL_DRIVER'],
        dbtable="schema.table",
        user=SPARK_ENV['PSQL_USER'],
        password=SPARK_ENV['PSQL_PASS'],
        batchsize=2000000,
        queryTimeout=690
    ).mode("overwrite").save()

Traceback (most recent call last):
  File "/home/hadoop/spark_script.py", line 671, in <module>
    main()
  File "/home/hadoop/spark_script.py", line 83, in main
    ).mode("overwrite").save()
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 732, in save
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o352.save.
: org.postgresql.util.PSQLException: ERROR: relation "<table>" already exists
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2468)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2211)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:309)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:311)
    at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:297)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:274)
    at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:246)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:859)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:81)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

I ran into the same problem, and in my case the issue came from the database schema. Make sure the columns and their types in the database table are the same as in the dataframe.

You can write the dataframe into a new temporary table and use DESCRIBE in a SQL engine to compare the columns and types of the two tables. You can then try the overwrite again on the temporary table to see whether it successfully writes data into an existing table.
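As a minimal sketch of that comparison, assuming the same df, spark session, and connection settings (PSQL_URL_SPARK, SPARK_ENV) as in the question, both schemas can also be inspected directly from PySpark:

    # Schema of the dataframe being written:
    df.printSchema()

    # Read the existing table back over the same JDBC connection so the
    # two schemas can be compared side by side:
    existing = spark.read.format('jdbc').options(
        url=PSQL_URL_SPARK,
        driver=SPARK_ENV['PSQL_DRIVER'],
        dbtable="schema.table",
        user=SPARK_ENV['PSQL_USER'],
        password=SPARK_ENV['PSQL_PASS']
    ).load()
    existing.printSchema()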

Another possible issue is permissions. Check the user's privileges on the table:

SELECT grantee, privilege_type 
FROM information_schema.role_table_grants 
WHERE table_name='mytable';

It seems mode("overwrite") is not the problem. The problem happens in save(), but it also seems strange that Spark is trying to create the table:

...
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:859)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:81)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
...

Did you specify the table name correctly? I wonder if it could be a Spark bug (I don't know Spark well enough to judge); maybe it attempts to create a table at public.tablename (since schema.tablename doesn't exist, which is how I could imagine the error manifesting) even though you specified schema.tablename.
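One hedged way to rule that out, reusing the connection settings from the question, is to pass an explicitly quoted, schema-qualified name in dbtable so PostgreSQL cannot resolve the relation against the public search path. The quoted names here are placeholders for the real schema and table:

    # Sketch only: force an explicitly schema-qualified relation name.
    # "schema" and "table" stand in for the real names; the double quotes
    # keep PostgreSQL from resolving the name via the public search path.
    df.write.format('jdbc').options(
        url=PSQL_URL_SPARK,
        driver=SPARK_ENV['PSQL_DRIVER'],
        dbtable='"schema"."table"',
        user=SPARK_ENV['PSQL_USER'],
        password=SPARK_ENV['PSQL_PASS']
    ).mode("overwrite").save()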

