
Error when writing pyspark df to BigQuery from Databricks

I ran this command:

result_df.write.format("bigquery").option("table", "id:dataset.my_table").option("temporaryGcsBucket", "gs://test").save()

But got this error:

Py4JJavaError                             Traceback (most recent call last)
<command-3760281478830411> in <module>
----> 1 result_df.write.format("bigquery").option("table", "id:dataset.my_table").option("temporaryGcsBucket", "gs://test").save()

/databricks/spark/python/pyspark/sql/readwriter.py in save(self, path, format, mode, partitionBy, **options)
    736             self.format(format)
    737         if path is None:
--> 738             self._jwrite.save()
    739         else:
    740             self._jwrite.save(path)

/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1302 
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer, self.gateway_client, self.target_id, self.name)
   1306 

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
    115     def deco(*a, **kw):
    116         try:
--> 117             return f(*a, **kw)
    118         except py4j.protocol.Py4JJavaError as e:
    119             converted = convert_exception(e.java_exception)

/databricks/spark/python/lib/py4j-0.10.9.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o2287.save.
: java.io.IOException: Invalid PKCS8 data.
    at shaded.databricks.com.google.cloud.hadoop.util.CredentialFactory.privateKeyFromPkcs8(CredentialFactory.java:346)
    at shaded.databricks.com.google.cloud.hadoop.util.CredentialFactory.getCredentialsFromSAParameters(CredentialFactory.java:310)
    at shaded.databricks.com.google.cloud.hadoop.util.CredentialFactory.getCredential(CredentialFactory.java:393)
    at shaded.databricks.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getCredential(GoogleHadoopFileSystemBase.java:1542)
    at shaded.databricks.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.createGcsFs(GoogleHadoopFileSystemBase.java:1677)
    at shaded.databricks.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1661)
    at shaded.databricks.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:470)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:537)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.<init>(BigQueryWriteHelper.scala:63)
    at com.google.cloud.spark.bigquery.BigQueryInsertableRelation.insert(BigQueryInsertableRelation.scala:42)
    at com.google.cloud.spark.bigquery.BigQueryRelationProvider.createRelation(BigQueryRelationProvider.scala:119)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:80)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:78)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:89)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:239)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:386)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:186)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)
    at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:141)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:336)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:160)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:156)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:575)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:167)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:575)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:551)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:156)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:324)
    at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:156)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:141)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:132)
    at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:186)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:959)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:427)
    at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:396)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:258)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)

I have tried debugging, but to no avail.

I have never exported to BigQuery from Databricks myself, but your .option("table", "id:dataset.my_table").option("temporaryGcsBucket", "gs://test") looks wrong. It should be something like:

bucket = "YOUR_BUCKET_NAME"  # just the bucket name, without the gs:// prefix
table = "together.myTable"   # or "<project_name>.<dataset_name>.employees"

(df.write
    .format("bigquery")
    .option("temporaryGcsBucket", bucket)
    .option("table", table)
    .mode("overwrite")
    .save())

Did you follow the steps to link Databricks with Google? Link to the article: https://cloud.google.com/bigquery/docs/connect-databricks#create_a_service_account_for_databricks
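For what it's worth, the "Invalid PKCS8 data." in your trace is thrown by the credential factory, i.e. the GCS connector fails while parsing the service-account private key, before anything is actually written. A common cause is a missing or mangled private key in the cluster's Spark config (the key in the JSON file contains literal \n sequences that must survive copy-paste). Below is a minimal sketch of setting those properties from a notebook instead; the property names follow the linked guide (minus the "spark.hadoop." prefix used in cluster Spark config), and all values are placeholders from your own service-account key file, so treat this as something to verify rather than a drop-in fix:

# Hedged sketch: configure the GCS connector's service-account credentials
# at runtime in a Databricks notebook, where `spark` is predefined.
# All values are placeholders copied from the service-account JSON key file.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("google.cloud.auth.service.account.enable", "true")
hconf.set("fs.gs.project.id", "<project-id>")
hconf.set("fs.gs.auth.service.account.email", "<client-email>")
hconf.set("fs.gs.auth.service.account.private.key.id", "<private-key-id>")
# The full key, including the BEGIN/END PRIVATE KEY lines; keep the \n
# line breaks exactly as in the JSON file, or the PKCS8 parsing fails.
hconf.set("fs.gs.auth.service.account.private.key", "<private-key>")

If the key is set correctly, the write from your question should then be able to stage data in the temporary GCS bucket.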
