
How to read a table from a database in Azure Synapse and write it into the default Spark database?

I am writing the below code in an Azure Synapse notebook:

%%spark
// Read the table <DatabaseName>.<SchemaName>.<TableName> from the dedicated SQL pool
val df = spark.read.sqlanalytics("emea_analytics.abc.cde_mydata")
// Save it as a managed table in the default Spark database
df.write.mode("overwrite").saveAsTable("default.t1")

I am getting the below error:

Error: com.microsoft.spark.sqlanalytics.exception.SQLAnalyticsConnectorException: The specified table does not exist. Please provide a valid table.
  at com.microsoft.spark.sqlanalytics.read.SQLAnalyticsReader.readSchema(SQLAnalyticsReader.scala:103)
  at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation$.create(DataSourceV2Relation.scala:175)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:204)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
  at org.apache.spark.sql.SqlAnalyticsConnector$SQLAnalyticsFormatReader.sqlanalytics(SqlAnalyticsConnector.scala:42)

The error message clearly says: The specified table does not exist. Please provide a valid table.

Error: com.microsoft.spark.sqlanalytics.exception.SQLAnalyticsConnectorException: The specified table does not exist. Please provide a valid table.

Make sure the specified table exists in the dedicated SQL pool before running the above code, and that the three-part name follows the pattern <DatabaseName>.<SchemaName>.<TableName>.
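
As a minimal sketch (assuming the same sqlanalytics connector the question uses, and keeping the question's table name as a placeholder), you can probe the three-part name before writing, so a wrong database, schema, or table name fails with a clear message instead of partway through the job:

%%spark
import scala.util.{Try, Success, Failure}

// Three-part name: <DatabaseName>.<SchemaName>.<TableName> (placeholder from the question)
val tableName = "emea_analytics.abc.cde_mydata"

// Probe the table with the same connector call; the schema is read at load time,
// so a missing or misspelled table fails here rather than during the write.
Try(spark.read.sqlanalytics(tableName)) match {
  case Success(df) =>
    // Table resolved - copy it into the default Spark database
    df.write.mode("overwrite").saveAsTable("default.t1")
  case Failure(e) =>
    println(s"Could not read $tableName - check the database, schema and table names: " + e.getMessage)
}

If a known-good table in the same database resolves but yours does not, the name (or your permissions on that table) is what needs fixing.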


Reference: Azure Synapse Analytics - Load the NYC Taxi data into the Spark nyctaxi database.
