[英]How to read a parquet file in Azure Databricks?
I have a few parquet files stored in my storage account, which I am trying to read using the code below. However, it fails with an incorrect-syntax error. Can someone suggest the correct way to read parquet files using Azure Databricks?
val data = spark.read.parquet("abfss://containername@storagename.dfs.core.windows.net/TestFolder/XYZ/part-00000-1cf0cf7b-6c9f-41-a268-be-c000.snappy.parquet")
display(data)
abfss://containername@storagename.dfs.core.windows.net/TestFolder/XYZ/part-00000-1cf0cf7b-6c9f-41-a268-be-c000.snappy.parquet
As per the above abfss URL, you can use either delta or parquet format in the storage account.
Note: If you created a delta table, a part file is created automatically, like part-00000-1cf0cf7b-6c9f-41-a268-be-c000.snappy.parquet. With the code above it is not possible to read a delta-format parquet file.
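If the folder is actually a Delta table, a sketch of the fix (assuming the same abfss layout as above; container, storage account, and folder names are placeholders) is to load the table folder with the delta format instead of pointing at the internal part file:

```python
# Sketch, assuming TestFolder/XYZ is a Delta table and the
# part-...snappy.parquet file is one of its internal data files.
# `spark` and `display` are the Databricks notebook globals.
df = spark.read.format("delta").load(
    "abfss://<container>@<storage_account>.dfs.core.windows.net/TestFolder/XYZ"
)
display(df)
```

Pointing `spark.read.parquet` at an individual part file inside a Delta table folder bypasses the table's transaction log, which is why reading it directly is not supported.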
I have written the dataframe df1 and overwritten it into a storage account in parquet format.
df1.coalesce(1).write.format('parquet').mode("overwrite").save("abfss://<container>@<stoarge_account>.dfs.core.windows.net/<folder>/<sub_folder>")
Scala
val df11 = spark.read.format("parquet").load("abfss://<container>@<stoarge_account>.dfs.core.windows.net/demo/d121/part-00000-tid-2397072542034942773-def47888-c000.snappy.parquet")
display(df11)
Python
df11 = spark.read.format("parquet").load("abfss://<container>@<stoarge_account>.dfs.core.windows.net/demo/d121/part-00000-tid-2397072542034942773-def47888-c000.snappy.parquet")
display(df11)
Output: