Writing out a file to Delta Lake produces different results from the DataFrame read, using Apache Spark on Databricks
I have the following code in my Databricks notebook:
fulldf = spark.read.format("csv").option("header", True).option("inferSchema",True).load("/databricks-datasets/flights/")
fulldf.write.format("delta").mode("overwrite").save('/mnt/lake/BASE/flights/Full/')
df = fulldf.limit(10)
df.write.format("delta").mode("overwrite").save('/mnt/lake/BASE/flights/Small/')
When I do a display on df I get the results I expect to see:
display(df)
As you can see, there are ten rows with the correct information.
However, when I read the actual parquet file saved to '/mnt/lake/BASE/flights/Small/' using the following:
test = spark.read.parquet('/mnt/lake/BASE/flights/Small/part-00000-d9d24a80-28d6-43f5-950f-3c53a7d1336a-c000.snappy.parquet')
display(test)
I get a completely different result (although it should be the exact same result).
This is so strange. I believe the problem is with limiting the results to 10 rows, but I don't see why I should get a completely different result.
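For reference, listing the target directory shows that the folder contains more than just the one part file being read (a sketch using Databricks' dbutils; the exact file names will differ):

display(dbutils.fs.ls('/mnt/lake/BASE/flights/Small/'))
# A Delta directory holds a _delta_log folder plus one or more
# part-*.snappy.parquet files, possibly left over from earlier writes.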
I am surprised you even got output. On Databricks I got nothing but an error with your read approach. The path is a Delta table's directory, so you must therefore read it with the delta format. Sure, Delta uses parquet underneath, but you need to go through the Delta API: the _delta_log in that directory determines which parquet files make up the current version of the table, and an overwrite leaves the old files on disk until they are vacuumed, so reading an individual part file directly can return stale or partial data.
E.g.
df.write.format("delta").mode("overwrite").save("/AAAGed")
and
df = spark.read.format("delta").load("/AAAGed")
and apply partitioning, if present, with a filter.
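Applied to the paths from the question, a minimal sketch of the corrected read (the time-travel read assumes the table has accumulated more than one version):

test = spark.read.format("delta").load("/mnt/lake/BASE/flights/Small/")
display(test)  # returns the same 10 rows as display(df)

# Older snapshots stay readable until VACUUM removes their files, which is
# why stray part files from previous writes linger in the directory.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/lake/BASE/flights/Small/")

And a partitioned variant of the example above (the year column is hypothetical; substitute a real column from your data):

df.write.format("delta").mode("overwrite").partitionBy("year").save("/AAAGed")
spark.read.format("delta").load("/AAAGed").filter("year = 2008").show()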