
Unable to save file in DBFS

I took the Azure datasets that are available for practice. I got 10 days of data from that dataset, and now I want to save this data into DBFS in csv format. I am facing an error:

"No such file or directory: '/dbfs/temp/hive/mytest.csv'"

On the other hand, I am able to access the path directly from DBFS, so this path is correct.

My code:

from azureml.opendatasets import NoaaIsdWeather
from datetime import datetime
from dateutil import parser 
from dateutil.relativedelta import relativedelta


spark.sql('DROP Table if exists mytest')
dbutils.fs.rm("dbfs:/tmp/hive",recurse = True)

basepath = "dbfs:/tmp/hive" 

try:
  dbutils.fs.ls(basepath)
except:
  dbutils.fs.mkdirs(basepath)
else:
  raise Exception("The Folder "+ basepath + " already exist, this notebook will remove in the end")

dbutils.fs.mkdirs("dbfs:/tmp/hive")

start_date = parser.parse('2020-5-1')
end_date = parser.parse('2020-5-10')

isd = NoaaIsdWeather(start_date, end_date)
pdf = isd.to_spark_dataframe().toPandas().to_csv("/dbfs/temp/hive/mytest.csv")

What should I do?

Thanks

I tried reproducing the same issue. First I used the following code and made sure that the directory exists using os.listdir() .

from azureml.opendatasets import NoaaIsdWeather
from datetime import datetime
from dateutil import parser 
from dateutil.relativedelta import relativedelta
spark.sql('DROP Table if exists mytest')
dbutils.fs.rm("dbfs:/tmp/hive",recurse = True)
basepath = "dbfs:/tmp/hive" 
try:
  dbutils.fs.ls(basepath)
except:
  dbutils.fs.mkdirs(basepath)
else:
  raise Exception("The Folder "+ basepath + " already exist, this notebook will remove in the end")

dbutils.fs.mkdirs("dbfs:/tmp/hive")

import os  
os.listdir("/dbfs/tmp/hive/")
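One detail worth keeping straight here: `dbutils.fs` addresses storage with `dbfs:/…` URIs, while plain Python calls like `os.listdir()` or pandas `to_csv()` see the same files through the `/dbfs/` FUSE mount. A tiny helper (hypothetical, just for illustration) makes that mapping explicit:

```python
def dbfs_to_fuse(uri: str) -> str:
    """Convert a dbfs:/ URI (used by dbutils.fs) to the local /dbfs FUSE path
    (used by os, pandas, and other plain-Python file APIs)."""
    prefix = "dbfs:/"
    if not uri.startswith(prefix):
        raise ValueError(f"not a DBFS URI: {uri!r}")
    return "/dbfs/" + uri[len(prefix):]

# The directory created with dbutils.fs.mkdirs("dbfs:/tmp/hive") is the one
# that plain Python must open as /dbfs/tmp/hive:
print(dbfs_to_fuse("dbfs:/tmp/hive"))  # -> /dbfs/tmp/hive
```

Comparing the two spellings side by side is a quick way to catch a path that was created under one name but written to under another.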


  • Then I used the following to write the csv using to_pandas_dataframe() . This successfully wrote the required dataframe to a csv file at the required path.
# isd is the NoaaIsdWeather instance created with the same start/end dates as in the question
mydf = isd.to_pandas_dataframe()  
mydf.to_csv("/dbfs/tmp/hive/mytest.csv")
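The `to_csv` pattern itself can be sanity-checked outside Databricks; the sketch below substitutes a local temporary directory and a small stand-in dataframe for `/dbfs/tmp/hive` and the NOAA data:

```python
import os
import tempfile

import pandas as pd

# Stand-in for isd.to_pandas_dataframe(); the real data comes from NoaaIsdWeather.
mydf = pd.DataFrame({"datetime": ["2020-05-01", "2020-05-02"],
                     "temperature": [12.3, 14.1]})

with tempfile.TemporaryDirectory() as tmpdir:  # stand-in for /dbfs/tmp/hive
    target = os.path.join(tmpdir, "mytest.csv")
    mydf.to_csv(target, index=False)
    written = os.path.exists(target)

print(written)  # True
```

As long as the target directory exists before `to_csv` is called, the write succeeds; the original error appears precisely when the directory in the path does not exist.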
