I am using Databricks (PySpark) to write a CSV file to Azure Blob Storage using:
file_location = "/mnt/ndemo/nsalman/curation/movies/"
df.repartition(1).write.format("com.databricks.spark.csv").option("header", "true").save(file_location)
The file that is created is named: part-00000-tid-3921235530521294160-fb002878-253d-44f5-a773-7bda908c7178-13-1-c000.csv
Now I am renaming it to "movies.csv" using this:
filePath = "/mnt/ndemo/nsalman/curation/movies/"
fs.rename(spark._jvm.org.apache.hadoop.fs.Path(filePath+"part*"), spark._jvm.org.apache.hadoop.fs.Path(filePath+"movies.csv"))
After running it, I get this output:
Since I am new to PySpark, I am not sure why my file is not being renamed. Can anyone please let me know where I am going wrong?
Try this:

old_file_name = "test1.csv"
new_file_name = "test2.csv"
dbutils.fs.mv(old_file_name, new_file_name)

It works for me.
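Note that Hadoop's fs.rename does not expand glob patterns such as "part*", which is likely why the original rename silently fails. A sketch of how to resolve the actual part file name first and then move it, assuming a Databricks notebook where dbutils is available (the helper function and the example path are illustrative):

```python
def find_part_file(names):
    """Return the first Spark part file (CSV) from a list of file names."""
    return next(n for n in names if n.startswith("part-") and n.endswith(".csv"))

# In a Databricks notebook (dbutils is only defined there):
# file_location = "/mnt/ndemo/nsalman/curation/movies/"
# names = [f.name for f in dbutils.fs.ls(file_location)]
# dbutils.fs.mv(file_location + find_part_file(names),
#               file_location + "movies.csv")
```

Because df.repartition(1) writes exactly one part file, taking the first match is safe here; the directory will also contain marker files such as _SUCCESS, which the name filter skips.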