
Databricks save RData file to AWS S3 bucket

I have developed a model in R using Databricks. I want to save the output data file to an AWS S3 bucket, but when I save the file as below it does not end up on the mounted drive.

doc <- save(data, file=paste0(getwd(), "/datafile.RData"))

What is the best way to mount data to S3 using R?

I have tried the sample code below and it works, so I know my connection between AWS and Databricks is working.

%python
display(dbutils.fs.ls("/"))

From the Databricks File System (DBFS) documentation:

You can use local file APIs to read and write to DBFS paths. Databricks configures each cluster node with a FUSE mount /dbfs that allows processes running on cluster nodes to read and write to the underlying distributed storage layer with local file APIs. When using local file APIs, you must provide the path under /dbfs.

For example:

save(data, file="/dbfs/datafile.RData")
