
How to write a spark.sql DataFrame into an S3 bucket in Databricks?

I am using Databricks and I am reading .csv files from a mounted bucket:

MOUNT_NAME = "myBucket/"
ALL_FILE_NAMES = [i.name for i in dbutils.fs.ls("/mnt/%s/" % MOUNT_NAME)]
dfAll = spark.read.format('csv').option("header", "true").schema(schema).load(["/mnt/%s/%s" % (MOUNT_NAME, FILENAME) for FILENAME in ALL_FILE_NAMES])

I would also like to write a table back to the same bucket:

myTable.write.format('com.databricks.spark.csv').save('myBucket/')
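
A minimal sketch, assuming the bucket is already mounted at /mnt/myBucket as in the question and that myTable is the DataFrame to save, is to write back through the mount point instead of addressing the bucket directly (the output subdirectory name here is illustrative):

# Sketch: write the DataFrame back through the existing DBFS mount
# (assumes /mnt/myBucket is writable; "output/myTable" is an illustrative path)
(myTable.write
    .format("csv")
    .option("header", "true")
    .mode("overwrite")
    .save("/mnt/myBucket/output/myTable"))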

The snippets below show how to save a DataFrame as CSV to S3 and to DBFS; the coalesce(1) variant writes the data out as a single file.

myTable.write.save("s3n://my-bucket/my_path/", format="csv")
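
If the cluster cannot already reach the bucket (for example through an instance profile or the mount above), one option is to set S3A credentials on the Hadoop configuration before writing; a rough sketch with placeholder keys, writing to an s3a:// path:

# Sketch: configure S3A credentials on the running cluster (keys are placeholders)
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "<AWS_ACCESS_KEY_ID>")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "<AWS_SECRET_ACCESS_KEY>")
myTable.write.save("s3a://my-bucket/my_path/", format="csv")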

OR

# DBFS (CSV)
df.write.save('/FileStore/parquet/game_stats.csv', format='csv')

# S3 (CSV)
(df.coalesce(1).write.format("com.databricks.spark.csv")
   .option("header", "true").save("s3a://my_bucket/game_stats.csv"))
