
How do I save a CSV to DBFS and download it locally?

I'm trying to save a CSV file produced by a SQL query, sent to Athena via Databricks. The result should be a big table of about 4-6 GB (~40M rows).

I'm taking the following steps:

  1. Creating a PySpark dataframe:

     df = sqlContext.sql("select * from my_table where year = 19")
  2. Converting the PySpark dataframe to a Pandas dataframe. I realize this step may be unnecessary, but I'm only starting out with Databricks and may not know the commands to do it more efficiently (see the Spark-native sketch after these steps). So I do it like this:

     ab = df.toPandas()
  3. Saving the file somewhere so I can download it locally later:

     ab.to_csv('my_my.csv')
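
A Spark-native alternative would skip the Pandas conversion entirely and write straight to DBFS. A minimal sketch, assuming the spark session that Databricks notebooks provide by default (the output path is illustrative):

# Write the CSV directly from Spark, avoiding toPandas(), which would
# collect all ~40M rows into the driver's memory.
df = spark.sql("select * from my_table where year = 19")

# coalesce(1) yields a single part file inside the output directory;
# for a 4-6 GB result this funnels everything through one task, so
# drop it if multiple part files are acceptable.
df.coalesce(1).write.option("header", "true").mode("overwrite").csv("dbfs:/FileStore/tables/my_my_csv")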

But how do I download it?

I kindly ask you to be very specific, as I don't yet know many of the tricks and details of working with Databricks.

Using the GUI, you can download the full results (max 1 million rows).

[Screenshot: the download-results option in the Databricks results pane]
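
In a notebook, that download option appears under rendered results. A minimal way to get the dataframe into the results pane, assuming the standard display helper available in Databricks notebooks:

# Render df in the results pane; its download option exports
# the results as CSV, capped at 1 million rows.
display(df)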

To download the full results, first save the file to DBFS and then copy it to your local machine using the Databricks CLI, as follows:

dbfs cp "dbfs:/FileStore/tables/my_my.csv" "A:\AzureAnalytics"
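
One caveat: ab.to_csv('my_my.csv') writes to the driver's local disk, not to DBFS, so the dbfs cp command above would not find it. Writing through the /dbfs FUSE mount puts the file in DBFS instead; a minimal sketch reusing the question's Pandas dataframe (the path under /FileStore is an assumption):

# The /dbfs prefix is the FUSE mount of DBFS on the cluster; writing
# here makes the file visible at dbfs:/FileStore/tables/my_my.csv.
ab.to_csv('/dbfs/FileStore/tables/my_my.csv', index=False)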

Reference: Databricks file system

The DBFS command-line interface (CLI) uses the DBFS API to expose an easy-to-use command-line interface to DBFS. With this client, you can interact with DBFS using commands similar to the ones you use on a Unix command line. For example:

# List files in DBFS
dbfs ls
# Put local file ./apple.txt to dbfs:/apple.txt
dbfs cp ./apple.txt dbfs:/apple.txt
# Get dbfs:/apple.txt and save to local file ./apple.txt
dbfs cp dbfs:/apple.txt ./apple.txt
# Recursively put local dir ./banana to dbfs:/banana
dbfs cp -r ./banana dbfs:/banana

Reference: Installing and configuring Azure Databricks CLI
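
If the CLI isn't installed yet, setup is roughly as follows. A sketch assuming pip and a workspace personal access token; the legacy databricks-cli package is what provides the dbfs entry point:

# Install the (legacy) Databricks CLI, which includes the dbfs command
pip install databricks-cli
# Interactively store the workspace URL and a personal access token
databricks configure --token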

Hope this helps.
