Load CSV stored as an Azure Blob directly into a Pandas data frame without saving to disk first
The article Explore data in Azure blob storage with pandas (here) shows how to load data from an Azure Blob Store into a Pandas data frame.
They do it by first downloading the blob and storing it locally as a CSV file and then loading that CSV file into a data frame.
import pandas as pd
from azure.storage.blob import BlockBlobService
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_service.get_blob_to_path(CONTAINERNAME, BLOBNAME, LOCALFILENAME)
dataframe_blobdata = pd.read_csv(LOCALFILENAME)
Is there a way to pull the blob directly into a data frame without saving it to local disk first?
You could try something like this (using StringIO):
import pandas as pd
from azure.storage.blob import BlockBlobService
from io import StringIO
blob_service = BlockBlobService(account_name=STORAGEACCOUNTNAME, account_key=STORAGEACCOUNTKEY)
blob_string = blob_service.get_blob_to_text(CONTAINERNAME, BLOBNAME)
dataframe_blobdata = pd.read_csv(StringIO(blob_string.content))
Be aware that the file will be held entirely in memory, which means that if it's a large file it can cause a MemoryError (you may be able to del the blob_string to free memory once the data is in the dataframe).
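Since the snippet above needs a live storage account, here is a minimal, self-contained sketch of the same StringIO technique; the sample CSV string stands in for the text you would get back from the blob download:

```python
import pandas as pd
from io import StringIO

# Stand-in for the text returned by get_blob_to_text(...).content
csv_text = "col1,col2\n1,a\n2,b\n"

# Parse the in-memory string directly; nothing is written to disk
df = pd.read_csv(StringIO(csv_text))

# Optionally free the raw string once the dataframe holds the data
del csv_text
```

The same pattern applies unchanged to the real blob text: pandas only needs a file-like object, and StringIO provides one over any string already in memory.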
I've done more or less the same thing with Azure Data Lake Storage Gen2 (which uses Azure Blob Storage).
Hope it helps.