Moving data from a database to Azure blob storage
I'm able to use dask.dataframe.read_sql_table to read the data, e.g.

df = dd.read_sql_table(table='TABLE', uri=uri, index_col='field', npartitions=N)
What would be the next (best) steps for saving it as a parquet file in Azure blob storage?
From my brief research, there are a couple of options:
$ pip install adlfs
dd.to_parquet(
    df=df,
    path='abfs://{BLOB}/{FILE_NAME}.parquet',
    storage_options={'account_name': 'ACCOUNT_NAME',
                     'account_key': 'ACCOUNT_KEY'},
)
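Putting the two steps together, a minimal end-to-end sketch might look like the following. This is an assumption-laden outline, not a tested solution: it assumes adlfs is installed alongside dask, and the container name, file name, account name, and account key are all placeholders you would fill in with your own values. Note that the adlfs protocol prefix is 'abfs://'.

```python
import dask.dataframe as dd

# Read the table into N partitions (TABLE, uri, 'field', and N are
# placeholders from the question, not real values).
df = dd.read_sql_table(table='TABLE', uri=uri, index_col='field', npartitions=N)

# Write directly to Azure blob storage through the adlfs filesystem
# (requires: pip install adlfs). The credentials below are placeholders.
dd.to_parquet(
    df,
    'abfs://CONTAINER/FILE_NAME.parquet',
    storage_options={
        'account_name': 'ACCOUNT_NAME',  # placeholder
        'account_key': 'ACCOUNT_KEY',    # placeholder
    },
)
```

One thing to be aware of: dask writes parquet as a directory of part files (one per partition), so the path above will end up as a folder containing files like part.0.parquet rather than a single file.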