How can I append JSON data to an existing JSON file stored in Azure blob storage through python?
I've been looking around the web for a way to append data to an existing JSON file in Azure Storage; I also checked this post, but it didn't help. I have millions of JSON records arriving in real time, available in a Python list, and I want to append those records to an existing JSON file in an Azure blob. My main data source is a KafkaConsumer: I consume messages from a Kafka topic and want to store them in Azure Storage in JSON format. Since I'm using Python and don't want to read from or write to my local hard disk, I'd like to take a list of JSON records and append them directly to a JSON file that is already in an Azure container. I'd be glad if anyone could help me or point me to some references. Thanks.
I tried appending data to an existing file from my system. I used dummy JSON data for testing purposes; you can pass your own JSON data instead.
    from azure.storage.blob import AppendBlobService
    import json

    def append_data_to_blob(data):
        service = AppendBlobService(account_name="appendblobex",
                                    account_key="key")
        json_data = json.dumps(data)
        try:
            # Append to the blob if it already exists
            service.append_blob_from_text(container_name="test", blob_name="test1", text=json_data)
        except Exception:
            # Create the append blob first, then append the data
            service.create_blob(container_name="test", blob_name="test1")
            service.append_blob_from_text(container_name="test", blob_name="test1", text=json_data)
        print('Data appended to blob successfully.')

    append_data_to_blob({'hi': 'hello'})
OUTPUT
The data is appended to the file in Azure Storage; download the file and open it to see the data.
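Since the question mentions batches of records arriving from Kafka in a Python list, one common approach is to serialize each batch as newline-delimited JSON (one record per line) before appending, so every append operation adds whole, parseable records. A minimal local sketch, assuming this batching scheme (the `records_to_ndjson` helper is hypothetical; its result would be passed as the `text` argument to `append_blob_from_text`):

```python
import json

def records_to_ndjson(records):
    # Serialize each record on its own line (newline-delimited JSON),
    # so every append to the blob adds complete, independently parseable records.
    return "".join(json.dumps(record) + "\n" for record in records)

batch = [{"id": 1, "event": "start"}, {"id": 2, "event": "stop"}]
payload = records_to_ndjson(batch)
print(payload)
# The payload would then be appended with something like:
# service.append_blob_from_text(container_name="test", blob_name="test1", text=payload)
```

Storing the blob as one record per line also makes later reads simpler: the file can be parsed line by line instead of re-parsing one ever-growing JSON array.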