How to upload an HDF5 file directly to an S3 bucket in Python
I want to upload an HDF5 file created with h5py to an S3 bucket using boto3, without saving it locally. This solution uses pickle.dumps and pickle.loads, and the other solutions I have found store the file locally, which I would like to avoid.
You can use io.BytesIO() together with put_object, as illustrated here. Hope this helps. Even in this case, you'd have to 'store' the data locally (though 'in memory'). You could also create a tempfile.TemporaryFile and then upload your file with put_object. I don't think you can stream to an S3 bucket in the sense that the local data would be discarded as it is uploaded to the bucket.