
Downloading a large text file from S3 with boto3

The analytics team at my company uploads a CSV file to S3 every day. It is currently around 300MB, but ever-increasing. A Lambda function I have to implement needs to read this file and process each line.

My main concern is that the sheer size of the file may cause memory problems in my Lambda's execution context. Is there any way with boto3 to download this file from S3 as a stream and read it as it is being downloaded? If not, what approach should I take to handle this situation?
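One common way to do this with boto3: `get_object` returns a botocore `StreamingBody`, which reads the object in chunks over HTTP and exposes `iter_lines()`, so the whole file never sits in memory at once. Below is a minimal sketch; the bucket and key names are hypothetical, and the line-handling logic is factored into a helper so it works with any iterable of byte lines.

```python
def process_stream(lines, handler):
    """Consume an iterable of byte lines one at a time (constant memory).

    `lines` can be any iterable yielding bytes, e.g. the result of
    StreamingBody.iter_lines() from a boto3 get_object response.
    """
    results = []
    for raw in lines:
        results.append(handler(raw.decode("utf-8").rstrip("\r\n")))
    return results

# With boto3 (hypothetical bucket/key), the same helper would be driven by
# the streaming response body instead of an in-memory iterable:
#
#   import boto3
#   body = boto3.client("s3").get_object(
#       Bucket="my-bucket", Key="daily.csv")["Body"]
#   process_stream(body.iter_lines(), handle_row)
```

Note that memory use is bounded by the chunk size botocore reads, not by the object size, so a 300MB (or larger) file stays well within typical Lambda memory limits.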

Thanks in advance.

Following up on my own question: I found smart_open ( https://github.com/RaRe-Technologies/smart_open ), which handles my problem in a very elegant way.
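For reference, smart_open wraps S3 (and other backends) behind the familiar file interface, streaming the object lazily so memory stays flat. A minimal sketch, assuming a hypothetical bucket and key; the same function also accepts a local path, since smart_open falls back to the built-in `open` for those:

```python
from smart_open import open as smart_open

def count_csv_lines(uri):
    """Stream a file (local path or s3:// URI) line by line; memory stays flat."""
    count = 0
    with smart_open(uri, "r") as fin:
        for _ in fin:  # replace with real per-line processing
            count += 1
    return count

# In the Lambda this would be called with the S3 URI (hypothetical names):
#   count_csv_lines("s3://my-bucket/daily.csv")
```

smart_open uses boto3 under the hood for `s3://` URIs, so existing credentials and session configuration apply unchanged.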
