Loop through compressed gzip files throws "ERROR [Errno 2] No such file or directory: 'part-r-00001.gz'" at second iteration - Python
I am looping through multiple files within an S3 bucket. The first iteration works perfectly fine, but once it jumps to the next file I receive "ERROR [Errno 2] No such file or directory: 'part-r-00001.gz'" (part-r-00000.gz was accessed correctly).
I am not sure why the file is not found, as it is available in the bucket.
This is the code:
import gzip
import logging
import os
import sys
from datetime import datetime, timedelta

import boto3

logger = logging.getLogger(__name__)

BUCKET = 'bucket'
PREFIX = 'path'
now = datetime.utcnow()
today = (now - timedelta(days=2)).strftime('%Y-%m-%d')
folder_of_the_day = PREFIX + today + '/'
logger.info("map folder: %s", folder_of_the_day)
client = boto3.client('s3')
response = client.list_objects_v2(Bucket=BUCKET, Prefix=folder_of_the_day)
for content in response.get('Contents', []):
    bucket_file = os.path.split(content["Key"])[-1]
    if bucket_file.endswith('.gz'):
        logger.info("----- starting with file: %s -----", bucket_file)
        try:
            with gzip.open(bucket_file, mode="rt") as file:
                for line in file:
                    pass  # do something
        except Exception as e:
            logger.error(e)
            logger.critical("Failed to open file!")
            sys.exit(4)
Once executed for the second round, this is the output:
2022-06-18 12:14:48,027 [root] INFO ----- starting with file: part-r-00001.gz -----
2022-06-18 12:14:48,028 [root] ERROR [Errno 2] No such file or directory: 'part-r-00001.gz'
Update
Based on the comment, I updated my code to a proper gzip method, but the error remains. Once the first iteration is done, the second file is not found.
This is the updated code:
try:
    with gzip.GzipFile(bucket_file) as gzipfile:
        decompressed_content = gzipfile.read()
        for line in decompressed_content.splitlines():
            # do something
            break
I think you cannot use gzip.open on the S3 key directly. You may need a proper gzip method to read files from an S3 bucket, as described here:
Reading contents of a gzip file from a AWS S3 in Python
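A minimal sketch of what that looks like: instead of passing the key name to gzip.open (which treats it as a local path), fetch the object's bytes with client.get_object and decompress them in memory. The helper names (gzip_lines, process_gz_objects) are illustrative, not from the question; the boto3 calls (list_objects_v2, get_object) are the same ones the question already uses.

```python
import gzip
import io


def gzip_lines(raw_bytes):
    """Decompress gzip bytes fetched from S3 and yield decoded lines."""
    with gzip.GzipFile(fileobj=io.BytesIO(raw_bytes), mode='rb') as gz:
        for raw_line in gz:
            yield raw_line.decode('utf-8')


def process_gz_objects(client, bucket, prefix):
    """Loop over .gz objects under a prefix, reading each via get_object
    instead of gzip.open on the bare key name."""
    response = client.list_objects_v2(Bucket=bucket, Prefix=prefix)
    for content in response.get('Contents', []):
        key = content['Key']
        if not key.endswith('.gz'):
            continue
        # Body is a streaming file-like object; read its compressed bytes.
        body = client.get_object(Bucket=bucket, Key=key)['Body'].read()
        for line in gzip_lines(body):
            pass  # do something with line
```

Because gzip_lines works on the raw bytes, every iteration reads from S3 rather than from the local working directory, so the second key no longer triggers the missing-file error.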