
Reading a large csv file then splitting it causing an OOM error

Hi, I'm creating a Glue job that reads a csv file and then splits it by a specific column; unfortunately, it's causing an OOM (Out of Memory) error. Please see the code below.

import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import boto3


#get date 
Current_Date = datetime.now() - timedelta(days=1)
now = Current_Date.strftime('%Y-%m-%d')

#get date
Previous_Date = datetime.now() - timedelta(days=2)
prev = Previous_Date.strftime('%Y-%m-%d')

#read the csv file whose name contains that date
filepath = "s3://bucket/file"+now+".csv.gz"

data = pd.read_csv(filepath, sep='|', header=None, compression='gzip')

# count the number of distinct dates (first 10 characters of column 10, last_update)
loop = 0
for i, x in data.groupby(data[10].str.slice(0, 10)):
    loop += 1

# if the number of distinct dates in column 10 (last_update) is greater than or equal to 7
if loop >= 7:
    #split the dataframe by the distinct dates of column 10 (last_update)
    for i, x in data.groupby(data[10].str.slice(0, 10)):
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()), header=False, compression='gzip')

# if the number of distinct dates in column 10 (last_update) is less than 7,
# filter the dataframe to the current and previous dates, then split that subset
else:
    d = data[(data[10].str.slice(0, 10) == prev) | (data[10].str.slice(0, 10) == now)]
    #split the filtered dataframe by the distinct dates of column 10 (last_update)
    for i, x in d.groupby(d[10].str.slice(0, 10)):
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()), header=False, compression='gzip')

Solution - I solved this by increasing the Glue job's maximum capacity.
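
For reference, a minimal sketch of raising that setting with boto3's Glue client; the job name and the DPU value below are placeholders, and because UpdateJob replaces the whole job definition, the existing Role and Command are copied over first. In practice the same change can also be made under "Maximum capacity" in the Glue console.

import boto3

glue = boto3.client("glue")

# fetch the existing definition, since UpdateJob overwrites the whole job definition
job = glue.get_job(JobName="my-split-csv-job")["Job"]  # hypothetical job name

glue.update_job(
    JobName="my-split-csv-job",
    JobUpdate={
        "Role": job["Role"],        # keep the current IAM role
        "Command": job["Command"],  # keep the current script settings
        "MaxCapacity": 10.0,        # raise the DPUs allocated to the job
    },
)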

Not sure how big your file is, but if you read and process the file in chunks you should be able to avoid the error. We have successfully tested this approach with a 2.5 GB file. Also, if you are using a Python shell job, remember to update your Glue job's maximum capacity to 1.

data = pd.read_csv(filepath, chunksize=1000, iterator=True)
for chunk in data:
    # loop through the chunks and process the data
    ...
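
A minimal sketch of combining that chunked read with the per-date split from the question, assuming the same "|"-separated, header-less layout; the chunk size, the /tmp staging directory, and the bucket and key names are placeholders, not part of the original answer. Each chunk is grouped by the date prefix of column 10 and appended to a local per-date file, so only one chunk is held in memory at a time, and the per-date files are uploaded to S3 at the end.

import os
import boto3
import pandas as pd

filepath = "s3://bucket/file2024-01-01.csv.gz"  # example path, same layout as the question

reader = pd.read_csv(filepath, sep='|', header=None, compression='gzip',
                     chunksize=100000, iterator=True)

for chunk in reader:
    # group each chunk by the date prefix (yyyy-mm-dd) of column 10 (last_update)
    for day, part in chunk.groupby(chunk[10].str.slice(0, 10)):
        # append the group to a local per-date file instead of holding everything in memory
        part.to_csv("/tmp/file_{}.csv".format(day), mode='a',
                    header=False, index=False, sep='|')

# upload the per-date files once all chunks have been processed
s3 = boto3.client("s3")
for name in os.listdir("/tmp"):
    if name.startswith("file_") and name.endswith(".csv"):
        s3.upload_file(os.path.join("/tmp", name), "bucket", name)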

