
Pandas read_csv with a 4 GB CSV file

My machine lags when I try to read a 4 GB CSV file in a Jupyter notebook using the chunksize option:

raw = pd.read_csv(csv_path, chunksize=10**6)
data = pd.concat(raw, ignore_index=True)

This takes forever to run and also freezes my machine (Ubuntu 16.04 with 16 GB of RAM). What is the right way to do this?

The point of chunking is that you don't need the whole dataset in memory at once: you can process each chunk as you read the file. Assuming that is the case here, you can do:

import pandas as pd

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    do_something(chunk)  # process each chunk as it is read, then discard it
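
For example, if all you ultimately need is a summary statistic, you can accumulate it chunk by chunk and never hold more than one chunk in memory at a time. This is only a minimal sketch: the file name large_file.csv and the numeric column value are assumptions for illustration.

import pandas as pd

chunksize = 10 ** 6
total_rows = 0
running_sum = 0.0

for chunk in pd.read_csv("large_file.csv", chunksize=chunksize):
    total_rows += len(chunk)             # count rows in this chunk
    running_sum += chunk["value"].sum()  # "value" is an assumed numeric column

print("rows:", total_rows, "mean:", running_sum / total_rows)

Calling pd.concat on all the chunks, as in the question, defeats the purpose: it rebuilds the full 4 GB dataset in memory, which is what causes the machine to freeze.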
