I am trying to read a 3GB file (2.5 million rows, mostly categorical (string) data) into a Pandas DataFrame with the read_csv function, and I get the error: out of memory.
I know I can specify dtype to improve reading of the CSV, but there are too many columns in my data set and I want to load it first, then decide on the data types. The traceback is:
Traceback (most recent call last):
File "/home/a/Dropbox/Programming/Python/C and d/main.com.py", line 9, in <module>
preprocessing()
File "/home/a/Dropbox/Programming/Python/C and d/main.com.py", line 5, in preprocessing
df = pd.read_csv(filepath_or_buffer = file_path, sep ='\t', low_memory = False)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 498, in parser_f
return _read(filepath_or_buffer, kwds)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 285, in _read
return parser.read()
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 747, in read
ret = self._engine.read(nrows)
File "/usr/lib/python3/dist-packages/pandas/io/parsers.py", line 1197, in read
data = self._reader.read(nrows)
File "pandas/parser.pyx", line 769, in pandas.parser.TextReader.read (pandas/parser.c:8011)
File "pandas/parser.pyx", line 857, in pandas.parser.TextReader._read_rows (pandas/parser.c:9140)
File "pandas/parser.pyx", line 1833, in pandas.parser.raise_parser_error (pandas/parser.c:22649)
pandas.parser.CParserError: Error tokenizing data. C error: out of memory
My code:
import pandas as pd

def preprocessing():
    file_path = r'/home/a/Downloads/main_query.txt'
    df = pd.read_csv(filepath_or_buffer=file_path, sep='\t', low_memory=False)
The above code produced the error message posted above. I then removed low_memory=False, and everything worked; it only gave a warning:
sys:1: DtypeWarning: Columns (17,20,23,24,33,44,58,118,134,135,137,142,145,146,147) have mixed types.
Specify dtype option on import or set low_memory=False.
UPDATE: in Pandas 0.19.0 it should be possible to specify a categorical dtype when using the read_csv() method:
pd.read_csv(filename, dtype={'col1': 'category'})
so you may try to use pandas 0.19.0 RC1
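As a rough illustration of why this helps (a minimal sketch: the file path and column names below are placeholders, not from the question), reading repetitive string columns as category stores each distinct value once plus small integer codes, which can shrink the frame dramatically:

import pandas as pd

file_path = 'main_query.txt'    # placeholder: your tab-separated file

# placeholder column names -- substitute the string columns from your data
df = pd.read_csv(file_path, sep='\t',
                 dtype={'country': 'category', 'city': 'category'})

# deep=True counts the actual string payloads, so this shows the real footprint
print(df.memory_usage(deep=True))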
OLD answer:
You can read your CSV in chunks and concatenate each chunk to the resulting DataFrame at each step:
import numpy as np
import pandas as pd

chunksize = 10**5
df = pd.DataFrame()

for chunk in pd.read_csv(filename,                                    # path to your CSV
                         dtype={'col1': np.int8, 'col2': np.int32},   # ...and so on for the remaining columns
                         chunksize=chunksize):
    df = pd.concat([df, chunk], ignore_index=True)
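A small aside on this pattern: calling pd.concat inside the loop copies the accumulated frame on every iteration, so the total cost grows quadratically with the number of chunks. Collecting the chunks in a list and concatenating once at the end gives the same result with a single copy. A minimal sketch, assuming the same filename and dtypes as above:

import numpy as np
import pandas as pd

chunks = []
for chunk in pd.read_csv(filename,
                         dtype={'col1': np.int8, 'col2': np.int32},
                         chunksize=10**5):
    chunks.append(chunk)                    # cheap: only stores a reference

df = pd.concat(chunks, ignore_index=True)   # one concatenation at the end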
NOTE: the dtype parameter is unsupported with engine='python'.
The question is a duplicate. In short, to reduce memory usage (a combined sketch follows the list):

- check what the DataFrame actually consumes with df.info(memory_usage='deep') or df.memory_usage(deep=True)
- read repetitive string columns as categoricals: pd.read_csv(..., dtype={'foo': 'category', 'bar': 'category', ...})
- read only the columns you need: usecols=['foo', 'bar', 'baz']
- read only a subset of rows: nrows=1e5 (see also skiprows=...)
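Putting those options together (a minimal sketch; the path and column names are placeholders, and 1e5 is cast to int because nrows expects an integer):

import pandas as pd

# placeholders -- substitute your own path and column names
df = pd.read_csv('main_query.txt', sep='\t',
                 usecols=['foo', 'bar', 'baz'],                # load only the needed columns
                 dtype={'foo': 'category', 'bar': 'category'},
                 nrows=int(1e5))                               # read only the first 100k rows

df.info(memory_usage='deep')                                   # verify the footprint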