
The fastest way to read input in Python

I want to read a huge text file that contains a list of lists of integers. Right now I'm doing the following:

G = []
with open("test.txt", 'r') as f:
    for line in f:
        G.append(list(map(int,line.split())))

However, it takes about 17 seconds (measured via timeit). Is there any way to reduce this time? Maybe there is a way not to use map.

numpy has the functions loadtxt and genfromtxt, but neither is particularly fast. One of the fastest text readers available in a widely distributed library is the read_csv function in pandas ( http://pandas.pydata.org/ ). On my computer, reading 5 million lines containing two integers per line takes about 46 seconds with numpy.loadtxt, 26 seconds with numpy.genfromtxt, and a little over 1 second with pandas.read_csv.

Here's the session showing the result. (This is on Linux, Ubuntu 12.04 64 bit. You can't see it here, but after each reading of the file, the disk cache was cleared by running sync; echo 3 > /proc/sys/vm/drop_caches in a separate shell.)

In [1]: import pandas as pd; from numpy import loadtxt, genfromtxt

In [2]: %timeit -n1 -r1 loadtxt('junk.dat')
1 loops, best of 1: 46.4 s per loop

In [3]: %timeit -n1 -r1 genfromtxt('junk.dat')
1 loops, best of 1: 26 s per loop

In [4]: %timeit -n1 -r1 pd.read_csv('junk.dat', sep=' ', header=None)
1 loops, best of 1: 1.12 s per loop

pandas, which is built on top of numpy, has a C-based file parser that is very fast:

# generate some integer data (5 M rows, two cols) and write it to file
In [24]: data = np.random.randint(1000, size=(5 * 10**6, 2))

In [25]: np.savetxt('testfile.txt', data, delimiter=' ', fmt='%d')

# your way
In [26]: def your_way(filename):
   ...:     G = []
   ...:     with open(filename, 'r') as f:
   ...:         for line in f:
   ...:             G.append(list(map(int, line.split())))
   ...:     return G        
   ...: 

In [27]: %timeit your_way('testfile.txt')
1 loops, best of 3: 16.2 s per loop

In [28]: %timeit pd.read_csv('testfile.txt', delimiter=' ', dtype=int)
1 loops, best of 3: 1.57 s per loop

So pandas.read_csv takes about one and a half seconds to read your data and is about 10 times faster than your method.
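If you need the same list-of-lists structure as G in the question, the DataFrame result can be converted afterwards. A minimal sketch, assuming the same space-separated file of integers as above:

import pandas as pd

df = pd.read_csv('testfile.txt', delimiter=' ', header=None, dtype=int)
G = df.values.tolist()  # convert the DataFrame back to a plain list of lists of ints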

As a general rule of thumb (for just about any language), using read() to read in the entire file is going to be quicker than reading one line at a time. If you're not constrained by memory, read the whole file at once, split the data on newlines, and then iterate over the list of lines.
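A rough sketch of that approach, assuming the same whitespace-delimited file as in the question (here called test.txt):

# read the whole file into memory, then split and parse it
with open("test.txt", "r") as f:
    data = f.read()

G = [[int(item) for item in line.split()] for line in data.splitlines()]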

The easiest speedup would be to run the code under PyPy ( http://pypy.org/ ).

The next tip is to not read the whole file into memory at all (if possible), and instead process it as a stream.
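A minimal sketch of that streaming approach, assuming each row can be handled on its own (the function name rows and the file name are placeholders):

def rows(filename):
    # yield one parsed row at a time instead of building the whole list
    with open(filename) as f:
        for line in f:
            yield [int(item) for item in line.split()]

for row in rows('test.txt'):
    pass  # handle each row here without keeping the whole file in memory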

List comprehensions are often faster.

G = [[int(item) for item in line.split()] for line in f]

Beyond that, try PyPy, Cython, and numpy.

You might also try to bring the data into a database via a bulk insert, and then process your records with set operations. Depending on what you have to do, that may be faster, as bulk-insert code is optimized for this type of task.
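As a rough illustration of that idea using the sqlite3 module from the standard library (the database file, table name, and column names here are made up for the example, and two integers per line are assumed as in the benchmarks above):

import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS rows (a INTEGER, b INTEGER)")

with open("test.txt") as f:
    # executemany performs a bulk insert from a generator of parsed rows
    conn.executemany("INSERT INTO rows VALUES (?, ?)",
                     (tuple(map(int, line.split())) for line in f))

conn.commit()
conn.close()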
