Python - Efficient way to read large amounts of tabular data

I have a file containing a large table of numbers, roughly 300 MB in size. I want to read this in Python.

Data looks like this:

-200 1 11097.4 16414.2 1
-200 1 11197.4 16414.8 1
-200 1 11297.4 16415.4 1
-200 1 11397.4 16416 1
-200 1 11497.4 16416.5 1
-200 1 11597.4 16417.1 1
-200 1 11697.4 16417.7 1

Python code looks like this:

    summary = []
    with open(filename) as f:
        nrow, ncol = [int(x) for x in next(f).split()]
        for k in range(2):
            rr = []
            for i in range(nrow + 1):
                row = []
                for j in range(ncol + 1):
                    a = next(f).split()
                    row.append([int(a[0]), int(a[1]), float(a[2]), float(a[4])])
                rr.append(row)
            summary.append(rr)

This is very slow; it takes about 60 seconds to read the file, and I want to get it down to less than 10 seconds. What's the simplest way to make it faster?

I am perfectly happy to change the data file format, if it helps.

Use pandas. This might be a duplicate, so also check out these answers.

code.py

import pandas as pd
import numpy as np

# skip the first line (it holds nrow and ncol), split on any run of whitespace,
# and don't treat the first data row as a header
df = pd.read_csv("large_file.txt", sep=r"\s+", header=None, skiprows=1)

# store the numbers in NumPy's binary format; re-loading this is much faster
# than re-parsing the text file
np.save("large_file.npy", df.values)

data = np.load("large_file.npy")
print(data.shape)
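
If you still need the nested summary structure built in the question, you can rebuild it from the flat array with a single reshape instead of Python loops. This is only a minimal sketch, assuming the first line of large_file.txt holds nrow and ncol, the remaining lines form 2 blocks of (nrow + 1) * (ncol + 1) rows, and only columns 0, 1, 2 and 4 are wanted, as in the question's loops:

import numpy as np

# Sketch under the assumptions stated above (file layout and kept columns
# taken from the question, not from the answer's code).
with open("large_file.txt") as f:
    nrow, ncol = [int(x) for x in next(f).split()]

data = np.load("large_file.npy")    # the binary copy written by np.save above
kept = data[:, [0, 1, 2, 4]]        # same four columns as the question keeps
summary = kept.reshape(2, nrow + 1, ncol + 1, 4)
print(summary.shape)                # (2, nrow + 1, ncol + 1, 4)

The point of the binary copy is that the expensive text parsing happens only once; re-reading the .npy file afterwards is limited mostly by disk speed rather than by parsing.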
