
Efficient way of reading large txt file in python

I'm trying to open a txt file with 4605227 rows (305 MB)

The way I have done this before is:

data = np.loadtxt('file.txt', delimiter='\t', dtype=str, skiprows=1)

df = pd.DataFrame(data, columns=["a", "b", "c", "d", "e", "f", "g", "h", "i"])

df = df.astype(dtype={"a": "int64", "h": "int64", "i": "int64"})

But it's using up most of the available RAM (~10 GB) and not finishing. Is there a faster way of reading in this txt file and creating a pandas DataFrame?

Thanks!

Edit: Solved now, thank you. Why is np.loadtxt() so slow?

Rather than reading it in with NumPy, you could read it directly into a pandas DataFrame, e.g. using the pandas.read_csv function with something like:

df = pd.read_csv('file.txt', delimiter='\t', usecols=["a", "b", "c", "d", "e", "f", "g", "h", "i"])
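
Since memory is the main problem in the question, you can also pass dtype to read_csv so the integer columns are parsed directly instead of being converted afterwards. A minimal sketch, assuming the header row of file.txt really contains the column names a through i used in the question:

import pandas as pd

df = pd.read_csv(
    'file.txt',
    sep='\t',
    usecols=["a", "b", "c", "d", "e", "f", "g", "h", "i"],
    dtype={"a": "int64", "h": "int64", "i": "int64"},  # parse these as integers up front
)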

Method 1:

You can read the file in chunks. Moreover, readlines() accepts a buffer size argument, so you can control how much is read per call:

BUFFERSIZE = 2 ** 20  # read roughly 1 MB worth of lines per call
with open('inputTextFile', 'r') as inputFile:
    buffer_lines = inputFile.readlines(BUFFERSIZE)
    while buffer_lines:
        # logic goes here
        buffer_lines = inputFile.readlines(BUFFERSIZE)
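
For the original question, each buffered batch could be parsed and collected before reading the next one. A rough sketch along those lines (the tab delimiter and column names a through i are taken from the question; the parsing logic is only illustrative):

import pandas as pd

BUFFERSIZE = 2 ** 20
frames = []
with open('file.txt', 'r') as inputFile:
    inputFile.readline()  # skip the header row, as in the original loadtxt call
    buffer_lines = inputFile.readlines(BUFFERSIZE)
    while buffer_lines:
        rows = [line.rstrip('\n').split('\t') for line in buffer_lines]
        frames.append(pd.DataFrame(rows, columns=list("abcdefghi")))
        buffer_lines = inputFile.readlines(BUFFERSIZE)

df = pd.concat(frames, ignore_index=True)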

Method 2:

You can also use the mmap module. Below is the link which explains its usage.

import mmap

with open("hello.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print(mm.readline())  # prints b"Hello Python!\n"
    # read content via slice notation
    print(mm[:5])  # prints b"Hello"
    # update content using slice notation;
    # note that new content must have same size
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())  # prints b"Hello  world!\n"
    # close the map
    mm.close()

https://docs.python.org/3/library/mmap.html
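
Applied to a large text file like the one in the question, the map can be walked line by line without loading everything at once. A small sketch (read-only mapping; the filename is just a placeholder):

import mmap

with open('file.txt', 'rb') as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # mm.readline() returns b'' at end of file, which stops the iterator
        for line in iter(mm.readline, b''):
            pass  # process each line (bytes) here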

You can read it directly in as a pandas DataFrame, e.g.

import pandas as pd
pd.read_csv(path)

If you want to read faster, you can use modin:

import modin.pandas as pd
pd.read_csv(path)

https://github.com/modin-project/modin
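
For the tab-separated file from the question, the call looks the same as with plain pandas; modin parallelizes the work across cores via its Ray or Dask backend. A sketch, assuming modin is installed and file.txt is the path in question:

import modin.pandas as pd

# Same read_csv interface as pandas; the read is distributed behind the scenes.
df = pd.read_csv('file.txt', sep='\t')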

The code below reads the file line by line: it iterates over the file object in a for loop, and you can process each line however you want.

with open("file.txt") as fobj:

for line in fobj:

    print(line) #do your process
