
Python: fastest way to read a large number of small files into memory?

I'm trying to read a few thousand HTML files stored on disk.

Is there any way to do better than:

import os

for filename in os.listdir('.'):
    if filename.endswith('.html'):
        with open(filename) as f:  # file is closed automatically when the block exits
            a = f.read()
            # do more stuff
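One alternative worth a look (an editorial sketch, not from the post above): os.scandir returns directory entries with cached metadata, so filtering by type and name is cheaper than listdir plus per-file checks on very large directories. A minimal sketch, assuming the files sit in the current directory and are UTF-8 encoded:

import os

contents = {}
with os.scandir('.') as entries:
    for entry in entries:
        # is_file() reuses the entry's cached stat data, so on most
        # platforms the check costs no extra system call
        if entry.is_file() and entry.name.endswith('.html'):
            with open(entry.path, encoding='utf-8') as f:
                contents[entry.name] = f.read()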

For a similar problem I have used this simple piece of code:

import glob

for file in glob.iglob("*.html"):  # yields matching names one at a time
    with open(file) as f:
        a = f.read()

iglob doesn't store all the filenames in memory at once, which makes it a good fit for a huge directory.
Remember to close files after you have finished; the with-open construct takes care of that for you.
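If the goal is to keep every file's contents in memory at once, a small variation on the snippet above does that; the pages dict here is illustrative, not part of the original answer:

import glob

pages = {}
for path in glob.iglob("*.html"):
    with open(path) as f:  # closed automatically when the block exits
        pages[path] = f.read()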
