
Python dictionary eating up huge amount of RAM

I have built a Python dictionary that stores each word as a key and the list of files it appears in as the value. Below is the code snippet.

import os
import sys
import time

if len(sys.argv) < 2:
    search_query = input("Enter the search query: ")
else:
    search_query = sys.argv[1]

# path to the directory where the files are stored; store the file names in a list named directory_name
directory_name = os.listdir("./test_input")
# create a list, list_of_files, holding the full path of each file so it can be opened later
list_of_files = []
# append the files to list_of_files
for files in directory_name:
    list_of_files.append("./test_input" + "/" + files)
#empty dictionary
search_dictionary = {}

# iterate over the files in list_of_files one by one
for files in list_of_files:
    # open the file
    open_file = open(files, "r")
    # store the basename of the file as file_name
    file_name = os.path.basename(files)

    for line in open_file:
        for word in line.split():
            # if the word is not yet in the dictionary, add the word and the file_name
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                # if this word was already recorded for the same file, ignore it
                if file_name in search_dictionary[word]:
                    continue
                # if the same word is found in a different file, append that filename
                search_dictionary[word].append(file_name)

def search(search_dictionary, search_query):
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(search_dictionary[search_query])
    else:
        print('not found ' + search_query)

search(search_dictionary, search_query)

input_word = ""
while input_word != 'quit':
    input_word = input('enter a word to search: ')
    start1 = time.time()
    search(search_dictionary, input_word)
    end1 = time.time()
    print(end1 - start1)

But if the total size of the files in the directory is around 500 MB, the RAM and swap space get eaten up. How do I manage the memory usage?

If you have a large number of files, the fact that you aren't closing your files could be the reason. A more common pattern is to use files as context managers, something like this:

with open(files, 'r') as open_file:
    file_name = os.path.basename(files)
    for line in open_file:
        for word in line.split():
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                if file_name in search_dictionary[word]:
                    continue
                search_dictionary[word].append(file_name)

Using this syntax means you don't have to worry about closing your files. If you don't want to do this, you should still call open_file.close() after you're done iterating through the lines. This is the only issue I can see in your code that might be causing such high memory usage (although if you're opening some enormous files without line breaks, that could also do it).

This won't help with memory usage, but there's a data type that could greatly simplify your code: collections.defaultdict. Your code could be written like this (I also included a couple of things the os module can help you with):

import os
import time
from collections import defaultdict

directory_name = "./test_input"

list_of_files = []
for files in os.listdir(directory_name):
    list_of_files.append(os.path.join(directory_name, files))
search_dictionary = defaultdict(set)

start = time.time()
for files in list_of_files:
    with open(files) as open_file:
        file_name = os.path.basename(files)
        for line in open_file:
            for word in line.split():
                search_dictionary[word].add(file_name)
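Since the values are now sets rather than lists, the question's search() function carries over almost unchanged. A minimal sketch with made-up data in place of real files (`a.txt` and `b.txt` are placeholder names, not files from the question):

```python
from collections import defaultdict

# a tiny stand-in index; real code would populate this from the files on disk
search_dictionary = defaultdict(set)
search_dictionary["hello"].update({"a.txt", "b.txt"})

def search(search_dictionary, search_query):
    # dict membership is an O(1) hash lookup no matter how many words are stored;
    # note that `in` does not create a default entry in a defaultdict
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(sorted(search_dictionary[search_query]))  # sort so the output is stable
    else:
        print('not found ' + search_query)

search(search_dictionary, "hello")
search(search_dictionary, "absent")
```

Testing membership with `in` is important here: indexing a missing key on a defaultdict would silently insert an empty set, slowly polluting the index with every failed search.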
