
Python dictionary eating up huge amount of RAM

I have built a Python dictionary that stores each word as a key and, as its value, the list of files the word appears in. Below is the code snippet.

import os
import sys
import time

if len(sys.argv) < 2:
    search_query = input("Enter the search query")
else:
    search_query = sys.argv[1]

# path to the directory where the files are stored; store the file names in a list named directory_name
directory_name = os.listdir("./test_input")
# create a list list_of_files holding the entire path of each file, so that they can be opened later
list_of_files = []
# appending the files to list_of_files
for files in directory_name:
    list_of_files.append("./test_input" + "/" + files)
# empty dictionary
search_dictionary = {}

# iterate over the files in list_of_files one by one
for files in list_of_files:
    # open the file
    open_file = open(files, "r")
    # store the basename of the file as file_name
    file_name = os.path.basename(files)

    for line in open_file:
        for word in line.split():
            # if the word is not in the dictionary yet, add the word with the file_name
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                # if this file name is already recorded for the word, ignore it
                if file_name in search_dictionary[word]:
                    continue
                # the same word was found in a different file, so append that file name
                search_dictionary[word].append(file_name)

def search(search_dictionary, search_query):
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(search_dictionary[search_query])
    else:
        print('not found ' + search_query)

search(search_dictionary, search_query)

input_word = ""
while input_word != 'quit':
    input_word = input('enter a word to search ')
    start1 = time.time()
    search(search_dictionary, input_word)
    end1 = time.time()
    print(end1 - start1)

But when the files in the directory total around 500 MB, this uses up my RAM and swap space. How can I manage the memory usage?

If you have a large number of files, the culprit may be the fact that you are never closing them. A more common pattern is to use the file as a context manager, like this:

with open(files, 'r') as open_file:
    file_name = os.path.basename(files)
    for line in open_file:
        for word in line.split():
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                if file_name in search_dictionary[word]:
                    continue
                search_dictionary[word].append(file_name)

Using this syntax means you don't have to worry about closing your files. If you don't want to do that, you should still call open_file.close() after you are done iterating over the lines. This is the only problem I can see in your code that could cause such high memory usage (although opening some huge files with no line breaks could also do it).
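If you prefer the explicit call, here is a minimal sketch of the try/finally pattern (essentially what the with statement does for you); the loop body is the same word-indexing logic as above, condensed with get/setdefault:

open_file = open(files, "r")
try:
    file_name = os.path.basename(files)
    for line in open_file:
        for word in line.split():
            # record file_name once per word, as in the original code
            if file_name not in search_dictionary.get(word, []):
                search_dictionary.setdefault(word, []).append(file_name)
finally:
    open_file.close()  # runs even if the loop raises an exception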

This won't help with memory usage, but there is a data type that can greatly simplify your code: collections.defaultdict. Your code could be written like this (I have also included a couple of things the os module can help you with):

import os
import time
from collections import defaultdict

directory_name = "./test_input"

list_of_files = []
for files in os.listdir(directory_name):
    list_of_files.append(os.path.join(directory_name, files))
search_dictionary = defaultdict(set)

start = time.time()
for files in list_of_files:
    with open(files) as open_file:
        file_name = os.path.basename(files)
        for line in open_file:
            for word in line.split():
                search_dictionary[word].add(file_name)
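Since the values are now sets rather than lists, a lookup helper mirroring the search function from the question might look like this (a minimal sketch; note the membership test, which matters with a defaultdict):

def search(search_dictionary, search_query):
    # check membership first: indexing a defaultdict with a missing key
    # would silently insert an empty set for that key
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(sorted(search_dictionary[search_query]))  # sets are unordered; sort for display
    else:
        print('not found ' + search_query)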

