
Python script uses all RAM

I have a Python script that parses email addresses out of large documents. The script uses all of the RAM on my machine and locks it up to the point where I have to restart it. I'm wondering whether there is a way to limit this, or even to pause after it finishes reading one file and produces some output. Any help would be greatly appreciated.

#!/usr/bin/env python

# Extracts email addresses from one or more plain text files.
#
# Notes:
# - Does not save to file (pipe the output to a file if you want it saved).
# - Does not check for duplicates (which can easily be done in the terminal).
# Twitter @Critical24 - DefensiveThinking.io 


from optparse import OptionParser
import os.path
import re

regex = re.compile(r"([a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`"
                   r"{|}~-]+)*(@|\sat\s)(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?(\.|"
                   r"\sdot\s))+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)")

def file_to_str(filename):
    """Returns the contents of filename as a string."""
    with open(filename, encoding='utf-8') as f:  # encoding='utf-8' added
        return f.read().lower()  # Case is lowered to prevent regex mismatches.

def get_emails(s):
    """Returns an iterator of matched emails found in string s."""
    # Removing lines that start with '//' because the regular expression
    # mistakenly matches patterns like 'http://foo@bar.com' as '//foo@bar.com'.
    return (email[0] for email in re.findall(regex, s) if not email[0].startswith('//'))

import os

not_parseble_files = ['.txt', '.csv']
for root, dirs, files in os.walk('.'):  # Recursively search all subdirectories for files.
    for file in files:
        _, file_ext = os.path.splitext(file)  # Get the file's extension.
        file_path = os.path.join(root, file)
        if file_ext in not_parseble_files:  # Skip extensions listed in 'not_parseble_files'.
            print("File %s is not parseble" % file_path)
            continue  # Move on to the next file.
        if os.path.isfile(file_path):
            for email in get_emails(file_to_str(file_path)):
                print(email)

I think you should try the resource module:

import resource
resource.setrlimit(resource.RLIMIT_AS, (megs * 1048576, -1))  # megs: your chosen cap in MB
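As a sketch of how the limit behaves (assuming Linux, where `RLIMIT_AS` caps the process's virtual address space; the 2048 MB figure is only an illustration):

```python
import resource

megs = 2048  # illustrative cap in MB; pick a value below your machine's RAM
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (megs * 1048576, hard))

# Once the cap is in place, oversized allocations raise MemoryError
# instead of dragging the whole machine into swap.
try:
    data = bytearray(4 * 1024 ** 3)  # try to allocate 4 GB
    limited = False
except MemoryError:
    limited = True
print("allocation refused:", limited)
```

Catching `MemoryError` this way would let the script skip an oversized file and move on, rather than locking up the machine.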

You appear to be reading files of up to 8 GB into memory with f.read(). Instead, you could try applying the regex to each line of the file, without ever holding the whole file in memory:

with open(filename, encoding='utf-8') as f: #Added encoding='utf-8'
    return (email[0] for line in f
                     for email in re.findall(regex, line.lower())
                     if not email[0].startswith('//'))

However, this will likely still take a long time. Also, I haven't checked your regex for possible problems.
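Folding that line-by-line idea back into the original script, the two helpers can be merged into one generator. This is a sketch: `emails_in_file` is a name I'm introducing, and the regex is the one from the question.

```python
import re

regex = re.compile(r"([a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`"
                   r"{|}~-]+)*(@|\sat\s)(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?(\.|"
                   r"\sdot\s))+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?)")

def emails_in_file(filename):
    """Yield matched emails one line at a time, never loading the whole file."""
    with open(filename, encoding='utf-8', errors='ignore') as f:
        for line in f:
            for email in regex.findall(line.lower()):
                if not email[0].startswith('//'):  # drop 'http://foo@bar.com' false matches
                    yield email[0]
```

Usage stays the same shape as before: `for email in emails_in_file(file_path): print(email)`. Memory use is then bounded by the longest line, not the file size.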

