
Slow Python regex without catastrophic backtracking

I have 2 CSV files. The first, input, consists of input street addresses with various errors. The second, ref, is a clean street address table. Records within input need to be matched to records within ref. Converting the files to lists of unique records is fast, but once I get to the matching process, it's dreadfully slow, taking a full 85 seconds just to match two addresses within input to ref without any regular expressions! I realize that the size of ref is the issue here; it is over 1 million records long and the file is 30 MB. I was anticipating some performance issues at these sizes, but taking this long for only two records is unacceptable (realistically, I may have to match 10,000 records or more). Additionally, I will eventually need to embed some regex in the ref items to allow for more flexible matching. Testing the new regex module is even worse, taking a whopping 185 seconds for the same two input records. Does anybody know the best way to speed things up substantially? Can I somehow index by zip code, for example?

Here are sample addresses from input and ref, respectively (after preprocessing):

60651 N SPRINGFIELD AVE CHICAGO
60061 BROWNING CT VERNON HILLS

Here is what I have so far. (Being a novice, I realize that there are probably all kinds of inefficiencies in my code, but that's not the issue):

import csv, re

# Read the input file, transpose it so each column becomes a tuple, and
# take the first column minus its header row.
f = csv.reader(open('/Users/benjaminbauman/Documents/inputsample.csv', 'rU'))
columns = zip(*f)
l = list(columns)
inputaddr = l[0][1:]

# Read the reference file, skip the header, and drop rows whose first
# seven fields are all empty.
f = csv.reader(open('/Users/benjaminbauman/Documents/navstreets.csv', 'rU'))
f.next()

reffull = []
for row in f:
    row = str(row[0:7]).strip('[]').replace("'", "")
    if not ", , , , ," in row:
        reffull.append(row)

# Deduplicate both lists.
input = list(set(inputaddr))
ref1 = list(set(reffull))

# Normalize: turn commas into spaces and collapse repeated whitespace.
input_scrub = []
for i in input:
    t = i.replace(',', ' ')
    input_scrub.append(' '.join(t.split()))

ref_scrub = []
for i in ref1:
    t = i.replace(',', ' ')
    ref_scrub.append(' '.join(t.split()))

# For every input address, collect every ref entry that matches when the
# ref entry is treated as a regex anchored at the start of the string.
output_iter1 = dict([(i, [r for r in ref_scrub if re.match(r, i)]) for i in input_scrub])

unmatched_iter1 = [i for i, j in output_iter1.items() if len(j) < 1]
matched_iter1 = {i: str(j[0][1]).strip('[]') for i, j in output_iter1.items() if len(j) == 1}
tied_iter1 = {k: zip(*(v))[1] for k, v in output_iter1.iteritems() if len(v) > 1}
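One likely contributor to the slowness, independent of the algorithm: calling re.match(r, i) with a string pattern forces a lookup in re's small internal pattern cache on every call, and with roughly a million distinct patterns that cache thrashes constantly. A minimal sketch of hoisting the compilation out of the loop (compiled_ref is a name introduced here for illustration):

import re

# Compile each ref pattern exactly once up front instead of relying on
# re.match's internal cache, which is far too small for ~1M patterns.
compiled_ref = [(r, re.compile(r)) for r in ref_scrub]

output_iter1 = dict(
    (i, [r for r, rx in compiled_ref if rx.match(i)])
    for i in input_scrub
)

This doesn't change the O(input x ref) shape of the work, but it removes per-call pattern handling from the inner loop.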

Instead of fuzzy regex in the new regex module, maybe you could use the difflib module, if the execution time is acceptable:

import difflib


REF = ['455 Gateway Dr, Brooklyn, NY 11239',
       '10 Devoe St, Brooklyn, NY 11211',
       '8801 Queens Blvd, Elmhurst, NY 11373 ',
       '342 Wythe Ave, Brooklyn, NY 11249 ',
       '4488 E Live Oak Ave, Arcadia, CA 91006',
       '1134 N Vermont Ave, Los Angeles, CA 90029',
       '1101 17th St NW, Washington, DC 20036 ',
       '3001 Syringa St, Hopeful-City, AL 48798',
       '950 Laurel St, Minneapolis, KS 67467']


INPUT = ['4554 Gagate Dr, Brooklyn, NY 11239',
         '10 Devoe St, Brooklyn, NY 11211',
         '8801 Queens Blvd, Elmhurst, NY 11373 ',
         '342 Wythe Ave, Brooklyn, NY 11249 ',
         '4488 E Live Oak Ave, Arcadia, CA 91006',
         '1134 N Vermont Ave, Los Angeles, CA 90029',
         '1101 17th St NW, Washington, DC 20036 ',
         '3001 Syrinuy St, Hopeful Dam, AL 48798',
         '950 Laurel St, Minneapolis, KS 67467',
         '455 Gateway Doctor, Forgotten Place, NY 11239',
         '10 Devoe St, Brook., NY 11211',
         '82477 Queens Blvd, Elmerst, NY 11373 ',
         '342 Waithe Street, Brooklyn, MN 11249 ',
         '4488 E Live Poke Ave, Arcadia, CA 145',
         '1134 N Vermiculite Ave, Liz Angelicas, CA 90029',
         '1101 1st St NW, Washing, DC 20036 ']


def treatment(inp, reference, crit, gcm=difflib.get_close_matches):
    # For each input address, yield it together with up to 1000 reference
    # entries whose similarity ratio is at least crit.
    for input_item in inp:
        yield (input_item, gcm(input_item, reference, 1000, crit))


for a,b in treatment(INPUT,REF,0.65):
    print '\n- %s\n     %s' % (a, '\n     '.join(b))

the result is:

- 4554 Gagate Dr, Brooklyn, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239
     342 Wythe Ave, Brooklyn, NY 11249 

- 10 Devoe St, Brooklyn, NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 8801 Queens Blvd, Elmhurst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Wythe Ave, Brooklyn, NY 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Oak Ave, Arcadia, CA 91006
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermont Ave, Los Angeles, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 17th St NW, Washington, DC 20036 
     1101 17th St NW, Washington, DC 20036 

- 3001 Syrinuy St, Hopeful Dam, AL 48798
     3001 Syringa St, Hopeful-City, AL 48798

- 950 Laurel St, Minneapolis, KS 67467
     950 Laurel St, Minneapolis, KS 67467

- 455 Gateway Doctor, Forgotten Place, NY 11239
     455 Gateway Dr, Brooklyn, NY 11239

- 10 Devoe St, Brook., NY 11211
     10 Devoe St, Brooklyn, NY 11211

- 82477 Queens Blvd, Elmerst, NY 11373 
     8801 Queens Blvd, Elmhurst, NY 11373 

- 342 Waithe Street, Brooklyn, MN 11249 
     342 Wythe Ave, Brooklyn, NY 11249 
     455 Gateway Dr, Brooklyn, NY 11239

- 4488 E Live Poke Ave, Arcadia, CA 145
     4488 E Live Oak Ave, Arcadia, CA 91006

- 1134 N Vermiculite Ave, Liz Angelicas, CA 90029
     1134 N Vermont Ave, Los Angeles, CA 90029

- 1101 1st St NW, Washing, DC 20036 
     1101 17th St NW, Washington, DC 20036 
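One knob worth noting (this variant is not from the original answer): the crit argument is passed through to get_close_matches as its cutoff, so raising it should prune the two-way ties seen above, at the cost of the most mangled inputs possibly returning no candidates at all:

# Stricter cutoff: fewer spurious ties, but heavily garbled inputs may
# come back with an empty candidate list.
for a, b in treatment(INPUT, REF, 0.8):
    print '\n- %s\n     %s' % (a, '\n     '.join(b))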

It occurred to me why the line

output_iter1 = dict([ (i, [ r for r in ref_scrub if re.match(r, i) ]) for i in input_scrub ])

was taking so long. The matching process searches for a match within the exceptionally large list, ref, for every item within the smaller list, input, as opposed to the other way around. Unfortunately, I wanted it structured this way so that I could embed regular expressions in the items within ref, as these items are tokenized by address attribute to allow for easy anchoring. Given my limited understanding of SQL, I suppose there are two workarounds. The first could use the idea brought up in my last comment, per eyquem's suggestion. The second could use a lookup (index?) on the city and zip code attributes, with a simple equality test, before doing the more complicated matching with either regex or difflib.

I've split the items within input and ref so that the city and zip code attributes are kept separately from the street, such as the following:

ref ('COVE POINTE CT', 'BLOOMINGTON, 61704')
input ('S EBERHART', 'CHICAGO, 60628')
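The splitting step itself isn't shown above; a minimal sketch of one way to do it, assuming each record arrives in 'STREET, CITY, ZIP' form (split_addr is a hypothetical helper name, and the real preprocessing may differ):

# Hypothetical helper, assuming 'STREET, CITY, ZIP' records; splits off
# the last two comma-separated fields as the city/zip key.
def split_addr(record):
    street, city, zipcode = [p.strip() for p in record.rsplit(',', 2)]
    return (street, city + ', ' + zipcode)

# e.g. split_addr('COVE POINTE CT, BLOOMINGTON, 61704')
#      -> ('COVE POINTE CT', 'BLOOMINGTON, 61704')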

The following allows me to narrow the search to the portion of ref that shares the same city and zip code. For an input file containing over 1,000 records, this brings the running time down to 56 seconds, which is substantially better.

matchaddr = []
refaddr = []
for i in ref:
    for t in input:
        # Only attempt the (expensive) regex match when the city/zip
        # portions are exactly equal.
        if t[1] == i[1]:
            if re.match(i[0], t[0]):
                matchaddr.append(t[0] + ', ' + t[1])
                refaddr.append(i[0] + ', ' + i[1])

Now I can use my beloved regex again (provided the expressions don't cause additional problems, such as catastrophic backtracking). Also, this code is fast because perfect matches on the city and zip code attributes are found first. If I try to allow flexible matching on the city and zip code as well, speed will likely be greatly sacrificed. Unfortunately, it may have to come to that point (input contains messy city and zip code attributes as well).
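Note that the equality test above still walks the full ref x input cross product; a true index, in the spirit of the "lookup by city and zip" idea, would bucket the input records by their city/zip string once and then make a single pass over ref. A sketch of that (by_key is a name introduced here for illustration):

import re
from collections import defaultdict

# Bucket input records by their city/zip string so each ref record is
# only compared against the few inputs that share its key.
by_key = defaultdict(list)
for t in input:
    by_key[t[1]].append(t)

matchaddr = []
refaddr = []
for i in ref:
    for t in by_key.get(i[1], ()):
        if re.match(i[0], t[0]):
            matchaddr.append(t[0] + ', ' + t[1])
            refaddr.append(i[0] + ', ' + i[1])

This replaces the nested scan with one pass over each list plus cheap dict lookups, which should matter far more as ref approaches a million records.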
