
python - increase efficiency of large-file search by readlines(size)

I am new to Python and currently using Python 2. I have several source files, each containing a huge amount of data (about 19 million lines). They look like the following:

apple   \t N   \t apple
n&apos
garden  \t N   \t garden
b\ta\md 
great   \t Adj \t great
nice    \t Adj \t (unknown)
etc

My task is to search the 3rd column of each file for certain target words, and every time a target word is found in the corpus, the 10 words before and after it have to be added to a multidimensional dictionary.

EDIT: lines containing '&', '\\' or the string '(unknown)' should be excluded.
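For concreteness, this is roughly the nested structure the code below builds; the exact shape is inferred from the counting code further down, so treat it as an assumption:

# assumed shape: targets[lemma][pos][context_lemma][context_pos] -> count
targets = {
    "apple": {                   # target lemma (column 3)
        "N": {                   # its POS tag (column 2)
            "garden": {"N": 2},  # context lemma -> {context POS: count}
            "great": {"Adj": 1},
        }
    }
}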

I tried to solve this with readlines() and enumerate(), as shown in the code below. The code does what it is supposed to, but it is obviously not efficient enough for the amount of data in the source files.

I know that readlines() or read() should not be used for huge data sets, since they load the whole file into memory. Still, reading the file line by line, I did not manage to use the enumerate approach to get the 10 words before and after a target word. I also cannot use mmap, since I do not have permission to use it on these files.

So, I figured readlines() with some size limit would be the most efficient solution. However, doesn't that introduce errors? Each time the size limit is reached, a target word near the end of a chunk would not be captured, because the chunk simply cuts off its 10-word context.
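For reference, a minimal sketch of a boundary-safe alternative: stream the file through a fixed-size sliding window with collections.deque instead of readlines(size), so no chunk boundary can cut off the context:

from collections import deque

def context_windows(line_iter, ctx=10):
    # sliding window of 2*ctx + 1 lines; the middle element is the current line
    win = deque(maxlen=2 * ctx + 1)
    for line in line_iter:
        win.append(line.strip())
        if len(win) == win.maxlen:
            yield win[ctx], list(win)  # current line plus its full before/after context
    # NOTE: in this sketch, lines within ctx of the start or end of the
    # file never get a full window and are therefore skipped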

def get_target_to_dict(file):
    targets_dict = {}
    with open(file) as f:
        for line in f:
            targets_dict[line.strip()] = {}
    return targets_dict

targets_dict = get_target_to_dict('targets_uniq.txt')
# browse directory and process each file 
# find the target words to include the 10 words before and after to the dictionary
# exclude lines starting with <,-,; to just have raw text

import os, re, gzip, csv

def get_co_occurence(path_file_dir, targets, results):
    lines = []
    for file in os.listdir(path_file_dir):
        if file.startswith('corpus'):
            path_file = os.path.join(path_file_dir, file)
            with gzip.open(path_file) as corpusfile:
                # PROBLEMATIC CODE HERE
                # lines = corpusfile.readlines()
                for line in corpusfile:
                    if re.match('[A-Z]|[a-z]', line):
                        if '(unknown)' in line:
                            continue
                        elif '\\' in line:
                            continue
                        elif '&' in line:
                            continue
                        lines.append(line)
                for i, line in enumerate(lines):
                    line = line.strip()
                    if re.match('[A-Z][a-z]', line):
                        parts = line.split('\t')
                        lemma = parts[2]
                        if lemma in targets:
                            pos = parts[1]
                            if pos not in targets[lemma]:
                                targets[lemma][pos] = {}
                            counts = targets[lemma][pos]
                            context = []
                            # look at the 10 previous lines
                            for j in range(max(0, i - 10), i):
                                context.append(lines[j])
                            # look at the next 10 lines
                            for j in range(i + 1, min(i + 11, len(lines))):
                                context.append(lines[j])
                            # END OF PROBLEMATIC CODE
                            for context_line in context:
                                context_line = context_line.strip()
                                parts_context = context_line.split('\t')
                                context_lemma = parts_context[2]
                                if context_lemma not in counts:
                                    counts[context_lemma] = {}
                                context_pos = parts_context[1]
                                if context_pos not in counts[context_lemma]:
                                    counts[context_lemma][context_pos] = 0
                                counts[context_lemma][context_pos] += 1
                csvwriter = csv.writer(results, delimiter='\t')
                for k, v in targets.iteritems():
                    for k2, v2 in v.iteritems():
                        for k3, v3 in v2.iteritems():
                            for k4, v4 in v3.iteritems():
                                csvwriter.writerow([str(k), str(k2), str(k3), str(k4), str(v4)])
                                # print(str(k) + "\t" + str(k2) + "\t" + str(k3) + "\t" + str(k4) + "\t" + str(v4))

results = open('results_corpus.csv', 'wb')
word_occurrence = get_co_occurence(path_file_dir, targets_dict, results)

I copied this part of the whole code for the sake of completeness, since it is all part of one function that builds a multidimensional dictionary out of all the extracted information and then writes it to a csv file.

I would really appreciate any hint or suggestion to make this code more efficient.

EDIT: I corrected the code so that it takes exactly the 10 words before and after the target word into account.

My idea was to create one buffer to store the 10 lines before the current line and another buffer to store the 10 lines after it. As the file is read, each line is pushed into the before-buffer, and the buffer is popped once its size exceeds the limit.

For the after-buffer, I clone a second iterator from the file iterator with itertools.tee. Both iterators then run in parallel inside the loop, with the clone running up to 10 iterations ahead to collect the 10 lines after the current one.

This avoids using readlines() and loading the whole file into memory. I hope it works for you in the real case.

EDIT: only fill the before-buffer if column 3 contains none of '&', '\\', '(unknown)'. Also changed split('\t') to split(), so it handles any whitespace as well as tabs.

import os, re, itertools
def get_co_occurence(path_file_dir, targets, results):
    excluded_words = ['&', '\\', '(unknown)'] # modify excluded words here 
    for file in os.listdir(path_file_dir): 
        if file.startswith('testset'): 
            path_file = os.path.join(path_file_dir, file) 
            with open(path_file) as corpusfile: 
                # CHANGED CODE HERE
                before_buf = [] # buffer to store before 10 lines 
                after_buf = []  # buffer to store after 10 lines 
                corpusfile, corpusfile_clone = itertools.tee(corpusfile) # clone file iterator to access next 10 lines 
                for line in corpusfile: 
                    line = line.strip() 
                    if re.match('[A-Z]|[a-z]', line): 
                        parts = line.split() 
                        lemma = parts[2]

                        # before-buffer handling: only add the line if it contains none of the excluded words
                        if not any(w in line for w in excluded_words): 
                            before_buf.append(line) # append to before buffer 
                        if len(before_buf) > 11: 
                            before_buf.pop(0) # keep at most 11 lines (current line + 10 before) 
                        # next buffer handling
                        while len(after_buf)<=10: 
                            try: 
                                after = next(corpusfile_clone) # advance 1 iterator 
                                after_lemma = '' 
                                after_tmp = after.split()
                                if re.match('[A-Z]|[a-z]', after) and len(after_tmp)>2: 
                                    after_lemma = after_tmp[2]
                            except StopIteration: 
                                break # the clone iterator exhausts first, since it runs up to 10 lines ahead 
                            if after_lemma and not any(w in after for w in excluded_words): 
                                after_buf.append(after) # append to buffer
                                # print 'after',z,after, ' - ',after_lemma
                        if (after_buf and line in after_buf[0]):
                            after_buf.pop(0) # pop off one ready for next

                        if lemma in targets: 
                            pos = parts[1] 
                            if pos not in targets[lemma]: 
                                targets[lemma][pos] = {} 
                            counts = targets[lemma][pos] 
                            # context = [] 
                            # look at 10 previous lines 
                            context= before_buf[:-1] # minus out current line 
                            # look at the next 10 lines 
                            context.extend(after_buf) 

                            # END OF CHANGED CODE
                            # CONTINUE YOUR STUFF HERE WITH CONTEXT
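For completeness, the tallying loop from the question should drop in at that point more or less unchanged; a sketch, reusing the names above and split() as per the EDIT:

# sketch: same counting as in the question, run over `context`
for context_line in context:
    parts_context = context_line.split()
    if len(parts_context) > 2:  # guard against short lines
        context_lemma = parts_context[2]
        context_pos = parts_context[1]
        inner = counts.setdefault(context_lemma, {})
        inner[context_pos] = inner.get(context_pos, 0) + 1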

A functional alternative, written in Python 3.5. I simplified your example and took only 5 words on each side. There are other simplifications with respect to junk-value filtering, but they only require minor modifications. I will use the package fn from PyPI to make this functional code more natural to read.

from typing import List, Tuple
from itertools import groupby, filterfalse
from fn import F

First we need to extract the column:

def getcol3(line: str) -> str:
    return line.split("\t")[2]

Then we need to break the lines into chunks separated by a predicate:

TARGET_WORDS = {"target1", "target2"}

# this is our predicate
def istarget(word: str) -> bool:
    return word in TARGET_WORDS        

Let's filter out the junk and write a function to take the first and the last 5 words:

def isjunk(word: str) -> bool:
    return word == "(unknown)"

def first_and_last(words: List[str]) -> Tuple[List[str], List[str]]:
    first = words[:5]
    last = words[-5:]
    return first, last

Now, let's take the groups:

words = (F() >> (map, str.strip) >> (filter, bool) >> (map, getcol3) >> (filterfalse, isjunk))(lines)
groups = groupby(words, istarget)
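For reference, if you would rather avoid the fn dependency, the F chain above desugars directly to plain builtins:

# equivalent without fn: strip, drop empty lines, take column 3, drop junk
words = filterfalse(isjunk, map(getcol3, filter(bool, map(str.strip, lines))))
groups = groupby(words, istarget)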

Now, process the groups:

def is_target_group(group: Tuple[str, List[str]]) -> bool:
    return istarget(group[0])

def unpack_word_group(group: Tuple[str, List[str]]) -> List[str]:
    return [*group[1]]

def unpack_target_group(group: Tuple[str, List[str]]) -> List[str]:
    return [group[0]]

def process_group(group: Tuple[str, List[str]]):
    return (unpack_target_group(group) if is_target_group(group) 
            else first_and_last(unpack_word_group(group)))

The final step is:

words = list(map(process_group, groups))

PS

Here is my test case:

from io import StringIO

buffer = """
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\ttarget2
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\ttarget1
_\t_\tword
_\t_\t(unknown)
_\t_\tword
_\t_\tword
_\t_\tword
"""

# this simulates an opened file
lines = StringIO(buffer)

Given this file, you get this output:

[(['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word']),
 (['target2'], ['target2']),
 (['word', 'word', 'word', 'word', 'word'],
  ['word', 'word', 'word', 'word', 'word']),
 (['target1'], ['target1']),
 (['word', 'word', 'word', 'word'], ['word', 'word', 'word', 'word'])]

From here you can take the first 5 words and the last 5 words.
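If you want the context per target explicitly, one possible final step (my addition, not part of the answer above, and assuming every element of words is a (first, last) pair as in the output shown) is to pair each target group with the tail of the previous word group and the head of the next one:

# sketch: collect (5 before, 5 after) for every target occurrence
def is_target_pair(pair):
    first, _ = pair
    return len(first) == 1 and istarget(first[0])

contexts = []
for i, pair in enumerate(words):
    if is_target_pair(pair):
        before = words[i - 1][1] if i > 0 else []              # last 5 of previous group
        after = words[i + 1][0] if i + 1 < len(words) else []  # first 5 of next group
        contexts.append((pair[0][0], before, after))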
