
NLTK - statistics count extremely slow with big corpus

I'd like to see basic statistics about my corpus, like word/sentence counts, distributions, etc. I have a file tokens_corpus_reader_ready.txt which contains 137,000 lines of tagged example sentences in this format:

Zur/APPRART Zeit/NN kostenlos/ADJD aber/KON auch/ADV nur/ADV 11/CARD kW./NN Zur/APPRART Zeit/NN anscheinend/ADJD kostenlos/ADJD ./$.
...
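(As an aside, not part of the original question: this word/TAG format is what NLTK's tagged-corpus machinery expects. A minimal sketch of how one such line is split into (word, tag) pairs with nltk.tag.str2tuple, the helper TaggedCorpusReader uses by default; the line value is copied from the example above.)

# Sketch: parse one word/TAG line into (word, tag) pairs.
from nltk.tag import str2tuple

line = 'Zur/APPRART Zeit/NN kostenlos/ADJD aber/KON auch/ADV nur/ADV 11/CARD kW./NN'
pairs = [str2tuple(token) for token in line.split()]
print(pairs[:3])  # [('Zur', 'APPRART'), ('Zeit', 'NN'), ('kostenlos', 'ADJD')]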

I also have a TaggedCorpusReader subclass for which I wrote a describe() method:

import time

import nltk
from nltk.corpus.reader import TaggedCorpusReader


class CSCorpusReader(TaggedCorpusReader):
    def __init__(self):
        # raw_corpus_path (defined elsewhere in my setup) points at the corpus directory
        TaggedCorpusReader.__init__(self, raw_corpus_path, 'tokens_corpus_reader_ready.txt')

    def describe(self):
        """
        Performs a single pass of the corpus and
        returns a dictionary with a variety of metrics
        concerning the state of the corpus.

        Modified method from https://github.com/foxbook/atap/blob/master/snippets/ch03/reader.py
        """
        started = time.time()

        # Structures to perform counting.
        counts = nltk.FreqDist()
        tokens = nltk.FreqDist()

        # Perform single pass over paragraphs, tokenize and count
        for sent in self.sents():
            print(time.time())
            counts['sents'] += 1

            for word in self.words():
                counts['words'] += 1
                tokens[word] += 1

        return {
            'sents':  counts['sents'],
            'words':  counts['words'],
            'vocab':  len(tokens),
            'lexdiv': float(counts['words']) / float(len(tokens)),
            'secs':   time.time() - started,
        }

If I run the describe() method like this in IPython:

>> corpus = CSCorpusReader()
>> print(corpus.describe())

There is about a 7-second delay between sentences:

1543770777.502544
1543770784.383989
1543770792.2057862
1543770798.992075
1543770805.819034
1543770812.599932
...

If I run the same thing with just a few sentences in tokens_corpus_reader_ready.txt, the timing is perfectly reasonable:

1543771884.739753
1543771884.74035
1543771884.7408729
1543771884.7413561
{'sents': 4, 'words': 212, 'vocab': 42, 'lexdiv': 5.0476190476190474, 'secs': 0.002869129180908203}
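(For reference, lexdiv here is just tokens per type: 212 words / 42 vocabulary items ≈ 5.05, which matches the output above.)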

Where does this behavior come from and how can I fix it?

Edit 1

By operating on lists instead of accessing the corpus itself on every iteration, the time went down to about 3 seconds per sentence, which is still very long:

    sents = list(self.sents())
    words = list(self.words())

    # Perform single pass over paragraphs, tokenize and count
    for sent in sents:
        print(time.time())
        counts['sents'] += 1

        for word in words:
            counts['words'] += 1
            tokens[word] += 1

Right here is your problem: for each sentence, you read the entire corpus with the words() method. With roughly 137,000 sentences, the inner loop walks every word in the corpus once per sentence, so the total work is quadratic in the corpus size; no wonder it's taking so long. (Your Edit 1 keeps the same nested loop, so it is still quadratic; turning the readers into lists only removes the repeated file I/O.)

for sent in self.sents():
    print(time.time())
    counts['sents'] += 1

    for word in self.words():
        counts['words'] += 1
        tokens[word] += 1

In fact, each sentence is already tokenized into words, so this is what you meant:

for sent in self.sents():
    print(time.time())
    counts['sents'] += 1

    for word in sent:
        counts['words'] += 1
        tokens[word] += 1
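Putting it together, here is a minimal sketch of the corrected describe() in full (my reconstruction, not code from the answer); the only change from the question's version is the inner loop, so the pass is now linear in the size of the corpus:

def describe(self):
    """
    Single pass over the corpus; counts sentences, words and vocabulary.
    """
    started = time.time()

    counts = nltk.FreqDist()
    tokens = nltk.FreqDist()

    for sent in self.sents():
        counts['sents'] += 1

        # Iterate the already-tokenized sentence instead of self.words(),
        # which would re-read the whole corpus for every sentence.
        for word in sent:
            counts['words'] += 1
            tokens[word] += 1

    return {
        'sents':  counts['sents'],
        'words':  counts['words'],
        'vocab':  len(tokens),
        'lexdiv': float(counts['words']) / float(len(tokens)),
        'secs':   time.time() - started,
    }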
