
How can I preserve hyphenated words when building a text document matrix with the python textmining module?

In the code below I compare a piece of text against a set of stop words and return a list of the words in the text that are not in the stop word set. I then join the word list into a string so that I can use it with the textmining module to create a term-document matrix.

I have checks in the code showing that the hyphenated words survive in both the list and the string, but as soon as I pass them into the TDM part of the code, the hyphenated words get broken apart. Is there a way to keep hyphenated words intact in the textmining module and the TDM?

import re

f = open("words")  # dictionary of stop words, one per line
stops = set()
for line in f:
    stops.add(line.strip())

f = open("azathoth")  # Azathoth (1922)
azathoth = list()
for line in f:
    azathoth.extend(re.findall(r"[A-Za-z\-']+", line.strip()))  # [A-z] would also match [\]^_` and backtick

azathothcount = list()
for w in azathoth:
    if w not in stops:  # keep only words that are not stop words
        azathothcount.append(w)

print azathothcount[1:10]
raw_input('Press Enter...')

azathothstr = ' '.join(azathothcount)
print azathothstr
raw_input('Press Enter...')

import textmining

def termdocumentmatrix_example():
    doc1 = azathothstr

    tdm = textmining.TermDocumentMatrix()
    tdm.add_doc(doc1)

    tdm.write_csv('matrixhp.csv', cutoff=1)

    for row in tdm.rows(cutoff=1):
        print row

raw_input('Press Enter...')
termdocumentmatrix_example()

The textmining package defaults to its own simple_tokenize function when the TermDocumentMatrix class is initialized, and add_doc() pushes your text through simple_tokenize() before adding it to the tdm.

help(textmining) produces, in part:

class TermDocumentMatrix(__builtin__.object)
 |  Class to efficiently create a term-document matrix.
 |  
 |  The only initialization parameter is a tokenizer function, which should
 |  take in a single string representing a document and return a list of
 |  strings representing the tokens in the document. If the tokenizer
 |  parameter is omitted it defaults to using textmining.simple_tokenize
 |  
 |  Use the add_doc method to add a document (document is a string). Use the
 |  write_csv method to output the current term-document matrix to a csv
 |  file. You can use the rows method to return the rows of the matrix if
 |  you wish to access the individual elements without writing directly to a
 |  file.
 |  
 |  Methods defined here:
 |  
 |  __init__(self, tokenizer=<function simple_tokenize>)
 |
 |  ...
 |
 |  simple_tokenize(document)
 |      Clean up a document and split into a list of words.
 |
 |      Converts document (a string) to lowercase and strips out
 |      everything which is not a lowercase letter.
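
You can see the effect directly by running simple_tokenize on a hyphenated word. A minimal check; the expected output is inferred from the docstring above rather than verified against the module's source:

import textmining

# the hyphen is not a lowercase letter, so per the docstring it gets
# stripped and the word is split in two
print textmining.simple_tokenize('Azathoth is blahbitty-blah')
# expected: ['azathoth', 'is', 'blahbitty', 'blah']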

So you have to roll your own tokenizer that doesn't split on hyphens and pass it in when you initialize the TermDocumentMatrix class.

In my view, it's best for this process to preserve the rest of simple_tokenize()'s behaviour, minus only its removal of hyphenated words, so you can route the hyphenated words around that function. Below, I pull the hyphenated words out of the document, push the remainder through simple_tokenize(), and then merge the two lists (hyphenated words + simple_tokenize() results) before adding them to the tdm:

doc1 = 'blah "blah" blahbitty-blah, in-the bloopity blip bleep br-rump! '

import re
import textmining

def toknzr(txt):
    # pull out every hyphenated word (one or more internal hyphens)
    hyph_words = re.findall(r'\w+(?:-\w+)+', txt)
    # build a pattern that strips those words from the text
    remove = '|'.join(hyph_words)
    regex = re.compile(r'\b(' + remove + r')\b', flags=re.IGNORECASE)
    simple = regex.sub("", txt)
    # tokenize what's left with the default tokenizer, then put the
    # hyphenated words back
    return hyph_words + textmining.simple_tokenize(simple)

tdm = textmining.TermDocumentMatrix(tokenizer=toknzr)
tdm.add_doc(doc1)
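
As a quick sanity check, printing the tokenizer's output should show the hyphenated words intact. The expected list assumes simple_tokenize strips the leftover punctuation as its docstring describes; the hyphenated words come first because of the list concatenation:

print toknzr(doc1)
# expected: ['blahbitty-blah', 'in-the', 'br-rump', 'blah', 'blah', 'bloopity', 'blip', 'bleep']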

This may not be the most Pythonic way to build your own tokenizer (feedback appreciated!), but the main point here is that you have to initialize the class with a new tokenizer rather than relying on the default simple_tokenize().
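
An alternative is a single-pass tokenizer that keeps internal hyphens while reproducing simple_tokenize()'s lowercasing and letters-only cleanup. A sketch, assuming that behaviour matches the docstring quoted above; hyphen_tokenize is a hypothetical name, not part of the textmining package:

import re
import textmining

def hyphen_tokenize(document):
    # lowercase, then keep runs of letters optionally joined by
    # internal hyphens or apostrophes
    return re.findall(r"[a-z]+(?:[-'][a-z]+)*", document.lower())

tdm = textmining.TermDocumentMatrix(tokenizer=hyphen_tokenize)

Unlike the list-concatenation approach above, this also preserves the original token order, though order does not matter for a term-document matrix.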
