Reading a text file and splitting it into single words in python
I have this text file made up of numbers and words, for example like this: 09807754 18 n 03 aristocrat 0 blue_blood 0 patrician
I want to split it so that each word or number comes up on a new line.
A whitespace separator would be ideal, as I would like the words with dashes to stay connected.
This is what I have so far:
f = open('words.txt', 'r')
for word in f:
    print(word)
Not too sure how to go on from here; I would like this to be the output:
09807754
18
n
3
aristocrat
...
Given this file:
$ cat words.txt
line1 word1 word2
line2 word3 word4
line3 word5 word6
If you want one word at a time (ignoring the meaning of spaces vs. line breaks in the file):
with open('words.txt', 'r') as f:
    for line in f:
        for word in line.split():
            print(word)
Prints:
line1
word1
word2
line2
...
word6
Similarly, if you want to flatten the file into a single flat list of words, you could do something like this:
with open('words.txt') as f:
    flat_list = [word for line in f for word in line.split()]
>>> flat_list
['line1', 'word1', 'word2', 'line2', 'word3', 'word4', 'line3', 'word5', 'word6']
That can be used with print('\n'.join(flat_list)) to create the same output as the first example.
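As a quick check, here is a minimal sketch (with the list written out literally instead of read from a file) showing that joining the flat list with newlines reproduces the one-word-per-line output:

```python
# a flat list of words, as produced by the comprehension above
flat_list = ['line1', 'word1', 'word2', 'line2', 'word3', 'word4']
joined = '\n'.join(flat_list)
print(joined)  # one word per line, matching the first example's output
```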
Alternatively, if you want a nested list of the words in each line of the file (e.g., to create a matrix of rows and columns from a file):
with open('words.txt') as f:
    matrix = [line.split() for line in f]
>>> matrix
[['line1', 'word1', 'word2'], ['line2', 'word3', 'word4'], ['line3', 'word5', 'word6']]
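Once you have such a matrix, rows and columns fall out of plain indexing. A small sketch, using io.StringIO to stand in for the open file (an assumption for this illustration):

```python
import io

# io.StringIO mimics the opened words.txt file for this sketch
fake_file = io.StringIO("line1 word1 word2\nline2 word3 word4\nline3 word5 word6\n")
matrix = [line.split() for line in fake_file]

second_row = matrix[1]                    # one row of the matrix
first_column = [row[0] for row in matrix]  # one column across all rows
print(second_row)
print(first_column)
```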
If you want a regex solution that would allow you to filter for the wordN vs. lineN type words in the example file:
import re
with open("words.txt") as f:
    for line in f:
        for word in re.findall(r'\bword\d+', line):
            # wordN by wordN with no lineN
            print(word)
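The filtering effect of that pattern can be seen on a single sample line (the line literal below is an assumption standing in for the file):

```python
import re

line = "line1 word1 word2"
# \bword\d+ matches tokens beginning with "word" followed by digits,
# so the lineN tokens are filtered out
matches = re.findall(r'\bword\d+', line)
print(matches)  # → ['word1', 'word2']
```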
Or, if you want it to be a line-by-line generator with a regex:
with open("words.txt") as f:
    words = (word for line in f for word in re.findall(r'\w+', line))
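A generator expression produces nothing until it is consumed. A minimal sketch of driving it, using io.StringIO in place of the open file (an assumption for this demo):

```python
import io
import re

# io.StringIO stands in for the open file handle
f = io.StringIO("line1 word1 word2\nline2 word3 word4\n")
words = (word for line in f for word in re.findall(r'\w+', line))
result = list(words)  # consume the generator
print(result)
```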
f = open('words.txt')
for word in f.read().split():
    print(word)
As a supplement, if you are reading a very large file and don't want to read all of its content into memory at once, you might consider using a buffer and returning each word via yield:
def read_words(inputfile):
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(10240)
            if not buf:
                break
            # make sure we end on a space (word boundary)
            while not str.isspace(buf[-1]):
                ch = f.read(1)
                if not ch:
                    break
                buf += ch
            words = buf.split()
            for word in words:
                yield word
        yield ''  # handle the case where the file is empty
if __name__ == "__main__":
    for word in read_words('./very_large_file.txt'):
        process(word)
What you can do is use nltk to tokenize the words and then store all of the words in a list; this is what I did. If you don't know nltk, it stands for Natural Language Toolkit and is used to process natural language. Here are some resources if you want to get started: [http://www.nltk.org/book/]
import nltk
from nltk.tokenize import word_tokenize

file = open("abc.txt", newline='')
result = file.read()
words = word_tokenize(result)
for i in words:
    print(i)
The output will be like this:
09807754
18
n
03
aristocrat
0
blue_blood
0
patrician
with open(filename) as file:
    words = file.read().split()
It is a list of all the words in the file.
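For the question's sample line this one-liner gives exactly the desired tokens; note that split() keeps underscored words like blue_blood intact (the line literal below stands in for the file contents):

```python
# the question's sample line, as if returned by file.read()
text = "09807754 18 n 03 aristocrat 0 blue_blood 0 patrician"
words = text.split()
print(words)  # blue_blood stays one token
```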
import re
with open(filename) as file:
    words = re.findall(r"([a-zA-Z\-]+)", file.read())
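That character class keeps letters and dashes together while dropping the numbers, which matches the question's wish that dashed words stay connected. A sketch on a dashed variant of the sample line (the literal is an assumption for illustration):

```python
import re

# a dashed variant of the sample line, for illustration
text = "09807754 18 n 03 blue-blood patrician"
words = re.findall(r"([a-zA-Z\-]+)", text)
print(words)  # numbers are dropped, the dashed word stays whole
```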
Here is my completely functional approach, which avoids having to read and split lines at all. It makes use of the itertools module (note: for Python 3, replace itertools.imap with map):
import itertools

def readwords(mfile):
    byte_stream = itertools.groupby(
        itertools.takewhile(lambda c: bool(c),
                            itertools.imap(mfile.read,
                                           itertools.repeat(1))), str.isspace)
    return ("".join(group) for pred, group in byte_stream if not pred)
Sample usage:
>>> import sys
>>> for w in readwords(sys.stdin):
... print (w)
...
I really love this new method of reading words in python
I
really
love
this
new
method
of
reading
words
in
python
It's soo very Functional!
It's
soo
very
Functional!
>>>
I guess in your case, this would be the way to use the function:
with open('words.txt', 'r') as f:
    for word in readwords(f):
        print(word)
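Following the note about Python 3, a self-contained sketch of the same functional reader with the built-in map swapped in for itertools.imap, fed from io.StringIO instead of a real file (both substitutions are assumptions for this demo):

```python
import io
import itertools

def readwords(mfile):
    # Python 3 version: built-in map replaces itertools.imap
    byte_stream = itertools.groupby(
        itertools.takewhile(lambda c: bool(c),           # stop at EOF ('' is falsy)
                            map(mfile.read, itertools.repeat(1))),  # one char at a time
        str.isspace)                                     # group runs of space/non-space
    return ("".join(group) for pred, group in byte_stream if not pred)

result = list(readwords(io.StringIO("It's soo very Functional!")))
print(result)
```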