
How do I split a string into a list of words?

How do I split a sentence and store each word in a list? For example, given a string like "these are words", how do I get a list like ["these", "are", "words"]?

Given a string sentence, this stores each word in a list called words:

words = sentence.split()
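
For the example string from the question, this gives:

>>> sentence = "these are words"
>>> words = sentence.split()
>>> words
['these', 'are', 'words']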

To split the string text on any consecutive runs of whitespace:

words = text.split()      

To split the string text on a custom delimiter such as ",":

words = text.split(",")   

The words variable will be a list and contain the words from text split on the delimiter.
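
A quick sketch with an illustrative comma-separated string (the contents of text are made up for the demo):

>>> text = "these,are,words"
>>> text.split(",")
['these', 'are', 'words']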

Use str.split():

Return a list of the words in the string, using sep as the delimiter ... If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace.

>>> line = "a sentence with a few words"
>>> line.split()
['a', 'sentence', 'with', 'a', 'few', 'words']
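
To illustrate the quoted behaviour, runs of spaces, tabs and newlines collapse into a single separator, and no empty strings appear at either end (the sample string here is illustrative):

>>> "  a\tsentence   with   extra   whitespace \n".split()
['a', 'sentence', 'with', 'extra', 'whitespace']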

Depending on what you plan to do with your sentence-as-a-list, you may want to look at the Natural Language Toolkit (NLTK). It deals heavily with text processing and evaluation. You can also use it to solve your problem:

import nltk
words = nltk.word_tokenize(raw_sentence)

This has the added benefit of splitting out punctuation.

Example:

>>> import nltk
>>> s = "The fox's foot grazed the sleeping dog, waking it."
>>> words = nltk.word_tokenize(s)
>>> words
['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', ',', 
'waking', 'it', '.']

This allows you to filter out any punctuation you don't want and use only words.
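
For example, one possible way to keep only the word tokens is to drop anything that is purely punctuation (a sketch reusing the words list from the example above together with string.punctuation):

>>> import string
>>> [w for w in words if w not in string.punctuation]
['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', 'waking', 'it']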

Please note that the other solutions using string.split() are better if you don't plan on doing any complex manipulation of the sentence.

[Edited]

How about this algorithm? Split text on whitespace, then trim punctuation. This carefully removes punctuation from the edge of words, without harming apostrophes inside words such as we're.

>>> text
"'Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad.'"

>>> text.split()
["'Oh,", 'you', "can't", 'help', "that,'", 'said', 'the', 'Cat:', "'we're", 'all', 'mad', 'here.', "I'm", 'mad.', "You're", "mad.'"]

>>> import string
>>> [word.strip(string.punctuation) for word in text.split()]
['Oh', 'you', "can't", 'help', 'that', 'said', 'the', 'Cat', "we're", 'all', 'mad', 'here', "I'm", 'mad', "You're", 'mad']

I want my python function to split a sentence (input) and store each word in a list

The str().split() method does this: it takes a string and splits it into a list:

>>> the_string = "this is a sentence"
>>> words = the_string.split(" ")
>>> print(words)
['this', 'is', 'a', 'sentence']
>>> type(words)
<type 'list'> # or <class 'list'> in Python 3.0

The problem you're having is because of a typo: you wrote print(words) instead of print(word):

Renaming the word variable to current_word, this is what you had:

def split_line(text):
    words = text.split()
    for current_word in words:
        print(words)

...when you should have done:

def split_line(text):
    words = text.split()
    for current_word in words:
        print(current_word)
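
Called on the question's example string, the corrected function prints one word per line:

>>> split_line("these are words")
these
are
words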

If for some reason you want to manually construct a list in the for loop, you would use the list append() method, perhaps because you want to lower-case all words (for example):

my_list = [] # make empty list
for current_word in words:
    my_list.append(current_word.lower())

Or, a bit neater, using a list comprehension:

my_list = [current_word.lower() for current_word in words]
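
A quick illustration with made-up mixed-case input:

>>> words = "These ARE Words".split()
>>> [current_word.lower() for current_word in words]
['these', 'are', 'words']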

If you want all the chars of a word/sentence in a list, do this:

print(list("word"))
#  ['w', 'o', 'r', 'd']


print(list("some sentence"))
#  ['s', 'o', 'm', 'e', ' ', 's', 'e', 'n', 't', 'e', 'n', 'c', 'e']

shlex has a .split() function. It differs from str.split() in that it does not preserve quotes and treats a quoted phrase as a single word:

>>> import shlex
>>> shlex.split("sudo echo 'foo && bar'")
['sudo', 'echo', 'foo && bar']

NB: it works well for Unix-like command-line strings. It doesn't work for natural-language processing.
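
For comparison, plain str.split() on the same string keeps the quote characters and breaks the quoted phrase apart:

>>> "sudo echo 'foo && bar'".split()
['sudo', 'echo', "'foo", '&&', "bar'"]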

I think you are confused because of a typo.

Replace print(words) with print(word) inside your loop to have every word printed on a different line.

Split the words without harming apostrophes inside words. See input_1 and input_2 below for examples (e.g. Moore's law).

import re

def split_into_words(line):
    # Match runs of word characters, allowing apostrophes inside a word
    # (e.g. "Moore's", "can't") but not at its edges.
    word_regex_improved = r"(\w[\w']*\w|\w)"
    word_matcher = re.compile(word_regex_improved)
    return word_matcher.findall(line)

# Example 1

input_1 = "computational power (see Moore's law) and "
split_into_words(input_1)

# output 
['computational', 'power', 'see', "Moore's", 'law', 'and']

# Example 2

input_2 = """Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad."""

split_into_words(input_2)
# output
['Oh',
 'you',
 "can't",
 'help',
 'that',
 'said',
 'the',
 'Cat',
 "we're",
 'all',
 'mad',
 'here',
 "I'm",
 'mad',
 "You're",
 'mad']
