
Pre-processing script not removing punctuation

I have some code that is supposed to pre-process a list of text documents. That is: given a list of text documents, it returns a list in which each document has been pre-processed. But for some reason it fails to remove punctuation.

import string  # needed below for string.punctuation

import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
nltk.download("stopwords")
nltk.download('punkt')
nltk.download('wordnet')


def preprocess(docs):
  """ 
  Given a list of documents, return each document as a string of tokens, 
  stripping out punctuation 
  """
  clean_docs = [clean_text(i) for i in docs]
  tokenized_docs = [tokenize(i) for i in clean_docs]
  return tokenized_docs

def tokenize(text):
  """ 
  Tokenizes text and removes stop words -- returning the remaining tokens as a single string 
  """
  stop_words = stopwords.words("english")
  nltk_tokenizer = nltk.WordPunctTokenizer().tokenize
  tokens = nltk_tokenizer(text)  
  result = " ".join([i for i in tokens if not i in stop_words])
  return result


def clean_text(text): 
  """ 
  Cleans text by lowercasing it
  and stripping out punctuation. 
  """
  new_text = make_lowercase(text)
  new_text = remove_punct(new_text)
  return new_text

def make_lowercase(text):
  new_text = text.lower()
  return new_text

def remove_punct(text):
  text = text.split()
  # Each whole word is compared against string.punctuation, so only a
  # token that is itself a punctuation character gets dropped here.
  new_text = " ".join(word for word in text if word not in string.punctuation)
  return new_text

# Get a list of titles  
s1 = "[UPDATE] I am tired"
s2 = "I am cold."

clean_docs = preprocess([s1, s2])
print(clean_docs)

This prints:

['[ update ] tired', 'cold .']

In other words, it is not removing punctuation: "[", "]" and "." all appear in the final output.

You are checking whether whole words appear in the punctuation string. Obviously [UPDATE] is not a punctuation character.
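A quick membership check makes the failure visible; this is a minimal sketch using only the standard string module:

import string

# `word in string.punctuation` is a substring test against the string
# of ASCII punctuation characters, so only a token that is itself
# punctuation matches.
print("[UPDATE]" in string.punctuation)  # False -- the whole token survives
print("[" in string.punctuation)         # True  -- a lone bracket would be dropped
print("cold." in string.punctuation)     # False -- the trailing period survives too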

Try searching for the punctuation inside the text and replacing it instead:

import string


def remove_punctuation(text: str) -> str:
    # Replace every punctuation character found anywhere in the text.
    for p in string.punctuation:
        text = text.replace(p, '')
    return text


if __name__ == '__main__':
    text = '[UPDATE] I am tired'
    print(remove_punctuation(text))

# output:
# UPDATE I am tired
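
For longer texts, the same per-character removal can be done in a single pass with str.translate. This is an equivalent sketch, not what the answer above used; the helper name remove_punctuation_translate is mine:

import string

# Build a translation table once: every punctuation character maps to
# None, which str.translate interprets as "delete this character".
PUNCT_TABLE = str.maketrans('', '', string.punctuation)

def remove_punctuation_translate(text: str) -> str:
    return text.translate(PUNCT_TABLE)

print(remove_punctuation_translate('[UPDATE] I am tired'))
# output:
# UPDATE I am tired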
