Cleaning Twitter data pandas python
I'm trying to clean Twitter data held in a pandas DataFrame. I seem to be missing a step: after processing each tweet, I think I never overwrite the old tweet with the new cleaned one, because when I save the file I don't see any change in the tweets. What am I missing?
import pandas as pd
import re
import emoji
import nltk
nltk.download('words')
words = set(nltk.corpus.words.words())
trump_df = pd.read_csv('new_Trump.csv')
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @mentions
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", tweet)  # Remove links
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove emojis
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet)
                     if w.lower() in words or not w.isalpha())  # Remove non-English words (not 100% reliable)
    print(tweet)
trump_df.to_csv('new_Trump.csv')
As you noticed, you never store the data back. Let's create a function that does all the work, then pass it to the DataFrame with `map`. This is more efficient than looping over every value in the DataFrame and storing each one in a list (option B).
def cleaner(tweet):
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @mentions
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", tweet)  # Remove links
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove emojis
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet)
                     if w.lower() in words or not w.isalpha())  # Remove non-English words
    return tweet
trump_df['tweet'] = trump_df['tweet'].map(cleaner)
trump_df.to_csv('')  # Specify location
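The core issue can be seen with a tiny reproducible sketch (the sample tweets here are made up): reassigning the loop variable only rebinds a local name and never touches the DataFrame, while assigning the result of `map` back to the column actually stores the cleaned values.

```python
import pandas as pd

df = pd.DataFrame({'tweet': ['hello @user', 'great day #fun']})

# Reassigning the loop variable does NOT modify the DataFrame:
for tweet in df['tweet']:
    tweet = tweet.replace('#', '')
print(df['tweet'].tolist())  # ['hello @user', 'great day #fun'] -- unchanged

# Assigning the mapped result back DOES modify the column:
df['tweet'] = df['tweet'].map(lambda t: t.replace('#', ''))
print(df['tweet'].tolist())  # ['hello @user', 'great day fun']
```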
This will overwrite the `tweet` column with the modified values.
As mentioned, I believe this option will be slightly less efficient, but it's as simple as creating a list before the `for` loop and filling it with each clean tweet.
clean_tweets = []
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @mentions
    ## Here's where all the cleaning takes place
    clean_tweets.append(tweet)
trump_df['tweet'] = clean_tweets
trump_df.to_csv('')  # Specify location
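One more pitfall worth guarding against when you save back to the same file: by default `to_csv` also writes the row index, so each save-and-reload cycle adds an extra `Unnamed: 0` column. Passing `index=False` avoids this (a minimal sketch using an in-memory buffer instead of a real file path):

```python
import pandas as pd
from io import StringIO

df = pd.DataFrame({'tweet': ['clean tweet one', 'clean tweet two']})

# Default behaviour: the index is written and comes back as an unnamed column.
buf = StringIO()
df.to_csv(buf)
buf.seek(0)
print(pd.read_csv(buf).columns.tolist())  # ['Unnamed: 0', 'tweet']

# index=False keeps the file's columns identical to the DataFrame's.
buf = StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
print(pd.read_csv(buf).columns.tolist())  # ['tweet']
```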