How to make text processing in a pandas DataFrame column faster for large textual data?
I have a large text file of chat data (chat.txt), over 1 GB, in the following format:
john|12-02-1999|hello#,there#,how#,are#,you#,tom$
tom|12-02-1999|hey#,john$,hows#, it#, goin#
mary|12-03-1999|hello#,boys#,fancy#,meetin#,ya'll#,here#
...
...
john|12-02-2000|well#,its#,been#,nice#,catching#,up#,with#,you#,and#, mary$
mary|12-03-2000|catch#,you#,on#,the#,flipside#,tom$,and#,john$
I want to process this text and summarize, for each user separately, the counts of certain keywords (say, 500 words: hello, nice, like, ..., dinner, no). The processing also involves stripping the trailing special character from every word.
The output would look like:
user  hello  nice  like   .....  dinner  No
Tom   10000  500   300    .....  6000    0
John  6000   1200  200    .....  3000    5
Mary  23     9000  10000  .....  100     9000
Here is my current Python solution:
import pandas as pd
from collections import Counter

chat_data = pd.read_csv("chat.txt", sep="|", names=["user", "date", "words"])
user_lst = chat_data.user.unique()

# Collect every user's words into one comma-separated string
user_grouped_data = pd.DataFrame(columns=["user", "words"])
user_grouped_data["user"] = user_lst
for i, row in user_grouped_data.iterrows():
    id = row["user"]
    temp = chat_data[chat_data["user"] == id]
    user_grouped_data.loc[i, "words"] = ",".join(temp["words"].tolist())

# Count keyword occurrences for each user
result = pd.DataFrame(columns=["user", "hello", "nice", "like", "...500 other keywords...", "dinner", "no"])
result["user"] = user_lst
for i, row in result.iterrows():
    id = row["user"]
    temp = user_grouped_data[user_grouped_data["user"] == id]
    words = temp.values.tolist()[0][1]
    word_lst = words.split(",")
    word_lst = [item[0:-1] for item in word_lst]  # strip the trailing special character
    t_dict = Counter(word_lst)
    for word in t_dict.keys():
        result.at[i, word] = t_dict.get(word)

result.to_csv("user_word_counts.csv")
This works on small data, but when chat.txt exceeds 1 GB this solution becomes extremely slow and unusable.
Is there any part of it I can improve that would help me process the data faster?
You can split the comma-separated column into a list, groupby the name column, explode the values in the lists with explode, reshape into your desired format with unstack or pivot_table, and then clean up the resulting multi-index columns with droplevel(), reset_index(), etc.
All of the below are vectorized pandas methods, so hopefully they are fast. Note: I read the data from the clipboard and passed header=None, so the three columns in the code below are labeled [0, 1, 2].
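If you are reading the original chat.txt from disk rather than the clipboard, a minimal equivalent (assuming the pipe-delimited format from the question) that produces the same integer column labels would be:

import pandas as pd

# header=None gives the integer column labels 0, 1, 2 used below
df = pd.read_csv("chat.txt", sep="|", header=None)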
Input:
df = pd.DataFrame({0: {0: 'john', 1: 'tom', 2: 'mary', 3: 'john', 4: 'mary'},
                   1: {0: '12-02-1999',
                       1: '12-02-1999',
                       2: '12-03-1999',
                       3: '12-02-2000',
                       4: '12-03-2000'},
                   2: {0: 'hello#,there#,how#,are#,you#,tom$ ',
                       1: 'hey#,john$,hows#, it#, goin#',
                       2: "hello#,boys#,fancy#,meetin#,ya'll#,here#",
                       3: 'well#,its#,been#,nice#,catching#,up#,with#,you#,and#, mary$',
                       4: 'catch#,you#,on#,the#,flipside#,tom$,and#,john$'}})
Code:
df[2] = df[2].replace([r'\#', r'\$'], '', regex=True).str.split(',')
df = (df.explode(2)
        .groupby([0, 2])[2].count()
        .rename('Count')
        .reset_index()
        .set_index([0, 2])
        .unstack(1)
        .fillna(0))
df.columns = df.columns.droplevel()
df = df.reset_index()
df
Out[1]:
2     0  goin   it  mary  and  are  been  boys  catch  catching  ...   on  \
0  john   0.0  0.0   1.0  1.0  1.0   1.0   0.0    0.0       1.0  ...  0.0
1  mary   0.0  0.0   0.0  1.0  0.0   0.0   1.0    1.0       0.0  ...  1.0
2   tom   1.0  1.0   0.0  0.0  0.0   0.0   0.0    0.0       0.0  ...  0.0

2  the  there  tom  tom   up  well  with  ya'll  you
0  0.0    1.0  0.0  1.0  1.0   1.0   1.0    0.0  2.0
1  1.0    0.0  1.0  0.0  0.0   0.0   0.0    1.0  1.0
2  0.0    0.0  0.0  0.0  0.0   0.0   0.0    0.0  0.0
You can also use .pivot_table instead of .unstack(), which saves you this line of code: df.columns = df.columns.droplevel():
df[2] = df[2].replace([r'\#', r'\$'], '', regex=True).str.split(',')
df = (df.explode(2)
        .groupby([0, 2])[2].count()
        .rename('Count')
        .reset_index()
        .pivot_table(index=0, columns=2, values='Count')
        .fillna(0)
        .astype(int)
        .reset_index())
df
Out[45]:
2     0  goin  it  mary  and  are  been  boys  catch  catching  ...  on  \
0  john     0   0     1    1    1     1     0      0         1  ...   0
1  mary     0   0     0    1    0     0     1      1         0  ...   1
2   tom     1   1     0    0    0     0     0      0         0  ...   0

2  the  there  tom  tom  up  well  with  ya'll  you
0    0      1    0    1   1     1     1      0    2
1    1      0    1    0   0     0     0      1    1
2    0      0    0    0   0     0     0      0    0

[3 rows x 31 columns]
If you are able to use scikit-learn, this is very easy with CountVectorizer:
from sklearn.feature_extraction.text import CountVectorizer

s = df['words'].str.replace(r"#|\$|\s+", "", regex=True)
model = CountVectorizer(tokenizer=lambda x: x.split(','))
# get_feature_names_out() is the current name of the removed get_feature_names();
# groupby(level=0).sum() replaces the removed sum(level=0)
df_final = pd.DataFrame(model.fit_transform(s).toarray(),
                        columns=model.get_feature_names_out(),
                        index=df.user).groupby(level=0, sort=False).sum()
Out[279]:
      and  are  been  boys  catch  catching  fancy  flipside  goin  hello  \
user
john    1    1     1     0      0         1      0         0     0      1
tom     0    0     0     0      0         0      0         0     1      0
mary    1    0     0     1      1         0      1         1     0      1

      here  hey  how  hows  it  its  john  mary  meetin  nice  on  the  there  \
user
john     0    0    1     0   0    1     0     1       0     1   0    0      1
tom      0    1    0     1   1    0     1     0       0     0   0    0      0
mary     1    0    0     0   0    0     1     0       1     0   1    1      0

      tom  up  well  with  ya'll  you
user
john    1   1     1     1      0    2
tom     0   0     0     0      0    0
mary    1   0     0     0      1    1
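Note that this counts every word in the chat, while the question only needs a fixed set of about 500 keywords. A minimal follow-up sketch, where keywords is a hypothetical list standing in for those 500 words; reindex keeps only the keyword columns and fills words a user never said with 0:

# 'keywords' is a hypothetical stand-in for the ~500 words from the question
keywords = ["hello", "nice", "like", "dinner", "no"]

# Keep only the keyword columns; missing ones are filled with 0
df_final = df_final.reindex(columns=keywords, fill_value=0)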
I am not sure how fast this approach is on a large DataFrame, but you can give it a try. First, remove the special characters and split the strings into lists of words, producing another column:
from itertools import chain
from collections import Counter

df['lists'] = df['words'].str.replace(r"#|\$", "", regex=True).str.split(",")
Now group by user, chain each user's lists into a single sequence, and count the occurrences with Counter:
df.groupby('user')['lists'].apply(chain.from_iterable)\
                           .apply(Counter)\
                           .apply(pd.Series)\
                           .fillna(0).astype(int)
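Whichever counting method you settle on, with a file over 1 GB you may also want to avoid loading it all at once. A minimal sketch (assuming the chat.txt format from the question, with a hypothetical chunk size) that streams the file via pandas' chunksize and aggregates per-user Counters:

import pandas as pd
from collections import Counter

totals = {}  # user -> Counter of word frequencies

# Stream the large file in pieces instead of reading it whole
for chunk in pd.read_csv("chat.txt", sep="|", names=["user", "date", "words"],
                         chunksize=100_000):
    # Strip '#'/'$' and whitespace, then split each line into a word list
    words = chunk["words"].str.replace(r"#|\$|\s+", "", regex=True).str.split(",")
    for user, word_list in zip(chunk["user"], words):
        totals.setdefault(user, Counter()).update(word_list)

# Assemble the final user x word count table
result = pd.DataFrame(totals).T.fillna(0).astype(int)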