pandas: calculate Jaccard similarity for every row based on the value in another column
I have a dataframe like the one below, only with more rows:
import pandas as pd

data = {'First': ['First value', 'Second value', 'Third value'],
        'Second': [['old', 'new', 'gold', 'door'],
                   ['old', 'view', 'bold', 'door'],
                   ['new', 'view', 'world', 'window']]}
df = pd.DataFrame(data, columns=['First', 'Second'])
To compute the Jaccard similarity, I found this function online (it is not my own solution):
def lexical_overlap(doc1, doc2):
    words_doc1 = set(doc1)
    words_doc2 = set(doc2)
    intersection = words_doc1.intersection(words_doc2)
    union = words_doc1.union(words_doc2)
    return float(len(intersection)) / len(union) * 100
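As a quick sanity check (a minimal sketch using the first two rows of the sample data above), the function reduces each list to a set and divides the size of the intersection by the size of the union:

```python
# Jaccard similarity of the first two sample rows, step by step
doc1 = ['old', 'new', 'gold', 'door']
doc2 = ['old', 'view', 'bold', 'door']

words_doc1, words_doc2 = set(doc1), set(doc2)
intersection = words_doc1 & words_doc2   # {'old', 'door'} -> 2 shared words
union = words_doc1 | words_doc2          # 6 distinct words in total
score = len(intersection) / len(union) * 100
print(round(score, 2))  # 33.33
```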
What I would like is for the metric to take each row of the Second column as a doc, compare every pair of rows iteratively, and print the metric together with the row names from the First column, like this:
First value and Second value = 80
First value and Third value = 95
Second value and Third value = 90
Since your data is not large, you can try a slightly different approach with broadcasting:
# one-hot encode the words in each row
# (on pandas >= 2.0 use .groupby(level=0).sum() -- sum(level=0) was removed)
s = pd.get_dummies(df.Second.explode()).groupby(level=0).sum().values

# pair-wise Jaccard: intersection counts / union sizes
(s @ s.T) / (s | s[:, None, :]).sum(-1) * 100
Output:

array([[100.        ,  33.33333333,  14.28571429],
       [ 33.33333333, 100.        ,  14.28571429],
       [ 14.28571429,  14.28571429, 100.        ]])
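To get the "First value and Second value = …" format you asked for, one sketch (assuming the matrix above) is to read back the upper triangle of the pairwise matrix with the labels from the First column:

```python
import numpy as np
import pandas as pd

data = {'First': ['First value', 'Second value', 'Third value'],
        'Second': [['old', 'new', 'gold', 'door'],
                   ['old', 'view', 'bold', 'door'],
                   ['new', 'view', 'world', 'window']]}
df = pd.DataFrame(data)

# one-hot word matrix, one row per dataframe row
s = pd.get_dummies(df.Second.explode()).groupby(level=0).sum().values

# (s @ s.T) counts shared words; the broadcasted OR counts the union
jac = (s @ s.T) / (s | s[:, None, :]).sum(-1) * 100

# upper triangle (k=1 skips the diagonal) gives each unordered pair once
for i, j in zip(*np.triu_indices(len(df), k=1)):
    print(f"{df.First[i]} and {df.First[j]} = {jac[i, j]:.2f}")
```

This prints one line per pair, e.g. `First value and Second value = 33.33`.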
Well, here is how I would do it:
from itertools import combinations

for i, j in combinations(range(len(df)), 2):
    firstlist = df.iloc[i, 1]
    secondlist = df.iloc[j, 1]
    value = round(lexical_overlap(firstlist, secondlist), 2)
    print(f"{df.iloc[i, 0]} and {df.iloc[j, 0]}'s value is: {value}")
Output:
First value and Second value's value is: 33.33
First value and Third value's value is: 14.29
Second value and Third value's value is: 14.29