Precision, recall, F1 score equal with sklearn
I am trying to compare different distance-calculation methods and different voting systems in the k-nearest-neighbours algorithm. Currently my problem is that, no matter what I do, the precision, recall, and F-score produced by scikit-learn's precision_recall_fscore_support
are all exactly the same. Why is that? I have tried it on several datasets (iris, glass, and wine). What am I doing wrong? The code so far:
    #!/usr/bin/env python3
    from collections import Counter
    from data_loader import DataLoader
    from sklearn.metrics import precision_recall_fscore_support as pr
    import random
    import math

    def euclidean_distance(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def manhattan_distance(x, y):
        # abs() must be applied per component, not to the whole list
        return sum(abs(a - b) for a, b in zip(x, y))

    def get_neighbours(training_set, test_instance, k):
        names = [instance[4] for instance in training_set]
        features = [instance[0:4] for instance in training_set]
        distances = [euclidean_distance(test_instance, f) for f in features]
        distances = list(zip(distances, names))
        # debug: report zero-distance (duplicate) training instances
        print(list(filter(lambda x: x[0] == 0.0, distances)))
        # sorted() returns a new list; the result must be kept
        distances = sorted(distances, key=lambda x: x[0])
        return distances[:k]

    def plurality_voting(nearest_neighbours):
        classes = [neighbour[1] for neighbour in nearest_neighbours]
        return Counter(classes).most_common()[0][0]

    def weighted_distance_voting(nearest_neighbours):
        # weight each neighbour by the inverse of its distance;
        # the largest weight (i.e. the smallest distance) should win, so use max, not min
        weights = [(1 / neighbour[0], neighbour[1]) for neighbour in nearest_neighbours]
        index = weights.index(max(weights))
        return nearest_neighbours[index][1]

    def weighted_distance_squared_voting(nearest_neighbours):
        # note: 1 / x[0] * x[0] parses as (1 / x[0]) * x[0] == 1; the square needs parentheses
        weights = [1 / (neighbour[0] ** 2) for neighbour in nearest_neighbours]
        index = weights.index(max(weights))
        return nearest_neighbours[index][1]

    def main():
        data = DataLoader.load_arff("datasets/iris.arff")
        dataset = data["data"]
        # random.seed(42)
        random.shuffle(dataset)
        train = dataset[:100]
        test = dataset[100:150]
        classes = [instance[4] for instance in test]
        predictions = []
        for test_instance in test:
            prediction = weighted_distance_voting(get_neighbours(train, test_instance[0:4], 15))
            predictions.append(prediction)
        print(pr(classes, predictions, average="micro"))

    if __name__ == "__main__":
        main()
The problem is that you are using "micro" averaging.
As the scikit-learn documentation puts it: "Note that for 'micro'-averaging in a multiclass setting all labels included will produce equal precision, recall and F, whereas 'weighted' averaging may produce an F-score that is not between precision and recall." http://scikit-learn.org/stable/modules/model_evaluation.html
However, if you use the labels parameter to exclude the majority label, micro-averaging is no longer the same as accuracy, and precision is no longer equal to recall.
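The reason is easy to see by hand: in a multiclass setting, every misclassified sample is exactly one false positive (for the predicted class) and one false negative (for the true class), so the pooled FP and FN totals are always equal, which forces micro precision, recall, and F1 to coincide. Once a label is excluded from the counts, that symmetry breaks. A minimal pure-Python sketch (the `micro_scores` helper and the toy labels are illustrative, not taken from the question's datasets):

```python
import math


def micro_scores(y_true, y_pred, labels=None):
    """Micro-averaged precision/recall/F1, optionally restricted to a label subset."""
    if labels is None:
        labels = sorted(set(y_true) | set(y_pred))
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        for c in labels:
            if p == c and t == c:
                tp += 1          # correct prediction of class c
            elif p == c:
                fp += 1          # predicted c, but the true class is different
            elif t == c:
                fn += 1          # true class c, but something else was predicted
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


y_true = ["setosa", "setosa", "versicolor", "virginica", "virginica"]
y_pred = ["setosa", "versicolor", "versicolor", "virginica", "setosa"]

# With all labels, every error adds one FP and one FN, so all three scores match.
print(micro_scores(y_true, y_pred))

# Excluding one label breaks the FP == FN symmetry: precision != recall.
print(micro_scores(y_true, y_pred, labels=["setosa", "virginica"]))
```

With all labels the two error pairs contribute symmetrically (TP=3, FP=2, FN=2), giving 0.6 for all three scores; restricted to two labels the counts become TP=2, FP=1, FN=2, so precision (2/3) and recall (1/2) diverge, matching the documentation's caveat.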