
One class SVM toy example not understood

I've been playing around with one-class SVM. I think I understand the theory behind it (it attempts to separate the data from the origin). I tried a toy example which the algorithm should fit perfectly; however, it seems I'm missing something, since the algorithm doesn't classify all the training examples as non-anomalies. Code:

import numpy as np
from sklearn import svm
import matplotlib.pyplot as plt

# Grid for evaluating the decision function
xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))

# Five training points: the corners and the centre of a square
x = np.array([1, 3, 1, 3, 2])
y = np.array([1, 1, 3, 3, 2])
feature = np.vstack((x, y)).T

clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(feature)
y_pred_train = clf.predict(feature)

Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
print(y_pred_train)
print(feature)

plt.title("Novelty Detection")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.PuBu)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='darkred')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
b1 = plt.scatter(feature[:, 0], feature[:, 1], c='white', s=40, edgecolors='k')
plt.show()

The prediction on the training set is [ 1 -1 1 1 1], which does not make sense.

Please advise.

One-class SVM treats a fraction of your dataset as outliers by design. The `nu` parameter (here 0.1) is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors, so the fitted boundary is allowed to leave some training points outside it. This is why you see one "wrong" label among the predicted labels; lowering `nu` reduces the number of training points flagged as anomalies.
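To see the effect of `nu`, here is a minimal sketch (using the same five training points as the question) that refits the model for a few values of `nu` and counts how many training points are flagged as anomalies; the exact counts depend on the solver, but the general trend is that smaller `nu` flags fewer points:

```python
import numpy as np
from sklearn import svm

# Same toy data as in the question
x = np.array([1, 3, 1, 3, 2])
y = np.array([1, 1, 3, 3, 2])
feature = np.vstack((x, y)).T

# Count training points predicted as -1 (anomaly) for several nu values
flagged = {}
for nu in (0.5, 0.1, 0.01):
    clf = svm.OneClassSVM(nu=nu, kernel="rbf", gamma=0.1)
    pred = clf.fit(feature).predict(feature)
    flagged[nu] = int((pred == -1).sum())
    print(f"nu={nu}: {flagged[nu]} of {len(feature)} points flagged as anomalies")
```

With a very small `nu` the boundary is allowed almost no training errors, so the model comes much closer to enclosing all five points.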
