How can we find the optimum K value in K-Nearest Neighbor?
I am learning ML on Udemy, and below is the code the instructor used in the lecture. I am not entirely happy with this code, because it produces many k values with nearly identical error rates (I have to check manually which k values differ only negligibly in error rate).
Is there any other way to find the optimal k value (n_neighbors)?
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Record the test-set error rate for each k from 1 to 39.
error_rate = []
for i in range(1, 40):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error_rate.append(np.mean(pred_i != y_test))
Plot the error rate against the K value:
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
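Instead of reading the best k off the plot by eye, the minimum of `error_rate` can be picked programmatically. A minimal sketch, using hypothetical error rates in place of the values computed by the loop above:

```python
import numpy as np

# Hypothetical error rates for k = 1, 2, 3, ... (stand-ins for the
# values the loop above would compute on real data).
error_rate = [0.10, 0.08, 0.08, 0.05, 0.06, 0.05, 0.07]

# np.argmin returns the index of the FIRST minimum, so ties are
# automatically broken in favor of the smallest (simplest) k.
best_k = int(np.argmin(error_rate)) + 1  # +1 because k starts at 1

print(best_k)
```

Note that this still selects k on a single train/test split; cross-validation (as in the answer below using `GridSearchCV`-style tooling) gives a more reliable estimate.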
sklearn provides GridSearchCV and other similar algorithms that run cross-validation and find the best parameters for you:
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X = iris.data
y = iris.target

# Search over k = 1..99 and both weighting schemes.
k_range = list(range(1, 100))
weight_options = ["uniform", "distance"]
param_grid = dict(n_neighbors=k_range, weights=weight_options)

knn = KNeighborsClassifier()
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)

print(grid.best_score_)
print(grid.best_params_)
print(grid.best_estimator_)
# 0.9800000000000001
# {'n_neighbors': 13, 'weights': 'uniform'}
# KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
# metric_params=None, n_jobs=None, n_neighbors=13, p=2,
# weights='uniform')
All such algorithms can be found here: https://scikit-learn.org/stable/modules/classes.html#hyper-parameter-optimizers
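When the grid is large, an exhaustive search over every combination can be slow. One of the alternatives on that page is RandomizedSearchCV, which samples a fixed number of parameter combinations instead of trying them all. A minimal sketch on the same iris data (the choice of `n_iter=20` is an arbitrary example, not a recommendation):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X, y = iris.data, iris.target

# Same search space as the GridSearchCV example above.
param_dist = {
    "n_neighbors": list(range(1, 100)),
    "weights": ["uniform", "distance"],
}

# Sample only 20 of the ~200 combinations; random_state makes the
# sampled combinations reproducible.
search = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_dist,
    n_iter=20,
    cv=10,
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)
```

The trade-off is that a randomized search may miss the single best combination, but with a reasonable `n_iter` it usually finds a near-optimal one at a fraction of the cost.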