
Putting together sklearn pipeline + nested cross-validation for KNN regression

I'm trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that includes:

  • normalizing the features
  • feature selection (the best subset of the 20 numeric features, with no fixed count)
  • cross-validating the hyperparameter K in the range 1 to 20
  • cross-validating the model
  • using RMSE as the error metric

scikit-learn offers so many different options that I'm a bit overwhelmed trying to decide which classes I need.

Besides sklearn.neighbors.KNeighborsRegressor, I think I need:

sklearn.pipeline.Pipeline  
sklearn.preprocessing.Normalizer
sklearn.model_selection.GridSearchCV
sklearn.model_selection.cross_val_score

sklearn.feature_selection.SelectKBest
OR
sklearn.feature_selection.SelectFromModel

Can someone show me what defining this pipeline/workflow would look like? I think it should be something like this:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV

# build regression pipeline
pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and the feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# outer cross-validation on model, inner cross-validation on hyperparameters
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10), 
                         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2)

# scores are negative MSE, so take the absolute value and the square root to get RMSE
rmses = np.abs(scores)**(1/2)
avg_rmse = np.mean(rmses)
print(avg_rmse)

It doesn't seem to throw any errors, but some of my concerns are:

  • Am I performing the nested cross-validation correctly, so that my RMSE is unbiased?
  • If I want to select the final model based on the best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
  • Is SelectKBest with f_classif the best option for selecting the features for a KNeighborsRegressor model?
  • How can I see:
    • which subset of features was selected as best
    • which K was selected as best

Any help is greatly appreciated!

Your code seems fine.

As for using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV: I would do the same to make sure everything works properly, but the only way to test this is to remove one of the two and see if the results change.
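To make that check concrete, here is a minimal sketch (assuming the pipeline, parameters, X, and y defined in your code above): drop scoring from the inner grid, which then falls back to the estimator's default score (R² for a regressor), and compare the outer scores.

# same nested CV, but the inner GridSearchCV uses its default scoring
# (the pipeline's score method, i.e. R^2 for KNeighborsRegressor);
# if this changes the outer scores, the inner metric mattered
scores_default_inner = cross_val_score(
    GridSearchCV(pipeline, parameters, cv=10),
    X, y, cv=10, scoring="neg_mean_squared_error")
print(np.mean(np.abs(scores_default_inner)**(1/2)))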

SelectKBest is a good approach, but you could also use SelectFromModel or even other methods from the sklearn.feature_selection module.
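For example, here is a minimal sketch of how SelectFromModel could slot into the same pipeline; the Lasso selector and its alpha value are just assumptions for illustration (any estimator exposing coef_ or feature_importances_ would work):

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# SelectFromModel keeps features whose importance (here, Lasso coefficients)
# exceeds a threshold, instead of keeping a fixed number k
pipeline_sfm = Pipeline([('normalize', Normalizer()),
                         ('select', SelectFromModel(Lasso(alpha=0.1))),
                         ('regressor', KNeighborsRegressor())])

# tune the selection threshold instead of kbest__k
parameters_sfm = {'select__threshold': ['mean', 'median'],
                  'regressor__n_neighbors': list(range(1, 21))}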

Finally, to get the best parameters and the feature scores, I modified your code a bit, as shown below:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV


pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and the feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# changes here

grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error")

grid.fit(X, y)

# get the best parameters and the best estimator
print("the best estimator is \n {} ".format(grid.best_estimator_))
print("the best parameters are \n {}".format(grid.best_params_))

# grab the fitted SelectKBest step from the best pipeline
pip_steps = grid.best_estimator_.named_steps['kbest']

# get the feature scores, rounded to 2 decimal places
features_scores = ['%.2f' % elem for elem in pip_steps.scores_]
print("the features scores are \n {}".format(features_scores))

feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_]
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues))

# create tuples of (feature name, score, p-value) for the selected features

featurelist = ['age', 'weight']  # the full list of feature names, in column order

features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i])
                           for i in pip_steps.get_support(indices=True)]

# sort the tuples by score, in descending order

features_selected_tuple = sorted(features_selected_tuple,
                                 key=lambda feature: float(feature[1]), reverse=True)

# print the selected features with their scores and p-values
print('Selected Features, Scores, P-Values')
print(features_selected_tuple)

The results using my data:

the best estimator is
Pipeline(steps=[('normalize', Normalizer(copy=True, norm='l2')), ('kbest', SelectKBest(k=2, score_func=<function f_classif at 0x0000000004ABC898>)), ('regressor', KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
         metric_params=None, n_jobs=1, n_neighbors=18, p=2,
         weights='uniform'))])

the best parameters are
{'kbest__k': 2, 'regressor__n_neighbors': 18}

the features scores are
['8.98', '8.80']

the feature_pvalues is
['0.000', '0.000']

Selected Features, Scores, P-Values
[('correlation', '8.98', '0.000'), ('gene', '8.80', '0.000')]
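Note that grid.fit(X, y) above refits a single grid search on all of the data, so it reports one overall winner. If you instead want to see what each outer fold of the nested cross-validation picked, one option (a sketch, assuming the same pipeline, parameters, X, and y) is cross_validate with return_estimator=True, which keeps the fitted GridSearchCV from every outer fold:

from sklearn.model_selection import cross_validate

outer = cross_validate(
    GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10),
    X, y, cv=10, scoring="neg_mean_squared_error", return_estimator=True)

# each outer fold may choose a different feature count and K
for fold, grid in enumerate(outer['estimator']):
    print(fold, grid.best_params_)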
