Bug in sklearn.cross_validation
There may be a bug in sklearn.cross_validation when using LeaveOneOut. The x_test and y_test are not being used by LeaveOneOut; instead, the validation is done with x_train and y_train.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

clf = LinearRegression().fit(x, y)
cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('y_pred_USING_x_test: ', y_pred_USING_x_test,
          'y_pred_USING_x_train: ', y_pred_USING_x_train)
y_pred_USING_x_test: [ 13.2] y_pred_USING_x_train: [ 14.3 15.4 16.5 17.6]
y_pred_USING_x_test: [ 14.3] y_pred_USING_x_train: [ 13.2 15.4 16.5 17.6]
y_pred_USING_x_test: [ 15.4] y_pred_USING_x_train: [ 13.2 14.3 16.5 17.6]
y_pred_USING_x_test: [ 16.5] y_pred_USING_x_train: [ 13.2 14.3 15.4 17.6]
y_pred_USING_x_test: [ 17.6] y_pred_USING_x_train: [ 13.2 14.3 15.4 16.5]
y_pred_USING_x_test gives a single value in each pass of the for loop, which makes no sense! y_pred_USING_x_train seems to be what LeaveOneOut is actually using.
The result of the following code is completely unrelated!
bug = cross_val_predict(clf, x, y, cv=cv)
print('bug: ', bug)
bug: [ 15. 14.85714286 14.5 15.85714286 21.5 ]
Any explanation is welcome.
Each sample is used as the test set (a singleton). This means x_test will be a one-element array, and clf.predict(x_test) will return a one-element array of predictions. You can see this in your output. x_train will be the training set without the one element chosen for x_test. This can be confirmed by adding the following lines inside the for loop:
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    if len(x_test) != 1 or (len(x_train) + 1 != len(x)):  # Confirmation
        raise Exception
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('predicting for', x_test, 'and expecting', y_test, 'and got', y_pred_USING_x_test)
    print('predicting for', x_train, 'and expecting', y_train, 'and got', y_pred_USING_x_train)
    print()
    print()
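The same one-sample-per-fold behavior can also be seen directly from the index splits LeaveOneOut yields. A minimal sketch using the modern sklearn.model_selection module (the sklearn.cross_validation module used above was deprecated in scikit-learn 0.18 and removed in 0.20; in the modern API, LeaveOneOut takes no constructor argument):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

# The number of splits is derived from the data passed to split(),
# not from a constructor argument as in the old API.
cv = LeaveOneOut()
splits = list(cv.split(x))

# One split per sample; each test set holds exactly one index
# and each training set holds the remaining four.
print(len(splits))                            # 5
print([test.tolist() for _, test in splits])  # [[0], [1], [2], [3], [4]]
```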
Note that this is not proper validation, because you are training and testing the model on the same data. You should create a new LinearRegression object in each iteration of the for loop, fit it with x_train and y_train, use it to predict x_test, and then compare y_test with y_pred_USING_x_test:
x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    if len(x_test) != 1 or (len(x_train) + 1 != len(x)):
        raise Exception
    clf = LinearRegression()
    clf.fit(x_train, y_train)
    y_pred_USING_x_test = clf.predict(x_test)
    print('predicting for', x_test, 'and expecting', y_test, 'and got', y_pred_USING_x_test)
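To compare the held-out predictions with the true values in one go, the whole loop can be delegated to cross_val_predict and summarized as a single error number. A sketch assuming the modern sklearn.model_selection imports:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

# cross_val_predict fits a fresh estimator per fold internally,
# so it performs exactly the per-fold fit/predict shown above.
preds = cross_val_predict(LinearRegression(), x, y, cv=LeaveOneOut())

# Leave-one-out mean squared error: each prediction is compared
# with the one sample that was held out when it was made.
mse = np.mean((y - preds) ** 2)
print(preds)
print(mse)
```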
There is no bug. Two things:

You are performing the cross-validation splits, but you never train on the training set! You need to call clf.fit(x_train, y_train) before calling predict() for it to behave as expected.

By design, the test set in LeaveOneOut is a single sample (one is left out), so the prediction will also be a single number. The cross_val_predict() function is a convenience routine that stitches these single outputs together.

Once you account for these two things, I believe your code's output will make much more sense.

Here is the result:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

clf = LinearRegression().fit(x, y)
cv = LeaveOneOut(len(y))
for train, test in cv:
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    clf.fit(x_train, y_train)  # <--------------- note added line!
    y_pred_USING_x_test = clf.predict(x_test)
    y_pred_USING_x_train = clf.predict(x_train)
    print('y_pred_USING_x_test: ', y_pred_USING_x_test,
          'y_pred_USING_x_train: ', y_pred_USING_x_train)

print()
print(cross_val_predict(clf, x, y, cv=cv))
Output:
y_pred_USING_x_test: [ 15.] y_pred_USING_x_train: [ 15.5 16. 16.5 17. ]
y_pred_USING_x_test: [ 14.85714286] y_pred_USING_x_train: [ 13.94285714 15.77142857 16.68571429 17.6 ]
y_pred_USING_x_test: [ 14.5] y_pred_USING_x_train: [ 12.3 13.4 15.6 16.7]
y_pred_USING_x_test: [ 15.85714286] y_pred_USING_x_train: [ 13.2 14.08571429 14.97142857 16.74285714]
y_pred_USING_x_test: [ 21.5] y_pred_USING_x_train: [ 11.9 14.3 16.7 19.1]
[ 15. 14.85714286 14.5 15.85714286 21.5 ]
As you can see, the test outputs from the manual loop match the output of cross_val_predict().
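That match can also be verified programmatically. A minimal sketch with the modern sklearn.model_selection imports, collecting the manual per-fold predictions into one array and comparing it with the stitched output:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

x = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]])
y = np.array([12, 13, 19, 18, 15])

# Manual leave-one-out loop: fit on the four training samples,
# predict the single held-out sample, store it at its own index.
manual = np.empty_like(y, dtype=float)
for train, test in LeaveOneOut().split(x):
    clf = LinearRegression().fit(x[train], y[train])
    manual[test] = clf.predict(x[test])

# cross_val_predict stitches the same per-fold predictions together.
stitched = cross_val_predict(LinearRegression(), x, y, cv=LeaveOneOut())

print(np.allclose(manual, stitched))  # True
```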
If you execute clf = LinearRegression().fit(x, y) after the for loop, it gives the same answer as cross_val_predict(clf, x, y, cv=cv).
No more bug. The program predicts using the single left-out sample in each loop.