
How to predict labels using cross-validation (Kfold) with sklearn

I would like to compare the predictions of the same classifier. As an example, I picked the Linear Discriminant Analysis classifier.

Therefore, I took a look at the sklearn documentation and found these two pages: Link 1 Link 2

I would like to combine the two: predicting labels with the help of cross-validation (for example, KFold).

However, I cannot manage to make my code work.

from sklearn.model_selection import cross_val_predict, KFold
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.array([[1, 2], [2, 4], [3, 2], [4, 4], [5, 2], [6, 4], [7, 2], [8, 4]])
Y = np.array([1, 2, 3, 4, 5, 6, 7, 8])

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, train_size=0.5, random_state=1)

clf = LinearDiscriminantAnalysis()
clf.fit(Xtrain, Ytrain)
# fit returns the estimator itself:
# LinearDiscriminantAnalysis(n_components=None, priors=None, shrinkage='auto', solver='lsqr', store_covariance=False, tol=0.001)

# without cross-validation
prediction = clf.predict(Xtest)

# with cross-validation
cv = KFold(n_splits=2)
prediction_cv = cross_val_predict(clf, X, Y, cv=cv)

Hopefully, someone can help me.

EDIT:

I think I need to explain more. At the moment, I have 232 data points (X). Each point consists of 16 values and is assigned to a specific class. I hope that I can improve the predictions (i.e. fewer classification mistakes on unseen data points) by using cross-validation, such as KFold or Leave One Out.

With the line cross_val_predict(clf, X, Y, cv=cv), Python performs a KFold cross-validation.
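To make the behaviour of that line concrete: cross_val_predict returns one prediction per sample, where each sample is predicted by a model trained only on the folds that do not contain it. A minimal sketch with hypothetical toy data (two classes instead of the original eight, so each training fold contains every class):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical toy data: 8 points, 2 classes, just for illustration
X = np.array([[1, 2], [2, 4], [3, 2], [4, 4], [5, 2], [6, 4], [7, 2], [8, 4]])
Y = np.array([1, 2, 1, 2, 1, 2, 1, 2])

clf = LinearDiscriminantAnalysis()
cv = KFold(n_splits=2)

# Each sample is predicted by the model trained on the folds
# that do NOT contain it, so no prediction comes from a model
# that has already seen that point.
prediction_cv = cross_val_predict(clf, X, Y, cv=cv)
print(prediction_cv.shape)  # one prediction per input sample
```

Note that these are out-of-fold predictions intended for evaluating the model, not a separate "cross-validated classifier" you can reuse later.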

Now, let's say I get new data points (X_new). How can I classify them?

I presume that when you run your code you get a traceback similar to this:

    376         # avoid division by zero in normalization
    377         std[std == 0] = 1.
--> 378         fac = 1. / (n_samples - n_classes)
    379 
    380         # 2) Within variance scaling

ZeroDivisionError: float division by zero

That's what I get when I run your code. This happens because you have only one data point per class, so n_samples - n_classes equals zero.

You can fix this by adding more examples per class or by reducing the number of classes:

import numpy as np

from sklearn.model_selection import cross_val_predict, KFold
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.array([[1, 2], [2, 4], [3, 2], [4, 4], [5, 2], [6, 4], [7, 2], [8, 4], [1, 2], [2, 4], [3, 2], [4, 4], [5, 2], [6, 4], [7, 2], [8, 4]])
Y = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8])

Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, Y, train_size=0.5, random_state=1)

clf = LinearDiscriminantAnalysis()
clf.fit(Xtrain, Ytrain)

# without cross-validation
prediction = clf.predict(Xtest)

# with cross-validation
cv = KFold(n_splits=2)
prediction_cv = cross_val_predict(clf, X, Y, cv=cv)

If you have another problem, update your question.

EDIT:

Your updated question is a duplicate of this one: Using cross_val_predict against test data set
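In short: cross-validation is for estimating how well the model generalizes, not for producing the final classifier. To label new points, fit the estimator on all of your labelled data and call predict on the new points. A minimal sketch with hypothetical toy data (the X_new values are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical toy training data, just for illustration
X = np.array([[1, 2], [2, 4], [3, 2], [4, 4], [5, 2], [6, 4], [7, 2], [8, 4]])
Y = np.array([1, 2, 1, 2, 1, 2, 1, 2])

clf = LinearDiscriminantAnalysis()

# 1) Use cross-validation only to ESTIMATE generalization performance.
cv = KFold(n_splits=2)
scores = cross_val_score(clf, X, Y, cv=cv)
print(scores.mean())

# 2) Then fit the final model on ALL available labelled data...
clf.fit(X, Y)

# 3) ...and classify genuinely new points with that fitted model.
X_new = np.array([[2, 3], [6, 3]])  # hypothetical unseen points
labels_new = clf.predict(X_new)
print(labels_new)
```

The cross-validation score tells you roughly how well step 3 is likely to perform; it does not change the predictions themselves.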
