I don't understand what is going on. This is my code:
from sklearn import datasets
from sklearn import svm
import matplotlib.pyplot as plt
# Load digits dataset
digits = datasets.load_digits()
# Create support vector machine classifier
clf = svm.SVC(gamma=0.001, C=100.)
# fit the classifier
X, y = digits.data[:-1], digits.target[:-1]
clf.fit(X, y)
pred = clf.predict(digits.data[-1]) # the error is raised on this line
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
This code should display the last sklearn digits image and print its prediction. When I execute it, I get this error:
Traceback (most recent call last):
File "detect.py", line 15, in <module>
pred = clf.predict(digits.data[-1])
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 548, in predict
y = super(BaseSVC, self).predict(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 308, in predict
X = self._validate_for_predict(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 439, in _validate_for_predict
X = check_array(X, accept_sparse='csr', dtype=np.float64, order="C")
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 410, in check_array
"if it contains a single sample.".format(array))
ValueError: Expected 2D array, got 1D array instead:
array=[ 0. 0. 10. 14. 8. 1. 0. 0. 0. 2. 16. 14. 6. 1. 0.
0. 0. 0. 15. 15. 8. 15. 0. 0. 0. 0. 5. 16. 16. 10.
0. 0. 0. 0. 12. 15. 15. 12. 0. 0. 0. 4. 16. 6. 4.
16. 6. 0. 0. 8. 16. 10. 8. 16. 8. 0. 0. 1. 8. 12.
14. 12. 1. 0.].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
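As the traceback itself suggests, the quickest fix for your original code is to reshape the single flattened image into a 2D row before calling `predict`. A minimal sketch on the same dataset:

```python
from sklearn import datasets, svm

digits = datasets.load_digits()
clf = svm.SVC(gamma=0.001, C=100.)

# fit on all samples except the last one
clf.fit(digits.data[:-1], digits.target[:-1])

# predict expects a 2D array of shape (n_samples, n_features),
# so turn the single 64-element sample into a (1, 64) row
sample = digits.data[-1].reshape(1, -1)
pred = clf.predict(sample)
print(pred)  # a length-1 array holding the predicted digit
```

Equivalently, `digits.data[-1:]` keeps the last sample 2D by slicing instead of indexing.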
The error message says what's wrong: you're feeding the classifier's predict method a 1D array (a single flattened sample), while it expects a 2D array with as many feature columns as were used when fitting. If you change your code to
pred = clf.predict(digits.data[:-1])
it runs, but that makes little sense, because now you're predicting on the same data you fitted with (minus the last sample). A more reasonable approach is to split the dataset into train and test sets, fit on the train set, and predict on the test set. Like this:
from sklearn import datasets
from sklearn import svm
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Load digits dataset
digits = datasets.load_digits()
# Create support vector machine classifier
clf = svm.SVC(gamma=0.001, C=100.)
# split the data to train and test sets
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, test_size=0.2, random_state=2017)
# fit the classifier with train data
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
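With a held-out test set you can also measure how well the classifier actually does. A short sketch extending the snippet above with `accuracy_score` (any specific accuracy you get will depend on the split):

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = datasets.load_digits()
clf = svm.SVC(gamma=0.001, C=100.)

# hold out 20% of the samples for testing
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=2017)

clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# fraction of test images classified correctly
acc = accuracy_score(y_test, pred)
print("accuracy: %.3f" % acc)
```

`clf.score(X_test, y_test)` computes the same number in one call.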