I am trying to plot the decision boundary of a logistic regression model in scikit-learn.
features_train_df: 650 columns, 5250 rows
features_test_df: 650 columns, 1750 rows
class_train_df: 1 column (class to be predicted), 5250 rows
class_test_df: 1 column (class to be predicted), 1750 rows
Classifier code:
tuned_logreg = LogisticRegression(penalty='l2', tol=0.0001, C=0.1, max_iter=100, class_weight='balanced')
tuned_logreg.fit(x_train[sorted_important_features_list[0:650]].values, y_train['loss'].values)
y_pred_3 = tuned_logreg.predict(x_test[sorted_important_features_list[0:650]].values)
The classifier code runs and gives correct predictions. For plotting the decision boundary, I found this code online:
X = features_train_df.values
# evenly sampled points
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50),
                     np.linspace(y_min, y_max, 50))
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
#plot background colors
ax = plt.gca()
Z = tuned_logreg.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
cs = ax.contourf(xx, yy, Z, cmap='RdBu', alpha=.5)
cs2 = ax.contour(xx, yy, Z, cmap='RdBu', alpha=.5)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14)
# Plot the points
ax.plot(Xtrain[ytrain == 0, 0], Xtrain[ytrain == 0, 1], 'ro', label='Class 1')
ax.plot(Xtrain[ytrain == 1, 0], Xtrain[ytrain == 1, 1], 'bo', label='Class 2')
# make legend
plt.legend(loc='upper left', scatterpoints=1, numpoints=1)
error:
ValueError: X has 2 features per sample; expecting 650
Please suggest where I am going wrong.
I found the problem in your code. Take a careful look at the following discussion.
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50), np.linspace(y_min, y_max, 50))
grid = np.c_[xx.ravel(), yy.ravel()]
Z = tuned_logreg.predict_proba(grid)[:, 1]
Think about the shapes of the variables here: np.linspace(x_min, x_max, 50) returns an array of 50 values. Applying np.meshgrid then makes the shapes of both xx and yy (50, 50). Finally, np.c_[xx.ravel(), yy.ravel()] makes the shape of grid (2500, 2). So you are passing 2500 instances, each with only 2 feature values, to the predict_proba function.
That is why you get the error: ValueError: X has 2 features per sample; expecting 650. You must pass a structure that contains 650 columns (feature values).
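The shape bookkeeping above can be verified directly with NumPy alone (a minimal sketch; the endpoint values are arbitrary):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)          # 50 evenly spaced values, shape (50,)
xx, yy = np.meshgrid(x, x)             # both become (50, 50) grids
grid = np.c_[xx.ravel(), yy.ravel()]   # flatten and pair columns: (2500, 2)

print(x.shape, xx.shape, grid.shape)   # (50,) (50, 50) (2500, 2)
```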
During predict you did it correctly:
y_pred_3 = tuned_logreg.predict(x_test[sorted_important_features_list[0:650]].values)
So make sure the number of features in the instances passed to the fit(), predict(), and predict_proba() methods is the same.
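Since a 2-D grid can never feed a 650-feature model, one common workaround (my suggestion, not part of your original code) is to project the data down to 2 dimensions first, e.g. with PCA, and fit a separate classifier on the reduced data purely for visualization. A sketch using synthetic data in place of your DataFrames:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for your 650-feature training data
X, y = make_classification(n_samples=300, n_features=650,
                           n_informative=10, random_state=0)

X2 = PCA(n_components=2).fit_transform(X)   # project 650 features -> 2
clf2 = LogisticRegression(class_weight='balanced').fit(X2, y)

# Now the 2-column grid matches the 2-feature model
x_min, x_max = X2[:, 0].min() - .5, X2[:, 0].max() + .5
y_min, y_max = X2[:, 1].min() - .5, X2[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50),
                     np.linspace(y_min, y_max, 50))
Z = clf2.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape)

plt.contourf(xx, yy, Z, cmap='RdBu', alpha=.5)
plt.scatter(X2[:, 0], X2[:, 1], c=y)
plt.show()
```

Note the plotted boundary belongs to the 2-D helper model, not to your original 650-feature tuned_logreg; it is an approximate visualization.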
Explanation of the example from the SO post you linked:
X, y = make_classification(200, 2, 2, 0, weights=[.5, .5], random_state=15)
clf = LogisticRegression().fit(X[:100], y[:100])
Here the shape of X is (200, 2), and the classifier is trained on X[:100], i.e. the first 100 samples, each with 2 features. For prediction, they use:
xx, yy = np.mgrid[-5:5:.01, -5:5:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
Here the shape of xx is (1000, 1000) and the shape of grid is (1000000, 2). So the number of features used for both training and prediction is 2.
Alternatively, you can plot the boundary directly from the internal parameters of the learned model:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
import matplotlib.pyplot as plt

X, y = make_classification(200, 2, 2, 0, weights=[.5, .5], random_state=15)
clf = LogisticRegression().fit(X, y)

points_x = [x / 10. for x in range(-50, 50)]
line_bias = clf.intercept_
line_w = clf.coef_.T
# decision boundary: w0*x + w1*y + bias = 0  =>  y = -(w0*x + bias) / w1
points_y = [(line_w[0] * x + line_bias) / (-1 * line_w[1]) for x in points_x]
plt.plot(points_x, points_y)          # boundary line
plt.scatter(X[:, 0], X[:, 1], c=y)    # data points colored by class
plt.show()
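As a quick sanity check (my addition, using the same make_classification setup as above): points lying on that line should receive a predicted probability of about 0.5, since the line is exactly where the model's decision function is zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(200, 2, 2, 0, weights=[.5, .5], random_state=15)
clf = LogisticRegression().fit(X, y)

w = clf.coef_.ravel()
b = clf.intercept_[0]
xs = np.linspace(-5, 5, 7)
ys = -(w[0] * xs + b) / w[1]              # same line as points_y above
probs = clf.predict_proba(np.c_[xs, ys])[:, 1]
print(probs)                              # every value is ~0.5 on the boundary
```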