
How to plot decision boundary for logistic regression in Python?

I am trying to plot the decision boundary of a logistic regression classifier, but I don't quite understand how it should be done.

Here is the dataset I generated, on which I apply logistic regression with numpy:

import numpy as np
import matplotlib.pyplot as plt


# class 0:
# covariance matrix and mean
cov0 = np.array([[5,-4],[-4,4]])
mean0 = np.array([2.,3])
# number of data points
m0 = 1000

# class 1
# covariance matrix
cov1 = np.array([[5,-3],[-3,3]])
mean1 = np.array([1.,1])
# number of data points
m1 = 1000

# generate m gaussian distributed data points with
# mean and cov.
r0 = np.random.multivariate_normal(mean0, cov0, m0)
r1 = np.random.multivariate_normal(mean1, cov1, m1)

X = np.concatenate((r0,r1))


After applying logistic regression, I found that the best thetas are:

thetas = [1.2182441664666837, 1.3233825647558795, -0.6480886684022018]

I tried to plot the decision boundary as follows:

yy = -(thetas[0] + thetas[1]*X)/thetas[1][2]
plt.plot(X,yy)

However, the resulting plot is the opposite of what I expected.

Thanks in advance.

I think you have two mistakes:

  • yy = -(thetas[0] + thetas[1]*X)/thetas[1][2]

    Why `thetas[1][2]` rather than `thetas[2]`?

  • Why transform X, your entire dataset?

You only need to apply the transformation to the minimum and maximum x:

minx = np.min(X[:, 0])
maxx = np.max(X[:, 0])

## compute transformation : 

y1 = -(thetas[0] + thetas[1]*minx) / thetas[2]
y2 = -(thetas[0] + thetas[1]*maxx) / thetas[2]

## then plot the line [(minx, y1), (maxx, y2)]

plt.plot([minx, maxx], [y1, y2], c='black')
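As a sanity check on this formula (a minimal sketch reusing the asker's fitted `thetas`, and assuming the model is the usual sigmoid of `theta0 + theta1*x + theta2*y`): the sigmoid crosses 0.5 exactly where its argument is zero, so every point `(x, bound(x))` on the plotted line should receive a predicted probability of 0.5.

```python
import numpy as np

# Thetas from the question: [intercept, coefficient of x, coefficient of y]
thetas = [1.2182441664666837, 1.3233825647558795, -0.6480886684022018]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bound(x):
    # y-coordinate of the decision boundary at a given x:
    # solve theta0 + theta1*x + theta2*y == 0 for y
    return -(thetas[0] + thetas[1] * x) / thetas[2]

# On the boundary the linear score is zero, so the probability is sigmoid(0) == 0.5
for x in (-5.0, 0.0, 5.0):
    p = sigmoid(thetas[0] + thetas[1] * x + thetas[2] * bound(x))
    print(x, p)  # p is 0.5 at every x
```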

Full working code using sklearn's LogisticRegression:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

# Your job:
# =============

# class 0:
# covariance matrix and mean
cov0 = np.array([[5,-4],[-4,4]])
mean0 = np.array([2.,3])
# number of data points
m0 = 1000

# class 1
# covariance matrix
cov1 = np.array([[5,-3],[-3,3]])
mean1 = np.array([1.,1])
# number of data points
m1 = 1000

# generate m gaussian distributed data points with
# mean and cov.
r0 = np.random.multivariate_normal(mean0, cov0, m0)
r1 = np.random.multivariate_normal(mean1, cov1, m1)

X = np.concatenate((r0,r1))

## Added lines :

Y = np.concatenate((np.zeros(m0), np.ones(m1)))

model = LogisticRegression().fit(X,Y)

coefs = list(model.intercept_)
coefs.extend(model.coef_[0].tolist())
xmin = np.min(X[:, 0])
xmax = np.max(X[:, 0])


def bound(x):
    return -(coefs[0] + coefs[1] * x) / coefs[2]

p1 = np.array([xmin, bound(xmin)])
p2 = np.array([xmax, bound(xmax)])

plt.plot(r0[:, 0], r0[:, 1], ls='', marker='.', c='red')
plt.plot(r1[:, 0], r1[:, 1], ls ='', marker='.', c='blue')
plt.plot([p1[0], p2[0]], [p1[1], p2[1]], c='black')
plt.show()
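The same check works with the sklearn model (a self-contained sketch with a seeded RNG; the smaller sample size of 500 per class is an assumption for brevity): `decision_function` returns exactly the linear score `intercept + coef·x`, so it should vanish at every point `(x, bound(x))` on the fitted line.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Regenerate the two Gaussian classes with a fixed seed for reproducibility
rng = np.random.default_rng(0)
r0 = rng.multivariate_normal([2., 3.], [[5, -4], [-4, 4]], 500)
r1 = rng.multivariate_normal([1., 1.], [[5, -3], [-3, 3]], 500)
X = np.concatenate((r0, r1))
Y = np.concatenate((np.zeros(500), np.ones(500)))

model = LogisticRegression().fit(X, Y)
b = model.intercept_[0]
w1, w2 = model.coef_[0]

def bound(x):
    # same boundary formula as above: solve b + w1*x + w2*y == 0 for y
    return -(b + w1 * x) / w2

# decision_function computes b + w1*x + w2*y, which is zero on the boundary
scores = model.decision_function([[x, bound(x)] for x in (-4.0, 0.0, 6.0)])
print(scores)  # all numerically zero
```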

Update

Link to the final plot


Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you repost, please credit this site or the original source. For any questions contact: yoyou2525@163.com.

 
© 2020-2024 STACKOOM.COM