Implementing gradient descent in Python
I was trying to build a gradient descent function in Python. I have used binary cross-entropy as the loss function and sigmoid as the activation function.
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def binary_crossentropy(y_pred, y):
    # Clip predictions away from exactly 0 and 1 so the logs stay finite.
    epsilon = 1e-15
    y_pred_new = np.clip(y_pred, epsilon, 1 - epsilon)
    return -np.mean(y * np.log(y_pred_new) + (1 - y) * np.log(1 - y_pred_new))
def gradient_descent(X, y, epochs=10, learning_rate=0.5):
    features = X.shape[0]   # number of features (rows of X)
    w = np.ones(shape=(features, 1))
    bias = 0
    n = X.shape[1]          # number of samples (columns of X)
    for i in range(epochs):
        # Forward pass: linear combination squashed through the sigmoid.
        weighted_sum = w.T @ X + bias
        y_pred = sigmoid(weighted_sum)
        loss = binary_crossentropy(y_pred, y)
        # Gradients of binary cross-entropy w.r.t. the weights and bias.
        d_w = (1/n) * (X @ (y_pred - y).T)
        d_bias = np.mean(y_pred - y)
        w = w - learning_rate * d_w
        bias = bias - learning_rate * d_bias
        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{loss}')
    return w, bias
So, as input I gave
X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4
and then w, bias = gradient_descent(X, y, epochs=100). The output was w = array([[-20.95],[-29.95]]), b = -55.50000017801383, and loss: 40.406546076763014.
The weights keep decreasing (becoming more negative) as the epochs go on, and the bias keeps decreasing too. The expected output was w = [[2],[-3]] and b = 0.4.
I don't know what I am doing wrong; the loss is also not converging. It stays constant through all the epochs.
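A quick check of the targets shows why the loss stays flat: y here is a real-valued regression target, not a set of 0/1 labels, and a sigmoid output lies in (0, 1), so it can never reach the negative targets. A minimal sketch of that check, assuming the X and y defined above:

import numpy as np

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4
print(y.min(), y.max())  # roughly -1.1 and 0.4: negative values, not 0/1 labels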
Usually, binary cross-entropy loss is used for binary classification tasks. However, your task here is linear regression, so I would prefer using Mean Squared Error as the loss function. With L = (1/n) * sum((y - y_pred)^2) and y_pred = w.T @ X + bias, the gradients are d_w = (-2/n) * (y - y_pred) @ X.T and d_bias = (-2/n) * sum(y - y_pred), which is what the update steps below implement. Here is my suggestion:
def gradient_descent(X, y, epochs=1000, learning_rate=0.5):
    w = np.ones((X.shape[0], 1))   # one weight per feature
    bias = 1
    n = X.shape[1]                 # number of samples
    for i in range(epochs):
        # Forward pass: plain linear model, no activation.
        y_pred = w.T @ X + bias
        mean_square_err = (1.0 / n) * np.sum(np.power((y - y_pred), 2))
        # Gradients of the MSE w.r.t. the weights and bias.
        d_w = (-2.0 / n) * (y - y_pred) @ X.T
        d_bias = (-2.0 / n) * np.sum(y - y_pred)
        w -= learning_rate * d_w.T
        bias -= learning_rate * d_bias
        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{mean_square_err}')
    return w, bias
X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4
w, bias = gradient_descent(X, y, epochs=5000, learning_rate=0.5)
print(f'w = {w}')
print(f'bias = {bias}')
Output:
w = [[ 1.99999999], [-2.99999999]]
bias = 0.40000000041096756
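As a sanity check (not part of the original answer, and assuming the same X and y as above), NumPy's closed-form least-squares solver recovers the same coefficients:

import numpy as np

# Design matrix: one column per feature plus a column of ones for the bias term.
A = np.vstack([X, np.ones(X.shape[1])]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # expected: approximately [ 2. -3.  0.4]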