
Gradient Descent in Python 3

I have gradient descent code in Python 3, and an error appears that I do not know how to solve.

I also have a question: why does the result on the dataset give me zero accuracy? Can anyone help me, please? :(

I tried changing some things, but it did not help.

I split my dataset into two parts, a training set and a test set.

    import csv

    import matplotlib.pyplot as plt
    import numpy as np

    N_EPOCHS = 10
    LEARNING_RATE = .1
    PLOT = False
    LAMBDA = .00000001


    def load_data(filename):
        X = []
        Y = []
        with open(filename, 'r') as csvfile:
            X = [[float(x) for x in line] for line in csv.reader(csvfile, delimiter=',')]
        # Last column of each row is the label; remove it from the features
        for line in X:
            Y.append([line[-1]])
            line.pop()
        print(X[0])
        print(Y)
        X = np.array(X, dtype=np.longdouble)
        Y = np.array(Y, dtype=np.longdouble)
        return X, Y


    def sigmoid(weight_param, x_param):
        denom_sigmoid = np.longdouble(1 + np.exp(np.dot(-weight_param, x_param)))
        sig = np.longdouble(np.divide(1, denom_sigmoid, where=denom_sigmoid != 0.0))
        return sig


    def gradient_descent(X, Y, L2_Regularization=False):
        example_accuracy = []
        X = np.c_[np.ones((X.shape[0])), X]  # Add bias of 1 to each example
        feature_len = X.shape[1]
        example_count = int(X.shape[0])  # np.long no longer exists in recent NumPy
        print("X.shape ", X.shape)
        # Weight vector with shape equal to the number of features
        w = np.zeros(feature_len)
        l2_reg = 0
        step = 0
        correct_count = 0
        while step < N_EPOCHS:
            print("Iteration: ", step)
            grad = np.zeros(feature_len, dtype=float)  # np.float is removed; use float
            for example in range(example_count):
                # y_hat is the predicted output; the original called sigmoid(wT, ...)
                # but wT was never defined -- the weight vector w is what is meant
                y_hat = sigmoid(w, X[example])
                if L2_Regularization:
                    l2_reg = LAMBDA * w  # = d/dw(.5*lambda*||w^2||)
                # Threshold in both directions; the original only set y_hat = 1,
                # so a negative prediction was never exactly 0 and negative
                # examples could never be counted as correct
                y_hat = 1 if y_hat >= .5 else 0
                loss = y_hat - Y[example]
                if loss[0] == 0:
                    correct_count += 1
                grad += loss[0] * X[example] + l2_reg
            print(correct_count)
            w += -LEARNING_RATE * grad
            step += 1
            example_accuracy.append(float(correct_count / example_count))
            correct_count = 0
        print(" Accuracy per Epoch: ", example_accuracy)
        return w, example_accuracy


    def main():
        X, Y = load_data("/Users/mahaalmotiri/PycharmProjects/desktop/GradientDescent /data.csv")
        X_test, Y_test = load_data("/Users/mahaalmotiri/PycharmProjects/desktop/GradientDescent /data2.csv")
        w, example_accuracy = gradient_descent(X, Y)
        epoch_list = [epoch for epoch in range(N_EPOCHS)]
        if PLOT:
            plt.plot(epoch_list, example_accuracy)
            plt.xlabel('Epoch')
            plt.ylabel('Accuracy')
            plt.show()
        w_L2_train, example_accuracy_L2_train = gradient_descent(X, Y, L2_Regularization=True)
        w_L2_test, example_accuracy_L2_test = gradient_descent(X_test, Y_test, L2_Regularization=True)
        print("Example accuracy no L2_Regularization: ", example_accuracy)
        print("example_accuracy_L2_train: ", example_accuracy_L2_train)
        print("example_accuracy_L2_test: ", example_accuracy_L2_test)


    if __name__ == "__main__":
        main()

The error is: (screenshot attached in the original post)

This warning means that the exponent np.dot(-weight_param, x_param) is too big here:

denom_sigmoid = np.longdouble(1 + np.exp(np.dot(-weight_param, x_param)))

so you get a huge number after computing the exponential.
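One common way to avoid the overflow (a minimal sketch, not from the asker's code; the function name is illustrative) is to clip the argument before calling np.exp, since float64 overflows around exp(709):

```python
import numpy as np

def stable_sigmoid(z):
    # Clip the exponent so np.exp never overflows float64
    z = np.clip(z, -500, 500)
    return 1.0 / (1.0 + np.exp(-z))
```

Clipping at +/-500 changes the result only by amounts far below float64 precision, because sigmoid is already indistinguishable from 0 or 1 at those magnitudes.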

If you want to get rid of this, you need to make that dot product smaller. You may want to normalize your input data using sklearn.
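For example, zero-mean/unit-variance scaling can be sketched with plain NumPy; sklearn.preprocessing.StandardScaler().fit_transform(X) computes the same thing. The matrix below is made-up data standing in for the asker's CSV:

```python
import numpy as np

# Made-up feature matrix; second column has a much larger scale than the first
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# Standardize each column: subtract its mean, divide by its standard deviation
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
```

After scaling, every feature has mean 0 and standard deviation 1, so the dot product with the weights stays in a range where np.exp does not overflow. Remember to apply the same mean and std (fit on the training set) to the test set.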

