
Multi class neural network for MNIST data set not working

Hi, I am trying to train a neural network of my own design on the MNIST handwritten digit data set. Every time I run this code, the accuracy starts to increase and then drops, and I get an overflow warning. Can someone explain whether my code is just bad and messy, or whether I have missed something small? Thanks in advance.

import numpy as np
import pandas as pd
df = pd.read_csv('../input/digit-recognizer/train.csv')
data = np.array(df.values)
data = data.T
data
Y = data[0,:]
X = data[1:,:]
Y_train = Y[:41000]
X_train = X[:,:41000]
X_train = X_train/255
Y_val = Y[41000:]
X_val = X[:,41000:]
X_val = X_val/255
print(np.max(X_train))
class NeuralNetwork:
    def __init__(self, n_in, n_out):
        self.w1, self.b1 = self.Generate_Weights_Biases(10,784)
        self.w2, self.b2 = self.Generate_Weights_Biases(10,10)
    def Generate_Weights_Biases(self, n_in, n_out):
        weights = 0.01*np.random.randn(n_in, n_out)
        biases = np.zeros((n_in,1))
        return weights, biases
    def forward(self, X):
        self.Z1 = self.w1.dot(X) + self.b1
        self.a1 = self.ReLu(self.Z1)
        self.z2 = self.w2.dot(self.a1) + self.b1
        y_pred = self.Softmax(self.z2)
        return y_pred
    def ReLu(self, Z):
        return np.maximum(Z,0)
    def Softmax(self, Z):
        #exponentials = np.exp(Z)
        #sumexp = np.sum(np.exp(Z), axis=0) 
        #print(Z)
        return np.exp(Z)/np.sum(np.exp(Z))
        
    def ReLu_Derv(self, x):
        return np.greater(x, 0).astype(int)
    def One_hot_encoding(self, Y):
        one_hot = np.zeros((Y.size, 10))
        rows = np.arange(Y.size)
        one_hot[rows, Y] = 1
        one_hot = one_hot.T
        return one_hot
    def Get_predictions(self, y_pred):
        return np.argmax(y_pred, 0)
    def accuracy(self, pred, Y):
        return np.sum(pred == Y)/Y.size
    def BackPropagation(self, X, Y, y_pred, lr=0.01):
        m = Y.size
        one_hot_y = self.One_hot_encoding(Y)
        e2 = y_pred - one_hot_y
        derW2 = (1/m)* e2.dot(self.a1.T)
        derB2 =(1/m) * np.sum(e2,axis=1)
        derB2 = derB2.reshape(10,1)
        e1 = self.w2.T.dot(e2) * self.ReLu(self.a1)
        derW1 = (1/m) * e1.dot(X.T)
        derB1 = (1/m) * np.sum(e1, axis=1)
        derB1 = derB1.reshape(10,1)
        self.w1 = self.w1 - lr*derW1
        self.b1 = self.b1 - lr*derB1
        self.w2 = self.w2 - lr*derW2
        self.b2 = self.b2 - lr*derB2
    def train(self, X, Y, epochs = 1000):
        for i in range(epochs):
            y_pred = self.forward(X)
            predict = self.Get_predictions(y_pred)
            accuracy = self.accuracy(predict, Y)
            print(accuracy)
            self.BackPropagation(X, Y, y_pred)
        return self.w1, self.b1, self.w2, self.b2
    
NN = NeuralNetwork(X_train, Y_train)
w1,b1,w2,b2 = NN.train(X_train,Y_train)

You should use a different bias for the second layer. (Since b1 and b2 both have shape (10, 1), reusing b1 raises no error; the forward pass is simply wrong, and the updates BackPropagation applies to self.b2 never affect the network.)

self.z2 = self.w2.dot(self.a1) + self.b1 # not b1
self.z2 = self.w2.dot(self.a1) + self.b2 # but b2

When doing something like this

derB2 =(1/m) * np.sum(e2,axis=1)

you want to use keepdims=True to make sure derB2.shape is (something, 1) rather than (something,). It makes your code more robust.
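As a minimal sketch (reusing the variable names from the code above), the two bias gradients could then be computed as:

derB2 = (1/m) * np.sum(e2, axis=1, keepdims=True)  # shape (10, 1)
derB1 = (1/m) * np.sum(e1, axis=1, keepdims=True)  # shape (10, 1)

With keepdims=True each sum keeps its column shape, so the two reshape(10,1) calls become unnecessary and the results broadcast correctly against the (10, 1) bias arrays.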
