
Why does dropout deteriorate my model accuracy?

The code below gives about 95% accuracy if I do not use dropout during training. The accuracy drops to 11% if I use dropout.

The network is built using NumPy. I have a Neural_Network class that contains several layer objects. The last layer has sigmoid activation and the rest have ReLU. The code is:

import numpy as np 
import idx2numpy as idx
import matplotlib.pyplot as plt

np.random.seed(0)
img = r"C:\Users\Aaditya\OneDrive\Documents\ML\train-image"
lbl = r'C:\Users\Aaditya\OneDrive\Documents\ML\train-labels-idx1-ubyte'
t_lbl = r'C:\Users\Aaditya\OneDrive\Documents\ML\t10k-labels.idx1-ubyte'
t_img = r'C:\Users\Aaditya\OneDrive\Documents\ML\t10k-images.idx3-ubyte'
image = idx.convert_from_file(img)
iput = np.reshape(image, (60000,784))/255
otput = np.eye(10)[idx.convert_from_file(lbl)]
test_image = idx.convert_from_file(t_img)
test_input = np.reshape(test_image, (10000,784))/255
test_output = idx.convert_from_file(t_lbl)

def sigmoid(x):
    sigmoid = 1/(1+ np.exp(-x)) 
    return sigmoid
    
def tanh(x):
    return np.tanh(x)
def relu(x):
    return np.where(x>0,x,0)

def reluprime(x):
    return (x>0).astype(x.dtype)

def sigmoid_prime(x):
    return sigmoid(x)*(1-sigmoid(x))
    
def tanh_prime(x):
    return 1 - tanh(x)**2
class Layer_Dense:
    def __init__(self,n_inputs,n_neurons,activation="sigmoid",keep_prob=1):
        self.n_neurons=n_neurons
        if activation == "sigmoid":
            self.activation = sigmoid
            self.a_prime = sigmoid_prime
        elif activation == "tanh":
            self.activation = tanh
            self.a_prime = tanh_prime
        else :
            self.activation = relu
            self.a_prime = reluprime
        self.keep_prob = keep_prob
        self.weights = np.random.randn(n_inputs ,n_neurons)*0.1
        self.biases = np.random.randn(1,n_neurons)*0.1 
    
    def cal_output(self,input,train=False):        
        output = np.array(np.dot(input,self.weights) + self.biases,dtype="float128")
        
        if train == True:
            D = np.random.randn(1,self.n_neurons)
            self.D = (D>self.keep_prob).astype(int)
            output = output * self.D  
        return output
    def forward(self,input):
        return self.activation(self.cal_output(input))
    def back_propagate(self,delta,ap,lr=1,keep_prob=1):
        dz =  delta
        self.weights -= 0.001*lr*(np.dot(ap.T,dz)*self.D)
        self.biases -= 0.001*lr*(np.sum(dz,axis=0,keepdims=True)*self.D)
        return np.multiply(np.dot(dz,self.weights.T),(1-ap**2))
        

class Neural_Network:
    def __init__(self,input,output):
        self.input=input
        self.output=output
        self.layers = []
    def Add_layer(self,n_neurons,activation="relu",keepprob=1):
        if len(self.layers) != 0:    
            newL = Layer_Dense(self.layers[-1].n_neurons,n_neurons,activation,keep_prob=keepprob)
        else:
            newL = Layer_Dense(self.input.shape[1],n_neurons,activation,keep_prob=keepprob)
        self.layers.append(newL)
    def predict(self,input):
        output = input
        for layer in self.layers:
            output = layer.forward(output)
        return output
    def cal_zs(self,input):
        self.activations = []
        self.activations.append(input)
        output = input
        for layer in self.layers:
            z = layer.cal_output(output,train=True)
            activation = layer.activation(z)
            self.activations.append(activation)
            output = activation
    def train(self,input=None,output=None,lr=10):
        if input is None:
            input=self.input
            output=self.output
            
        if len(input)>1000:
            indices = np.arange(input.shape[0])
            np.random.shuffle(indices)
            input = input[indices]
            output = output[indices]
            for _ in range(100):
                self.lr = lr
                for i in range(int(len(input)/100)):
                    self.lr *=0.99
                    self.train(input[i*100:i*100+100],output[i*100:i*100+100],self.lr)
            return
        self.cal_zs(input)
        for i in range(1,len(self.layers)+1):
            if i==1:
                delta = self.activations[-1] - output
                self.delta = self.layers[-1].back_propagate(delta,self.activations[-2],lr)
            else:
                self.delta = self.layers[-i].back_propagate(self.delta,self.activations[-i-1],lr)
    def MSE(self):
        predict = self.predict(self.input)
        error = (predict - self.output)**2
        mse = sum(sum(error))
        print(mse)
    def Logloss(self):
        predict = self.predict(self.input)
        error = np.multiply(self.output,np.log(predict)) + np.multiply(1-self.output,np.log(1-predict))
        logloss = -1*sum(sum(error))
        print(logloss)
    def accuracy(self):
        predict = self.predict(test_input)
        prediction = np.argmax(predict,axis=1)
        correct = np.mean(prediction == test_output)
        print(correct*100)
            
    # def train(self,input,output):
        
model = Neural_Network(iput,otput)
# model.Add_layer(4)
model.Add_layer(64)
model.Add_layer(16)
model.Add_layer(10,"sigmoid")
lrc= 6
for _ in range(10):
    model.accuracy()
    model.Logloss()
    model.train(lr=lrc)
model.accuracy()

I have used the MNIST database; the link is THIS.

One of the reasons could be that you are dropping too many neurons. In the code below:

D = np.random.randn(1,self.n_neurons)
self.D = (D>self.keep_prob).astype(int)

The matrix generated in the first line is drawn from a standard normal distribution, so most of its values are below 1. Because it is compared against self.keep_prob (which is 1), almost every neuron fails the D > self.keep_prob test and gets dropped.

Please try it with this one change:

self.D = (D < self.keep_prob).astype(int)
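
For reference, a dropout mask is usually built from uniform draws with np.random.rand (not randn), keeping a unit when the draw is below keep_prob; "inverted" dropout also rescales the kept activations by 1/keep_prob so their expected value is unchanged and no scaling is needed at test time. A minimal standalone sketch (not the poster's exact class), assuming activations of shape (batch, n_neurons):

import numpy as np

def dropout_forward(a, keep_prob=0.8, train=True):
    # a: layer activations, shape (batch, n_neurons)
    if not train or keep_prob >= 1:
        return a, None
    # uniform draws in [0, 1); keep a unit when its draw is below keep_prob
    mask = (np.random.rand(*a.shape) < keep_prob).astype(a.dtype)
    # inverted dropout: rescale so the expected activation stays the same
    return a * mask / keep_prob, mask

# example: roughly 80% of units survive
a = np.random.randn(100, 64)
out, mask = dropout_forward(a, keep_prob=0.8)
print(mask.mean())  # ~0.8

With randn and keep_prob=1, only values above 1 survive the comparison, which is roughly 16% of the units, so about 84% of every layer is silenced during training.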

There could be various reasons for this. One was already pointed out by @anuragal.

Basically, dropout is used to reduce overfitting and to help the network generalize. But when you apply dropout right before your final layer, the network may be unable to correct itself, leading to lower accuracy.

Another reason could be that your network is small. Shallow networks usually don't benefit much from dropout.
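
In terms of the poster's own Add_layer API, that would mean leaving keepprob at its default of 1 for the output layer and applying only mild dropout to the hidden layers, for example (assuming the mask bug above is fixed first):

model = Neural_Network(iput, otput)
model.Add_layer(64, keepprob=0.8)   # hidden layer with mild dropout
model.Add_layer(16, keepprob=0.8)   # hidden layer with mild dropout
model.Add_layer(10, "sigmoid")      # output layer: no dropout (keepprob=1)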
