
How can I reduce the error in my trained values while implementing an Artificial Neural Network?

The problem is that I'm getting an error of almost 0.8-1.0 in my trained values, which is not acceptable. How can I reduce that error? I've tried reducing the learning rate, but it didn't help. I'm currently training my network on a dataset in an Excel sheet. Here is the link to the sample dataset that I'm using: http://www.mediafire.com/download/j9o676nvqr32fnb/dataset1.xlsx Here is the code that I'm using:

import numpy as np
import xlrd
def nonlin(x, deriv=False):
    # sigmoid activation; with deriv=True, x is assumed to already be sigmoid(x)
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))
addr="/home/shashwat08/pycodes/ann/dataset1.xlsx"
wb=xlrd.open_workbook(addr)
sheet=wb.sheet_by_index(0)

output=[[sheet.cell_value(r,1) for r in range(sheet.nrows)]]  #output array
mv=[[sheet.cell_value(r,0) for r in range(sheet.nrows)]]    #input array

output=np.array(output)
mv=np.array(mv)

op=output.ravel()
ip=mv.ravel()

np.random.seed(1)

syn0=2*np.random.random((1,4))-1
syn1=2*np.random.random((4,1))-1

for i in range(sheet.nrows):
    for j in range(100000):
        l0=ip[i]
        l1=nonlin(np.dot(l0,syn0))
        l2=nonlin(np.dot(l1,syn1))

        l2_err=op[i]-l2

        if(j%10000)==0:
            print("Error "+ str(np.mean(np.abs(l2_err))))
        l2_delta=l2_err*nonlin(l2,deriv=True)               #delta value
        l1_err=l2_delta.dot(syn1.T)
        l1_delta=l1_err*nonlin(l1,deriv=True)

        #syn1=syn1+l1.T.dot(l2_delta)
        #syn0=syn0+l0.T.dot(l1_delta)
        L1=l1.T
        L0=l0.T

        syn1=syn1+0.2*L1*l2_delta
        syn0=syn0+0.2*L0*l1_delta

print("Trained values\n")
print(l2)

Your help will be appreciated. Thanks. :)

An artificial neural network's classification accuracy on a test dataset depends on a set of hyperparameters, given that the network has been trained on a training dataset.

These hyperparameters are:

1. Learning rate (most commonly denoted by the symbol alpha).

2. Number of epochs (one epoch is one pass over the entire training dataset, updating the weights and biases at least once).

3. Mini-batch size (if you are training with stochastic gradient descent and backpropagation, the size of each mini-batch plays a large role in the classification accuracy of the network).

4. The accuracy with which your training dataset is annotated.
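To make the roles of these hyperparameters concrete, here is a minimal NumPy sketch of a one-hidden-layer network trained with mini-batch stochastic gradient descent. The toy data, the hidden-layer size, and the names `lr`, `epochs`, and `batch_size` are illustrative assumptions, not taken from the question:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, hidden=4, lr=0.2, epochs=1000, batch_size=8, seed=1):
    """Mini-batch SGD for a one-hidden-layer sigmoid network.
    lr, epochs, and batch_size are the hyperparameters discussed above."""
    rng = np.random.default_rng(seed)
    w0 = 2 * rng.random((X.shape[1], hidden)) - 1   # input -> hidden weights
    w1 = 2 * rng.random((hidden, 1)) - 1            # hidden -> output weights
    for _ in range(epochs):
        idx = rng.permutation(len(X))               # reshuffle every epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            l1 = sigmoid(X[b] @ w0)                 # forward pass
            l2 = sigmoid(l1 @ w1)
            l2_delta = (y[b] - l2) * l2 * (1 - l2)  # output-layer gradient
            l1_delta = (l2_delta @ w1.T) * l1 * (1 - l1)
            w1 += lr * l1.T @ l2_delta              # weight updates scaled by lr
            w0 += lr * X[b].T @ l1_delta
    return w0, w1

# toy usage: the target is simply the first input feature
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [1], [1]], dtype=float)
w0, w1 = train(X, y, epochs=5000, lr=0.5, batch_size=4)
pred = sigmoid(sigmoid(X @ w0) @ w1)
```

Raising `epochs` or tuning `lr` trades training time against how far the error falls; too large a learning rate makes the updates overshoot, while too small a one converges slowly, which is one reason simply reducing the rate may not fix a high error.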

Also, I don't think you have included a complete implementation of an Artificial Neural Network. If you are relatively new to this field, you can take a look at the Artificial Neural Network in this repository, where one has been implemented from scratch for the problem of sound event detection and classification.
