
How can I reduce the error in my trained values while implementing an Artificial Neural Network?

The problem is that I'm getting an error of almost 0.8-1.0 in my trained values, which is not acceptable. How do I reduce that error? I've tried lowering the learning rate, but it didn't work. I'm currently training my system on a dataset in an Excel sheet. Here is the link to the sample dataset that I'm using: http://www.mediafire.com/download/j9o676nvqr32fnb/dataset1.xlsx and here is the code that I'm using:

import numpy as np
import xlrd
def nonlin(x,deriv=False):
    # Sigmoid activation; returns its derivative when deriv=True
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))
addr="/home/shashwat08/pycodes/ann/dataset1.xlsx"
wb=xlrd.open_workbook(addr)
sheet=wb.sheet_by_index(0)

output=[[sheet.cell_value(r,1) for r in range(sheet.nrows)]]  #output array
mv=[[sheet.cell_value(r,0) for r in range(sheet.nrows)]]    #input array

output=np.array(output)
mv=np.array(mv)

op=output.ravel()
ip=mv.ravel()

np.random.seed(1)

syn0=2*np.random.random((1,4))-1
syn1=2*np.random.random((4,1))-1

for i in range(sheet.nrows):
    for j in range(100000):
        l0=ip[i]
        l1=nonlin(np.dot(l0,syn0))
        l2=nonlin(np.dot(l1,syn1))

        l2_err=op[i]-l2

        if j%10000==0:
            print("Error "+ str(np.mean(np.abs(l2_err))))
        l2_delta=l2_err*nonlin(l2,deriv=True)               #delta value
        l1_err=l2_delta.dot(syn1.T)
        l1_delta=l1_err*nonlin(l1,deriv=True)

        #syn1=syn1+l1.T.dot(l2_delta)
        #syn0=syn0+l0.T.dot(l1_delta)
        L1=l1.T
        L0=l0.T

        syn1=syn1+0.2*L1*l2_delta
        syn0=syn0+0.2*L0*l1_delta

print("Trained values\n")
print(l2)

Your help will be appreciated. Thanks. :)

An artificial neural network accepts a set of hyperparameters that largely decide the classification accuracy on your test dataset, given that the network has been trained on a training dataset.

These hyperparameters are (a minimal tuning sketch follows the list):

1. Learning rate (most commonly represented by the symbol alpha).

2. Number of epochs (one epoch is one pass of training the weights and biases by iterating over the training dataset at least once).

3. Mini-batch size (if you are training with stochastic gradient descent and back-propagation, the size of the mini-batch plays a huge role in the classification accuracy of the neural network).

4. The accuracy with which your training dataset is annotated.
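As a concrete illustration, here is a minimal sketch of how these hyperparameters can show up in a mini-batch SGD training loop for a 1-4-1 sigmoid network like the one in the question. The names learning_rate, epochs and batch_size, and the toy arrays X and y standing in for the two spreadsheet columns, are illustrative assumptions rather than part of the original code:

import numpy as np

def sigmoid(x, deriv=False):
    # Sigmoid activation; returns its derivative when deriv=True
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

np.random.seed(1)

# Illustrative hyperparameters (assumed values, not tuned for the linked dataset)
learning_rate = 0.1
epochs = 1000
batch_size = 8

# Toy data standing in for the two spreadsheet columns (inputs and targets)
X = np.random.random((64, 1))
y = np.random.random((64, 1))

# Same 1-4-1 layout as in the question
syn0 = 2 * np.random.random((1, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for epoch in range(epochs):
    idx = np.random.permutation(len(X))        # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        l0 = X[batch]                          # (batch, 1)
        l1 = sigmoid(np.dot(l0, syn0))         # (batch, 4)
        l2 = sigmoid(np.dot(l1, syn1))         # (batch, 1)

        l2_err = y[batch] - l2
        l2_delta = l2_err * sigmoid(l2, deriv=True)
        l1_err = l2_delta.dot(syn1.T)
        l1_delta = l1_err * sigmoid(l1, deriv=True)

        # Weight updates scaled by the learning rate
        syn1 += learning_rate * l1.T.dot(l2_delta)
        syn0 += learning_rate * l0.T.dot(l1_delta)

    if epoch % 100 == 0:
        # Error on the last mini-batch of this epoch, just for monitoring
        print("epoch", epoch, "mean abs error", np.mean(np.abs(l2_err)))

With a loop like this, the learning rate, epoch count and batch size can each be varied independently while watching the reported error, which is usually a better starting point than training on each sample in isolation.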

And I don't think you have included the implementation of an Artificial Neural Network. If you are relatively new to this field, you can take a look at the Artificial Neural Network in this repository.

An Artificial Neural Network has been implemented there from scratch for the problem of sound event detection and classification.
