
Periodic function approximation with neural network fails

I am trying to approximate simple one-dimensional functions with a neural network (a single hidden layer). I experimented with different settings (learning rate, momentum, number of hidden nodes, etc.), and I seem to be able to achieve good results when approximating only one period (0, 2π) of, let's say, the sine function. When I try the sine function over multiple periods, things go bad very fast. The network does seem to approximate the first period in a decent way, but after that its output flattens into a constant line (somewhere between 0 and -0.5, depending on the setup). I tried many setups, but even with a very large number of iterations it does not get better. What seems to be the problem here? Isn't this an easy task for an ANN with dozens of hidden-layer neurons?

I use Python with the PyBrain package. The relevant code is here:

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet 
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.structure import TanhLayer
import numpy as np
import random

random.seed(0)

def rand(a, b): #returns random number between (a,b) 
    return (b-a)*random.random() + a

def singenerate_check(points_num,bottom,top): #generates random testing data
    pat_check = SupervisedDataSet(1,1)
    for i in range(points_num):
        current = rand(bottom,top)
        pat_check.addSample(current,np.sin(current))
    return pat_check    

ds = SupervisedDataSet(1,1) #initializing dataset
points_num = 100
element = 10 * np.pi / points_num
for i in range(points_num): #generating data
    ds.addSample(element*i+0.01,np.sin(element*i+0.01)+0.05*rand(-1,+1)) 
net = buildNetwork(1,20,1,bias=True)
trainer = BackpropTrainer(net,ds,learningrate=0.25,momentum=0.1,verbose=True)            
trainer.trainOnDataset(ds,30000) #number of iterations

testsample_count = 500
pat_check = singenerate_check(testsample_count, 0, 10*np.pi)
for j in range(testsample_count):
    print("Sample: " + str(pat_check.getSample(j)))
error = trainer.testOnData(pat_check, verbose=True) #verifying    

A neural network with one hidden layer can approximate any continuous function on a finite interval to any required accuracy. However, the length of the interval and the achievable accuracy depend on the number of hidden neurons.

When you increase the interval while keeping the number of hidden neurons the same, you decrease the accuracy. If you want to increase the interval without losing accuracy, you need to add more hidden neurons.
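This trade-off can be illustrated without a full backpropagation run. Below is a minimal sketch in plain NumPy (not the asker's PyBrain setup): it builds a hidden layer of random tanh features over the same five-period interval (0, 10π) and solves only the output weights by least squares. The feature construction (random slopes and centers) and the unit counts are illustrative assumptions; the point is the trend — with the interval fixed, more hidden units give a smaller fitting error.

```python
import numpy as np

def fit_tanh_features(x, y, n_hidden, seed=0):
    """Random tanh hidden layer; output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 2.0, n_hidden)           # random input weights (slopes)
    c = rng.uniform(x.min(), x.max(), n_hidden)  # random centers across the interval
    features = lambda xs: np.tanh(np.outer(xs, w) - w * c)  # tanh(w*(x - c))
    coef, *_ = np.linalg.lstsq(features(x), y, rcond=None)
    return lambda xs: features(xs) @ coef

x = np.linspace(0.01, 10 * np.pi, 500)  # five periods of sine, as in the question
y = np.sin(x)

errors = {}
for n in (10, 50, 200):
    model = fit_tanh_features(x, y, n)
    errors[n] = np.sqrt(np.mean((model(x) - y) ** 2))
    print(n, errors[n])
```

With 10 hidden units the fit cannot track five periods and the error stays large; with 200 units it drops sharply. Conversely, keeping the unit count fixed while widening the interval raises the error — which is what the flat-line behavior in the question reflects.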
