
Make Your Own Neural Network Back Propagation

I'm reading through the Make Your Own Neural Network book, and in the chapter where the author describes back propagation I find myself confused. I would like to relate the author's explanation to his example of a 3-layer network with 2 nodes per layer, as shown in the image below:

[Image: back propagation network]

If I construct the matrix representation of back propagation for the above neural network, it looks like this:

[Image: matrix form of back propagation]

Where W^T_hidden_output is the transpose of the hidden-to-output weight matrix, so the matrix representation in detail is:

[Image: matrix details]

So if I now want to calculate the hidden errors (e1_hidden and e2_hidden), I have the following:

e1_hidden = W11 * e1 + W12 * e2

e2_hidden = W21 * e1 + W22 * e2

But if I apply the values given in the example, where e1 = 1.5 and e2 = 0.5, I do not get e1_hidden = 0.7 and e2_hidden = 1.3.
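Put in code, this is the plain transpose-multiplication I am doing; a minimal numpy sketch, where the weight values (W11 = 2.0, W21 = 3.0, W12 = 1.0, W22 = 4.0) are my assumption of what the book's figure shows:

import numpy as np

# Hidden-to-output weights laid out as in the book's matrix form:
# row = output node, column = hidden node.
# The numeric values are an assumption read off the figure.
w_hidden_output = np.array([[2.0, 3.0],   # W11, W21 (links into output node 1)
                            [1.0, 4.0]])  # W12, W22 (links into output node 2)

e_output = np.array([1.5, 0.5])           # e1, e2

# Plain multiplication by the transposed weight matrix
e_hidden = w_hidden_output.T @ e_output
print(e_hidden)                           # [3.5 6.5], not the 0.7 / 1.3 from the book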

Where am I going wrong with my understanding / calculation? Any help?

You are describing error back propagation based on simply multiplying by the link weights, whereas the picture splits the error in proportion to the link weights. Both approaches are described, e.g., on this webpage:

[Image: error split in proportion to the link weights]

In the picture, you see that the error is split in proportion to the link weights, e.g. e1 = 1.5 is split into 0.6 and 0.9 according to the weights W11 = 2.0 and W21 = 3.0, since 1.5 * 2.0/(2.0 + 3.0) = 0.6 and 1.5 * 3.0/(2.0 + 3.0) = 0.9. (Note also that the weight subscripts in the picture are wrong: only W11 is labelled, and all the others are denoted W12 ...)

The split-up error shares arriving at each hidden node are then summed to give that node's hidden-layer error, e.g.:

e1_hidden = 0.6 + 0.1 = 0.7, and likewise e2_hidden = 0.9 + 0.4 = 1.3.
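As a minimal numpy sketch of this proportional splitting (reusing the weight values assumed in the question's sketch):

import numpy as np

# Same assumed weights: row = output node, column = hidden node
w_hidden_output = np.array([[2.0, 3.0],
                            [1.0, 4.0]])
e_output = np.array([1.5, 0.5])

# Each output error is split across its incoming links in proportion
# to the link weights, then the shares arriving at each hidden node
# are summed.
fractions = w_hidden_output / w_hidden_output.sum(axis=1, keepdims=True)
e_hidden = fractions.T @ e_output
print(e_hidden)    # [0.7 1.3]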

