
Neural network with 2D array input and 1D array output

I have got some problems writing my simple neural network. I was learning about neural networks in Python from the "Neural network in 11 lines" guide ( https://www.kdnuggets.com/2015/10/neural-network-python-tutorial.html ). There, the input was a 2D array (the first dimension indexed the examples, the second held each example's values), and the output was a 1D array. So now I tried to do something similar. I had an input array for learning with 1000 examples, where each example has 64 neurons:

import numpy as np

n0 = np.zeros((1000, 64))

After that I filled the array with data from my dataset. My weights were:

w0 = 2 * np.random.random((64, 120))-1
w1 = 2 * np.random.random((120, 240))-1
w2 = 2 * np.random.random((240, 240))-1
w3 = 2 * np.random.random((240, 240))-1
w4 = 2 * np.random.random((240, 120))-1
w5 = 2 * np.random.random((120, 44))-1

And the forward pass was:

n1 = sigmoid(np.dot(n0, w0))
n2 = sigmoid(np.dot(n1, w1))
#...
n6 = sigmoid(np.dot(n5, w5))
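For reference, here is a self-contained version of that forward pass. The `sigmoid` definition is assumed (the standard logistic function, which the snippet omits):

```python
import numpy as np

def sigmoid(x):
    # Standard logistic function, mapping any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

n0 = np.zeros((1000, 64))  # 1000 examples, 64 input neurons each

w0 = 2 * np.random.random((64, 120)) - 1
w1 = 2 * np.random.random((120, 240)) - 1
w2 = 2 * np.random.random((240, 240)) - 1
w3 = 2 * np.random.random((240, 240)) - 1
w4 = 2 * np.random.random((240, 120)) - 1
w5 = 2 * np.random.random((120, 44)) - 1

n1 = sigmoid(np.dot(n0, w0))
n2 = sigmoid(np.dot(n1, w1))
n3 = sigmoid(np.dot(n2, w2))
n4 = sigmoid(np.dot(n3, w3))
n5 = sigmoid(np.dot(n4, w4))
n6 = sigmoid(np.dot(n5, w5))

print(n6.shape)  # (1000, 44): one row of 44 outputs per example
```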

After that, n6 has size 1000x44. How can I get a 1D array instead of a 2D array? Also, after the weight updates, neurons can get strange numbers like 6.72853722e-172... And in n6 the answers are 1.00000000e+000 and 0.00000000e-000; how can that happen after the sigmoid function?

After that, n6 has size 1000x44. How can I get a 1D array instead of a 2D array?

The reason you're getting an output array with dimensions 1000x44 is that n6 has 44 output nodes and your input data has 1000 examples (meaning you're running the network on all examples at once).

In other words, your output layer is producing an "activation" for every example in your dataset; that's normal and expected. If you were feeding the network one example at a time, the output array would be 1x44 (or just 44).
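A quick shape check illustrates this: each row of the batched output already is the 1D answer for one example, so indexing a single row gives a plain 1D array. (The random array here is just a stand-in for the network's output.)

```python
import numpy as np

rng = np.random.default_rng(0)
n6 = rng.random((1000, 44))   # stand-in for the network's (1000, 44) output batch

# Row i holds the 44 output activations for example i;
# indexing one row yields a 1D array of length 44:
one_example = n6[0]
print(one_example.shape)      # (44,)
```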


Also, after the weight updates, neurons can get strange numbers like 6.72853722e-172... And in n6 the answers are 1.00000000e+000 and 0.00000000e-000; how can that happen after the sigmoid function?

Sigmoid produces values between 0 and 1. So 6.72853722e-172 (that is, 6.72 * 10^-172), 1.00000000e+000, and 0.00000000e-000 are all between 0 and 1, so that's normal too. Mathematically sigmoid never reaches exactly 0 or 1, but in 64-bit floating point it rounds to exactly 1.0 once the input exceeds roughly 37, and underflows to exactly 0.0 once the input drops below roughly -710; that saturation is why some entries of n6 print as exactly 1.00000000e+000 or 0.00000000e+000.
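A quick numerical check of this saturation behavior, using the standard logistic form of sigmoid (assumed here, since the question's snippet doesn't show its definition):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# exp(-40) ~ 4.2e-18 is below float64's machine epsilon (~2.2e-16),
# so 1 + exp(-40) rounds to 1.0 and sigmoid saturates at exactly 1.0:
print(sigmoid(40.0))    # 1.0

# Large negative inputs first give tiny but nonzero values...
print(sigmoid(-40.0))   # ~4.25e-18

# ...and exactly 0.0 once exp(-x) overflows to infinity:
with np.errstate(over='ignore'):
    print(sigmoid(-800.0))  # 0.0
```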
