Neural network activation
Neural networks have a so-called "activation function"; it is usually some form of sigmoid-like function that maps a neuron's inputs to its output.
http://zephyr.ucd.ie/mediawiki/images/b/b6/Sigmoid.png
For you the output happens to be either 0 or 1, and you use a comparison instead of a sigmoid function, so your activation curve will be even sharper than the graph above. In that graph, your threshold t sits at 0 on the X axis.
So, as pseudocode:
sum = w1 * I1 + w2 * I2 + ... + wn * In
sum is the weighted sum of all the inputs to the neuron; now all you have to do is compare that sum to the threshold t:
if sum >= t then y = 1 // Your neuron is activated
else y = 0
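The pseudocode above can be sketched as a small Python function (the weights and threshold in the usage example are illustrative, not from the original answer):

```python
def neuron(inputs, weights, t):
    """Threshold (step) activation: fire 1 if the weighted sum reaches t."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= t else 0

# Example: with weights (1, 1) and threshold 2, the neuron acts like AND.
print(neuron([1, 1], [1, 1], 2))  # 1 -- both inputs on, sum = 2 >= 2
print(neuron([1, 0], [1, 1], 2))  # 0 -- sum = 1 < 2
```

The comparison `s >= t` is exactly the sharp, step-shaped activation described above, as opposed to the smooth sigmoid.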
You can use the last neuron's output as the network's output to predict something as 1/0, true/false, etc.
If you're studying NNs, I'd suggest you start with the XOR problem; then it will all make sense.
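XOR is not linearly separable, so a single threshold neuron cannot compute it, but two layers of them can. A minimal sketch using one common choice of weights (OR and NAND feeding into AND; these particular weights are an illustration, not the only solution):

```python
def neuron(inputs, weights, t):
    """Threshold activation: 1 if the weighted sum reaches t, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= t else 0

def xor(a, b):
    h1 = neuron([a, b], [1, 1], 1)      # hidden neuron 1: OR
    h2 = neuron([a, b], [-1, -1], -1)   # hidden neuron 2: NAND
    return neuron([h1, h2], [1, 1], 2)  # output neuron: AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 only on the diagonal (0,0) and (1,1)
```

The hidden layer is what makes this work: XOR is true exactly when at least one input is on (OR) and not both are on (NAND), and the output neuron ANDs those two conditions together.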