How To Use 3 Neurons in a Neural Network?
This is a classical visualization of the perceptron learning model with 1 neuron. Let's say that I'd like to use 3 neurons or 5 neurons for training; can I do that without a hidden layer? I just can't picture it in my head. Here is the code:
import numpy as np

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def tanh_derivative(x):
    # x is the tanh output itself, so d/dz tanh(z) = 1 - tanh(z)**2 = 1 - x**2
    return 1 - x**2

# inputs: all 8 combinations of 3 binary features
training_inputs = np.array([[0,0,0],[0,0,1],[0,1,0],[0,1,1],
                            [1,0,0],[1,0,1],[1,1,0],[1,1,1]])
# outputs: one target value per sample
training_outputs = np.array([[1,0,0,1,0,1,1,0]]).T

# 3 inputs, 1 output: weights initialized uniformly in [-1, 1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1
print('Random weights: {}'.format(synaptic_weights))

for i in range(20000):
    input_layer = training_inputs
    outputs = tanh(np.dot(input_layer, synaptic_weights))
    error = training_outputs - outputs
    weight_adjust = error * tanh_derivative(outputs)
    synaptic_weights += np.dot(input_layer.T, weight_adjust)

print('After training Synaptic Weights: {}'.format(synaptic_weights))
print('\n')
print('After training Outputs:\n{}'.format(outputs))
If you have 3 neurons in the output layer, you have three outputs. This makes sense for some problems: imagine a color with RGB components.
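Concretely, here is a minimal sketch of your loop widened to 3 output neurons with no hidden layer; the 3-column targets below are invented purely for illustration, so substitute whatever your problem actually needs:

import numpy as np

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

def tanh_derivative(x):
    return 1 - x**2  # x is already a tanh output here

training_inputs = np.array([[0,0,0],[0,0,1],[0,1,0],[0,1,1],
                            [1,0,0],[1,0,1],[1,1,0],[1,1,1]])

# 8 samples x 3 targets now, instead of 8 x 1 (values are made up)
training_outputs = np.array([[1,0,0],[0,1,0],[0,0,1],[1,1,0],
                             [0,1,1],[1,0,1],[1,1,1],[0,0,0]])

# weight matrix is 3 inputs x 3 output neurons; column j holds neuron j's weights
synaptic_weights = 2 * np.random.random((3, 3)) - 1

for i in range(20000):
    outputs = tanh(np.dot(training_inputs, synaptic_weights))  # shape (8, 3)
    error = training_outputs - outputs
    synaptic_weights += np.dot(training_inputs.T, error * tanh_derivative(outputs))

print(outputs)

Note that each output column is still just an independent single-neuron perceptron over the same 3 inputs, which is why no hidden layer is needed for this step.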
The size of your input determines your number of input nodes; the size of your output determines your number of output nodes. Only hidden layer sizes can be chosen freely. But any interesting network has at least one hidden layer.
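For example, here is a minimal sketch of your code extended with one hidden layer; the hidden size of 4 and the learning rate of 0.1 are arbitrary choices, not tuned values:

import numpy as np

def tanh(x):
    return np.tanh(x)

def tanh_derivative(x):
    return 1 - x**2  # x is already a tanh output here

training_inputs = np.array([[0,0,0],[0,0,1],[0,1,0],[0,1,1],
                            [1,0,0],[1,0,1],[1,1,0],[1,1,1]])
training_outputs = np.array([[1,0,0,1,0,1,1,0]]).T

hidden_size = 4                                   # free to choose
w1 = 2 * np.random.random((3, hidden_size)) - 1   # input -> hidden
w2 = 2 * np.random.random((hidden_size, 1)) - 1   # hidden -> output

learning_rate = 0.1
for i in range(20000):
    hidden = tanh(np.dot(training_inputs, w1))    # (8, hidden_size)
    outputs = tanh(np.dot(hidden, w2))            # (8, 1)

    # backpropagate: output error first, then push it through w2 to the hidden layer
    output_delta = (training_outputs - outputs) * tanh_derivative(outputs)
    hidden_delta = np.dot(output_delta, w2.T) * tanh_derivative(hidden)

    w2 += learning_rate * np.dot(hidden.T, output_delta)
    w1 += learning_rate * np.dot(training_inputs.T, hidden_delta)

print(outputs)

This matters for your particular targets: [1,0,0,1,0,1,1,0] is the 3-bit parity function, which is not linearly separable, so a single layer of neurons cannot fit it exactly; the hidden layer is what makes it learnable.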