
How to implement a neural network model, with fixed correspondence between the input layer and the first hidden layer specified?

I would like to implement a feed-forward neural network, with the only difference from a usual one being that I manually control the correspondence between the input features and the first-hidden-layer neurons. For example, the input layer has features f1, f2, ..., f100, and the first hidden layer has h1, h2, ..., h10. I want the first 10 features f1-f10 fed into h1, f11-f20 fed into h2, and so on.

Graphically: unlike dropout, the common deep-learning technique that prevents over-fitting by randomly omitting hidden nodes in a given layer, what I want here is to statically (i.e. with a fixed pattern) omit certain edges between the input layer and the hidden layer.
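For instance, the fixed pattern of kept edges can be written down as a binary mask over the first weight matrix. Here is a minimal NumPy sketch of the intended connectivity (the 100-feature / 10-unit shapes are taken from the example above; the mask layout is my assumption from the f1-f10 → h1 grouping):

```python
import numpy as np

n_in, n_hid, block = 100, 10, 10  # f1..f100 feed h1..h10, 10 features per unit

# Fixed binary mask: entry (i, j) is 1 only when feature i is wired to unit j.
mask = np.zeros((n_in, n_hid))
for j in range(n_hid):
    mask[j * block:(j + 1) * block, j] = 1.0

w = np.random.randn(n_in, n_hid)   # an ordinary dense weight matrix
w_masked = w * mask                # edges outside each block are removed

x = np.random.randn(n_in)
h = x @ w_masked                   # h[j] depends only on x[j*10:(j+1)*10]
```

Re-applying the same mask after every gradient update (or before every forward pass) keeps the omitted edges at exactly zero, which is the static counterpart of dropout's random omission.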

I am implementing it in TensorFlow and haven't found a way to specify this requirement. I also looked into other platforms such as PyTorch and Theano, but still haven't found an answer. Any idea for an implementation in Python would be appreciated!

Take the snippet below:

#!/usr/bin/env python3

import tensorflow as tf

features = tf.constant([1, 2, 3, 4])

hidden_1 = tf.constant([1, 1])
hidden_2 = tf.constant([2, 2])

# Each hidden "unit" sees only its own slice of the input.
res1 = hidden_1 * tf.slice(features, [0], [2])  # features[0:2] -> [1, 2]
res2 = hidden_2 * tf.slice(features, [2], [2])  # features[2:4] -> [3, 4]

final = tf.concat([res1, res2], axis=0)

sess = tf.InteractiveSession()  # TF 1.x session API
print(sess.run(final))

Assume features are your input features. With tf.slice they are split into individual slices, each slice at that point being a separate branch of the graph (in this example each is multiplied with hidden_1 or hidden_2), and at the end they are merged back together with tf.concat.

The result is [1, 2, 6, 8], because [1, 2] is multiplied with [1, 1] and [3, 4] is multiplied with [2, 2].

Below is the graph produced (image of the resulting TensorFlow graph).

I finally implemented the requirement by forcing certain blocks of the first layer's weight matrix to be constant zero. That is, rather than defining w1 = tf.Variable(tf.random_normal([100, 10])) directly, I define ten 10-by-1 weight vectors and concatenate them with zeros to form a block-diagonal matrix as the final w1.
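A minimal NumPy sketch of that construction (shapes assumed from the 100-feature, 10-unit example above; in the actual model each 10-by-1 block would be a trainable variable):

```python
import numpy as np

# Ten 10-by-1 weight vectors, one per hidden unit.
blocks = [np.random.randn(10, 1) for _ in range(10)]

# Pad each vector with zeros so it occupies only its own column band,
# then concatenate the columns into a 100 x 10 block-diagonal matrix.
cols = []
for j, b in enumerate(blocks):
    col = np.zeros((100, 1))
    col[j * 10:(j + 1) * 10, :] = b
    cols.append(col)
w1 = np.concatenate(cols, axis=1)  # final first-layer weight matrix
```

In TensorFlow the same concatenation can be built with tf.concat from tf.Variable blocks and tf.zeros padding, so that only the ten small vectors are trainable and the omitted edges can never become non-zero during training.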

