How can I create a locally-connected layer with both locally-connected and dense parent layers?
I currently have a network (image of toy example) with two input layers. in1 is just a short, flat vector of values, but in2 is a 27-channel image. I want my network to be structured around locally-connected layers, but I don't know a good way to sprinkle in1's data in with in2. I currently flatten in2's branch after a few layers, merge it with in1, and add dense layers onward.
How can I densely introduce in1's data while maintaining the locally-connected architecture? The image linked above shows this goal with a red arrow.
One possible solution that I came up with is to copy in1's vector as channels to in2, such that in2's dimension would be width * height * (num_original_channels + len(in1)). This seems inelegant because it would copy in1 many times. There must be a better way.
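To see why this copy-as-channels approach is wasteful, here is a minimal numpy sketch of the dimension bookkeeping (the shapes are made-up toy values, not the ones from the question):

```python
import numpy as np

# Hypothetical toy shapes: a 4x4 image with 3 channels and a length-5 flat vector.
H, W, C = 4, 4, 3
in2 = np.random.rand(H, W, C)
in1 = np.random.rand(5)

# Copy in1's values into every spatial position as extra channels.
in1_tiled = np.broadcast_to(in1, (H, W, in1.size))
combined = np.concatenate([in2, in1_tiled], axis=-1)

print(combined.shape)  # (4, 4, 8) == H x W x (C + len(in1))
```

Every one of the H * W spatial positions carries an identical copy of in1, which is the redundancy being objected to.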
I'm new to Keras, so please pardon my shaky vocabulary. Also, this is a toy example just to illustrate my idea, so there may be some other/unrelated architectural criticisms.
Thanks in advance for any advice!
For what it's worth, here is the code I am using:
input1 = Input( ... ) #small flat vec
input2 = Input( ... ) #deep picture
pre = Reshape( ... )( input2 )
l1 = LocallyConnected2D( ... )( pre )
l2 = LocallyConnected2D( ... )( l1 )
l3 = LocallyConnected2D( ... )( l2 )
flat = Flatten( ... )( l3 )
merge = tensorflow.keras.layers.concatenate( [flat, input1], ... )
l4 = Dense( ... )( merge )
l5 = Dense( ... )( l4 )
output = Dense( ... )( l5 )
Answering my own question here. It seems like the best solution is to have input1 and input2 each feed a layer with no activation function, so that they produce two separate tensors of the same shape, then sum those tensors together and apply the activation afterwards.
Using my example from before, it would look something like this (I'm adding example dimensions to hopefully clarify what I mean; they're made up from thin air):
input1 = Input( ... ) #small flat vec, 1x200
input2 = Input( ... ) #deep picture, 50x50x10
l1 = LocallyConnected2D( activation=None, ... )( input2 ) # 40x40x5
num_elements = 40 * 40 * 5
d1 = Dense( units=num_elements, activation=None, ... )( input1 ) # 1x8000
d1_3D = Reshape( target_shape=(40, 40, 5,) )( d1 ) #40x40x5
merge = Add()([ l1, d1_3D ]) #40x40x5
l2 = LeakyReLU( ... )( merge ) #Or whatever activation function you want, 40x40x5
# ...
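The shape arithmetic behind that merge can be checked with a plain numpy sketch (random stand-in arrays, not real layer outputs): the dense branch emits exactly 40 * 40 * 5 = 8000 values, which reshape to match the locally-connected branch for an elementwise add before the activation.

```python
import numpy as np

# Stand-ins for the two branch outputs in the example above:
lc_out = np.random.rand(40, 40, 5)       # l1: LocallyConnected2D output, 40x40x5
dense_out = np.random.rand(40 * 40 * 5)  # d1: Dense output, flat 8000-vector

# Reshape the dense branch to match, sum pre-activation, then activate.
dense_3d = dense_out.reshape(40, 40, 5)
merged = lc_out + dense_3d               # what Add() computes elementwise

# LeakyReLU applied after the merge (alpha=0.01 chosen arbitrarily here).
leaky = np.where(merged > 0, merged, 0.01 * merged)
print(leaky.shape)  # (40, 40, 5)
```

Because the activation is applied after the sum, each output unit effectively sees a locally-connected contribution from input2 plus a dense contribution from input1, which is the "dense parent + locally-connected parent" structure the question asked for.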