
Reusing a group of Keras layers

I know that you can reuse Keras layers. For example, I declare two layers for a decoder network:

decoder_layer_1 = Dense(intermediate_dim, activation='relu', name='decoder_layer_1')
decoder_layer_2 = Dense(intermediate_dim, activation='relu', name='decoder_layer_2')

Use in the first model:

decoded = decoder_layer_1(z)
decoded = decoder_layer_2(decoded)

Use in the second model:

_decoded = decoder_layer_1(decoder_input)
_decoded = decoder_layer_2(_decoded)

The above method is fine if I only need to reuse a couple of layers, but it becomes cumbersome if I want to reuse a large number of layers (e.g. a decoder network with 10 layers). Is there a more efficient way to do this than explicitly declaring each layer? Is there a way to implement it as shown below:

decoder_layers = group_of_layers() 

Reuse in the first model:

decoded = group_of_layers(z)

Reuse in the second model:

_decoded = group_of_layers(decoder_input)

I struggled with this problem too. What works for me is to wrap the shared parts in a model with its own input definition:

def group_of_layers(intermediate_dim):
    shared_model_input = keras.layers.Input(shape=...)
    shared_internal_layer = keras.layers.Dense(intermediate_dim, activation='relu', name='shared_internal_layer')(shared_model_input)
    shared_model_output = keras.layers.Dense(intermediate_dim, activation='relu', name='shared_model_output')(shared_internal_layer)
    return keras.models.Model(shared_model_input, shared_model_output)

In the Functional API, you can use the shared model in the same way as a single layer, as long as the model's input layer matches the shape of the layer you apply it to:

group = group_of_layers(intermediate_dim)
result1 = group(previous_layer)
result2 = group(different_previous_layer)

The weights will then be shared.
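As a quick check of the weight sharing, here is a minimal sketch (the dimensions are hypothetical, chosen just for illustration): both outer models end up containing the very same sub-model object, so there is only one set of decoder weights.

```python
from tensorflow import keras

# Hypothetical sizes, just for illustration
input_dim, intermediate_dim = 8, 4

def group_of_layers(intermediate_dim):
    shared_model_input = keras.layers.Input(shape=(input_dim,))
    x = keras.layers.Dense(intermediate_dim, activation='relu')(shared_model_input)
    shared_model_output = keras.layers.Dense(intermediate_dim, activation='relu')(x)
    return keras.models.Model(shared_model_input, shared_model_output)

group = group_of_layers(intermediate_dim)

# Apply the same sub-model to two different inputs
in_a = keras.layers.Input(shape=(input_dim,))
in_b = keras.layers.Input(shape=(input_dim,))
model_a = keras.models.Model(in_a, group(in_a))
model_b = keras.models.Model(in_b, group(in_b))

# Both outer models contain the very same sub-model, hence shared weights:
# training model_a updates the decoder weights seen by model_b, and vice versa.
print(model_a.layers[1] is model_b.layers[1])
```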

This is nicely described in the documentation; see Shared vision model.

You can try:

def group_of_layers(x, intermediate_dim):
    x = Dense(intermediate_dim, activation='relu', name='decoder_layer_1')(x)
    x = Dense(intermediate_dim, activation='relu', name='decoder_layer_2')(x)
    return x

And then:

decoded = group_of_layers(z, intermediate_dim)
_decoded = group_of_layers(decoder_input, intermediate_dim)

You must declare the inputs and outputs of the model afterwards though, e.g. for the second model:

model = Model(inputs=decoder_input, outputs=_decoded)

You could also append a final layer, like:

final_layer = Dense(...)(_decoded)
model = Model(inputs=decoder_input, outputs=final_layer)
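Putting this approach together, here is a runnable sketch (the dimensions are hypothetical). One caveat worth noting: each call to the helper function creates fresh Dense layers, so unlike the shared-model approach above, the two models here reuse the code but not the weights.

```python
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Hypothetical dimensions for illustration
latent_dim, intermediate_dim, output_dim = 2, 4, 8

def group_of_layers(x, intermediate_dim):
    # Each call builds *new* Dense layers: the two models below
    # share the architecture definition, not the weights.
    x = Dense(intermediate_dim, activation='relu')(x)
    x = Dense(intermediate_dim, activation='relu')(x)
    return x

z = Input(shape=(latent_dim,))
decoder_input = Input(shape=(latent_dim,))

decoded = group_of_layers(z, intermediate_dim)
_decoded = group_of_layers(decoder_input, intermediate_dim)

# Append a final layer to the second model, as suggested above
final_layer = Dense(output_dim)(_decoded)

model_1 = Model(inputs=z, outputs=decoded)
model_2 = Model(inputs=decoder_input, outputs=final_layer)

print(model_1.output_shape, model_2.output_shape)
```

If you do need the weights themselves shared between the two models, prefer the Model-wrapping approach.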
