
Custom layers in tensorflow/keras - are these two options equal?

According to https://www.tensorflow.org/guide/keras/custom_layers_and_models#layers_are_recursively_composable, custom layers in TensorFlow 2 / Keras can be defined like this:

from tensorflow.keras import layers

class CustomLayer(layers.Layer):
    def __init__(self, ntimes):
        super().__init__()
        # the Conv2D sub-layers are created once and reused on every call
        self.convs = [layers.Conv2D(10, (3, 3), padding='same') for _ in range(ntimes)]

    def call(self, x, **kwargs):
        for conv in self.convs:
            x = conv(x)
        return x

However, recently I came across another notation:

def CustomLayer(ntimes):
    def layer(x):
        for i in range(ntimes):
            # a fresh Conv2D (with its own weights) is instantiated here each time layer() is called
            x = layers.Conv2D(10, (3, 3), padding='same')(x)
        return x

    return layer

Obviously it is not an instance of the Layer class, but the resulting operations seem to be identical. Or am I missing something? Are there any downsides to this approach?


PS: Such layers would be used in the following exemplary context:

from tensorflow.keras import layers, models

xin = layers.Input((10, 10, 3))
layer = CustomLayer(5)
xout = layer(xin)
model = models.Model(xin, xout)
model.summary()

In my opinion, what you are doing in the second notation is creating a block of layers with predefined functionality rather than a new custom layer. To elaborate: you can still access each layer inside the block individually, so the block itself is not a Layer. Blocks can be built with either approach, but if you want to implement your own logic inside a layer, or modify that particular layer in some way, such as printing its results at each epoch, then you need the first method; see the sketch below.
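For example, here is a minimal sketch of the kind of per-layer customization that requires subclassing Layer. The extra scale weight and the tf.print call are hypothetical additions for illustration, not part of the original code:

import tensorflow as tf
from tensorflow.keras import layers

class CustomLayer(layers.Layer):
    def __init__(self, ntimes):
        super().__init__()
        self.convs = [layers.Conv2D(10, (3, 3), padding='same') for _ in range(ntimes)]

    def build(self, input_shape):
        # extra trainable state owned by the layer itself
        self.scale = self.add_weight(name='scale', shape=(), initializer='ones')

    def call(self, x, **kwargs):
        for conv in self.convs:
            x = conv(x)
        # inspect the output of this particular layer, e.g. for debugging
        tf.print('CustomLayer output mean:', tf.reduce_mean(x))
        return x * self.scale

With the plain function from the second notation there is no single layer object that owns this state or behaviour; each Conv2D shows up as a separate layer of the resulting model.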
