
Transfer Learning - How can I change only the output layer in TensorFlow?

I am trying to apply an idea proposed by Rusu et al. in https://arxiv.org/pdf/1511.06295.pdf , which consists of training a NN while switching the output layer according to the class of the input: provided that we know the id of the input, we pick the corresponding output layer. This way, all the hidden layers are trained on all the data, but each output layer is trained only on its corresponding type of input data.

This is meant to achieve good results in a transfer-learning setting.

How can I implement this "change of the last layer" in TensorFlow 2.0?

If you use model subclassing, you can define your own forward pass.

import tensorflow as tf
from tensorflow.keras import layers

class MyModel(tf.keras.Model):

    def __init__(self):
        super(MyModel, self).__init__()
        # BlockA and BlockB are placeholder sub-networks; num_classes is
        # assumed to be defined elsewhere.
        self.block_1 = BlockA()
        self.block_2 = BlockB()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs):
        # Route the input through a different branch depending on `condition`
        # (e.g. the id of the input).
        if condition:
            x = self.block_1(inputs)
        else:
            x = self.block_2(inputs)
        x = self.global_pool(x)
        return self.classifier(x)

You'll still have the backpropagation part to figure out, but I think it's fairly easy if you use a multi-output model and train all your "last layers" at the same time. A rough sketch of that idea follows.
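To make that concrete, here is a minimal sketch of the multi-head approach in TF 2.x eager mode. MultiHeadModel, NUM_HEADS, NUM_CLASSES, and train_step are hypothetical names introduced for illustration, not part of the original answer; the key point is that each gradient update touches only the shared trunk and the head selected by the input id.

import tensorflow as tf
from tensorflow.keras import layers

NUM_HEADS = 3       # hypothetical number of input ids / tasks
NUM_CLASSES = 10    # hypothetical number of classes per head

class MultiHeadModel(tf.keras.Model):
    def __init__(self):
        super(MultiHeadModel, self).__init__()
        # Shared trunk: trained on data from every input id.
        self.trunk = tf.keras.Sequential([
            layers.Conv2D(32, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
        ])
        # One output layer per input id, trained only on its own data.
        self.heads = [layers.Dense(NUM_CLASSES) for _ in range(NUM_HEADS)]

    def call(self, inputs, head_id=0):
        x = self.trunk(inputs)
        return self.heads[head_id](x)

model = MultiHeadModel()
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def train_step(x, y, head_id):
    with tf.GradientTape() as tape:
        logits = model(x, head_id=head_id)
        loss = loss_fn(y, logits)
    # Restrict the update to the shared trunk plus the selected head,
    # so the other heads never see gradients from this batch.
    variables = (model.trunk.trainable_variables
                 + model.heads[head_id].trainable_variables)
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss

# Example: a dummy batch routed to head 1.
x = tf.random.normal([8, 32, 32, 3])
y = tf.random.uniform([8], maxval=NUM_CLASSES, dtype=tf.int32)
train_step(x, y, head_id=1)

In practice you would batch your data by input id and call train_step with the matching head, which gives exactly the behaviour described in the question: shared hidden layers see everything, while each output layer sees only its own class of data.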
