
What does training=False actually do for TensorFlow transfer learning?

I have this code right here:

import tensorflow as tf

base_model = tf.keras.applications.resnet_v2.ResNet50V2(
    input_shape=input_shape, include_top=False, weights='imagenet')
base_model.trainable = False

inputs = tf.keras.Input(shape=input_shape)
x = data_augmentation(inputs)  # data_augmentation is defined earlier in the guide
x = tf.keras.applications.resnet_v2.preprocess_input(x)
x = base_model(x, training=False)

What does training=False actually do when we use it on base_model? I know that training is a boolean that specifies whether the layer should run in training mode or inference mode, but even after following the Transfer Learning guide on TensorFlow, I can't figure out what it actually does.

We set base_model.trainable = False, which means the layers won't learn and we just use what they learnt from ImageNet. But what does base_model(x, training=False) do? I understand it keeps the base model in inference mode even while training, i.e. when I call the fit() method, but what is actually happening inside base_model when training is set to False?

I've read that it has something to do with fine-tuning and batch norm layers, but I am a bit lost.

Also, should I use fine-tuning? If I am not planning to use it because the model is performing well anyway, should I set training=True? Or not set that value at all?

In general, that depends on your layers. For example, a Dropout layer only zeroes values when training=True; with training=False it passes inputs through unchanged. Another example is the BatchNormalization layer, which behaves differently during training and inference: with training=True it normalizes using the statistics of the current batch (and updates its moving averages), while with training=False it normalizes using the accumulated moving mean and variance. For other layers, like a plain Dense layer, the flag makes no difference. If you really want to know all the details, you will have to read the documentation of each layer you use and check its specific behavior.
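A quick way to see this is to call a Dropout and a BatchNormalization layer directly with both flag values. This is a minimal standalone sketch using tf.keras, independent of the ResNet code above:

```python
import numpy as np
import tensorflow as tf

# Dropout: active only when training=True.
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 10))
inference_out = drop(x, training=False)  # identity: nothing is dropped
training_out = drop(x, training=True)    # ~half the units zeroed, the rest scaled by 1/(1-0.5)

# BatchNormalization: training=True normalizes with the current batch's
# statistics; training=False normalizes with the stored moving mean/variance
# (initially mean 0, variance 1, so a fresh layer barely changes the input).
bn = tf.keras.layers.BatchNormalization()
y = tf.random.normal((4, 10))
batch_stats_out = bn(y, training=True)    # per-feature mean ~0 across the batch
moving_stats_out = bn(y, training=False)  # uses moving statistics instead
```

With base_model(x, training=False), every such layer inside the base model is pinned to its inference behavior regardless of whether fit() is running, which is exactly what you want when the batch norm statistics learnt on ImageNet should stay fixed.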

