
Different output of model.fit (after loading a model, no training) and model.predict in Keras

The function get_vgg_twoeyes() below defines my model. I have loaded a pretrained model, which was trained on the same computer, and I now want to fine-tune it. Before retraining, I set model.trainable to False to make sure the weights stay fixed; before training starts, the weights are identical to the saved weights. However, I found that the loss/metrics reported by model.fit differ from what model.predict produces. I assumed that model.fit with the same weights should give the same result as model.predict, because with model.trainable set to False, model.fit should behave like model.predict.

from keras import initializers
from keras.models import Model, load_model
from keras.layers import Input, Dense, Activation, BatchNormalization, GlobalAveragePooling2D, concatenate
from keras.applications import VGG16, VGG19
from keras.optimizers import Adam

# angle_loss and accuracy_angle are my custom loss/metric functions, defined elsewhere

def get_vgg_twoeyes(optimizer='adam', model_type='VGG16',
                    fc1_size=1024, fc2_size=512, fc3_size=256):
    kern_init = initializers.glorot_normal()
    img_input = Input(shape=(36, 60, 3), name='img_input')
    headpose_input = Input(shape=(2,), name='headpose_input')

    # create the base pre-trained model
    if model_type == 'VGG19':
        base_model = VGG19(input_tensor=img_input, weights='imagenet', include_top=False)
    elif model_type == 'VGG16':
        base_model = VGG16(input_tensor=img_input, weights='imagenet', include_top=False)
    else:
        raise Exception('Unknown model type in get_vgg_twoeyes')

    # add a global spatial average pooling layer
    x = base_model.output
    x = GlobalAveragePooling2D()(x)

    # let's add a fully-connected layer
    x = Dense(fc1_size, kernel_initializer=kern_init)(x)
    x = concatenate([x, headpose_input])
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    x = Dense(fc2_size, kernel_initializer=kern_init)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)

    gaze_predictions = Dense(2, kernel_initializer=kern_init, name='pred_gaze')(x)

    # this is the model we will train
    model = Model(inputs=[img_input, headpose_input], outputs=gaze_predictions)
    model.compile(optimizer=optimizer, loss=angle_loss,
                  metrics=['accuracy', accuracy_angle])
    return model

# fine-tune the model
model = load_model(model_path + "15Fold" + prefix + str(i) + suffix + ".h5",
                   custom_objects={'accuracy_angle': accuracy_angle,
                                   'angle_loss': angle_loss})

model.trainable = False
adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.95)
model.compile(optimizer=adam, loss=angle_loss,
              metrics=['accuracy', accuracy_angle])
model.fit({'img_input': cal_images, 'headpose_input': cal_headposes},
          cal_gazes, shuffle=False, batch_size=32, epochs=1,
          callbacks=[losshistory()])  # losshistory is my custom callback

predgaze = model.predict({'img_input': cal_images, 'headpose_input': cal_headposes},
                         batch_size=2, verbose=1)
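
To double-check the premise that the weights really stay fixed during the fit call, and to compare fit's reported loss with the loss computed in inference mode, a minimal sketch like the one below could be used (this is an illustration, not part of the original code; weights_before and weights_after are names introduced here):

import numpy as np

# snapshot the weights, run the same fit call as above, and compare
weights_before = model.get_weights()
model.fit({'img_input': cal_images, 'headpose_input': cal_headposes},
          cal_gazes, shuffle=False, batch_size=32, epochs=1)
weights_after = model.get_weights()
print(all(np.array_equal(a, b) for a, b in zip(weights_before, weights_after)))

# loss/metrics computed in inference mode, for comparison with fit's printed loss
print(model.evaluate({'img_input': cal_images, 'headpose_input': cal_headposes},
                     cal_gazes, batch_size=32, verbose=0))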

You probably have to compile the model again after setting model.trainable=False. Otherwise, you can freeze the layers individually, like

for l in model.layers:
    l.trainable = False
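
For example, a minimal sketch of the freeze-then-recompile order (assuming the model has already been loaded with load_model, and that angle_loss and accuracy_angle are the custom functions from the question):

from keras.optimizers import Adam

# freeze every layer individually, then recompile so the change takes effect
for layer in model.layers:
    layer.trainable = False

adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.95)
model.compile(optimizer=adam, loss=angle_loss,
              metrics=['accuracy', accuracy_angle])

# with all layers frozen and the model recompiled, fit is expected
# not to update the trainable weights
model.fit({'img_input': cal_images, 'headpose_input': cal_headposes},
          cal_gazes, shuffle=False, batch_size=32, epochs=1)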
