
Why can't I load my saved siamese model in Keras?

I'm trying to build a person re-identification (Re-ID) system, and I use a siamese architecture for model training. I use callbacks.ModelCheckpoint to save the model after each epoch, but an error occurs when I load a saved model.

I use the pre-trained VGG16 model as the base:

input_shape = (160,60,3)
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(160, 60, 3))

output = conv_base.layers[-5].output

x = Flatten()(output)
x = Dense(512, activation='relu')(x)
out = Dense(512, activation='relu')(x)

conv_base = Model(conv_base.input, outputs=out)

for layer in conv_base.layers[:-11]:
    layer.trainable = False

Create the siamese model:

# We have 2 inputs, 1 for each picture
left_input = Input((160,60,3))
right_input = Input((160,60,3))

# We will use 2 instances of 1 network for this task
convnet = Sequential([
    InputLayer(input_shape=(160, 60, 3)),
    conv_base
])
# Connect each 'leg' of the network to each input
# Remember, they have the same weights
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)

# Getting the L1 Distance between the 2 encodings
L1_layer = Lambda(lambda tensor:K.abs(tensor[0] - tensor[1]))

# Add the distance function to the network
L1_distance = L1_layer([encoded_l, encoded_r])

prediction = Dense(1,activation='sigmoid')(L1_distance)
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)

#optimizer = Adam(0.00006, decay=2.5e-4)
opt = optimizers.RMSprop(lr=1e-4)
# //TODO: get layerwise learning rates and momentum annealing scheme described in the paper working
siamese_net.compile(loss="binary_crossentropy", optimizer=opt, metrics=['accuracy'])
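The Lambda layer above computes an element-wise L1 distance between the two encodings produced by the shared network. The same operation in plain NumPy, with small stand-in vectors instead of the 512-dimensional embeddings, looks like this:

```python
import numpy as np

# Stand-in embeddings for the two "legs" of the siamese network
# (a batch of 2 samples with 4-dimensional encodings; sizes are illustrative).
encoded_l = np.array([[1.0, 2.0, 3.0, 4.0],
                      [0.5, 0.5, 0.5, 0.5]])
encoded_r = np.array([[1.0, 0.0, 5.0, 4.0],
                      [0.5, 1.5, 0.5, 0.5]])

# Element-wise L1 distance, which is what K.abs(tensor[0] - tensor[1]) computes:
l1 = np.abs(encoded_l - encoded_r)
print(l1)
```

The final sigmoid Dense layer then maps this distance vector to a single similarity score per pair.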

Train the network:

checkpoint = ModelCheckpoint(
    'drive/My Drive/thesis/new change parametr/model/model-{epoch:03d}.h5',
    monitor='val_loss', mode='auto', verbose=1, save_weights_only=False)

history = siamese_net.fit(
    [left_train, right_train], targets,
    batch_size=64,
    epochs=2,
    verbose=1,
    shuffle=True,
    validation_data=([valid_left, valid_right], valid_targets),
    callbacks=[checkpoint])

A model is saved at each epoch, but loading a saved checkpoint raises the following error:

loaded_model = load_model('drive/My Drive/thesis/new change parametr/model/model-001.h5')

Error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-6-8de2283b355f> in <module>()
      1 
----> 2 loaded_model= load_model('drive/My Drive/thesis/new change parametr/model/model-001.h5')
      3 print('Load succesfuly')
      4 
      5 #siamese_net.load_weights('drive/My Drive/thesis/new change parametr/weight/model-{epoch:03d}.h5')

7 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py in preprocess_weights_for_loading(layer, weights, original_keras_version, original_backend, reshape)
    939                                  str(weights[0].size) + '. ')
    940             weights[0] = np.reshape(weights[0], layer_weights_shape)
--> 941         elif layer_weights_shape != weights[0].shape:
    942             weights[0] = np.transpose(weights[0], (3, 2, 0, 1))
    943             if layer.__class__.__name__ == 'ConvLSTM2D':

IndexError: list index out of range

My code runs on Google Colaboratory. From searching online, the issue is probably caused by the siamese architecture. Any help would be appreciated!

The error occurred when loading a model saved from a network created like this:

input_shape = (160,60,3)
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(160, 60, 3))

output = conv_base.layers[-5].output

x = Flatten()(output)
x = Dense(512, activation='relu')(x)
out = Dense(512, activation='relu')(x)

conv_base = Model(conv_base.input, outputs=out)

for layer in conv_base.layers[:-11]:
    layer.trainable = False

# We have 2 inputs, 1 for each picture
left_input = Input((160,60,3))
right_input = Input((160,60,3))

# We will use 2 instances of 1 network for this task
convnet = Sequential([
    InputLayer(input_shape=(160, 60, 3)),
    conv_base
])

The problem was solved by changing how the model is created — building the encoder directly as a functional Model instead of wrapping it in an extra Sequential with an InputLayer:

# We have 2 inputs, 1 for each picture
left_input = Input((160,60,3))
right_input = Input((160,60,3))

conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(160, 60, 3))

output = conv_base.layers[-5].output

x = Flatten()(output)
x = Dense(512, activation='relu')(x)
out = Dense(512, activation='relu')(x)

for layer in conv_base.layers[:-11]:
    layer.trainable = False

convnet = Model(conv_base.input, outputs=out)

After this change, loading succeeds:

loaded_model = load_model('drive/My Drive/thesis/new change parametr/model/model-001.h5')
print('Load successfully')
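If load_model still fails on a Lambda-based siamese network (full-model serialization of nested models and Lambda layers is fragile across Keras versions), a common fallback is to save only the weights and rebuild the architecture in code before loading them. A minimal sketch with small stand-in layer sizes (the names build_siamese, embed, and score are illustrative, not from the original post):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

def build_siamese():
    # Shared encoder (stand-in for the VGG16-based conv_base).
    inp = Input((4,))
    emb = layers.Dense(8, activation='relu', name='embed')(inp)
    base = Model(inp, emb, name='base')

    left, right = Input((4,)), Input((4,))
    encoded_l, encoded_r = base(left), base(right)
    # L1 distance between the two encodings, as in the original network.
    dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]), name='l1')([encoded_l, encoded_r])
    out = layers.Dense(1, activation='sigmoid', name='score')(dist)
    return Model([left, right], out)

model = build_siamese()
pairs = [np.random.rand(2, 4).astype('float32'),
         np.random.rand(2, 4).astype('float32')]
before = model.predict(pairs, verbose=0)

# Save only the weights -- no serialization of the nested graph or Lambda.
model.save_weights('siamese.weights.h5')

rebuilt = build_siamese()                  # same architecture, fresh init
rebuilt.load_weights('siamese.weights.h5')
after = rebuilt.predict(pairs, verbose=0)

print(np.allclose(before, after))          # outputs should match after reload
```

With ModelCheckpoint this corresponds to passing save_weights_only=True, at the cost of having to reconstruct the model in code before calling load_weights.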
