Triplet embedding layer with custom (None) loss on Keras 2
I've been looking for simple implementations of triplet embedding in deep learning. I wanted to use Keras, as it is the framework I am slightly more familiar with (although I am still very inexperienced with it).
Here is a reference to one of the works that inspired this: paper on embedded triplets
I found a pretty good example to start from, working with the MNIST dataset, and as far as I can tell it works pretty well. Problems arise in the implementation of the merge of the three embedding layers.
from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.models import Model
from keras import backend as K

def build_model(input_shape):
    # Shared base network that maps an image to a 2-D embedding
    base_input = Input(input_shape)
    x = Conv2D(32, (3, 3), activation='relu')(base_input)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D((2, 2))(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    x = Dense(2, activation='linear')(x)
    embedding_model = Model(base_input, x, name='embedding')

    # Three inputs share the same embedding model (shared weights)
    anchor_input = Input(input_shape, name='anchor_input')
    positive_input = Input(input_shape, name='positive_input')
    negative_input = Input(input_shape, name='negative_input')
    anchor_embedding = embedding_model(anchor_input)
    positive_embedding = embedding_model(positive_input)
    negative_embedding = embedding_model(negative_input)

    inputs = [anchor_input, positive_input, negative_input]
    outputs = [anchor_embedding, positive_embedding, negative_embedding]
    triplet_model = Model(inputs, outputs)
    triplet_model.add_loss(K.mean(triplet_loss(outputs)))
    triplet_model.compile(loss=None, optimizer='adam')  # <-- CRITICAL LINE
    return embedding_model, triplet_model
With the current implementation the loss is added through model.add_loss, and I haven't found many examples like this. The real issue, though, is that I cannot load the saved model. The lines
triplet_model.save('triplet.h5')
model = load_model('triplet.h5')
return:
ValueError: The model cannot be compiled because it has no loss to optimize.
Passing something to the 'loss' argument raises another error when I try to compile the model. I wanted to ask how I can circumvent this issue, or whether there is a better way to build the model from the embedding models (without the empty loss function, maybe).
Here is the triplet_loss function for reference:
def triplet_loss(inputs, dist='sqeuclidean', margin='maxplus'):
    anchor, positive, negative = inputs
    positive_distance = K.square(anchor - positive)
    negative_distance = K.square(anchor - negative)
    if dist == 'euclidean':
        positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True))
        negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True))
    elif dist == 'sqeuclidean':
        positive_distance = K.mean(positive_distance, axis=-1, keepdims=True)
        negative_distance = K.mean(negative_distance, axis=-1, keepdims=True)
    loss = positive_distance - negative_distance
    if margin == 'maxplus':
        loss = K.maximum(0.0, 1 + loss)
    elif margin == 'softplus':
        loss = K.log(1 + K.exp(loss))
    return K.mean(loss)
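To sanity-check what this loss computes, the same math can be sketched in plain NumPy (a hypothetical stand-in for the Keras backend ops, hard-coding the default dist='sqeuclidean' and margin='maxplus' branches with the margin fixed at 1, as in the function above):

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, margin=1.0):
    # Mean squared difference per embedding (the 'sqeuclidean' branch)
    positive_distance = np.mean(np.square(anchor - positive), axis=-1, keepdims=True)
    negative_distance = np.mean(np.square(anchor - negative), axis=-1, keepdims=True)
    # Hinge on the distance gap (the 'maxplus' branch)
    loss = np.maximum(0.0, margin + positive_distance - negative_distance)
    return np.mean(loss)

# A triplet whose negative is far beyond the margin gives zero loss
anchor = np.array([[0.0, 0.0]])
positive = np.array([[0.1, 0.0]])
negative = np.array([[5.0, 5.0]])
print(triplet_loss_np(anchor, positive, negative))  # prints 0.0
```

Swapping the positive and negative arguments produces a large positive loss, which is the gradient signal that pushes same-class embeddings together and different-class embeddings apart.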
The problem here is that you leave triplet_model.compile(loss=None), but Keras does not know how to handle that properly in load_model(). I understand that you have to do so, but you can load the model in a different way to solve your current issue.
In short, don't load the entire model through load_model(); load just the weights through load_weights().
For example, you can do:
# save only the weights
triplet_model.save_weights('tmp.h5')

# rebuild the architecture, then load the saved weights into it
new_embedding_model, new_triplet_model = build_model(input_shape)
new_triplet_model.load_weights('tmp.h5')