In case of 2 keras models sharing layers, which model to compile after setting trainable=False?
I have 2 Keras models I need to train. Let's say the first model has 5 layers. Now I take the last 3 layers of the first model and treat them as another model, like this:
input=Input(shape=(100,))
x1=Dense(50, activation='relu')(input)
x2=Dense(50, activation='relu')(x1)
x3=Dense(50, activation='relu')(x2)
x4=Dense(50, activation='relu')(x3)
output=Dense(10, activation='softmax')(x4)
model1=Model(inputs=input, outputs=output)
model2=Model(inputs=x3, outputs=output)
model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model2.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Now, for some reason, I need to train model1 batch by batch, i.e. I can't call fit() and do the training in one pass:
for epoch in range(10):
    model1.train_on_batch(x, y)
Now comes the problem. I need to toggle model2's trainable parameter multiple times within each epoch. Think of a GAN-like scenario. So inside the loop I need to do this:
model2.trainable = False  # sometimes
model2.trainable = True   # other times
However, Keras says that after toggling a model's trainable parameter, you need to re-compile the model for the change to take effect. But I cannot figure out which model to compile. The layers are shared between model1 and model2. Is compiling either one of them fine, or do I need to compile both?
In other words, are the following equivalent or not?
Case 1:
model2.trainable=False
model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Case 2:
model2.trainable=False
model2.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Case 3:
model2.trainable=False
model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model2.compile(optimizer='rmsprop', loss='categorical_crossentropy')
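One way to probe what each compile call will snapshot is to inspect trainable_weights on two models that share layers. Below is a minimal sketch (names are illustrative, not from my code above), assuming TF2-era Keras and a valid shared-layer construction, since Keras rejects Model(inputs=x3, ...) when x3 is not an Input:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Minimal shared-layer setup: the tail is wrapped in its own
# sub-model so it can be frozen as a unit.
inp = Input(shape=(100,))
h = Dense(50, activation='relu')(inp)
tail = tf.keras.Sequential([Dense(50, activation='relu'),
                            Dense(10, activation='softmax')])
out = tail(h)
full = Model(inputs=inp, outputs=out)

print(len(full.trainable_weights))  # 3 Dense layers * (kernel + bias) -> 6
tail.trainable = False
print(len(full.trainable_weights))  # only the first Dense remains -> 2
```

trainable_weights is recomputed on every access, so every model view of the shared layers changes immediately; compile is what bakes the current list into a given model's training step, which is why each model that should behave differently needs its own compile call.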
You need to compile both models separately before training (otherwise you will be filling your memory for nothing): one with the layers frozen, the other without. If you are only fitting input to output, there is no reason to compile the part with frozen layers.

Also, Keras will complain if you try to define a Model with an intermediate layer as input. You need to create two models and then chain them one after the other in the pipeline:
input=Input(shape=(100,))
x1=Dense(50, activation='relu')(input)
x2=Dense(50, activation='relu')(x1)
x3=Dense(50, activation='relu')(x2)
aux_model1 = Model(inputs=input, outputs=x3)
x3_input= Input(shape=x3.shape.as_list()[1:])
x4=Dense(50, activation='relu')(x3_input)
output=Dense(10, activation='softmax')(x4)
aux_model2 = Model(inputs=x3_input, outputs=output)
x3 = aux_model1(input)
output = aux_model2(x3)
model1 = Model(inputs=input, outputs=output)
Now compile to train with all layers trainable:
model1.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Now compile to train with the layers in aux_model2 non-trainable:
for layer in aux_model2.layers:
    layer.trainable = False
model2 = Model(inputs=input, outputs=output)
model2.compile(optimizer='rmsprop', loss='categorical_crossentropy')
Then train either model1 or model2 depending on the condition:
for epoch in range(10):
    if training_layers:
        model1.train_on_batch(x, y)
    else:
        model2.train_on_batch(x, y)
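To double-check that the frozen compile sticks, you can train model2 on one random batch and compare weights before and after. A self-contained sketch of the pipeline above (the batch data is random and purely illustrative, and x3's size is hard-coded to 50 to avoid relying on tensor-shape introspection):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Rebuild the two-part pipeline from the answer.
inp = Input(shape=(100,))
x1 = Dense(50, activation='relu')(inp)
x2 = Dense(50, activation='relu')(x1)
x3 = Dense(50, activation='relu')(x2)
aux_model1 = Model(inputs=inp, outputs=x3)

x3_input = Input(shape=(50,))  # x3 has 50 units
x4 = Dense(50, activation='relu')(x3_input)
out = Dense(10, activation='softmax')(x4)
aux_model2 = Model(inputs=x3_input, outputs=out)

output = aux_model2(aux_model1(inp))

# Freeze aux_model2's layers, then build and compile the frozen view.
for layer in aux_model2.layers:
    layer.trainable = False
model2 = Model(inputs=inp, outputs=output)
model2.compile(optimizer='rmsprop', loss='categorical_crossentropy')

x = np.random.rand(8, 100).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(10, size=8), 10)

frozen_before = [w.copy() for w in aux_model2.get_weights()]
head_before = [w.copy() for w in aux_model1.get_weights()]
model2.train_on_batch(x, y)

# The frozen sub-model is untouched; the unfrozen head did move.
frozen_unchanged = all(np.allclose(b, a) for b, a in
                       zip(frozen_before, aux_model2.get_weights()))
head_changed = any(not np.allclose(b, a) for b, a in
                   zip(head_before, aux_model1.get_weights()))
print(frozen_unchanged, head_changed)
```

The same comparison run on model1 (compiled with everything trainable) should show both halves changing, which is exactly the Case 3 situation from the question: each compiled model carries its own snapshot of the trainable flags.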