
How to train a neural network twice without re-initializing the model?

Suppose I have this model:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2
from tensorflow.keras.utils import plot_model


def mask_layer(tensor):
    return layers.Multiply()([tensor, tf.ones([1, 128])])


def get_model():

    inp_1 = keras.Input(shape=(64, 101, 1), name="input")
    x = layers.Conv2D(256, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(inp_1)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(128, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(256)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(128, name="output1")(x)
    mask = layers.Lambda(mask_layer, name="lambda_layer")(x)
    out2 = layers.Dense(40000, name="output2")(mask)

    model = keras.Model(inp_1, [mask, out2], name="2_out_model")

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="mean_squared_error")
    plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
    model.summary()
    return model

Then I train my network:

model = get_model()
es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50)
history = model.fit(X_train, [Y_train, Z_train], validation_data=(X_val, [Y_val, Z_val]), epochs=500,
                    batch_size=32,
                    callbacks=[es])
test_loss, _, _ = model.evaluate(X_test, [Y_test, Z_test], verbose=1)

I want to retrain the already-trained network on another training set, but with the definition of the Lambda layer changed; say this time the function returns:

return layers.Multiply()([tensor, tf.ones([1, 128])*1.2])

Do I need to call the function get_model() again (since I redefined a layer) and then fit again? Isn't there a risk of re-initializing the model's weights? Thanks in advance :)

Your Lambda layer is not a trainable layer, so you can safely move the trained weights to another model (with the same structure) that has a different Lambda layer.
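
As a quick sanity check (a minimal sketch, assuming the same imports as in the model above), you can confirm that a Lambda layer exposes no trainable weights, so there is nothing in it that could be re-initialized:

lam = layers.Lambda(lambda t: layers.Multiply()([t, tf.ones([1, 128])]))
_ = lam(tf.zeros((1, 128)))   # build the layer on a dummy batch
print(lam.trainable_weights)  # -> [] : the mask holds no trained parameters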

Below is an example:

def mask_layer1(tensor):
    return layers.Multiply()([tensor, tf.ones([1, 128])])

def mask_layer2(tensor):
    return layers.Multiply()([tensor, tf.ones([1, 128])*1.2])


def get_model(mask_kind):

    inp = keras.Input(shape=(64, 101, 1), name="input")
    
    x = layers.Conv2D(256, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), 
                      strides=(3, 3), padding="same")(inp)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(128, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), 
                      strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(256)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(128, name="output1")(x)
    
    if mask_kind == 1:
        mask = layers.Lambda(mask_layer1, name="lambda_layer")(x)
    elif mask_kind == 2:
        mask = layers.Lambda(mask_layer2, name="lambda_layer")(x)
    else:
        mask = x
    
    out = layers.Dense(40000, name="output2")(mask)

    model = keras.Model(inp, [mask, out], name="2_out_model")
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), 
                  loss="mean_squared_error")
    
    return model


model1 = get_model(mask_kind=1)
model1.fit(...)

model2 = get_model(mask_kind=2)
model2.set_weights(model1.get_weights())
model2.fit(...)
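
To convince yourself that nothing was re-initialized, you can compare the two weight lists right after the set_weights call and before the second fit (a minimal sketch, assuming numpy is available). set_weights copies by position, which is why the two models must have exactly the same structure:

import numpy as np

# run immediately after model2.set_weights(...), before model2.fit(...):
for w1, w2 in zip(model1.get_weights(), model2.get_weights()):
    np.testing.assert_array_equal(w1, w2)  # every trained tensor carried over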

