
keras: how to block convolution layer weights

Here I have a GoogleNet model for Keras. Is there any way to block individual layers of the network from changing? I want to prevent the first two layers of the pre-trained model from being modified.

By "block individual layers from changing" I assume you mean that you don't want to train those layers, i.e. you don't want to modify the loaded weights (possibly learned in a previous training run).

If so, you can pass trainable=False to the layer, and its parameters will not be touched by the training update rule.

Example:

from keras.models import Sequential
from keras.layers import Dense, Activation

# all layers trainable (the default)
model = Sequential([
    Dense(32, input_dim=100),
    Dense(10),
    Activation('sigmoid'),
])

model.summary()

# the first Dense layer is frozen: its weights are excluded from training
model2 = Sequential([
    Dense(32, input_dim=100, trainable=False),
    Dense(10),
    Activation('sigmoid'),
])

model2.summary()

In the summary of the second model, you can see that those parameters are counted as non-trainable.

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_1 (Dense)                  (None, 32)            3232        dense_input_1[0][0]              
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 10)            330         dense_1[0][0]                    
____________________________________________________________________________________________________
activation_1 (Activation)        (None, 10)            0           dense_2[0][0]                    
====================================================================================================
Total params: 3,562
Trainable params: 3,562
Non-trainable params: 0
____________________________________________________________________________________________________
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
dense_3 (Dense)                  (None, 32)            3232        dense_input_2[0][0]              
____________________________________________________________________________________________________
dense_4 (Dense)                  (None, 10)            330         dense_3[0][0]                    
____________________________________________________________________________________________________
activation_2 (Activation)        (None, 10)            0           dense_4[0][0]                    
====================================================================================================
Total params: 3,562
Trainable params: 330
Non-trainable params: 3,232 
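For the question's use case — freezing the first two layers of a model that has already been loaded — you can also set the trainable attribute on the layers after construction and then recompile so the change takes effect. A minimal sketch, using a small stand-in model in place of the actual pre-trained GoogleNet (the layer indices to freeze depend on your real architecture):

```python
import keras

# stand-in model; in practice this would be the loaded pre-trained network
model = keras.Sequential([
    keras.Input(shape=(100,)),
    keras.layers.Dense(32),
    keras.layers.Dense(10),
    keras.layers.Activation('sigmoid'),
])

# freeze the first two layers so their weights stay fixed during training
for layer in model.layers[:2]:
    layer.trainable = False

# recompile so the frozen state is picked up by the training update rule
model.compile(optimizer='sgd', loss='mse')

model.summary()  # frozen weights now show up under "Non-trainable params"
```

Note that trainable must be set before compiling (or the model recompiled afterwards), otherwise the optimizer will keep updating those weights.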

