CNN Negative Number of Parameters
I am trying to build a CNN model with Keras. Everything works fine when I add two Conv2D + MaxPooling blocks. But as soon as I add a third block (as in the code below), the number of trainable parameters becomes negative. Any idea how that can happen?
import keras
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = keras.models.Sequential()
# # # First Block
model.add(Conv2D(filters=16, kernel_size=(5, 5), padding='valid',
                 input_shape=(157, 462, 14), activation='tanh'))
model.add(MaxPooling2D((2, 2)))
# # # Second Block
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='valid', activation='tanh'))
model.add(MaxPooling2D((2, 2)))
# # # Third Block
model.add(Conv2D(filters=64, kernel_size=(5, 5), padding='valid', activation='tanh'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(157 * 462))
model.compile(loss='mean_squared_error',
              optimizer=keras.optimizers.Adamax(),
              metrics=['mean_absolute_error'])
print(model.summary())
print(model.summary())
This code produces the following output:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 153, 458, 16) 5616
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 76, 229, 16) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 72, 225, 32) 12832
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 112, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 108, 64) 51264
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 54, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 55296) 0
_________________________________________________________________
dense_1 (Dense) (None, 72534) -284054698
=================================================================
Total params: -283,984,986
Trainable params: -283,984,986
Non-trainable params: 0
_________________________________________________________________
None
Yes, of course: your Dense layer has a weight matrix of size 55296 x 72534, which contains 4,010,840,064 numbers, i.e. about 4.01 billion parameters (plus another 72,534 biases).
Somewhere in the Keras code, the parameter count is stored as a signed int32, which limits the largest value it can hold to 2^31 - 1 = 2147483647. As you can see, your ~4.01 billion parameters are far larger than 2^31 - 1, so the count overflows into the negative range of the integer.
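To make the arithmetic concrete, here is a small sketch (plain Python, no Keras needed) that reproduces the exact negative number shown in the summary above by emulating signed 32-bit wraparound. The layer sizes are taken from the model summary:

```python
# Dense layer: 55296 inputs (the flattened 16 * 54 * 64 feature map)
# connected to 72534 outputs (157 * 462).
weights = 55296 * 72534      # entries in the weight matrix
biases = 72534               # one bias per output unit
total = weights + biases
print(total)                 # 4010912598 -- does not fit in a signed int32

# Emulate how a signed 32-bit integer wraps around:
INT32_MIN, INT32_RANGE = -2**31, 2**32
wrapped = (total - INT32_MIN) % INT32_RANGE + INT32_MIN
print(wrapped)               # -284054698 -- the value in the summary
```

The wrapped value matches the `dense_1` row of the summary exactly, which confirms that the negative count is just the true count reduced modulo 2^32.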
I suggest you do not build a model with this many parameters; even if the count were displayed correctly, you would not be able to train it without an enormous amount of RAM.
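As an illustration of that advice (this is not part of the original answer): a common way to shrink such a Dense layer is to replace `Flatten()` with global average pooling, so the Dense layer sees 64 pooled features instead of 55,296 flattened ones. The parameter arithmetic alone shows the difference:

```python
outputs = 157 * 462                       # 72534 target values
# Current model: Flatten() feeds 16 * 54 * 64 = 55296 features to Dense.
flat_params = 55296 * outputs + outputs   # weights + biases
# Hypothetical alternative: GlobalAveragePooling2D() feeds only 64 features.
gap_params = 64 * outputs + outputs
print(flat_params)   # 4010912598
print(gap_params)    # 4714710 -- roughly 850x fewer parameters
```

Whether the pooled features retain enough spatial information for this regression task is a separate question; the sketch only shows the scale of the reduction.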
The problem is that you are running the code on a CPU with the TensorFlow or Theano backend of Keras. I was able to run your code perfectly with a GPU in Google Colab, and this is what I got:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 153, 458, 16) 5616
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 76, 229, 16) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 72, 225, 32) 12832
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 112, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 108, 64) 51264
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 54, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 55296) 0
_________________________________________________________________
dense_1 (Dense) (None, 72534) 4010912598
=================================================================
Total params: 4,010,982,310
Trainable params: 4,010,982,310
Non-trainable params: 0
I suggest you use a GPU to train such a huge network.
Hope this helps!