
Dropout entire input layer

Suppose I have two inputs (each with a number of features) that I want to feed into a Dropout layer. On each iteration I want to drop out one entire input, with all of its associated features, while keeping the other input intact.

After concatenating the inputs, I think I need to use the noise_shape parameter of Dropout, but the shape of the concatenated layer doesn't really let me do that. For two inputs of shape (15,), the concatenated shape is (None, 30) rather than (None, 15, 2), so one of the axes is lost and I can't drop along it.

Any suggestions on what I could do? Thanks.

from keras.layers import Input, concatenate, Dense, Dropout

x = Input((15,))  # 15 features for the 1st input
y = Input((15,))  # 15 features for the 2nd input
xy = concatenate([x, y])
print(xy._keras_shape)
# (None, 30)

layer = Dropout(rate=0.5, noise_shape=[xy.shape[0], 1])(xy)  # mask is broadcast over the feature axis: drops all 30 values together, not one input at a time
...

Edit:

It looks like I misunderstood your question; here is the updated answer based on what you asked for.

To achieve what you want, give each input an extra channel axis so that the 15 features play the role of timesteps and x and y become the two feature channels. According to the Keras documentation, for an input of shape (batch_size, timesteps, features) you can use noise_shape=(batch_size, 1, features) to share the same dropout mask across all timesteps, which means an entire channel (all of x or all of y) is dropped at once.

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])  # shape (None, 15, 2): the last axis holds the x and y channels

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)  # one mask value per channel, shared across the 15 timesteps
...
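
If your original inputs are plain (15,) vectors, as in the question, you can add the channel axis with a Reshape layer before concatenating. A minimal sketch of that variant (the intermediate names are just illustrative):

from keras.layers import Input, Reshape, concatenate, Dropout

x = Input((15,))                  # 1st input, 15 features
y = Input((15,))                  # 2nd input, 15 features
x3 = Reshape((15, 1))(x)          # (None, 15, 1): the features act as timesteps
y3 = Reshape((15, 1))(y)          # (None, 15, 1)
xy = concatenate([x3, y3])        # (None, 15, 2): last axis is "which input"

# With probability 0.5, zero out a whole channel (all of x or all of y)
dropped = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)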

To test that you are getting the correct behaviour, you can inspect the intermediate xy layer and dropout_layer with the following code (reference link):

### Define your model ###

from keras.layers import Input, concatenate, Dropout
from keras.models import Model
from keras import backend as K

# Learning phase must be set to 1 for dropout to work
K.set_learning_phase(1)
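# (If you are on tf.keras, an alternative to the global flag -- not part of the original
#  answer -- is to call the model with training=True at inference time,
#  e.g. out = model([input_1, input_2], training=True).)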

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)

model = Model(inputs=[x, y], outputs=dropout_layer)

# specify inputs and output of the model

x_inp = model.input[0]
y_inp = model.input[1]
outp = [layer.output for layer in model.layers[2:]]  # outputs of the concatenate and Dropout layers
functor = K.function([x_inp, y_inp], outp)

### Get some random inputs ###

import numpy as np

input_1 = np.random.random((1,15,1))
input_2 = np.random.random((1,15,1))

layer_outs = functor([input_1,input_2])
print('Intermediate xy layer:\n\n',layer_outs[0])
print('Dropout layer:\n\n', layer_outs[1])

You should see that, as you required, either the whole of x or the whole of y is randomly dropped (each with a 50% chance):

Intermediate xy layer:

 [[[0.32093528 0.70682645]
  [0.46162075 0.74063486]
  [0.522718   0.22318116]
  [0.7897043  0.7849486 ]
  [0.49387926 0.13929296]
  [0.5754296  0.6273373 ]
  [0.17157765 0.92996144]
  [0.36210892 0.02305864]
  [0.52637625 0.88259524]
  [0.3184462  0.00197006]
  [0.67196816 0.40147918]
  [0.24782693 0.5766827 ]
  [0.25653633 0.00514544]
  [0.8130438  0.2764429 ]
  [0.25275478 0.44348967]]]

Dropout layer:

 [[[0.         1.4136529 ]
  [0.         1.4812697 ]
  [0.         0.44636232]
  [0.         1.5698972 ]
  [0.         0.2785859 ]
  [0.         1.2546746 ]
  [0.         1.8599229 ]
  [0.         0.04611728]
  [0.         1.7651905 ]
  [0.         0.00394012]
  [0.         0.80295837]
  [0.         1.1533654 ]
  [0.         0.01029088]
  [0.         0.5528858 ]
  [0.         0.88697934]]]

If you are wondering why all the remaining elements are multiplied by 2, take a look at how TensorFlow implements dropout here.
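
In short, this is the usual "inverted dropout" scaling: the surviving values are divided by the keep probability so that the expected activation stays the same with or without dropout. A minimal NumPy sketch of the idea (an illustration, not the actual TensorFlow implementation):

import numpy as np

def inverted_dropout(x, rate=0.5, noise_shape=None):
    keep_prob = 1.0 - rate
    # Bernoulli keep-mask; a noise_shape such as (1, 2) is broadcast over the 15 rows,
    # so a whole column (channel) is kept or dropped together
    shape = noise_shape if noise_shape is not None else x.shape
    mask = (np.random.random(shape) < keep_prob).astype(x.dtype)
    # Survivors are scaled by 1/keep_prob, i.e. multiplied by 2 when rate=0.5
    return x * mask / keep_prob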

Hope this helps.
