I understand that dropout doesn't have the same effect on the kernels of convolutional layers in a neural network as it does on FC layers.
But does the same hold if you drop out a whole filter?
Let's assume a network structure like Input, Conv2D, Conv2D, ..., Conv2D, Conv2D, Sigmoid, so there is no fully connected layer anywhere in the network.
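For concreteness, a minimal sketch of such an all-convolutional network in tf.keras might look like the following; the input size, filter counts, and depth are illustrative assumptions, not taken from the question:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),                             # hypothetical input size
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),  # sigmoid output, no FC layer anywhere
])
model.summary()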
Question 1: Is it reasonable to apply dropout to whole convolutional filters to avoid co-adaptation between filters and thereby improve the results of filter visualization?
Question 2: Is there a quick way to drop out whole filters in Keras?
Answer 1: Maybe.
Without Dropout: [filter visualization image]
With Dropout: [filter visualization image]
Answer 2: According to the Keras documentation, use keras.layers.Dropout(rate, noise_shape=None, seed=None) with noise_shape=(batch_size, 1, 1, features). Set an entry of noise_shape to 1 if you want the dropout mask to be broadcast, i.e. identical, along that dimension; with the two spatial entries set to 1, each feature map is kept or dropped as a whole.
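A minimal sketch of how that might look in practice, assuming channels_last tensors of shape (batch, height, width, channels); the dropout rate, channel count, and layer sizes here are illustrative:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
# The two 1s broadcast the mask over height and width, so each of the
# 64 feature maps is zeroed out or kept as a whole during training.
x = layers.Dropout(rate=0.5, noise_shape=(None, 1, 1, 64))(x)
outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

Keras also provides keras.layers.SpatialDropout2D(rate), which drops entire feature maps and gives the same per-filter behavior without spelling out noise_shape by hand.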