
Manually Assign Dropout Layer in Keras

I am trying to learn the inner workings of dropout regularization in neural networks. I'm working largely from "Deep Learning with Python" by Francois Chollet.

Say I'm using the IMDB movie review sentiment data and building a simple model like the one below:

# download IMDB movie review data
# keeping only the 10000 most frequently occurring words to ensure manageable sized vectors
from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=10000)

# prepare the data
import numpy as np
# create an all 0 matrix of shape (len(sequences), dimension)
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # set specific indices of results[i] = 1
        results[i, sequence] = 1.
    return results

# vectorize training data
x_train = vectorize_sequences(train_data)
# vectorize test data
x_test = vectorize_sequences(test_data)

# vectorize response labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# build a model with L2 regularization
from keras import regularizers
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

The book gives an example of manually setting random dropout weights using the line below:

# at training time, zero out a random fraction of the values in the matrix
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)
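
For context, the book pairs this training-time masking with a scaling step, so that the expected magnitude of the activations matches between training and test time (or, equivalently, an extra division at training time, known as inverted dropout). A minimal NumPy sketch, reusing the book's layer_output placeholder:

import numpy as np

# placeholder activations standing in for a layer's output
layer_output = np.random.rand(4, 16)
keep_prob = 0.5  # probability of keeping a unit

# at training time: zero out a random fraction of the values
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)

# at test time: drop nothing, but scale down by keep_prob so the
# expected activation magnitude matches training time
layer_output *= keep_prob

# inverted dropout folds the scaling into training instead:
# layer_output /= keep_prob  # at training time; test time is then untouched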

How would I 1) actually integrate this into my model, and 2) remove the dropout at test time?

EDIT: I'm aware of the integrated way of using dropout, like the line below; what I'm actually looking for is a way to implement the above manually.

model.add(layers.Dropout(0.5))

This can be achieved using a Lambda layer.

from keras import backend as K

def dropout(x):
    # K.learning_phase() is 1 during training (set below via K.set_learning_phase)
    training = K.learning_phase()
    if training == 1 or training is True:
        # zero out a random half of the values, then rescale by 1/0.5 (inverted dropout)
        x *= K.cast(K.random_uniform(K.shape(x), minval=0, maxval=2, dtype='int32'), dtype='float32')
        x /= 0.5
    return x

def get_model():
    model = models.Sequential()
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu', input_shape=(10000,)))
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu'))
    model.add(layers.Lambda(dropout)) # add dropout using Lambda layer
    model.add(layers.Dense(1, activation='sigmoid'))
    print(model.summary())
    return model

# train with the learning phase set to 1 so the Lambda layer applies dropout
K.set_learning_phase(1)
model = get_model()
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
weights = model.get_weights()

# rebuild with the learning phase set to 0 so the Lambda layer passes inputs through
K.set_learning_phase(0)
model = get_model()
model.set_weights(weights)
print('model prediction is {}, label is {}'.format(model.predict(x_test[0][None]), y_test[0]))

model prediction is [[0.1484453]], label is 0.0
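
A variant worth noting, as a sketch against the same keras.backend API used above: K.in_train_phase selects between a training branch and an inference branch based on the current learning phase, so the same model can serve both fit() and predict() without being rebuilt and having its weights copied over:

from keras import backend as K

def dropout(x, keep_prob=0.5):  # keep_prob of 0.5 is just an illustrative choice
    def dropped():
        # sample a Bernoulli(keep_prob) mask and rescale (inverted dropout)
        mask = K.cast(K.random_uniform(K.shape(x)) < keep_prob, 'float32')
        return x * mask / keep_prob
    # returns dropped() at training time and x unchanged at test time
    return K.in_train_phase(dropped, x)

model.add(layers.Lambda(dropout))  # used exactly as in get_model() above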

How would I 1) actually integrate this into my model

Actually, that piece of Python code using the NumPy library is only an illustration of how dropout works; it's not how you would implement dropout in a Keras model. Rather, to use dropout in a Keras model you add a Dropout layer and give it a ratio (a number between 0 and 1) which denotes the dropout rate:

from keras import layers

# ...
model.add(layers.Dropout(dropout_rate))
# add the rest of layers to the model ...
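
Applied to the model from the question, a minimal sketch might look like this (the 0.5 rate is just an illustrative choice):

from keras import models, layers, regularizers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(0.5))  # randomly zero half of these activations in training
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))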

2) how to remove the dropout at test time?

You don't need to do anything manually. Keras handles it automatically: dropout is turned off in the prediction phase, i.e. when you use the predict() method.
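
A quick way to confirm this (a sketch, assuming the model and x_test from the question): since the dropout mask is only sampled during training, repeated predict() calls on the same input return identical outputs:

import numpy as np

p1 = model.predict(x_test[:1])
p2 = model.predict(x_test[:1])
# if dropout were active at inference, these would differ from call to call
assert np.allclose(p1, p2)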
