
Manually Assign Dropout Layer in Keras

I'm trying to learn the inner workings of dropout regularization in neural networks. I'm working mostly from "Deep Learning with Python" by François Chollet.

Suppose I'm using the IMDB movie review sentiment data and building a simple model like the one below:

# download IMDB movie review data
# keeping only the first 10000 most freq. occurring words to ensure manageably sized vectors
from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=10000)

# prepare the data
import numpy as np
# create an all 0 matrix of shape (len(sequences), dimension)
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        # set specific indices of results[i] = 1
        results[i, sequence] = 1.
    return results

# vectorize training data
x_train = vectorize_sequences(train_data)
# vectorize test data
x_test = vectorize_sequences(test_data)

# vectorize response labels
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# build a model with L2 regularization
from keras import regularizers
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

The book provides an example of manually setting random dropout weights using the line below:

# at training time, zero out a random fraction of the values in the matrix
layer_output *= np.random.randint(0, high=2, size=layer_output.shape)
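
To make the mechanics behind that one line concrete, here is a short NumPy sketch (my own illustration, not from the book) of both variants: the plain one, which scales activations down at test time, and the inverted one, which scales up at training time so the test-time forward pass needs no change:

import numpy as np

layer_output = np.random.randn(4, 16)  # stand-in batch of activations

# training time: zero out a random ~50% of the values
mask = np.random.randint(0, high=2, size=layer_output.shape)
train_output = layer_output * mask

# plain variant: at test time, scale the (un-dropped) output down
# by the dropout rate so its expected scale matches training
test_output = layer_output * 0.5

# inverted variant: scale up at training time instead; the
# test-time forward pass is then left untouched
train_output_inverted = (layer_output * mask) / 0.5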

How would I 1) actually integrate this into my model, and 2) how would I remove the dropout at test time?

Edit: I'm aware of the integrated approach of using dropout, as in the line below. What I'm actually looking for is a way to implement the method above manually.

model.add(layers.Dropout(0.5))

This can be implemented using a Lambda layer.

from keras import backend as K

def dropout(x):
    # K.learning_phase() is 1 during training and 0 at test time
    # (this relies on K.set_learning_phase() being called, as below)
    training = K.learning_phase()
    if training == 1 or training is True:
        # multiply by a random 0/1 mask of the same shape as the input
        x *= K.cast(K.random_uniform(K.shape(x), minval=0, maxval=2, dtype='int32'), dtype='float32')
        # inverted dropout: rescale by 1/keep_prob so no test-time scaling is needed
        x /= 0.5
    return x

def get_model():
    model = models.Sequential()
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu', input_shape=(10000,)))
    model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                           activation='relu'))
    model.add(layers.Lambda(dropout))  # add dropout using a Lambda layer
    model.add(layers.Dense(1, activation='sigmoid'))
    print(model.summary())
    return model

# train with the learning phase set to 1 so the Lambda dropout is active
K.set_learning_phase(1)
model = get_model()
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
weights = model.get_weights()

# rebuild the model with the learning phase set to 0 (dropout off)
# and copy the trained weights over for inference
K.set_learning_phase(0)
model = get_model()
model.set_weights(weights)
print('model prediction is {}, label is {}'.format(model.predict(x_test[0][None]), y_test[0]))

model prediction is [[0.1484453]], label is 0.0

How would I 1) actually integrate this into my model

Actually, that piece of Python code using the numpy library is only there to illustrate how dropout works. It is not how you implement Dropout in a Keras model. Rather, to use Dropout in a Keras model you add a Dropout layer and pass it a rate (a number between 0 and 1) denoting the dropout rate:

from keras import layers

# ...
model.add(layers.Dropout(dropout_rate))
# add the rest of layers to the model ...
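
For example, here is a minimal sketch of the question's model with Dropout layers slotted in after each hidden Dense layer (the 0.5 rate is an assumption, matching the rate used in the question's edit):

from keras import models, layers, regularizers

model = models.Sequential()
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu', input_shape=(10000,)))
model.add(layers.Dropout(0.5))  # drop 50% of this layer's outputs during training
model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001),
                       activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))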

2) how would I remove the dropout at test time?

You don't need to do anything manually. Keras handles this automatically: dropout is switched off at inference time, for example when you use the predict() method.
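
If you want to verify this yourself, one way (a sketch assuming the Keras 2 / TF1-style backend used elsewhere on this page, and that the learning phase has not been frozen with K.set_learning_phase) is to run the same batch through the model with the learning phase explicitly on and off and compare the outputs:

from keras import backend as K
import numpy as np

x_batch = x_test[:8]

# a backend function that takes the learning phase as an explicit input
get_output = K.function([model.input, K.learning_phase()], [model.output])

out_train = get_output([x_batch, 1])[0]  # learning phase 1: dropout active
out_test = get_output([x_batch, 0])[0]   # learning phase 0: dropout off

# predict() runs in the test phase, so it should match out_test
print(np.allclose(out_test, model.predict(x_batch)))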

