
Dynamic switching of dropout in Keras/Tensorflow

I am building a reinforcement learning algorithm in Tensorflow and I would like to be able to dynamically turn dropout off and then on within a single call to session.run().

Rationale: I need (1) to do a forward pass without dropout to compute the targets; and (2) to do a training step with the generated targets. If I execute these two steps in different calls to session.run(), everything works. But I would like to do it with a single call to session.run() (using tf.stop_gradient(targets)).
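For reference, the two-call version that does work looks roughly like this (a minimal sketch, not my actual code; output, train_op, x_ph, y_ph and batch are assumed names):

# Pass 1: compute the targets with dropout off (learning_phase = 0)
targets = sess.run(output, feed_dict={x_ph: batch, K.learning_phase(): 0})
# Pass 2: one training step on those targets with dropout on (learning_phase = 1)
sess.run(train_op, feed_dict={x_ph: batch, y_ph: targets, K.learning_phase(): 1})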

After trying several solutions without much success, I ended up with a solution where I replace the learning_phase placeholder used by Keras with a variable (since placeholders are tensors and do not allow assignment) and use a custom layer to set that variable to True or False as desired. This solution is shown in the code below. Getting the value of m1 or m2 separately (e.g., running sess.run(m1, feed_dict={input_ph: np.ones((1,1))})) works as expected and without errors. However, getting the value of m3, or getting the values of m1 and m2 simultaneously, sometimes works and sometimes fails (and the error message is uninformative).

Any idea what I am doing wrong, or a better way to do what I want?

EDIT: The code shows a toy example. In reality I have a single model, and I need to run two forward passes (one with dropout off and the other with dropout on) and one backward pass. And I would like to do all of this without returning to python.

from tensorflow.keras.layers import Dropout, Dense, Input, Layer
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

class DropoutSwitchLayer(Layer):
  def __init__(self, stateful=True, **kwargs):
    self.stateful = stateful
    self.supports_masking = True
    super(DropoutSwitchLayer, self).__init__(**kwargs)

  def build(self, input_shape):
    # Replace the learning_phase placeholder used by Keras with an assignable variable
    self.lph = tf.Variable(True, dtype=tf.bool, name="lph", trainable=False)
    K._GRAPH_LEARNING_PHASES[tf.get_default_graph()] = self.lph
    super(DropoutSwitchLayer, self).build(input_shape)

  def call(self, inputs, mask=None):
    data_input, training = inputs
    op = self.lph.assign(training[0], use_locking=True)
    # ugly trick here to make the layer work
    data_input = data_input + tf.multiply(tf.cast(op, dtype=tf.float32), 0.0)
    return data_input

  def compute_output_shape(self, input_shape):
    return input_shape[0]


dropout_on = np.array([True], dtype=np.bool)
dropout_off = np.array([False], dtype=np.bool)
input_ph = tf.placeholder(tf.float32, shape=(None, 1))

drop = Input(shape=(), dtype=tf.bool)
input = Input(shape=(1,))
h = DropoutSwitchLayer()([input, drop])
h = Dense(1)(h)
h = Dropout(0.5)(h)
o = Dense(1)(h)
m = Model(inputs=[input, drop], outputs=o)

m1 = m([input_ph, dropout_on])   # forward pass with dropout on
m2 = m([input_ph, dropout_off])  # forward pass with dropout off
m3 = m([m2, dropout_on])         # feed m2's output back in, dropout on again

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())
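To reproduce the behavior described above:

sess.run(m1, feed_dict={input_ph: np.ones((1, 1))})  # works as expected
sess.run(m2, feed_dict={input_ph: np.ones((1, 1))})  # works as expected
sess.run(m3, feed_dict={input_ph: np.ones((1, 1))})  # sometimes fails
sess.run([m1, m2], feed_dict={input_ph: np.ones((1, 1))})  # sometimes fails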

EDIT 2: Daniel Möller's solution below works when using a Dropout layer, but what about dropout used inside an LSTM layer?

from tensorflow.keras.layers import RepeatVector, LSTM

input = Input(shape=(1,))
h = Dense(1)(input)
h = RepeatVector(2)(h)
h = LSTM(1, dropout=0.5, recurrent_dropout=0.5)(h)
o = Dense(1)(h)

Why not make a single continuous model?

#layers
inputs = Input(shape=(1,))
dense1 = Dense(1)
dense2 = Dense(1)

#no drop pass:
h = dense1(inputs)
o = dense2(h)
#optionally:
o = Lambda(lambda x: K.stop_gradient(x))(o)

#drop pass:
h = dense1(o)
h = Dropout(.5)(h)
h = dense2(h)

modelOnlyFinalOutput = Model(inputs,h)
modelOnlyNonDrop = Model(inputs,o)
modelBothOutputs = Model(inputs, [o,h])

Select one of the models for training:

model.fit(x_train, y_train) # where y_train = [targets1, targets2] if using both outputs
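For example, training on both outputs at once could look like this (just a sketch; the optimizer, the loss and the data names are up to you):

modelBothOutputs.compile(optimizer='adam', loss='mse')
modelBothOutputs.fit(x_train, [targets1, targets2], epochs=10)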

It turns out Keras supports what I want to do out of the box: using the training argument when calling the Dropout/LSTM layer, in combination with Daniel Möller's way of building the model (thanks!), does the trick.

In the code below (just a toy example), o1 and o3 should be equal to each other and different from o2.

from tensorflow.keras.layers import Dropout, Dense, Input, Lambda, Layer, Add, RepeatVector, LSTM
from tensorflow.python.keras import backend as K
from tensorflow.keras import Model
import tensorflow as tf
import numpy as np

repeat = RepeatVector(2)
lstm = LSTM(1, dropout=0.5, recurrent_dropout=0.5)

#Forward pass with dropout disabled
next_state = tf.placeholder(tf.float32, shape=(None, 1), name='next_state')
h = repeat(next_state)
# Use training to disable dropout
o1 = lstm(h, training=False)
target1 = tf.stop_gradient(o1)

#Forward pass with dropout enabled
state = tf.placeholder(tf.float32, shape=(None, 1), name='state')
h = repeat(state)
o2 = lstm(h, training=True)
target2 = tf.stop_gradient(o2)

#Forward pass with dropout disabled
ph3 = tf.placeholder(tf.float32, shape=(None, 1), name='ph3')
h = repeat(ph3)
o3 = lstm(h, training=False)

loss = target1 + target2 - o3
opt = tf.train.GradientDescentOptimizer(0.1)
train = opt.minimize(loss)

sess = tf.Session()
K.set_session(sess)
sess.run(tf.global_variables_initializer())

data = np.ones((1,1))
sess.run([o1, o2, o3], feed_dict={next_state:data, state:data, ph3:data})
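A quick check of the claim (same session as above; with dropout disabled, o1 and o3 are deterministic and should match):

o1_val, o2_val, o3_val = sess.run([o1, o2, o3],
                                  feed_dict={next_state: data, state: data, ph3: data})
assert np.allclose(o1_val, o3_val)  # dropout off in both passes, so equal
# o2_val will generally differ, since dropout is active in that pass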

How about this:

class CustomDropout(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomDropout, self).__init__()
        self.dropout1 = Dropout(0.5)
        self.dropout2 = Dropout(0.1)

    def call(self, inputs):
        if xxx:  # xxx stands for whatever condition should select the rate
            return self.dropout1(inputs)
        else:
            return self.dropout2(inputs)
