
Tensorflow: How to use Adam optimizer properly

A similar question has already been asked, but the solution given there does not work for me.

I am trying to use the Adam optimizer in Tensorflow. Here is part of my code:

adamOptimizer = tf.train.AdamOptimizer(learning_rate=0.001, beta1=0.9,
           beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')

print('Optimizer was created!')

# Create a variable to track the global step.
global_step = tf.Variable(0, name='global_step', trainable=False)

#Initialize variables
vars_to_init = ae.get_variables_to_init(n)
vars_to_init.append(global_step)

sess.run(tf.variables_initializer(vars_to_init))

# Create the training op
train_op = adamOptimizer.minimize(loss, global_step=global_step)

After train_op is used for the first time, the following error is raised:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value pretrain_1/beta2_power
[[Node: pretrain_1/beta2_power/read = Identity[T=DT_FLOAT, _class=["loc:@autoencoder_variables/weights1"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

If I try to add the line

vars_to_init.append(beta2_power)

I get the following error:

NameError: global name 'beta2_power' is not defined

If I follow the advice from the similar question and replace sess.run(tf.variables_initializer(vars_to_init)) with sess.run(tf.initialize_all_variables()), I get the following error after running that line:

FailedPreconditionError: Attempting to use uninitialized value autoencoder_variables/biases1
[[Node: autoencoder_variables/biases1/read = Identity[T=DT_FLOAT, _class=["loc:@autoencoder_variables/biases1"], _device="/job:localhost/replica:0/task:0/cpu:0"]]]

I had no problems when using the Gradient Descent optimizer...

What am I doing wrong? What is the correct way to use this optimizer?
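For context: unlike plain gradient descent, Adam keeps per-variable state (the m/v slots plus the beta1_power/beta2_power accumulators), and minimize() is the call that creates those variables. An initializer built before minimize() therefore cannot cover them, which is what the first error above complains about. A minimal sketch of an ordering that avoids it, assuming the graph-mode tf.train API from the question; the toy variable and loss are made up purely for illustration:

import tensorflow as tf

# Toy model, just to make the sketch self-contained.
x = tf.Variable(tf.zeros([3]), name='x')
loss = tf.reduce_sum(tf.square(x - 1.0))

adam = tf.train.AdamOptimizer(learning_rate=0.001)

# minimize() is the call that creates Adam's internal variables
# (beta1_power, beta2_power and the per-variable m/v slots),
# so it has to run before the initializer op is built.
train_op = adam.minimize(loss)

sess = tf.Session()
# Built after minimize(), this initializer covers the Adam accumulators too.
sess.run(tf.global_variables_initializer())
sess.run(train_op)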

EDIT: more details about the class, to clarify autoencoder_variables:

class AutoEncoder(object):

  _weights_str = "weights{0}"
  _biases_str = "biases{0}"

  def __init__(self, shape, sess):
    self.__shape = shape
    self.__num_hidden_layers = len(self.__shape) - 2

    self.__variables = {}
    self.__sess = sess

    self._setup_variables()

  @property
  def shape(self):
    return self.__shape

  @property
  def num_hidden_layers(self):
    return self.__num_hidden_layers

  @property
  def session(self):
    return self.__sess

  def __getitem__(self, item):
    return self.__variables[item]

  def __setitem__(self, key, value):
    self.__variables[key] = value

  def _setup_variables(self):
    with tf.name_scope("autoencoder_variables"):
      for i in xrange(self.__num_hidden_layers + 1):
        # Train weights
        name_w = self._weights_str.format(i + 1)
        w_shape = (self.__shape[i], self.__shape[i + 1])
        a = tf.mul(4.0, tf.sqrt(6.0 / (w_shape[0] + w_shape[1])))
        w_init = tf.random_uniform(w_shape, -1 * a, a)
        self[name_w] = tf.Variable(w_init,
                                   name=name_w,
                                   trainable=True)
        # Train biases
        name_b = self._biases_str.format(i + 1)
        b_shape = (self.__shape[i + 1],)
        b_init = tf.zeros(b_shape)
        self[name_b] = tf.Variable(b_init, trainable=True, name=name_b)

        if i <= self.__num_hidden_layers:
          # Hidden layer fixed weights (after pretraining before fine tuning)
          self[name_w + "_fixed"] = tf.Variable(tf.identity(self[name_w]),
                                                name=name_w + "_fixed",
                                                trainable=False)

          # Hidden layer fixed biases
          self[name_b + "_fixed"] = tf.Variable(tf.identity(self[name_b]),
                                                name=name_b + "_fixed",
                                                trainable=False)

          # Pretraining output training biases
          name_b_out = self._biases_str.format(i + 1) + "_out"
          b_shape = (self.__shape[i],)
          b_init = tf.zeros(b_shape)
          self[name_b_out] = tf.Variable(b_init,
                                         trainable=True,
                                         name=name_b_out)

  def _w(self, n, suffix=""):
    return self[self._weights_str.format(n) + suffix]

  def _b(self, n, suffix=""):
    return self[self._biases_str.format(n) + suffix]

  def get_variables_to_init(self, n):
    assert n > 0
    assert n <= self.__num_hidden_layers + 1

    vars_to_init = [self._w(n), self._b(n)]

    if n <= self.__num_hidden_layers:
      vars_to_init.append(self._b(n, "_out"))

    if 1 < n <= self.__num_hidden_layers + 1:
      # Fixed matrices for learning of deeper layers
      vars_to_init.append(self._w(n - 1, "_fixed"))
      vars_to_init.append(self._b(n - 1, "_fixed"))

    return vars_to_init

The problem was that I was using one variable's value to initialize other variables (which raised the error about using an uninitialized variable during initialization).

Instead of using another variable during initialization,

self[name_b + "_fixed"] = tf.Variable(tf.identity(self[name_b]),
                                            name=name_b + "_fixed",
                                            trainable=False)

I now initialize it randomly:

self[name_b + "_fixed"] = tf.Variable(init_b,
                                            name=name_b + "_fixed",
                                            trainable=False)

and assign the other variable's value to it after training:

 ae[name_w + "_fixed"] = tf.identity(ae[name_w])
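A condensed sketch of this corrected pattern, with made-up names and shapes: both variables get independent initializers, so neither one reads the other during initialization, and the trained value is copied over explicitly afterwards. tf.assign is shown here as an alternative to rebinding the dictionary entry with tf.identity as above; it copies the value inside the graph:

import tensorflow as tf

b_shape = (5,)

# Trainable variable and its "fixed" copy, each with an independent
# initializer -- neither variable reads the other while initializing.
biases = tf.Variable(tf.zeros(b_shape), trainable=True, name='biases1')
biases_fixed = tf.Variable(tf.zeros(b_shape), trainable=False,
                           name='biases1_fixed')

sess = tf.Session()
sess.run(tf.variables_initializer([biases, biases_fixed]))

# ... pretraining of `biases` happens here ...

# After training, copy the trained value into the fixed variable.
sess.run(tf.assign(biases_fixed, biases))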
