
How to use a tensorflow tensor value in a formula?

I have a quick question. I am developing a model in TensorFlow and need to use the iteration number in a formula during the construction phase. I know how to use global_step, but I am not using an existing optimizer. I am computing my own gradients with

grad_W, grad_b = tf.gradients(xs=[W, b], ys=cost)
grad_W = grad_W + rnd.normal(0, 1.0 / (1 + epoch) ** 0.55)
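Here rnd is numpy.random and epoch is just a Python int, so the noise value is drawn once while the graph is being built; a minimal sketch of what happens (assuming rnd is indeed numpy.random):

import numpy as np

rnd = np.random  # rnd from the snippet above, assumed to be numpy.random
epoch = 0        # a plain Python int at graph-construction time

# The NumPy call is evaluated immediately, so a single fixed scalar is
# what actually gets added into the graph; it never changes between
# sess.run calls.
noise = rnd.normal(0, 1.0 / (1 + epoch) ** 0.55)
print(noise)  # one fixed float for the whole run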

and then using

new_W = W.assign(W - learning_rate * (grad_W))
new_b = b.assign(b - learning_rate * (grad_b))

and I would like to use the epoch value in the formula before updating my weights. What is the best way to do this? I have a sess.run() call and would like to pass the epoch number to the model, but I cannot use a tensor directly. From my run call

_, _, cost_ = sess.run([new_W, new_b, cost],
      feed_dict={X_: X_train_tr, Y: labels_, learning_rate: learning_r})

I would like to also pass the epoch number. How do you usually do it?
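One option I can think of is to feed the epoch through a placeholder like the other inputs (a rough sketch, with epoch_ph as a made-up name), but I am not sure it is the usual approach:

import tensorflow as tf

# Made-up placeholder for the current epoch, fed like the other inputs
epoch_ph = tf.placeholder(tf.float32, shape=(), name='epoch')

# The formula can then use it as a tensor, e.g. for the noise scale
noise_std = 1.0 / tf.pow(1.0 + epoch_ph, 0.55)

with tf.Session() as sess:
    print(sess.run(noise_std, feed_dict={epoch_ph: 5.0}))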

Thanks in advance, Umberto

EDIT:

Thanks for the hints. The following seems to work:

grad_W = grad_W + tf.random_normal(grad_W.shape,
      0.0, 1.0 / tf.pow(0.01 + tf.cast(epochv, tf.float32), 0.55))

but I still have to check whether that is what I need and whether it works as intended. Ideas and feedback would be great!
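A quick way to verify the decay on its own (a sketch with a stand-in variable playing the role of epochv):

import tensorflow as tf

# Stand-in integer variable for epochv
epochv = tf.Variable(0, trainable=False)
incr_epochv = tf.assign_add(epochv, 1)
std = 1.0 / tf.pow(0.01 + tf.cast(epochv, tf.float32), 0.55)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(4):
        print('epoch', sess.run(epochv), 'std', sess.run(std))  # std shrinks
        sess.run(incr_epochv)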

You can define epoch as a non-trainable tf.Variable in your graph and increment it at the end of each epoch. You can define an operation with tf.assign_add to do the increment and run it at the end of each epoch.

Instead of rnd.normal you will then also need to use tf.random_normal.

Example:

epoch = tf.Variable(0, trainable=False) # 0 is initial value
# increment by 1 when the next op is run
epoch_incr_op = tf.assign_add(epoch, 1, name='incr_epoch')

# Define any operations that depend on 'epoch'
# Note we need to cast the integer 'epoch' to float to use in tf.pow
grad_W = grad_W + tf.random_normal(grad_W.shape, 0.0,
                                   1.0 / tf.pow(1 + tf.cast(epoch, tf.float32), 0.55))

# Training loop
while running_epoch:
    _, _, cost_ = sess.run([new_W, new_b, cost],
                           feed_dict={X_: X_train_tr, Y: labels_, learning_rate: learning_r})

# At end of epoch, increment epoch counter
sess.run(epoch_incr_op)
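Putting it together, a minimal self-contained sketch of the wiring (the weight updates and data from the question are omitted, and n_epochs is just an illustrative name):

import tensorflow as tf

epoch = tf.Variable(0, trainable=False, name='epoch')
epoch_incr_op = tf.assign_add(epoch, 1, name='incr_epoch')

# Anything built from 'epoch' is re-evaluated with its current value
noise_std = 1.0 / tf.pow(1.0 + tf.cast(epoch, tf.float32), 0.55)

n_epochs = 3  # illustrative number of epochs
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(n_epochs):
        # ... inner loop: sess.run([new_W, new_b, cost], feed_dict=...) ...
        sess.run(epoch_incr_op)  # advance the counter once per epoch
        print('epoch now', sess.run(epoch), 'noise std', sess.run(noise_std))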
