assign on a tf.Variable tensor slice
I am trying to do the following:
state[0,:] = state[0,:].assign(0.9*prev_state + 0.1*(tf.matmul(inputs, weights) + biases))
for i in xrange(1, BATCH_SIZE):
    state[i,:] = state[i,:].assign(0.9*state[i-1,:] + 0.1*(tf.matmul(inputs, weights) + biases))
prev_state = prev_state.assign(state[BATCH_SIZE-1,:])
with
state = tf.Variable(tf.zeros([BATCH_SIZE, HIDDEN_1]), name='inner_state')
prev_state = tf.Variable(tf.zeros([HIDDEN_1]), name='previous_inner_state')
As a follow-up to this question: I get an error saying that Tensor does not have an assign method. What is the correct way to call the assign method on a slice of a Variable tensor?
Full current code:
import tensorflow as tf
import math
import numpy as np
INPUTS = 10
HIDDEN_1 = 20
BATCH_SIZE = 3
def create_graph(inputs, state, prev_state):
    with tf.name_scope('h1'):
        weights = tf.Variable(
            tf.truncated_normal([INPUTS, HIDDEN_1],
                                stddev=1.0 / math.sqrt(float(INPUTS))),
            name='weights')
        biases = tf.Variable(tf.zeros([HIDDEN_1]), name='biases')

        updated_state = tf.scatter_update(
            state, [0], 0.9 * prev_state + 0.1 * (tf.matmul(inputs[0,:], weights) + biases))
        for i in xrange(1, BATCH_SIZE):
            updated_state = tf.scatter_update(
                updated_state, [i],
                0.9 * updated_state[i-1, :] + 0.1 * (tf.matmul(inputs[i,:], weights) + biases))
        prev_state = prev_state.assign(updated_state[BATCH_SIZE-1, :])
        output = tf.nn.relu(updated_state)
    return output

def data_iter():
    while True:
        idxs = np.random.rand(BATCH_SIZE, INPUTS)
        yield idxs
with tf.Graph().as_default():
    inputs = tf.placeholder(tf.float32, shape=(BATCH_SIZE, INPUTS))
    state = tf.Variable(tf.zeros([BATCH_SIZE, HIDDEN_1]), name='inner_state')
    prev_state = tf.Variable(tf.zeros([HIDDEN_1]), name='previous_inner_state')
    output = create_graph(inputs, state, prev_state)

    sess = tf.Session()
    # Run the Op to initialize the variables.
    init = tf.initialize_all_variables()
    sess.run(init)

    iter_ = data_iter()
    for i in xrange(0, 2):
        print("iteration: ", i)
        input_data = iter_.next()
        out = sess.run(output, feed_dict={inputs: input_data})
TensorFlow Variable objects have limited support for updating slices, via the tf.scatter_update(), tf.scatter_add(), and tf.scatter_sub() ops. Each of these ops lets you specify a variable, a vector of slice indices (indices into the 0th dimension of the variable, identifying the contiguous slices to be mutated), and a tensor of values (the new values to be written to the variable at the corresponding slice indices).
To update a single row of the variable, you can use tf.scatter_update(). For example, to update the 0th row of state, you would do:
updated_state = tf.scatter_update(
    state, [0], 0.9 * prev_state + 0.1 * (tf.matmul(inputs, weights) + biases))
To chain multiple updates, you can use the mutable updated_state tensor that is returned from tf.scatter_update():
for i in xrange(1, BATCH_SIZE):
    updated_state = tf.scatter_update(
        updated_state, [i], 0.9 * updated_state[i-1, :] + ...)
prev_state = prev_state.assign(updated_state[BATCH_SIZE-1, :])
Finally, you can evaluate the resulting updated_state.op to apply all of the updates to state:
sess.run(updated_state.op) # or `sess.run(updated_state)` to fetch the result
PS. You might find it more efficient to use tf.scan() to compute the intermediate states, and only materialize prev_state in a variable.
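The tf.scan() suggestion amounts to computing the recurrence state[i] = 0.9 * state[i-1] + 0.1 * x[i] as a scan over the batch, keeping every intermediate state, rather than issuing one in-place update per row. A plain-Python sketch of the pattern (with a scalar standing in for the matmul term):

```python
def scan(f, xs, init):
    """Accumulate f over xs, returning every intermediate state.
    This is the pattern tf.scan implements over tensor slices."""
    states = []
    acc = init
    for x in xs:
        acc = f(acc, x)
        states.append(acc)
    return states

# Recurrence from the post, with x standing in for
# 0.1 * (tf.matmul(inputs[i,:], weights) + biases).
states = scan(lambda prev, x: 0.9 * prev + 0.1 * x, [1.0, 2.0, 3.0], 0.0)
prev_state = states[-1]  # only the final state needs to live in a variable
# states ≈ [0.1, 0.29, 0.561]
```

This avoids the chain of scatter_update ops entirely: the whole batch of states is produced as one tensor, and only the last row is assigned back to prev_state.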