
Index vector that fits a tensor at runtime using TensorFlow

I've got this Python function that uses the TensorFlow framework:

def compute_ap(gain_vector):
    # this vector must fit the dimension of the gain_vector
    index_vector = tf.range(1, gain_vector.get_shape()[0], dtype=tf.float32)

    ap = tf.div(tf.reduce_sum(tf.div(tf.cast(gain_vector, tf.float32), index_vector), 1),
                tf.reduce_sum(tf.cast(gain_vector, tf.float32), 1))
    return ap

When I run the program I get this error:

ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: 'Tensor("inputs/strided_slice:0", shape=(), dtype=int32)'

It seems that gain_vector.get_shape()[0] doesn't get the dimension of the gain vector. What is the problem?

tf.range() accepts arguments only of type int32. From the docs:

Args:
start: A 0-D (scalar) of type int32. First entry in sequence. Defaults to 0.

So you could just create an int32 tensor and cast it to float32 later on. Use something like this:

index_vector = tf.range(1, tf.shape(gain_vector)[0])
vec_float32 = tf.cast(index_vector, dtype=tf.float32)
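
For reference, here is a minimal sketch of the whole function with that fix applied. It uses the TensorFlow 1.x API and makes two assumptions that you should adjust to your data: that gain_vector is a 2-D [batch, n] tensor (suggested by the reduce_sum over axis 1), and that the indices should run from 1 to n inclusive:

import tensorflow as tf

def compute_ap(gain_vector):
    # Length of the last axis as an int32 scalar tensor, available at
    # run time (unlike gain_vector.get_shape(), which is static).
    n = tf.shape(gain_vector)[1]  # assumes gain_vector is [batch, n]

    # Build the index vector as int32, then cast it to float32.
    # The "+ 1" assumes you want indices 1..n inclusive.
    index_vector = tf.cast(tf.range(1, n + 1), tf.float32)

    gains = tf.cast(gain_vector, tf.float32)
    ap = tf.div(tf.reduce_sum(tf.div(gains, index_vector), 1),
                tf.reduce_sum(gains, 1))
    return ap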
