
TensorFlow - How to perform tf.gather with batch dimension

Unfortunately I don't know how to formulate the title of this question; maybe someone can change it?

How can I replace the following for loop in an elegant way?

# tensor.shape  -> (batchsize, 100)
# indices.shape -> (batchsize, 100)
liste = []
for i in range(tensor.shape[0]):
    liste.append(tf.gather(tensor[i, :], indices[i, :10]))

new_tensor = tf.stack(liste)


This should do the trick:

new_tensor = tf.gather(tensor, axis=-1, indices=indices[:, :10], batch_dims=1)
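To make the behaviour concrete: with batch_dims=1, tf.gather treats the leading dimension as a batch, so row i of indices picks columns from row i of tensor. A small sketch with made-up values (not the tensors from the question):

import tensorflow as tf

# Each row of `idx` picks columns from the matching row of `params`.
params = tf.constant([[10., 20., 30.],
                      [40., 50., 60.]])
idx = tf.constant([[0, 2],
                   [1, 1]])

out = tf.gather(params, idx, axis=-1, batch_dims=1)
print(out)  # [[10. 30.]
            #  [50. 50.]]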

Here is a minimal reproducible example:

import tensorflow as tf

# for version 1.x
#tf.enable_eager_execution()

tensor = tf.random.normal((2, 10))
indices = tf.random.uniform(shape=[2, 10], minval=0, maxval=4, dtype=tf.int32)

liste = []
for i in range(tensor.shape[0]):
    liste.append(tf.gather(tensor[i, :], indices[i, :5]))

new_tensor = tf.stack(liste)

print('tensor: ')
print(tensor)

print('new_tensor: ')
print(new_tensor)

new_tensor_v2 = tf.gather(tensor, axis=-1, indices=indices[:, :5], batch_dims=1)
print('new_tensor_v2: ')
print(new_tensor_v2)
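
As a quick sanity check, the two results should match exactly, since both gather the same elements from the same rows:

# Should print True: the loop-based result equals the batch_dims version.
print(tf.reduce_all(tf.equal(new_tensor, new_tensor_v2)))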
