I've got two tensors:
x with shape (batchsize, 29, 64),
y with shape (batchsize, 29, 29, 64).
I want to iterate row-wise over y, perform an elementwise multiplication with x, sum-reduce the result, and stack those results into a new tensor. The result should have shape (batchsize, 29, 64). It's quite similar to a convolution.
How I would program it sequentially:
for batchnr in range(x.shape[0]):
    for n in range(y.shape[1]):
        temp = tf.multiply(x[batchnr], y[batchnr][n])  # shape (29, 64)
        prod = tf.reduce_sum(temp, axis=0)             # shape (64,)
        res[batchnr][n] = prod
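As a runnable sketch of this loop (assuming TensorFlow 2.x eager mode; the batch size of 2 is just an illustrative value), the row-wise iteration and stacking could look like this:

```python
import tensorflow as tf

batchsize = 2  # illustrative value; any batch size works
x = tf.random.normal((batchsize, 29, 64))
y = tf.random.normal((batchsize, 29, 29, 64))

rows = []
for batchnr in range(x.shape[0]):
    row = []
    for n in range(y.shape[1]):
        temp = tf.multiply(x[batchnr], y[batchnr][n])  # shape (29, 64)
        prod = tf.reduce_sum(temp, axis=0)             # shape (64,)
        row.append(prod)
    rows.append(tf.stack(row))          # shape (29, 64)
res = tf.stack(rows)                    # shape (batchsize, 29, 64)
```

This is of course the slow, sequential version the question is asking to replace.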
I've created this figure to explain it: since reduce_sum is done for each row, the result is again a tensor of shape (batchsize, 29, 64).
I can't figure out how to do it correctly and efficiently. Thank you.
I think I've found the solution. Instead of iterating over y, I tile x up to y's shape. In code it looks like:
m = K.tf.constant([1, 29, 1], dtype=K.tf.int32)
x = K.tf.tile(Z_RBF[0], m)  # replicate Z and stack it to the RBF shape
x = K.tf.reshape(x, shape=(-1, *Z_RBF[1].shape[1:]))
x = K.tf.multiply(x, Z_RBF[1])
x = K.tf.reduce_sum(x, axis=2)  # shape (batchsize, 29, 64)
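For reference, the same contraction can also be written without the tile/reshape, either as a single tf.einsum or via broadcasting. This is a sketch under the same layout assumption, i.e. res[b, i, k] = sum_j x[b, j, k] * y[b, i, j, k]:

```python
import tensorflow as tf

x = tf.random.normal((2, 29, 64))      # (batchsize, 29, 64)
y = tf.random.normal((2, 29, 29, 64))  # (batchsize, 29, 29, 64)

# Contract over the row index j of y against x
res = tf.einsum('bjk,bijk->bik', x, y)  # shape (batchsize, 29, 64)

# Equivalent via broadcasting: expand x to (batchsize, 1, 29, 64),
# multiply elementwise, then sum-reduce over the row axis
res2 = tf.reduce_sum(x[:, None, :, :] * y, axis=2)
```

Both avoid materializing the tiled copy of x explicitly and let TensorFlow fuse the multiply and reduction.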