
Converting a tensor to numpy without eager mode

I am defining a custom layer as the last one of my network. I need to convert the input tensor into a numpy array to define a function on it. In particular, I want to define my last layer similarly to this:

import tensorflow as tf
from tensorflow.keras import layers
def hat(x):
  A = tf.constant([[0.,-x[2],x[1]],[x[2],0.,-x[0]],[-x[1],x[0],0.]])
  return A

class FinalLayer(layers.Layer):
  def __init__(self, units):
    super(FinalLayer, self).__init__()
    self.units = units
  def call(self, inputs):
    p = tf.constant([1.,2.,3.])
    q = inputs.numpy()  # only available in eager mode
    p = tf.matmul(hat(q),p)
    return p

The weights do not matter for my question, since I know how to manage them. The problem is that this layer works perfectly in eager mode, but with eager execution enabled the training phase is too slow. My question is: is there something I can do to implement this layer without eager mode? Alternatively, can I access the single components x[i] of a tensor without converting it into a numpy array?

You can rewrite your hat function a bit differently, so it accepts a Tensor instead of a numpy array. For example:

def hat(x):
  zero = tf.zeros(())
  # tf.stack accepts scalar tensors; tf.concat would fail here,
  # since it requires its inputs to have rank >= 1
  A = tf.stack([zero, -x[2], x[1], x[2], zero, -x[0], -x[1], x[0], zero], axis=0)
  return tf.reshape(A, (3, 3))

This will result in:

>>> p = tf.constant([1.,2.,3.])
>>> hat(p)
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 0., -3.,  2.],
       [ 3.,  0., -1.],
       [-2.,  1.,  0.]], dtype=float32)>

