I wonder when I need to use tf.shape() and when to use x.shape. I'm currently using TensorFlow 2.0 rc0.
The following is an example:
#!/usr/bin/python3
import tensorflow as tf
a = tf.zeros((4, 3, 1))
print(tf.shape(a).numpy())
print(a.shape)
The result of the above code is as follows:
[4 3 1]
(4, 3, 1)
tf.shape(a).numpy()
returns a NumPy array, whereas a.shape
returns a tuple-like TensorShape, but I cannot easily tell which one is better or which one should be preferred.
Could anyone please give some advice on this?
Calling .numpy()
on any Tensor
(or the output of any TensorFlow op)
returns a numpy.ndarray.
Example:
a = tf.constant([1,2,3])
print(a.numpy())
print(tf.shape(a).numpy())
print(type(tf.shape(a)))
[1 2 3]
[3]
<class 'tensorflow.python.framework.ops.EagerTensor'>
But Tensor.shape
(an attribute, not a method) has type TensorShape,
which behaves like a tuple of dimensions.
print(type(a.shape))
<class 'tensorflow.python.framework.tensor_shape.TensorShape'>
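As a small sketch of how a TensorShape interoperates with plain Python containers (the variable names here are just illustrative):

```python
import tensorflow as tf

a = tf.zeros((4, 3, 1))

# TensorShape supports indexing and iteration like a tuple
static_shape = a.shape
print(static_shape[0])         # 4

# Convert to plain Python containers when needed
print(tuple(static_shape))     # (4, 3, 1)
print(static_shape.as_list())  # [4, 3, 1]
```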
Even NumPy arrays have a shape attribute that returns a tuple of the length of each dimension of the array.
import numpy as np

data = np.array([11, 22, 33, 44, 55])
print(data.shape)
(5,)
The ideal way to use a tensor's shape
inside your operations is tf.shape(a)
directly, without converting via .numpy()
or relying on Tensor.shape.
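For instance, here is a minimal sketch of feeding tf.shape directly into another op (flattening all but the first axis), with no round trip through .numpy():

```python
import tensorflow as tf

a = tf.zeros((4, 3, 1))

# tf.shape(a) is itself a tensor, so its slices can feed other ops directly
batch = tf.shape(a)[0]                # scalar int32 tensor with value 4
flat = tf.reshape(a, [batch, -1])     # collapses the trailing axes
print(flat.shape)                     # (4, 3)
```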
Hope this answers your question, Happy Learning!
I think one important difference is that if you try to access a tensor dimension that is not known in advance (e.g. None) using
tensor.shape
you will fail when building the graph of the network.
However, using
tf.shape(tensor)
will work, as it returns the size at execution time. This can be useful, for example, when someone provides you batches of unknown size, which easily happens if your data is not divisible by your batch_size and you need to work with the actual batch dimension.
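A small sketch of that difference, using a tf.function with an unspecified batch dimension (the function name is just illustrative):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)])
def dynamic_batch(x):
    # x.shape[0] is None here: the static shape is unknown at graph-build time.
    # tf.shape(x)[0] is a tensor whose value is known only at execution time.
    return tf.shape(x)[0]

print(dynamic_batch(tf.zeros((5, 3))).numpy())  # 5
print(dynamic_batch(tf.zeros((2, 3))).numpy())  # 2
```

The same traced graph handles both calls, because the batch size is read dynamically rather than baked into the graph.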