Suppose that while training a network we resize all images to 512*512, so there is a tf.Tensor
named input:0
of shape (batch_size, 512, 512, 3)
.
However, when making predictions, we may feed images of various sizes into the network. So the shape of the tensor input:0
should be something like (batch_size, None, None, 3)
, since the sizes of the images are unknown in advance.
So if I have a tensor of shape (batch_size, 512, 512, 3)
, how do I "reshape" it to (batch_size, None, None, 3)
? I tried
inputs = tf.reshape(inputs, (batch_size, tf.shape(inputs)[1], tf.shape(inputs)[2], 3))
but the output still has the static shape (batch_size, 512, 512, 3)
.
I don't believe you can resize/rescale the weight/bias terms of a neural network. But it would be pretty easy to resize your input images to 512*512 before feeding them in. Have you considered that?
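As a minimal sketch of that suggestion, assuming TensorFlow 2.x is available: tf.image.resize rescales a batch of arbitrarily sized images to a fixed spatial resolution, so the network only ever sees the (batch_size, 512, 512, 3) shape it was trained on. The example batch size and input resolution below are made up for illustration.

```python
import numpy as np
import tensorflow as tf

# A hypothetical batch of 2 images at some arbitrary size, e.g. 300x400.
images = np.random.rand(2, 300, 400, 3).astype(np.float32)

# Resize every image in the batch to the 512x512 resolution the network
# was trained on; tf.image.resize uses bilinear interpolation by default.
resized = tf.image.resize(images, size=(512, 512))

print(resized.shape)  # (2, 512, 512, 3)
```

This sidesteps the unknown-shape question entirely, at the cost of interpolation artifacts and a changed aspect ratio for non-square inputs.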