The correct way to input images for testing the TensorFlow model
I have been trying to test the FCN implementation posted here. The only thing I changed is how the input images are set up for testing against the model. My modification is marked with a red curve in the figure below.
However, running the program causes the following error message:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.

It is raised at this line:

down, up = sess.run(tensors, feed_dict=feed_dict)
I am curious to know what is wrong in my implementation and how to fix it. In the original post, the author inputs the image with:

img1 = skimage.io.imread("./test_data/tabby_cat.png")
If I change

batch_images = tf.expand_dims(images, 0)

to

batch_images = tf.expand_dims(img1, 0)

the program outputs the following error messages.
As the error indicates, the types of values that you can use as feeds are Python scalars, strings, lists, or numpy arrays. What you are trying to use as a feed is img1, the output of tf.image.decode_png, which is of type tf.Tensor. Hence the error. You have two options:
1) Convert img1 to a numpy array before feeding it. You can do that by simply evaluating img1, as follows:
feed_dict = {images:img1.eval()}
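As a side note, the set of acceptable feed values named in the error message is exactly the set of things numpy can turn into a concrete array. A TensorFlow-free numpy sketch (an illustration only, not code from the original post):

```python
import numpy as np

# Everything feed_dict accepts -- Python scalars, strings, lists,
# numpy ndarrays -- is concrete data that numpy can represent directly.
for value in (3.14, "tabby_cat", [1, 2, 3], np.ones((2, 2))):
    arr = np.asarray(value)
    print(type(arr).__name__, arr.shape)

# A tf.Tensor, by contrast, is a symbolic node in the graph with no
# values until a session runs it -- which is what img1.eval() does:
# it executes the decode op and hands back a plain ndarray.
```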
2) Use img1 itself as the input to the rest of the model. You can do that as follows:
batch_images = tf.expand_dims(img1, 0)
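For a shape-level picture of what this second option does, here is a numpy-only sketch (the synthetic array below is a stand-in for the decoded PNG, and np.expand_dims mirrors tf.expand_dims):

```python
import numpy as np

# Stand-in for the decoded image: skimage.io.imread / decode_png
# yield an ndarray of shape (height, width, channels).
img1 = np.zeros((224, 224, 3), dtype=np.uint8)

# tf.expand_dims(img1, 0) prepends a batch dimension of size 1,
# which np.expand_dims makes explicit:
batch_images = np.expand_dims(img1, 0)
print(batch_images.shape)  # (1, 224, 224, 3)
```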