
tf.image.decode_jpeg - contents must be scalar, got shape [1]

I have built a server/client demo for image classification with TensorFlow Serving, following this tutorial: https://github.com/tmlabonte/tendies/blob/master/minimum_working_example/tendies-basic-tutorial.ipynb

The Client

It accepts an image as input, converts it to Base64, and passes it to the server as JSON:

import base64
import json
import requests

# `image` holds the path to the input file
input_image = open(image, "rb").read()
print("Raw bitstring: " + str(input_image[:10]) + " ... " + str(input_image[-10:]))

# Encode image in b64
encoded_input_string = base64.b64encode(input_image)
input_string = encoded_input_string.decode("utf-8")
print("Base64 encoded string: " + input_string[:10] + " ... " + input_string[-10:])

# Wrap bitstring in JSON
instance = [{"images": input_string}]
data = json.dumps({"instances": instance})
print(data[:30] + " ... " + data[-10:])

r = requests.post('http://localhost:9000/v1/models/cnn:predict', data=data)
print(r.text)

The Server

Once the model is loaded from .h5, it must be exported as a SavedModel. The image must pass from the client to the server as a Base64-encoded string:

import tensorflow as tf

model = tf.keras.models.load_model('./model.h5')
input_bytes = tf.placeholder(tf.string, shape=[], name="input_bytes")
# input_bytes = tf.reshape(input_bytes, [])

# Transform bitstring to uint8 tensor
input_tensor = tf.image.decode_jpeg(input_bytes, channels=3)

# Convert to float32 tensor
input_tensor = tf.image.convert_image_dtype(input_tensor, dtype=tf.float32)
input_tensor = input_tensor / 127.5 - 1.0

# Ensure tensor has correct shape
input_tensor = tf.reshape(input_tensor, [64, 64, 3])

# CycleGAN's inference function accepts a batch of images
# So expand the single tensor into a batch of 1
input_tensor = tf.expand_dims(input_tensor, 0)

# x = model.input
y = model(input_tensor)

Then input_bytes becomes the input for the prediction_signature in the SavedModel:

 tensor_info_x = tf.saved_model.utils.build_tensor_info(input_bytes)
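For reference, the rest of my export step looks roughly like this (a sketch using the TF 1.x SavedModel builder API; the export directory "./1" is a placeholder, and fetching the session via tf.keras.backend.get_session() is an assumption of this sketch):

# Sketch: build the prediction signature and export the SavedModel
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)

prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={"images": tensor_info_x},
    outputs={"scores": tensor_info_y},
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

builder = tf.saved_model.builder.SavedModelBuilder("./1")
builder.add_meta_graph_and_variables(
    tf.keras.backend.get_session(),          # session holding the Keras weights
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={"predict": prediction_signature})
builder.save()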

In the end, the server's signature looks like this:

$ saved_model_cli show --dir ./ --all

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['images'] tensor_info:
        dtype: DT_STRING
        shape: ()
        name: input_bytes:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 4)
        name: sequential_1/dense_2/Softmax:0
  Method name is: tensorflow/serving/predict

Sending the Image

When I send the Base64 image, I receive a run-time error from the server complaining that the input is not a scalar:

Using TensorFlow backend.
Raw bitstring: b'\xff\xd8\xff\xe0\x00\x10JFIF' ... b'0;s\xcfJ(\xa0h\xff\xd9'
Base64 encoded string: /9j/4AAQSk ... 9KKKBo/9k=
{"instances": [{"images": "/9j ... Bo/9k="}]}
{ "error": "contents must be scalar, got shape [1]\n\t [[{{node DecodeJpeg}} = DecodeJpeg[_output_shapes=[[?,?,3]], acceptable_fraction=1, channels=3, dct_method=\"\", fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](_arg_input_bytes_0_0)]]" }

As you can see from the signature, input_bytes is a scalar since shape=[]. I also tried to reshape it with tf.reshape(input_bytes, []), but I always get the same error. I did not find any solution on the internet or here on Stack Overflow about this error. Can you please suggest how to fix it? Thanks!

I solved the issue and would like to explain how, so you can benefit from the solution!

When you send a JSON payload like this:

{"instances": [{"images": "/9j ... Bo/9k="}]}

you are actually sending an array of size 1 because of the [ ]. If you wanted to send two images, you would write it like this:

{"instances": [{"images": "/9j ... Bo/9k="}, {"images": "/9j ... Bo/9k="}]}

Here the size is 2 (shape = [2]).
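On the client side, the payload for two images could be built like this (a sketch reusing the encoding code from the client above; the file names are placeholders):

# Sketch: build an "instances" array for two images
instances = []
for path in ["img1.jpg", "img2.jpg"]:
    with open(path, "rb") as f:
        instances.append({"images": base64.b64encode(f.read()).decode("utf-8")})
data = json.dumps({"instances": instances})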

So the solution is to declare the placeholder with shape=[None], so that it accepts any batch size:

input_bytes = tf.placeholder(tf.string, shape=[None], name="input_bytes")

Then, if you are sending only one image, the vector of size 1 can be converted to a scalar with:

input_scalar = tf.reshape(input_bytes, [])
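Putting it together, the input pipeline of the server graph becomes (a sketch; note the reshape to [] only succeeds when exactly one string is in the batch):

input_bytes = tf.placeholder(tf.string, shape=[None], name="input_bytes")
input_scalar = tf.reshape(input_bytes, [])      # collapses a batch of 1 to a scalar
input_tensor = tf.image.decode_jpeg(input_scalar, channels=3)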

There was also another error in my code: I did not consider that TensorFlow Serving can decode the Base64 itself when you explicitly state 'b64' in the JSON (please refer to RESTful API Encoding binary values). So if you send

{"instances": [{"images": {"b64": "/9j ... Bo/9k="}}]}

the server will automatically decode the Base64 input and the correct bit-stream will reach tf.image.decode_jpeg:

input_bytes = tf.placeholder(tf.string, shape=[], name="input_bytes")
input_tensor = tf.image.decode_jpeg(input_bytes, channels=3)
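On the client side, only the JSON wrapping changes (a sketch based on the client code above; input_string is the Base64 string built earlier):

instance = [{"images": {"b64": input_string}}]
data = json.dumps({"instances": instance})
r = requests.post('http://localhost:9000/v1/models/cnn:predict', data=data)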

the "tf.image.decode_jpeg" can only take scalar “tf.image.decode_jpeg”只能采用标量
