
TensorFlow 1.15 is not loading a frozen graph, and has given me the same error for the past week

Okay, so I'm working on a large project in Google Colab, where I have to detect a certain object and distinguish it from all the others.

Now, for the better part of the past week, I've been working tirelessly trying to get the graph to load, but nothing seems to be working.

So, the block of code that I'm having trouble with is:

import tensorflow as tf

PATH_TO_FROZEN_GRAPH = '/content/ssd_mobilenet_v3_small_coco_2020_01_14/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
  # Parse the serialized GraphDef from the frozen .pb file
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    # Import the parsed graph into detection_graph; this is where the error occurs
    tf.import_graph_def(od_graph_def, name='')

The error arises in the last line. For some reason, it says:

NotFoundError: Op type not registered 'TFLite_Detection_PostProcess' in binary running on e32766609f28. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (eg) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

Also, I just want to add that PATH_TO_FROZEN_GRAPH is /content/ssd_mobilenet_v3_small_coco_2020_01_14/frozen_inference_graph.pb, and I can see the file there as well. So I don't know what the problem is.

Are there any solutions? Thank you!

TFLite_Detection_PostProcess is included by default in newer TFLite versions such as 2.3.0 or 2.4.0. Could you try one of those versions?
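For reference, here is a minimal sketch of running the model through the TFLite interpreter on TF 2.3+ instead of importing the frozen graph. It assumes the model has been converted to a .tflite file (the path model.tflite and the dummy input are placeholders, not part of the original post); the interpreter bundled with those versions already registers TFLite_Detection_PostProcess:

import numpy as np
import tensorflow as tf  # assumes TF 2.3.0 or newer

# Hypothetical path to the model after conversion to TFLite format
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input just to exercise the graph; replace with a real preprocessed image
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

# Read one of the detection outputs produced by the post-processing op
boxes = interpreter.get_tensor(output_details[0]['index'])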

Otherwise, you have to add it to the TFLite interpreter. The way to do that depends on what language you are using. Please see https://www.tensorflow.org/lite/guide/ops_custom#register_the_operator_with_the_kernel_library
