
Tensorflow TocoConverter gives toco_from_protos error

I used freeze_graph to freeze the model and then tried to use TocoConverter to convert it to tflite; however, it gives me this error:

RuntimeError: TOCO failed see console for info.
b'/bin/sh: 1: toco_from_protos: not found\n'
None 

Below is my code:

graph_def_file = './frozen_model2.pb'
input_arrays = ['IteratorGetNext']
output_arrays = ['model/fc_result/prediction/BiasAdd']

converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Any help would be much appreciated!!!

The error b'/bin/sh: 1: toco_from_protos: not found\n' indicates that the shell command toco_from_protos could not be found. I'm guessing toco --help doesn't work for you either.
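A quick way to confirm this from Python is to check whether the binaries are visible on PATH. This is just a diagnostic sketch using the standard-library shutil.which; it mirrors the lookup the shell does:

```python
import shutil

# Check whether the helper binaries that lite.py shells out to
# are visible on the current PATH.
for binary in ("toco", "toco_from_protos"):
    location = shutil.which(binary)
    if location:
        print(f"{binary}: found at {location}")
    else:
        print(f"{binary}: NOT found on PATH")
```

If both print "NOT found on PATH", the subprocess call inside the converter is guaranteed to fail the same way.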

I built TensorFlow 1.9.0 from source, so I assume I must have screwed up a step where shell commands like toco and toco_from_protos get set up.

Anyway, my traceback was this:

  File "/home/casey/anaconda3/envs/mnist/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 330, in convert
    dump_graphviz_video=self.dump_graphviz_video)
  File "/home/casey/anaconda3/envs/mnist/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 263, in toco_convert
    input_data.SerializeToString())
  File "/home/casey/anaconda3/envs/mnist/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 107, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.
b'/bin/sh: 1: toco_from_protos: not found\n'
None

If you follow the traceback to line 107 in [python path]/site-packages/tensorflow/contrib/lite/python/convert.py, you'll see that the definition of toco_convert_protos() contains logic to run toco_from_protos as a shell command:

# TODO(aselle): When toco does not use fatal errors for failure, we can
# switch this on.
if not _toco_from_proto_bin:
    return _toco_python.TocoConvert(
        model_flags_str, toco_flags_str, input_data_str)

with _tempfile.NamedTemporaryFile() as fp_toco, \
           _tempfile.NamedTemporaryFile() as fp_model, \
           _tempfile.NamedTemporaryFile() as fp_input, \
           _tempfile.NamedTemporaryFile() as fp_output:
    fp_model.write(model_flags_str)
    fp_toco.write(toco_flags_str)
    fp_input.write(input_data_str)
    fp_model.flush()
    fp_toco.flush()
    fp_input.flush()

    cmd = [
        _toco_from_proto_bin, fp_model.name, fp_toco.name, fp_input.name,
        fp_output.name
    ]
    cmdline = " ".join(cmd)
    proc = _subprocess.Popen(
        cmdline,
        shell=True,
        stdout=_subprocess.PIPE,
        stderr=_subprocess.STDOUT,
        close_fds=True)
    stdout, stderr = proc.communicate()
    exitcode = proc.returncode
    if exitcode == 0:
      stuff = fp_output.read()
      return stuff
    else:
      raise RuntimeError("TOCO failed see console for info.\n%s\n%s\n" %
                         (stdout, stderr))

If your toco and toco_from_protos shell commands aren't working, this step will obviously fail.

I made this change as a workaround in [python path]/site-packages/tensorflow/contrib/lite/python/convert.py, to force the first branch to return:

  if _toco_from_proto_bin:  # was: if not _toco_from_proto_bin:
    return _toco_python.TocoConvert(
        model_flags_str, toco_flags_str, input_data_str)

Janky, but it got the job done for me. The proper solution would probably be to reinstall a version of TensorFlow with a working toco, or to redo the bazel build if you are building from source.
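If you'd rather not edit site-packages, the same switch can be flipped from your own script before calling convert(). This is a sketch based on the module-level _toco_from_proto_bin name shown above; it assumes the TF 1.9-era tf.contrib.lite layout and is not a stable API:

```python
# Hypothetical workaround: force toco_convert_protos() to take the
# in-process branch by clearing the module-level _toco_from_proto_bin.
# Assumes the TF 1.9-era tf.contrib.lite layout; adjust if yours differs.
def force_in_process_toco():
    try:
        from tensorflow.contrib.lite.python import convert as _convert
    except ImportError:
        return False  # TF not installed, or a different layout
    # With this cleared, "if not _toco_from_proto_bin" returns early
    # via _toco_python.TocoConvert instead of shelling out.
    _convert._toco_from_proto_bin = None
    return True

patched = force_in_process_toco()
print("patched" if patched else "could not patch (different TF layout?)")
```

Call force_in_process_toco() once at the top of your conversion script, before TocoConverter.convert().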

Make sure that the toco scripts are in your search path; check the PATH variable.

It seems that they are installed in /home/user/.local/bin, so add that directory to the PATH variable.
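For example, assuming the scripts landed in ~/.local/bin (adjust the path to wherever they actually are on your machine):

```shell
# Append the pip user-install bin directory to PATH for this shell session
export PATH="$PATH:$HOME/.local/bin"

# Verify the helper is now visible; prints its path if found
command -v toco_from_protos || echo "toco_from_protos still not on PATH"
```

Add the export line to your ~/.bashrc or ~/.profile to make it permanent.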

You have two options:

  1. Download and build TensorFlow from source.
  2. Use Google Colab if you don't want to install TensorFlow from source.

I just experienced this problem when quantizing a model to integer precision.

RuntimeError: TOCO failed see console for info.
b'/bin/sh: 1: toco_from_protos: not found\n'

The key is to find where toco_from_protos actually resides.

I used the locate command to search across all files on my Linux system (a newly installed system may need updatedb run first, to create the initial database).

locate toco_from

/usr/local/intelpython3/bin/toco_from_protos

Bingo. Let's see if there is anything else in this folder:

ls /usr/local/intelpython3/bin/toco*

/usr/local/intelpython3/bin/toco
/usr/local/intelpython3/bin/toco_from_protos

My custom Python installation requires manual configuration in order for its bin directory to be exposed on PATH.

So I added the following lines to my ~/.profile:

PYTHONBIN=/usr/local/intelpython3/bin
export PATH=$PATH:$PYTHONBIN

and made it take effect immediately with:

source ~/.profile

Then everything goes well again.

I would recommend trying it on the latest tensorflow (not tensorflow-gpu). Worked for me.
