How to verify optimized model in tensorflow

I'm following a tutorial from codelabs. They use this script to optimize the model:

python -m tensorflow.python.tools.optimize_for_inference \
  --input=tf_files/retrained_graph.pb \
  --output=tf_files/optimized_graph.pb \
  --input_names="input" \
  --output_names="final_result"

They verify the optimized_graph.pb using this script:

python -m scripts.label_image \
    --graph=tf_files/optimized_graph.pb \
    --image=tf_files/flower_photos/daisy/3475870145_685a19116d.jpg

The problem is that I'm trying to use optimize_for_inference on my own code, which is not for image classification.

Previously, before optimizing, I used this script to verify my model by testing it on sample data:

import tensorflow as tf
import numpy as np

def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")

    # Assume the first op in the graph is the input and the last op is the output.
    input_name = graph.get_operations()[0].name + ':0'
    output_name = graph.get_operations()[-1].name + ':0'

    return graph, input_name, output_name

def predict(model_path, input_data):
    # load tf graph
    tf_model,tf_input,tf_output = load_graph(model_path)

    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output) 

    model_input = tf.train.Example(
        features=tf.train.Features(feature={
        "thisisinput": tf.train.Feature(float_list=tf.train.FloatList(value=input_data)),
    }))
    model_input = model_input.SerializeToString()

    num_outputs = 3
    predictions = np.zeros(num_outputs)
    with tf.Session(graph=tf_model) as sess:
        y_out = sess.run(y, feed_dict={x: [model_input]})
        predictions = y_out

    return predictions

if __name__ == "__main__":
    input_data = [4.7, 3.2, 1.6, 0.2]  # my model receives 4 inputs
    print(np.argmax(predict("not_optimized_model.pb", input_data)))

But after optimizing the model, my testing script doesn't work. It raises an error:

ValueError: Input 0 of node import/ParseExample/ParseExample was passed float from import/inputtensors:0 incompatible with expected string.

So my question is: how do I verify my model after optimizing it? I can't use the --image flag like the tutorial does.
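One way to see what an optimized graph actually expects is to list its node names and ops; the ParseExample node and the input placeholder show up directly. A minimal sketch using the same TF 1.x API as the code above (the graph file name here is an assumption):

import tensorflow as tf

# Hypothetical file name; substitute the actual optimized graph path.
graph_def = tf.GraphDef()
with tf.gfile.GFile("optimized_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# The first node is usually the input placeholder; its name and op type
# reveal whether the graph expects a serialized string or raw floats.
for node in graph_def.node:
    print(node.name, node.op)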

I've solved the error by changing the placeholder's type to tf.float32 when exporting the model:

def my_serving_input_fn():
    input_data = {
        "featurename" : tf.placeholder(tf.float32, [None, 4], name='inputtensors')
    }
    return tf.estimator.export.ServingInputReceiver(input_data, input_data)
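For context, this serving input function is what gets passed to the Estimator's export call. A hedged sketch of that wiring, where the DNNClassifier stands in for the original (unshown) training code and is assumed to be already trained:

import tensorflow as tf

# Placeholder model standing in for the question's real Estimator;
# the feature column name matches the serving input function above.
feature_columns = [tf.feature_column.numeric_column("featurename", shape=[4])]
estimator = tf.estimator.DNNClassifier(
    hidden_units=[10], feature_columns=feature_columns, n_classes=3)

# Exports a SavedModel whose signature takes the raw float placeholder
# directly, so no tf.train.Example parsing ends up in the graph.
estimator.export_savedmodel("export_dir", my_serving_input_fn)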

and then changing the prediction function above to:

def predict(model_path, input_data):
    # load tf graph
    tf_model, tf_input, tf_output = load_graph(model_path)

    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output) 

    num_outputs = 3
    predictions = np.zeros(num_outputs)
    with tf.Session(graph=tf_model) as sess:
        y_out = sess.run(y, feed_dict={x: [input_data]})
        predictions = y_out

    return predictions
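And the corresponding call, mirroring the earlier __main__ block (the optimized graph's file name is an assumption):

if __name__ == "__main__":
    input_data = [4.7, 3.2, 1.6, 0.2]  # raw floats, no Example serialization
    print(np.argmax(predict("optimized_model.pb", input_data)))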

After freezing the model, the prediction code above works. But unfortunately, it raises another error when trying to load the pb directly after exporting the model.
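For reference, the freezing step itself can be done with TensorFlow's freeze_graph tool; the SavedModel export directory and the output node name below are placeholders, not values from the original question:

# <timestamp> and <final_output_node> are placeholders; the real output
# node name can be found with the graph-inspection snippet shown earlier.
python -m tensorflow.python.tools.freeze_graph \
  --input_saved_model_dir=export_dir/<timestamp> \
  --output_node_names=<final_output_node> \
  --output_graph=not_optimized_model.pb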
