
Import a simple TensorFlow frozen_model.pb file and make a prediction in C++

I am trying to import a graph I exported from TensorFlow Python into TensorFlow C++. I've already successfully re-imported the graph into Python. The only thing I want now is to write the same code in C++, but I am not sure about the C++ API functions and their usage, as the documentation on the TensorFlow website is not good enough.

Here's the C++ code I found so far.

C++:

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"

namespace tf = tensorflow;

tf::Session* session;

tf::Status status = tf::NewSession(tf::SessionOptions(), &session);
checkStatus(status);

tf::GraphDef graph_def;
status = ReadBinaryProto(tf::Env::Default(), "./models/frozen_model.pb", &graph_def);
checkStatus(status);

status = session->Create(graph_def);
checkStatus(status);

tf::Tensor x(tf::DT_FLOAT, tf::TensorShape());
tf::Tensor y(tf::DT_FLOAT, tf::TensorShape());

x.scalar<float>()() = 23.0;
y.scalar<float>()() = 19.0;

std::vector<std::pair<tf::string, tf::Tensor>> input_tensors = {{"x", x}, {"y", y}};
std::vector<std::string> vNames; // vector of names for required graph nodes
vNames.push_back("prefix/input_neurons:0");
vNames.push_back("prefix/prediction_restore:0");
std::vector<tf::Tensor> output_tensors;

status = session->Run({}, vNames,  {}, &output_tensors);
checkStatus(status);

tf::Tensor output = output_tensors[0];
std::cout << "Success: " << output.scalar<float>() << "!" << std::endl;
session->Close();
return 0;

The problem I am having with the current C++ code above is that it says it cannot find any operation by the name of prefix/input_neurons:0. There is such an operation in the graph, though, because when I import this graph in the Python code (shown below), it works perfectly fine.
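One thing worth keeping in mind here is TensorFlow's naming convention: prefix/input_neurons:0 is a *tensor* name — the operation name plus an output index after the colon. The feed and fetch lists of Session::Run expect tensor names like this, while the third argument (target nodes to run) expects bare operation names. A plain-Python sketch of that convention (illustrative only, not a TensorFlow API):

```python
def split_tensor_name(name):
    """Split a TensorFlow tensor name into (operation_name, output_index).

    "prefix/input_neurons:0" names output 0 of the operation
    "prefix/input_neurons"; a bare name implies output 0.
    """
    if ":" in name:
        op_name, index = name.rsplit(":", 1)
        return op_name, int(index)
    return name, 0

print(split_tensor_name("prefix/input_neurons:0"))    # ('prefix/input_neurons', 0)
print(split_tensor_name("prefix/prediction_restore")) # ('prefix/prediction_restore', 0)
```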

Here's the Python code to import the graph successfully.

Python: (works perfectly fine)

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the 
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we can use again a convenient built-in function to import a graph_def into the 
    # current default Graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def, 
            input_map=None, 
            return_elements=None, 
            name="prefix", 
            op_dict=None, 
            producer_op_list=None
        )
    return graph

# We use our "load_graph" function
graph = load_graph("./models/frozen_model.pb")

# We can verify that we can access the list of operations in the graph
for op in graph.get_operations():
    print(op.name)     # <--- printing the operations snapshot below
    # prefix/Placeholder/inputs_placeholder
    # ...
    # prefix/Accuracy/predictions

# We access the input and output nodes
x = graph.get_tensor_by_name('prefix/input_neurons:0')
y = graph.get_tensor_by_name('prefix/prediction_restore:0')

# We launch a Session
with tf.Session(graph=graph) as sess:

    test_features = [[0.377745556,0.009904444,0.063231111,0.009904444,0.003734444,0.002914444,0.008633333,0.000471111,0.009642222,0.05406,0.050163333,7e-05,0.006528889,0.000314444,0.00649,0.043956667,0.016816667,0.001644444,0.016906667,0.00204,0.027342222,0.13864]]
    # compute the predicted output for test_x
    pred_y = sess.run(y, feed_dict={x: test_features})
    print(pred_y)

Update

I can print the operations from the Python script. Here's the screenshot.

(screenshot: the operation names printed by the Python script)

Here's the error I get.

(screenshot: the error from the C++ code)

See the Run function reference: in C++ the arguments are, in order, the input feed (name/tensor pairs), then the names of the output tensors to fetch, then the names of any other operations that need to be run, then the output vector (optionally with extra arguments, but it looks like you don't need them). This call should work:

status = session->Run({{"prefix/input_neurons:0", x}}, {"prefix/prediction_restore:0"}, {}, &output_tensors);

If you want to set x to the same values as in Python (there is very probably a way to do this without copying the data, but I don't know how), you can do this before calling Run():

std::vector<float> test_features = {0.377745556,0.009904444,0.063231111,0.009904444,0.003734444,0.002914444,0.008633333,0.000471111,0.009642222,0.05406,0.050163333,7e-05,0.006528889,0.000314444,0.00649,0.043956667,0.016816667,0.001644444,0.016906667,0.00204,0.027342222,0.13864};
int n_features = test_features.size();
x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({1, n_features}));
auto x_mapped = x.tensor<float, 2>(); // Eigen view for element-wise writes

for (int i = 0; i< n_features; i++)
{
    x_mapped(0, i) = test_features[i];
}
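Note that TensorShape({1, n_features}) mirrors the nested list the Python script feeds ([[...]]): a batch of one example with n_features values. A quick pure-Python sanity check of that shape (no TensorFlow needed):

```python
# The same 22 feature values as in the Python script above.
test_features = [0.377745556, 0.009904444, 0.063231111, 0.009904444,
                 0.003734444, 0.002914444, 0.008633333, 0.000471111,
                 0.009642222, 0.05406, 0.050163333, 7e-05, 0.006528889,
                 0.000314444, 0.00649, 0.043956667, 0.016816667,
                 0.001644444, 0.016906667, 0.00204, 0.027342222, 0.13864]

batch = [test_features]           # shape (1, n_features), like feed_dict={x: [[...]]}
print(len(batch), len(batch[0]))  # 1 22
```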

Tell me if it works better with this!
