TensorFlow Lite C++ API example for inference
I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples on how to use the C++ API. I will try to explain what I have achieved so far.
Create the tflite model
I have created a simple linear regression model and converted it; it should approximate the function f(x) = 2x - 1. I got this code snippet from some tutorial, but I am unable to find it anymore.
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.contrib import lite  # TF 1.x API; in TF 2.x this is tf.lite

# A single dense unit learns y = w*x + b
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# Training data sampled from f(x) = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))  # should be close to 19

# Save the Keras model and convert it to a .tflite flatbuffer
keras_file = 'linear.h5'
keras.models.save_model(model, keras_file)
converter = lite.TocoConverter.from_keras_model_file(keras_file)
tflite_model = converter.convert()
open('linear.tflite', 'wb').write(tflite_model)
This creates a binary called linear.tflite, which I should be able to load.
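As a sanity check on what the Keras snippet learns, the same fit can be reproduced with plain gradient descent on a single weight and bias. This is only an illustrative, dependency-free sketch; the learning rate and iteration count are arbitrary choices, not from the original tutorial:

```python
# Fit y = w*x + b to samples of f(x) = 2x - 1 by gradient descent on the
# mean squared error, mirroring what the single-unit Keras model does.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0
lr = 0.01          # illustrative learning rate
n = len(xs)
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(w * 10.0 + b)  # approaches f(10) = 19, like model.predict([10.0])
```

The converged weight and bias approach w = 2 and b = -1, which is why the prediction at x = 10 lands near 19.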
Compile TensorFlow Lite for my machine
TensorFlow Lite comes with a script for compilation on machines with the aarch64 architecture. I followed the guide here to do this, although I had to modify the Makefile slightly. Note that I compiled this natively on my target system. This created a static library called libtensorflow-lite.a.
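For reference, the build steps I followed look roughly like the following. Treat this as a sketch: the script locations vary between TensorFlow versions (in older releases the Makefile-based build lives under tensorflow/contrib/lite rather than tensorflow/lite), so check the paths in your own checkout.

```shell
# Native Makefile-based build on the aarch64 target (TF 1.x-era layout;
# paths may differ in your version of the repository).
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
# Fetch third-party dependencies (flatbuffers, gemmlowp, ...).
./tensorflow/lite/tools/make/download_dependencies.sh
# Build the static library natively for aarch64.
./tensorflow/lite/tools/make/build_aarch64_lib.sh
# Result: tensorflow/lite/tools/make/gen/<target>/lib/libtensorflow-lite.a
```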
Problem: Inference
I tried to follow the tutorial on the site here, and simply pasted the code snippets for loading and running the model together, e.g.
class FlatBufferModel {
  // Build a model based on a file. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromFile(
      const char* filename,
      ErrorReporter* error_reporter);

  // Build a model based on a pre-loaded flatbuffer. The caller retains
  // ownership of the buffer and should keep it alive until the returned object
  // is destroyed. Return a nullptr in case of failure.
  static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
      const char* buffer,
      size_t buffer_size,
      ErrorReporter* error_reporter);
};
tflite::FlatBufferModel model("./linear.tflite");
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
// Resize input tensors, if desired.
interpreter->AllocateTensors();
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
When trying to compile this via
g++ demo.cpp libtensorflow-lite.a
I get a load of errors. Log:
root@localhost:/inference# g++ demo.cpp libtensorflow-lite.a
demo.cpp:3:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
static std::unique_ptr<FlatBufferModel> BuildFromFile(
^~~~~~~~~~
demo.cpp:10:15: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
static std::unique_ptr<FlatBufferModel> BuildFromBuffer(
^~~~~~~~~~
demo.cpp:16:1: error: ‘tflite’ does not name a type
tflite::FlatBufferModel model("./linear.tflite");
^~~~~~
demo.cpp:18:1: error: ‘tflite’ does not name a type
tflite::ops::builtin::BuiltinOpResolver resolver;
^~~~~~
demo.cpp:19:6: error: ‘unique_ptr’ in namespace ‘std’ does not name a template type
std::unique_ptr<tflite::Interpreter> interpreter;
^~~~~~~~~~
demo.cpp:20:1: error: ‘tflite’ does not name a type
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^~~~~~
demo.cpp:23:1: error: ‘interpreter’ does not name a type
interpreter->AllocateTensors();
^~~~~~~~~~~
demo.cpp:25:16: error: ‘interpreter’ was not declared in this scope
float* input = interpreter->typed_input_tensor<float>(0);
^~~~~~~~~~~
demo.cpp:25:48: error: expected primary-expression before ‘float’
float* input = interpreter->typed_input_tensor<float>(0);
^~~~~
demo.cpp:28:1: error: ‘interpreter’ does not name a type
interpreter->Invoke();
^~~~~~~~~~~
demo.cpp:30:17: error: ‘interpreter’ was not declared in this scope
float* output = interpreter->typed_output_tensor<float>(0);
^~~~~~~~~~~
demo.cpp:30:50: error: expected primary-expression before ‘float’
float* output = interpreter->typed_output_tensor<float>(0);
I am relatively new to C++, so I may be missing something obvious here. It seems, however, that other people have trouble with the C++ API as well (see this GitHub issue). Has anybody else stumbled across this and got it to run?
The most important aspects for me to cover would be:
1.) Where and how do I define the signature, so that the model knows what to treat as inputs and outputs?
2.) Which headers do I have to include?
Thanks!
EDIT
Thanks to @Alex Cohn, the compiler was able to find the correct headers. I also realized that I probably do not need to redeclare the FlatBufferModel class, so I ended up with this code (the minor change is marked):
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
auto model = tflite::FlatBufferModel::BuildFromFile("linear.tflite"); //CHANGED
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
// Resize input tensors, if desired.
interpreter->AllocateTensors();
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
This reduces the number of errors greatly, but I am not sure how to resolve the rest:
root@localhost:/inference# g++ demo.cpp -I/tensorflow
demo.cpp:10:34: error: expected ‘)’ before ‘,’ token
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^
demo.cpp:10:44: error: expected initializer before ‘)’ token
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
^
demo.cpp:13:1: error: ‘interpreter’ does not name a type
interpreter->AllocateTensors();
^~~~~~~~~~~
demo.cpp:18:1: error: ‘interpreter’ does not name a type
interpreter->Invoke();
^~~~~~~~~~~
How do I have to tackle these? It seems that I have to define my own resolver, but I have no clue how to do that.
I finally got it to run. Considering that my directory structure looks like this:
/(root)
  /tensorflow
    # whole tf repo
  /demo
    demo.cpp
    linear.tflite
    libtensorflow-lite.a
I changed demo.cpp to
#include <stdio.h>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"

int main() {
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile("linear.tflite");
    if (!model) {
        printf("Failed to mmap model\n");
        return 1;
    }

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter);

    // Resize input tensors, if desired.
    interpreter->AllocateTensors();

    float* input = interpreter->typed_input_tensor<float>(0);
    // Dummy input for testing; the model approximates f(x) = 2x - 1.
    *input = 2.0;

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);
    printf("Result is: %f\n", *output);

    return 0;
}
Also, I had to adapt my compile command (I had to install flatbuffers manually to make it work). What worked for me was:
g++ demo.cpp -I/tensorflow -L/demo -ltensorflow-lite -lrt -ldl -pthread -lflatbuffers -o demo
Thanks to @AlexCohn for getting me on the right track!
Here is the minimal set of includes:
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/tools/gen_op_registration.h"
These will include other headers, e.g. <memory>, which defines std::unique_ptr.
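Regarding question 1: with the plain C++ API there is no separate signature to define; the model's inputs and outputs are fixed at conversion time, and you can inspect them at runtime instead. A sketch under the same headers as above (not compiled here, so take the exact field accesses as an assumption about the then-current API):

```cpp
// Enumerate the model's input and output tensors instead of defining
// a signature by hand. Assumes `interpreter` was built as shown above.
for (int i : interpreter->inputs()) {
    TfLiteTensor* t = interpreter->tensor(i);
    printf("input  %d: name=%s type=%d dims=%d\n",
           i, t->name, t->type, t->dims->size);
}
for (int i : interpreter->outputs()) {
    TfLiteTensor* t = interpreter->tensor(i);
    printf("output %d: name=%s type=%d dims=%d\n",
           i, t->name, t->type, t->dims->size);
}
```

For the linear model above, this should report one float input and one float output, matching the indices passed to typed_input_tensor and typed_output_tensor.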