How to work with the TF Lite library in a C++ project
For the past 1-2 days I have been struggling with how to build TensorFlow Lite so that I can use it as a header or library in my own C/C++ project.
For example, I have a C++ project with a main.cpp containing the following code:
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"

int main()
{
    std::unique_ptr<tflite::FlatBufferModel> model;
    model = tflite::FlatBufferModel::BuildFromBuffer(h5_converted_tflite, h5_converted_tflite_len);

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Resize input tensors, if desired.
    interpreter->AllocateTensors();

    float* input = interpreter->typed_input_tensor<float>(0);
    // Fill `input`.

    interpreter->Invoke();

    float* output = interpreter->typed_output_tensor<float>(0);
}
What should I download/build, and from where, so that this code compiles successfully? Right now it obviously says the header files cannot be found; when I cloned the TF repository and added it to the include folders, it could not find the "flatbuffers.h" file, and when I added that manually as well, it gave me lots of linking errors. Any help would be appreciated here...
Thanks in advance
Try the code below; it has been tested with TensorFlow Lite 1.14.0:
#include <fstream>
#include <memory>
#include <string>

std::string str = "model.tflite";
std::ifstream file(str, std::ifstream::binary);

// Read the whole model file into a buffer.
file.seekg(0, file.end);
int length = file.tellg();
file.seekg(0, file.beg);

char* model_data = new char[length];
file.read(model_data, length);
file.close();

std::unique_ptr<tflite::Interpreter> interpreter;
std::unique_ptr<tflite::FlatBufferModel> model;
tflite::ops::builtin::BuiltinOpResolver resolver;

// Note: BuildFromBuffer does not copy the buffer, so model_data must
// stay alive for as long as the model is in use.
model = tflite::FlatBufferModel::BuildFromBuffer(model_data, length);
tflite::InterpreterBuilder(*model, resolver)(&interpreter);
interpreter->AllocateTensors();
Recently, TFLite added CMake support, which seems to solve your dependency problem:
https://www.tensorflow.org/lite/guide/build_cmake#create_a_cmake_project_which_uses_tensorflow_lite
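As a sketch, a minimal CMakeLists.txt along the lines of the one in that guide could look like this (the TENSORFLOW_SOURCE_DIR variable and the target/file names minimal and minimal.cc are placeholders to adapt to your own checkout and sources):

```cmake
cmake_minimum_required(VERSION 3.16)
project(minimal C CXX)

# Path to a local clone of the TensorFlow repository (placeholder:
# point TENSORFLOW_SOURCE_DIR at wherever you checked it out).
set(TENSORFLOW_SOURCE_DIR "" CACHE PATH
    "Directory that contains the TensorFlow source tree")

# Build TF Lite as part of this project and link against it, so the
# headers and all FlatBuffers/dependency includes are resolved for you.
add_subdirectory(
  "${TENSORFLOW_SOURCE_DIR}/tensorflow/lite"
  "${CMAKE_CURRENT_BINARY_DIR}/tensorflow-lite"
  EXCLUDE_FROM_ALL
)

add_executable(minimal minimal.cc)
target_link_libraries(minimal tensorflow-lite)
```

Configuring with something like `cmake -DTENSORFLOW_SOURCE_DIR=/path/to/tensorflow <project_dir>` and then `cmake --build .` should pull in the TF Lite headers and avoid the manual include/link setup that caused the missing "flatbuffers.h" and the linker errors.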