How to use a CNN in OpenCV via C++?
Here https://stackoverflow.com/a/49817506/1277317 there is an example of how to use a convolutional network in OpenCV, but that example is in Python. How do I do the same in C++? Namely, how do I do this in C++:
net = cv.dnn.readNetFromTensorflow('model.pb')
net.setInput(inp.transpose(0, 3, 1, 2))
cv_out = net.forward()
?
And how do I create a Mat for the setInput function for an image of size 60x162x1? I use float for the data, just like in the Python example. Right now I have this code, and it gives incorrect results:
Net net = readNet("e://xor.pb");
float x0[60][162];
for (int i = 0; i < 60; i++)
{
    for (int j = 0; j < 162; j++)
    {
        x0[i][j] = 0;
    }
}
x0[5][59] = 0.5;
x0[5][60] = 1;
x0[5][61] = 1;
x0[5][62] = 0.5;
Mat aaa = cv::Mat(60, 162, CV_32F, x0);
Mat inputBlob = dnn::blobFromImage(aaa, 1.0, Size(60, 162));
net.setInput(inputBlob, "conv2d_input");
Mat prob = net.forward("activation_2/Softmax");
for (int i = 0; i < prob.cols; i++)
{
    qDebug() << i << prob.at<float>(0, i);
}
In OpenCV, almost all functions are designed to work with 3D matrices, so the easiest way for me to work with CV_32F 4D matrices is to address their data directly. The following code works correctly and quickly:
Net net = readNet("e://xor.pb");
const int sizes[] = {1, 1, 60, 162};  // NCHW: batch, channels, height, width
Mat tenz = Mat::zeros(4, sizes, CV_32F);
float* dataB = (float*)tenz.data;
// Element (y, x) of the 60x162 plane sits at offset y*width + x,
// where width = tenz.size[3] = 162 (the row stride is the number
// of columns, not the number of rows).
int x = 1;
int y = 2;
dataB[y * tenz.size[3] + x] = 0.5f;
x = 1;
y = 3;
dataB[y * tenz.size[3] + x] = 1.0f;
try
{
    net.setInput(tenz, "input_layer_my_input_1");
    Mat prob = net.forward("output_layer_my/MatMul");
}
catch (cv::Exception& e)
{
    const char* err_msg = e.what();
    qDebug() << "err_msg" << err_msg;
}