

How do I capture and process each and every frame of an image using the CImg library?

I'm working on a project based on real-time image processing, using the CImg library on a Raspberry Pi.

I need to capture images at higher frame rates (say at least 30 fps). When I use the built-in Raspicam commands such as

sudo raspistill -o -img_%d.jpg -tl 5 -t 1000  -a 512

/* -tl : time lapse duration in msec, -t : total time duration (1000 msec = 1 sec), -a : displays frame numbers */

then, although it reports 34 frames per second, I could only capture a maximum of 4 frames/images (the rest of the frames are skipped).

sudo raspistill -o -img_%d.jpg -tl 5 -t 1000 -q 5 -md 7 -w 640 -h 480 -a 512

With this command I could capture a maximum of 7-8 images per second, but only by reducing the resolution and quality of the images.

But I don't want to compromise on image quality, since I will be capturing an image, processing it immediately, and deleting it to save memory.

Later I tried using the V4L2 (Video for Linux) drivers to get the best performance out of the camera, but tutorials covering both V4L2 and CImg are quite scarce on the internet; I couldn't find one.

I have been using the following commands:

# Capture a JPEG image
 v4l2-ctl --set-fmt-video=width=2592,height=1944,pixelformat=3
 v4l2-ctl --stream-mmap=3 --stream-count=1 --stream-to=somefile.jpg

(source: http://www.geeetech.com/wiki/index.php/Raspberry_Pi_Camera_Module)

but I couldn't find enough information about parameters such as `--stream-mmap` and `--stream-count`: what exactly do they do, and how do these commands help me capture 30 frames/images per second?

CONDITIONS:

  1. Most importantly, I don't want to use OpenCV, MATLAB or any other image-processing software, since my image-processing task is very simple (i.e. detecting an LED light blink), and my objective is to have a lightweight tool that performs these operations with high performance.

  2. My code should be in either C or C++, not Python or Java (since processing speed matters!).

  3. Please note that my aim is not to record a video but to capture as many frames as possible and to process each individual image.

To use CImg for this, I searched through a few docs in the reference manual, but I couldn't clearly understand how to use it for my purpose.

The class cimg_library::CImgList represents lists of cimg_library::CImg images. It can be used, for instance, to store different frames of an image sequence. (source: http://cimg.eu/reference/group__cimg__overview.html)

  • I found the following example, but I'm not quite sure whether it suits my task:

Load a list from a YUV image sequence file.

    CImg<T>& load_yuv(
        const char *const  filename,
        const unsigned int size_x,
        const unsigned int size_y,
        const unsigned int first_frame = 0,
        const unsigned int last_frame = ~0U,
        const unsigned int step_frame = 1,
        const bool         yuv2rgb = true
    )

Parameters:

  • filename: Filename to read data from.
  • size_x: Width of the images.
  • size_y: Height of the images.
  • first_frame: Index of the first image frame to read.
  • last_frame: Index of the last image frame to read.
  • step_frame: Step applied between each frame.
  • yuv2rgb: Apply YUV-to-RGB transformation during reading.

But here, I need the RGB values from the image frames directly, without compression.

Now I have the following code in OpenCV which performs my task, but I request your help in implementing the same using the CImg library (which is in C++), any other lightweight library, or something with V4L2:

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main (){
    VideoCapture capture (0); // Since you have your device at /dev/video0

    /* You can edit the capture properties with "capture.set (property, value);"
       or in the driver with "v4l2-ctl --set-ctrl=auto_exposure=1" */

    waitKey (200); // Wait 200 ms to ensure the device is open

    Mat frame; // Matrix where each new frame will be stored
    if (capture.isOpened()){
        while (true){
            capture >> frame; // Put the new image in the Matrix

            imshow ("Image", frame); // Show the image on the screen
            if (waitKey (1) >= 0) break; // Needed so imshow() refreshes; exit on keypress
        }
    }
    return 0;
}
  • I'm a beginner to programming and the Raspberry Pi, so please excuse any mistakes in the above problem statements.

"With some of your recommendations, I slightly modified the raspicam C++ API code and combined it with the CImg image-processing functionality:"

    #include "CImg.h"
    #include <iostream>
    #include <cstdlib>
    #include <fstream>
    #include <sstream>
    #include <sys/timeb.h>
    #include "raspicam.h"
    using namespace std;
    using namespace cimg_library;

    bool doTestSpeedOnly=false;
    size_t nFramesCaptured=100;

    // parse command line
    // returns the index of a command line param in argv. If not found, return -1
    int findParam ( string param,int argc,char **argv ) {
        int idx=-1;
        for ( int i=0; i<argc && idx==-1; i++ )
            if ( string ( argv[i] ) ==param ) idx=i;
        return idx;
    }


//parse command line
//returns the value of a command line param. If not found, defvalue is returned
float getParamVal ( string param,int argc,char **argv,float defvalue=-1 ) {
    int idx=-1;
    for ( int i=0; i<argc && idx==-1; i++ )
        if ( string ( argv[i] ) ==param ) idx=i;

    if ( idx==-1 ) return defvalue;
    else return atof ( argv[  idx+1] );
}




raspicam::RASPICAM_EXPOSURE getExposureFromString ( string str ) {
    if ( str=="OFF" ) return raspicam::RASPICAM_EXPOSURE_OFF;
    if ( str=="AUTO" ) return raspicam::RASPICAM_EXPOSURE_AUTO;
    if ( str=="NIGHT" ) return raspicam::RASPICAM_EXPOSURE_NIGHT;
    if ( str=="NIGHTPREVIEW" ) return raspicam::RASPICAM_EXPOSURE_NIGHTPREVIEW;
    if ( str=="BACKLIGHT" ) return raspicam::RASPICAM_EXPOSURE_BACKLIGHT;
    if ( str=="SPOTLIGHT" ) return raspicam::RASPICAM_EXPOSURE_SPOTLIGHT;
    if ( str=="SPORTS" ) return raspicam::RASPICAM_EXPOSURE_SPORTS;
    if ( str=="SNOW" ) return raspicam::RASPICAM_EXPOSURE_SNOW;
    if ( str=="BEACH" ) return raspicam::RASPICAM_EXPOSURE_BEACH;
    if ( str=="VERYLONG" ) return raspicam::RASPICAM_EXPOSURE_VERYLONG;
    if ( str=="FIXEDFPS" ) return raspicam::RASPICAM_EXPOSURE_FIXEDFPS;
    if ( str=="ANTISHAKE" ) return raspicam::RASPICAM_EXPOSURE_ANTISHAKE;
    if ( str=="FIREWORKS" ) return raspicam::RASPICAM_EXPOSURE_FIREWORKS;
    return raspicam::RASPICAM_EXPOSURE_AUTO;
}


    raspicam::RASPICAM_AWB getAwbFromString ( string str ) {
    if ( str=="OFF" ) return raspicam::RASPICAM_AWB_OFF;
    if ( str=="AUTO" ) return raspicam::RASPICAM_AWB_AUTO;
    if ( str=="SUNLIGHT" ) return raspicam::RASPICAM_AWB_SUNLIGHT;
    if ( str=="CLOUDY" ) return raspicam::RASPICAM_AWB_CLOUDY;
    if ( str=="SHADE" ) return raspicam::RASPICAM_AWB_SHADE;
    if ( str=="TUNGSTEN" ) return raspicam::RASPICAM_AWB_TUNGSTEN;
    if ( str=="FLUORESCENT" ) return raspicam::RASPICAM_AWB_FLUORESCENT;
    if ( str=="INCANDESCENT" ) return raspicam::RASPICAM_AWB_INCANDESCENT;
    if ( str=="FLASH" ) return raspicam::RASPICAM_AWB_FLASH;
    if ( str=="HORIZON" ) return raspicam::RASPICAM_AWB_HORIZON;
    return raspicam::RASPICAM_AWB_AUTO;
    }


    void processCommandLine ( int argc,char **argv,raspicam::RaspiCam &Camera ) {
    Camera.setWidth ( getParamVal ( "-w",argc,argv,640 ) );
    Camera.setHeight ( getParamVal ( "-h",argc,argv,480 ) );
    Camera.setBrightness ( getParamVal ( "-br",argc,argv,50 ) );
    Camera.setSharpness ( getParamVal ( "-sh",argc,argv,0 ) );
    Camera.setContrast ( getParamVal ( "-co",argc,argv,0 ) );
    Camera.setSaturation ( getParamVal ( "-sa",argc,argv,0 ) );
    Camera.setShutterSpeed( getParamVal ( "-ss",argc,argv,0 ) );
    Camera.setISO ( getParamVal ( "-iso",argc,argv ,400 ) );
   if ( findParam ( "-vs",argc,argv ) !=-1 )
        Camera.setVideoStabilization ( true );
    Camera.setExposureCompensation ( getParamVal ( "-ec",argc,argv ,0 ) );

    if ( findParam ( "-gr",argc,argv ) !=-1 )
      Camera.setFormat(raspicam::RASPICAM_FORMAT_GRAY);
    if ( findParam ( "-yuv",argc,argv ) !=-1 ) 
      Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
    if ( findParam ( "-test_speed",argc,argv ) !=-1 )
        doTestSpeedOnly=true;
    int idx;
    if ( ( idx=findParam ( "-ex",argc,argv ) ) !=-1 )
        Camera.setExposure ( getExposureFromString ( argv[idx+1] ) );
    if ( ( idx=findParam ( "-awb",argc,argv ) ) !=-1 )
        Camera.setAWB( getAwbFromString ( argv[idx+1] ) );

    nFramesCaptured=getParamVal("-nframes",argc,argv,100);
    Camera.setAWB_RB(getParamVal("-awb_b",argc,argv ,1), getParamVal("-awb_g",argc,argv ,1));

    }


    // timer functions
    #include <sys/time.h>
    #include <unistd.h>
    class Timer{
    private:
        struct timeval _start, _end;

    public:
        Timer(){}
        void start(){
            gettimeofday(&_start, NULL);
        }
        void end(){
            gettimeofday(&_end, NULL);
        }
        double getSecs(){
            return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
        }
    };

    void saveImage ( string filepath,unsigned char *data,raspicam::RaspiCam &Camera ) {
    std::ofstream outFile ( filepath.c_str(),std::ios::binary );
    if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_BGR ||  Camera.getFormat()==raspicam::RASPICAM_FORMAT_RGB ) {
        outFile<<"P6\n";
    } else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_GRAY ) {
        outFile<<"P5\n";
    } else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_YUV420 ) { //made up format
        outFile<<"P7\n";
    }
    outFile<<Camera.getWidth() <<" "<<Camera.getHeight() <<" 255\n";
    outFile.write ( ( char* ) data,Camera.getImageBufferSize() );
    }


    int main ( int argc,char **argv ) {

    int a=1,b=0,c;
    int x=444,y=129; //pixel coordinates
    raspicam::RaspiCam Camera;
    processCommandLine ( argc,argv,Camera );
    cout<<"Connecting to camera"<<endl;

    if ( !Camera.open() ) {
        cerr<<"Error opening camera"<<endl;
        return -1;
       }
     //   cout<<"Connected to camera ="<<Camera.getId() <<" bufs="<<Camera.getImageBufferSize( )<<endl;
    unsigned char *data=new unsigned char[  Camera.getImageBufferSize( )];
    Timer timer;


       // cout<<"Capturing...."<<endl;
       // size_t i=0;
    timer.start();


    for (int i=0;i<nFramesCaptured;i++)
        {
        Camera.grab();
        Camera.retrieve ( data );
        std::stringstream fn;
        fn<<"/run/shm/image.jpg";       // save to the RAM disk, where it is loaded back below
        saveImage ( fn.str(),data,Camera );
        // cerr<<"Saving "<<fn.str()<<endl;
        CImg<float> Img("/run/shm/image.jpg");
        //Img.display("Window Title");

    // 9 PIXELS MATRIX GRAYSCALE VALUES
    float pixvalR1 = Img(x-1,y-1);
    float pixvalR2 = Img(x,y-1);
    float pixvalR3 = Img(x+1,y-1);
    float pixvalR4 = Img(x-1,y);
    float pixvalR5 = Img(x,y);
    float pixvalR6 = Img(x+1,y);
    float pixvalR7 = Img(x-1,y+1);
    float pixvalR8 = Img(x,y+1);
    float pixvalR9 = Img(x+1,y+1);

    // std::cout<<"coordinate value :"<<pixvalR5 << endl;

    // MEAN VALUE OF THE 9 PIXELS
    float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9;

    // DISPLAYING MEAN RGB VALUES OF 9 PIXELS
    // std::cout<<"Lightness value :"<<light << endl;


    // THRESHOLDING CONDITION
     c = (light > 130 ) ? a : b; 

    // cout<<"Data is " << c <<endl;

    ofstream fout("c.txt", ios::app);
    fout<<c;
    fout.close();


    }   

    timer.end();
       cerr<< timer.getSecs()<< " seconds for "<< nFramesCaptured << "  frames : FPS " << ( ( float ) ( nFramesCaptured ) / timer.getSecs() ) <<endl;

    Camera.release();
    delete[] data;

    std::cin.ignore();
    return 0;
    }
  • From this code, I would like to know how we can get the data directly from Camera.retrieve(data) without storing it as an image file; that is, access the data from the image buffer, process the image, and then discard it.

As per the recommendations of Mark Setchell, I made slight changes to the code and I'm getting good results, but is there any way to improve the processing performance to get a higher frame rate? With this code I'm able to get a maximum of 10 FPS.

#include <ctime>
#include <cstring>   // for std::memcpy
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>

// Don't want any X11 display by CImg
#define cimg_display 0

#include <CImg.h>

using namespace cimg_library;
using namespace std;

#define NFRAMES     1000
#define NTHREADS    2
#define WIDTH       640
#define HEIGHT      480

// Commands/status for the worker threads
#define WAIT    0
#define GO      1
#define GOING   2
#define EXIT    3
#define EXITED  4
volatile int command[NTHREADS];

// Serialize access to cout
std::mutex cout_mutex;

// CImg initialisation
// Create a WIDTHxHEIGHT (640x480) greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);

////////////////////////////////////////////////////////////////////////////////
// worker thread - There will be 2 or more of these running in parallel with the
//                 main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {

   // If you need a "results" image of type CImg, create it here before entering
   // ... the main processing loop below - you don't want to do malloc()s in the
   // ... high-speed loop
   // CImg results...

   int wakeups=0;

   // Create a white for annotating
   unsigned char white[] = { 255,255,255 };

   while(true){
      // Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
      while((command[id]!=GO)&&(command[id]!=EXIT)){
         std::this_thread::sleep_for(std::chrono::microseconds(500));
      }
      if(command[id]==EXIT){command[id]=EXITED;break;}
      wakeups++;

      // Process frame of data - access CImg structure here
      command[id]=GOING;

      // You need to add your processing in HERE - everything from
      // ... 9 PIXELS MATRIX GRAYSCALE VALUES to
      // ... THRESHOLDING CONDITION
      int a=1,b=0,c;
      int x=330,y=84; // pixel coordinates

      // CImg<float> Img("/run/shm/result.png");

      // 9 PIXELS MATRIX GRAYSCALE VALUES
      float pixvalR1 = img(x-1,y-1);
      float pixvalR2 = img(x,y-1);
      float pixvalR3 = img(x+1,y-1);
      float pixvalR4 = img(x-1,y);
      float pixvalR5 = img(x,y);
      float pixvalR6 = img(x+1,y);
      float pixvalR7 = img(x-1,y+1);
      float pixvalR8 = img(x,y+1);
      float pixvalR9 = img(x+1,y+1);

      // MEAN VALUE OF THE 9 PIXELS
      float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9;

      // THRESHOLDING CONDITION
      c = (light > 130) ? a : b;

      // cout<<"Data is " << c <<endl;

      ofstream fout("c.txt", ios::app);
      fout<<c;
      fout.close();
      // Pretend to do some processing.
      // You need to delete the following "sleep_for" and "if(id==0...){...}"
     // std::this_thread::sleep_for(std::chrono::milliseconds(2));


    /*  if((id==0)&&(wakeups==NFRAMES)){
        //  Annotate final image and save as PNG
          img.draw_text(100,100,"Hello World",white);
         img.save_png("result.png");
      } */
   }

   cout_mutex.lock();
   std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
   cout_mutex.unlock();
}

// timer functions
#include <sys/time.h>
#include <unistd.h>
class Timer{
private:
    struct timeval _start, _end;

public:
    Timer(){}
    void start(){
        gettimeofday(&_start, NULL);
    }
    void end(){
        gettimeofday(&_end, NULL);
    }
    double getSecs(){
        return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
    }
};

int main ( int argc,char **argv ) {

Timer timer;
   raspicam::RaspiCam Camera;
   // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
   Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

   // Allowable widths: 320, 640, 1280
   // Allowable heights: 240, 480, 960
   // setCaptureSize(width,height)
   Camera.setCaptureSize(WIDTH,HEIGHT);

   std::cout << "Main: Starting"  << std::endl;
   std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
   std::cout << "Main: NFRAMES:"  << NFRAMES  << std::endl;
   std::cout << "Main: Width: "   << Camera.getWidth()  << std::endl;
   std::cout << "Main: Height: "  << Camera.getHeight() << std::endl;

   // Spawn worker threads - making sure they are initially in WAIT state
   std::thread threads[NTHREADS];
   for(int i=0; i<NTHREADS; ++i){
      command[i]=WAIT;
      threads[i] = std::thread(worker,i);
   }

   // Open camera
   cout<<"Opening Camera..."<<endl;
   if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

   // Wait until camera stabilizes
   std::cout<<"Sleeping for 3 secs"<<endl;
   std::this_thread::sleep_for(std::chrono::seconds(3));
 timer.start();
   for(int frame=0;frame<NFRAMES;frame++){
      // Capture frame
      Camera.grab();

      // Copy just the Y component to our mono CImg
      std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);

      // Notify worker threads that data is ready for processing
      for(int i=0; i<NTHREADS; ++i){
         command[i]=GO;
      }
   }
timer.end();
cerr<< timer.getSecs()<< " seconds for "<< NFRAMES << "  frames : FPS " << ( ( float ) ( NFRAMES ) / timer.getSecs() ) << endl;
   // Let workers process final frame, then tell to exit
 //  std::this_thread::sleep_for(std::chrono::milliseconds(50));

   // Notify worker threads to exit
   for(int i=0; i<NTHREADS; ++i){
      command[i]=EXIT;
   }

   // Wait for all threads to finish
   for(auto& th : threads) th.join();
}

COMMAND USED TO COMPILE THE CODE:

g++ -std=c++11 /home/pi/raspicam/src/raspicimgthread.cpp -o threadraspicimg -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util -O2 -L/usr/X11R6/lib -lm -lpthread -lX11

**RESULTS:**
Main: Starting
Main: Starting
Main: NTHREADS:2
Main: NFRAMES:1000
Main: Width: 640
Main: Height: 480
Opening Camera...
Sleeping for 3 secs
99.9194 seconds for 1000  frames : FPS 10.0081
Thread[1]: Received 1000 wakeups
Thread[0]: Received 1000 wakeups

real    1m43.198s
user    0m2.060s
sys     0m5.850s

One more query: when I used the plain Raspicam C++ API code to perform the same tasks (the code I mentioned before this one), I got almost the same results, with only a very slight improvement in performance (my frame rate increased from 9.4 FPS to 10 FPS).

But in code 1:

I have been saving images to a RAM disk for processing and then deleting them. I haven't used any threads for parallel processing.

In code 2:

We are not saving any images to disk, but processing them directly from the buffer. We are also using threads to improve the processing speed.

Unfortunately, although we made these changes from code 1 to code 2, I'm not able to get the desired result (which is to run at 30 FPS).

Awaiting your suggestions; any help is really appreciated.

Thanks in advance

Best Regards, BLV Lohith Kumar

Updated Answer

I have updated my original answer here to show how to copy the acquired data into a CImg structure, and also to show 2 worker threads that can process the image while the main thread continues to acquire frames at full speed. It achieves 60 frames per second.

I have not done any processing inside the worker threads because I don't know what you want to do. All I did was save the last frame to disk to show that the acquisition into a CImg is working. You could have 3 worker threads. You could pass one frame to each thread on a round-robin basis, or you could have each of 2 threads process half of each frame at each iteration, or each of 3 threads process one third of each frame. You could change the polled wakeups to use condition variables.

#include <ctime>
#include <cstring>   // for std::memcpy
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>

// Don't want any X11 display by CImg
#define cimg_display 0

#include <CImg.h>

using namespace cimg_library;
using namespace std;

#define NFRAMES     1000
#define NTHREADS    2
#define WIDTH       1280
#define HEIGHT      960

// Commands/status for the worker threads
#define WAIT    0
#define GO      1
#define GOING   2
#define EXIT    3
#define EXITED  4
volatile int command[NTHREADS];

// Serialize access to cout
std::mutex cout_mutex;

// CImg initialisation
// Create a 1280x960 greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);

////////////////////////////////////////////////////////////////////////////////
// worker thread - There will be 2 or more of these running in parallel with the
//                 main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {

   // If you need a "results" image of type CImg, create it here before entering
   // ... the main processing loop below - you don't want to do malloc()s in the
   // ... high-speed loop
   // CImg results...

   int wakeups=0;

   // Create a white for annotating
   unsigned char white[] = { 255,255,255 };

   while(true){
      // Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
      while((command[id]!=GO)&&(command[id]!=EXIT)){
         std::this_thread::sleep_for(std::chrono::microseconds(500));
      }
      if(command[id]==EXIT){command[id]=EXITED;break;}
      wakeups++;

      // Process frame of data - access CImg structure here
      command[id]=GOING;

      // You need to add your processing in HERE - everything from
      // ... 9 PIXELS MATRIX GRAYSCALE VALUES to
      // ... THRESHOLDING CONDITION

      // Pretend to do some processing.
      // You need to delete the following "sleep_for" and "if(id==0...){...}"
      std::this_thread::sleep_for(std::chrono::milliseconds(2));

      if((id==0)&&(wakeups==NFRAMES)){
         // Annotate final image and save as PNG
         img.draw_text(100,100,"Hello World",white);
         img.save_png("result.png");
      }
   }

   cout_mutex.lock();
   std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
   cout_mutex.unlock();
}

int main ( int argc,char **argv ) {

   raspicam::RaspiCam Camera;
   // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
   Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

   // Allowable widths: 320, 640, 1280
   // Allowable heights: 240, 480, 960
   // setCaptureSize(width,height)
   Camera.setCaptureSize(WIDTH,HEIGHT);

   std::cout << "Main: Starting"  << std::endl;
   std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
   std::cout << "Main: NFRAMES:"  << NFRAMES  << std::endl;
   std::cout << "Main: Width: "   << Camera.getWidth()  << std::endl;
   std::cout << "Main: Height: "  << Camera.getHeight() << std::endl;

   // Spawn worker threads - making sure they are initially in WAIT state
   std::thread threads[NTHREADS];
   for(int i=0; i<NTHREADS; ++i){
      command[i]=WAIT;
      threads[i] = std::thread(worker,i);
   }

   // Open camera
   cout<<"Opening Camera..."<<endl;
   if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

   // Wait until camera stabilizes
   std::cout<<"Sleeping for 3 secs"<<endl;
   std::this_thread::sleep_for(std::chrono::seconds(3));

   for(int frame=0;frame<NFRAMES;frame++){
      // Capture frame
      Camera.grab();

      // Copy just the Y component to our mono CImg
      std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);

      // Notify worker threads that data is ready for processing
      for(int i=0; i<NTHREADS; ++i){
         command[i]=GO;
      }
   }

   // Let workers process final frame, then tell to exit
   std::this_thread::sleep_for(std::chrono::milliseconds(50));

   // Notify worker threads to exit
   for(int i=0; i<NTHREADS; ++i){
      command[i]=EXIT;
   }

   // Wait for all threads to finish
   for(auto& th : threads) th.join();
}

Note on timing

You can time code like this:

#include <chrono>

typedef std::chrono::high_resolution_clock hrclock;

hrclock::time_point t1,t2;

t1 = hrclock::now();
// do something that needs timing
t2 = hrclock::now();

std::chrono::nanoseconds elapsed = t2-t1;
long long nanoseconds=elapsed.count();

Original Answer

I have been doing some experiments with Raspicam. I downloaded their code from SourceForge and modified it slightly to do some simple, capture-only tests. The code I ended up using looks like this:

#include <ctime>
#include <fstream>
#include <iostream>
#include <raspicam/raspicam.h>
#include <unistd.h> // for usleep()
using namespace std;

#define NFRAMES 1000

int main ( int argc,char **argv ) {

    raspicam::RaspiCam Camera;
    // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
    Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

    // Allowable widths: 320, 640, 1280
    // Allowable heights: 240, 480, 960
    // setCaptureSize(width,height)
    Camera.setCaptureSize(1280,960);

    // Open camera 
    cout<<"Opening Camera..."<<endl;
    if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

    // Wait until camera stabilizes
    cout<<"Sleeping for 3 secs"<<endl;
    usleep(3000000);
    cout << "Grabbing " << NFRAMES << " frames" << endl;

    // Allocate memory
    unsigned long bytes=Camera.getImageBufferSize();
    cout << "Width: "  << Camera.getWidth() << endl;
    cout << "Height: " << Camera.getHeight() << endl;
    cout << "ImageBufferSize: " << bytes << endl;;
    unsigned char *data=new unsigned char[bytes];

    for(int frame=0;frame<NFRAMES;frame++){
       // Capture frame
       Camera.grab();

       // Extract the image
       Camera.retrieve ( data,raspicam::RASPICAM_FORMAT_IGNORE );

       // Wake up a thread here to process the frame with CImg
    }
    delete[] data;
    return 0;
}

I dislike cmake so I just compiled like this:

g++ -std=c++11 simpletest.c -o simpletest -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util

I found that, regardless of the dimensions of the image, and more or less regardless of the encoding (RGB, BGR, GRAY), it achieves 30 fps (frames per second).

The only way I could do better than that was by making the following changes:

  • In the code above, use RASPICAM_FORMAT_YUV420 rather than anything else.

  • Edit the file private_impl.cpp and change line 71 to set the framerate to 90.

If I do that, I can achieve 66 fps.

As the Raspberry Pi has only a fairly lowly 900 MHz CPU, but with 4 cores, I would guess you would want to start 1-3 extra threads at the beginning, outside the loop, and then wake one or more of them up where I have noted in the code, to process the data. The first thing they would do is copy the data out of the acquisition buffer before the next frame starts - or have multiple buffers and use them in a round-robin fashion.

Notes on threading

In the following diagram, green represents Camera.grab(), where you acquire the image, and red represents the processing you do after the image is acquired. At the moment, you acquire the data (green) and then process it (red) before you can acquire the next frame. Note that 3 of your 4 CPUs do nothing.

[diagram: sequential grab-then-process on a single core]

What I am suggesting is that you offload the processing (red) onto the other CPUs/threads and keep acquiring new data (green) as fast as possible. Like this:

[diagram: grabs back-to-back on the main core, processing offloaded to the other cores]

Now you can see that you get more frames (green) per second.
