
Qt video frames from camera corrupted

EDIT: The first answer solved my problem. Apart from that, I had to set the ASI_BANDWIDTH_OVERLOAD value to 0.

I am programming a Linux application in C++/Qt 5.7 to track stars through my telescope. I use a camera (ZWO ASI 120MM with its SDK v0.3) and grab its frames in a while loop in a separate thread. These are then emitted to a QOpenGLWidget to be displayed. I have the following problem: when the mouse is inside the QOpenGLWidget area, the displayed frames get corrupted, especially when the mouse is moved. The problem is worst when I use an exposure time of 50 ms and disappears at lower exposure times. When I feed the pipeline with alternating images from disk, the problem disappears. I assume this is some sort of thread-synchronization problem between the camera thread and the main thread, but I couldn't solve it. The same problem appears in the openastro software. Here are parts of the code:

MainWindow:

MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent){

mutex = new QMutex;
camThread = new QThread(this);
camera = new Camera(nullptr, mutex);
display = new GLViewer(this, mutex);

setCentralWidget(display);

cameraHandle = camera->getHandle();

connect(camThread, SIGNAL(started()), camera, SLOT(connect()));
connect(camera, SIGNAL(exposureCompleted(const QImage)), display, SLOT(showImage(const QImage)), Qt::BlockingQueuedConnection );

camera->moveToThread(camThread);
camThread->start();
}

The routine that grabs the frames:

void Camera::captureFrame(){
    while( cameraIsReady && capturing ){
        mutex->lock();
        error = ASIGetVideoData(camID, buffer, bufferSize, int(exposure*2*1e-3)+500);
        if(error == ASI_SUCCESS){
            frame = QImage(buffer, width, height, QImage::Format_Indexed8).convertToFormat(QImage::Format_RGB32); // Indexed8 holds the camera's 8-bit data
            mutex->unlock();
            emit exposureCompleted(frame);
        }
        else {
            cameraStream << "timeout" << endl;
            mutex->unlock();
        }
    }
}

The slot that receives the image:

bool GLViewer::showImage(const QImage image)
{
    mutex->lock();
    mOrigImage = image;
    mRenderQtImg = mOrigImage;

    recalculatePosition();

    updateScene();

    mutex->unlock();
    return true;
}

And the GL function that sets the image:

void GLViewer::renderImage()
{
    makeCurrent();
    glClear(GL_COLOR_BUFFER_BIT);

    if (!mRenderQtImg.isNull())
    {
        glLoadIdentity();
        glPushMatrix();
        {
            if (mResizedImg.width() <= 0)
            {
                if (mRenderWidth == mRenderQtImg.width() && mRenderHeight == mRenderQtImg.height())
                    mResizedImg = mRenderQtImg;
                else
                    mResizedImg = mRenderQtImg.scaled(QSize(mRenderWidth, mRenderHeight),
                                                      Qt::IgnoreAspectRatio,
                                                      Qt::SmoothTransformation);
            }
            glRasterPos2i(mRenderPosX, mRenderPosY);
            glPixelZoom(1, -1);
            glDrawPixels(mResizedImg.width(), mResizedImg.height(), GL_RGBA, GL_UNSIGNED_BYTE, mResizedImg.bits());
        }
        glPopMatrix();
        glFlush();
    }
}

I stole this code from here: https://github.com/Myzhar/QtOpenCVViewerGl

And lastly, here is how my problem looks:

This looks awful.

The image producer should produce new images and emit them through a signal. Since QImage is implicitly shared, it will automatically recycle frames to avoid new allocations. Only when the producer thread outruns the display thread will image copies be made.

Instead of using an explicit loop in the Camera object, you can run the capture from a zero-duration timer and let the event loop invoke it. That way the camera object can still process events, e.g. timers, cross-thread slot invocations, etc.

There's no need for explicit mutexes, nor for a blocking connection: Qt's event loop provides cross-thread synchronization. Finally, the QtOpenCVViewerGl project performs image scaling on the CPU and is really an example of how not to do it. You can get image scaling for free by drawing the image on a textured quad; even though that's also an outdated technique from the fixed-pipeline days, it works just fine.

The ASICamera class would look roughly as follows:

// https://github.com/KubaO/stackoverflown/tree/master/questions/asi-astro-cam-39968889
#include <QtOpenGL>
#include <QOpenGLFunctions_2_0>
#include "ASICamera2.h"

class ASICamera : public QObject {
   Q_OBJECT
   ASI_ERROR_CODE m_error;
   ASI_CAMERA_INFO m_info;
   QImage m_frame{640, 480, QImage::Format_RGB888};
   QTimer m_timer{this};
   int m_exposure_ms = 0;
   inline int id() const { return m_info.CameraID; }
   void capture() {
      m_error = ASIGetVideoData(id(), m_frame.bits(), m_frame.byteCount(),
                                 m_exposure_ms*2 + 500);
      if (m_error == ASI_SUCCESS)
         emit newFrame(m_frame);
      else
         qDebug() << "capture error" << m_error;
   }
public:
   explicit ASICamera(QObject * parent = nullptr) : QObject{parent} {
      connect(&m_timer, &QTimer::timeout, this, &ASICamera::capture);
   }
   ASI_ERROR_CODE error() const { return m_error; }
   bool open(int index) {
      m_error = ASIGetCameraProperty(&m_info, index);
      if (m_error != ASI_SUCCESS)
         return false;
      m_error = ASIOpenCamera(id());
      if (m_error != ASI_SUCCESS)
         return false;
      m_error = ASIInitCamera(id());
      if (m_error != ASI_SUCCESS)
         return false;
      m_error = ASISetROIFormat(id(), m_frame.width(), m_frame.height(), 1, ASI_IMG_RGB24);
      if (m_error != ASI_SUCCESS)
         return false;
      return true;
   }
   bool close() {
      m_error = ASICloseCamera(id());
      return m_error == ASI_SUCCESS;
   }
   Q_SIGNAL void newFrame(const QImage &);
   QImage frame() const { return m_frame; }
   Q_SLOT bool start() {
      m_error = ASIStartVideoCapture(id());
      if (m_error == ASI_SUCCESS)
         m_timer.start(0);
      return m_error == ASI_SUCCESS;
   }
   Q_SLOT bool stop() {
      m_timer.stop();
      m_error = ASIStopVideoCapture(id());
      return m_error == ASI_SUCCESS;
   }
   ~ASICamera() {
      stop();
      close();
   }
};

Since I'm using a dummy ASI API implementation, the above is sufficient. Code for a real ASI camera would need to set appropriate controls, such as exposure.

The OpenGL viewer is also fairly simple:

class GLViewer : public QOpenGLWidget, protected QOpenGLFunctions_2_0 {
   Q_OBJECT
   QImage m_image;
   void ck() {
      for(GLenum err; (err = glGetError()) != GL_NO_ERROR;) qDebug() << "gl error" << err;
   }
   void initializeGL() override {
      initializeOpenGLFunctions();
      glClearColor(0.2f, 0.2f, 0.25f, 1.f);
   }
   void resizeGL(int width, int height) override {
      glViewport(0, 0, width, height);
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(0, width, height, 0, 0, 1);
      glMatrixMode(GL_MODELVIEW);
      update();
   }
   // From http://stackoverflow.com/a/8774580/1329652
   void paintGL() override {
      auto scaled = m_image.size().scaled(this->size(), Qt::KeepAspectRatio);
      GLuint texID;
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glGenTextures(1, &texID);
      glEnable(GL_TEXTURE_RECTANGLE);
      glBindTexture(GL_TEXTURE_RECTANGLE, texID);
      glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB, m_image.width(), m_image.height(), 0,
                   GL_RGB, GL_UNSIGNED_BYTE, m_image.constBits());

      glBegin(GL_QUADS);
      glTexCoord2f(0, 0);
      glVertex2f(0, 0);
      glTexCoord2f(m_image.width(), 0);
      glVertex2f(scaled.width(), 0);
      glTexCoord2f(m_image.width(), m_image.height());
      glVertex2f(scaled.width(), scaled.height());
      glTexCoord2f(0, m_image.height());
      glVertex2f(0, scaled.height());
      glEnd();
      glDisable(GL_TEXTURE_RECTANGLE);
      glDeleteTextures(1, &texID);
      ck();
   }
public:
   GLViewer(QWidget * parent = nullptr) : QOpenGLWidget{parent} {}
   void setImage(const QImage & image) {
      Q_ASSERT(image.format() == QImage::Format_RGB888);
      m_image = image;
      update();
   }
};

Finally, we hook the camera and the viewer together. Since the camera initialization may take some time, we perform it in the camera's thread.

The UI should emit signals that control the camera (e.g. to open it, start/stop acquisition, etc.) and have slots that provide feedback from the camera (e.g. state changes). A free-standing function would take the two objects and hook them together, using functors as appropriate to adapt the UI to a particular camera. If the adapter code were extensive, you'd use a helper QObject for that, but usually a function suffices (as it does below).

class Thread : public QThread { public: ~Thread() { quit(); wait(); } };

// See http://stackoverflow.com/q/21646467/1329652
template <typename F>
static void postToThread(F && fun, QObject * obj = qApp) {
   QObject src;
   QObject::connect(&src, &QObject::destroyed, obj, std::forward<F>(fun), 
                    Qt::QueuedConnection);
}

int main(int argc, char ** argv) {
   QApplication app{argc, argv};
   GLViewer viewer;
   viewer.setMinimumSize(200, 200);
   ASICamera camera;
   Thread thread;
   QObject::connect(&camera, &ASICamera::newFrame, &viewer, &GLViewer::setImage);
   QObject::connect(&thread, &QThread::destroyed, [&]{ camera.moveToThread(app.thread()); });
   camera.moveToThread(&thread);
   thread.start();
   postToThread([&]{
      camera.open(0);
      camera.start();
   }, &camera);
   viewer.show();
   return app.exec();
}
#include "main.moc"

The GitHub project includes a very basic ASI camera API test harness and is complete: you can run it and see the test video rendered in real time.
