
Creating parallel offscreen OpenGL contexts on Windows

I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.

My machine has two NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit. My workflow goes like this:

  1. Create a default window context using GLFW.
  2. Map the GPU devices.
  3. Destroy the default GLFW context.
  4. Create a new GL context for each of the devices (currently trying only one).
  5. Set up a boost thread for each context and make the context current in that thread.
  6. Run the rendering procedures on each thread separately (no resource sharing).
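
In outline, what steps 4-6 are meant to do is roughly the sketch below. This is only a minimal sketch, not my actual code (the real implementation is shown further down); gpuIndex and renderOnGpu are placeholders, and it assumes the WGL_NV_gpu_affinity entry points were loaded after step 1:

// Sketch of steps 4-6 (wglEnumGpusNV / wglCreateAffinityDCNV assumed loaded, e.g. via GLEW).
HGPUNV hGpu;
if (wglEnumGpusNV(gpuIndex, &hGpu))                  // pick one GPU
{
    HGPUNV gpuMask[2] = { hGpu, NULL };
    HDC   affDC = wglCreateAffinityDCNV(gpuMask);    // DC restricted to that GPU
    // ... ChoosePixelFormat / SetPixelFormat on affDC ...
    HGLRC affRC = wglCreateContext(affDC);           // step 4: GL context on that GPU

    // steps 5/6: hand affDC/affRC to a worker; the worker itself calls
    // wglMakeCurrent(affDC, affRC) before creating FBOs and rendering,
    // because a context is only current on the thread that binds it.
    boost::thread worker(renderOnGpu, affDC, affRC); // renderOnGpu is a placeholder function
    worker.join();
}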

Everything is created without errors and runs, but as soon as I try to read pixels from an offscreen FBO I get a null pointer here:

GLubyte* ptr  = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);

Also, glGetError() reports "UNKNOWN ERROR".

I thought the multi-threading might be the problem, but the same setup gives an identical result when running on a single thread, so I believe it is related to how the contexts are created.
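
For reference, the readback path around that glMapBuffer call looks roughly like this sketch (pbo, width and height are placeholder names, not my real variables). glMapBuffer returns NULL when no buffer is bound to GL_PIXEL_PACK_BUFFER, when the buffer has no data store, or when there is no current GL context on the calling thread:

// Sketch of the PBO readback sequence the failing call belongs to.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);   // async copy into the PBO
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr == NULL)
{
    // NULL means the map failed: no buffer bound, no data store,
    // or no GL context current on the calling thread.
    printf("glMapBuffer failed, glGetError() = 0x%x\n", glGetError());
}
else
{
    // ... consume the pixels ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);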

Here is how I do it:

  //// Creating the default window and context with GLFW here.
      .....
         .....

Creating the offscreen contexts:

PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,    //Flags
    PFD_TYPE_RGBA,            //The kind of framebuffer. RGBA or palette.
    24,                        //Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                        //Number of bits for the depthbuffer
    8,                        //Number of bits for the stencilbuffer
    0,                        //Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};

void  glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){

    int    pf;
    HGPUNV hGPU[MAX_GPU];
    HGPUNV GpuMask[MAX_GPU];

    UINT displayDeviceIdx;
    GPU_DEVICE gpuDevice;
    bool bPrimary = false;   // initialize before the |= below (the unused bDisplay was dropped)
    // Get a list of the first MAX_GPU GPUs in the system
    if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {

        printf("Device# %d:\n", gpuIndex);

        // Now get the detailed information about this device:
        // how many displays it's attached to
        displayDeviceIdx = 0;
        if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
        {   

            bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
            printf(" Display# %d:\n", displayDeviceIdx);
            printf("  Name: %s\n",   gpuDevice.DeviceName);
            printf("  String: %s\n", gpuDevice.DeviceString);
            if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            {
                printf("  Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
                    gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
            }
            else
            {
                printf("  Not attached to the desktop\n");
            }

            // See if it's the primary GPU
            if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
            {
                printf("  This is the PRIMARY Display Device\n");
            }


        }

        ///=======================   CREATE a CONTEXT HERE 
        GpuMask[0] = hGPU[gpuIndex];
        GpuMask[1] = NULL;
        _affDC = wglCreateAffinityDCNV(GpuMask);

        if(!_affDC)
        {
            printf( "wglCreateAffinityDCNV failed");                  
        }

    }

    printf("GPU context created");
}
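
For reference, counting how many GPUs the driver actually exposes before picking indices can be done with a small loop like this (a sketch, not part of my code):

// Sketch: count the GPUs visible to WGL_NV_gpu_affinity.
// Assumes wglEnumGpusNV has already been loaded (e.g. via GLEW).
UINT gpuCount = 0;
HGPUNV hGpu;
while (gpuCount < MAX_GPU && wglEnumGpusNV(gpuCount, &hGpu))
{
    printf("Found GPU #%u\n", gpuCount);
    ++gpuCount;
}
printf("%u GPU(s) available for affinity contexts\n", gpuCount);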

glMultiContext::renderingContext *
    glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
    glMultiContext::renderingContext *rc;

    rc = new renderingContext(gpuIndex);

    _pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);

    if(_pixelFormat == 0)
    {
        printf("failed to choose pixel format");
        delete rc;
        return 0;   // pointer return type: return 0/NULL rather than false
    }

     DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);

    if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
    {
        printf("failed to set pixel format");
        delete rc;
        return 0;
    }

    rc->_affRC = wglCreateContext(rc->_affDC);


    if(rc->_affRC == 0)
    {
        printf("failed to create gl render context");
        delete rc;
        return 0;
    }


    return rc;
}

//Call at the end to make it current:


 bool glMultiContext::makeCurrent(renderingContext *rc)
{
    if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
    {

        printf("failed to make context current");
        return false;
    }

    return true;
}

    ////  Init OpenGL objects and rendering here:

     ..........
     ............

As I said, I am getting no errors at any stage of device and context creation. What am I doing wrong?

UPDATE:

Well, it seems I have figured out the bug: I was calling glfwTerminate() after wglMakeCurrent(), and that apparently makes the new context un-current as well. It is weird, though, because the OpenGL commands kept executing without complaint. With the calls reordered it now works on a single thread.
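
In other words, the helper GLFW window has to be torn down before the affinity context is made current (or wglMakeCurrent has to be issued again afterwards), roughly:

// Sketch of the ordering that works in the single-threaded case:
glfwTerminate();                                  // destroy the helper GLFW window/context first
if (!wglMakeCurrent(rc->_affDC, rc->_affRC))      // then bind the affinity context
{
    printf("failed to make affinity context current\n");
}
// ... all further GL calls now go to the affinity context ...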

But now, if I spawn another thread using boost threads, I am getting the initial error. Here is my thread class:

GPUThread::GPUThread(void)
{
    _thread =NULL;
    _mustStop=false;
    _frame=0;


    _rc =glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); //terminate the initial window and context
    if(!glMultiContext::getInstance().makeCurrent(_rc)){

        printf("failed to make current!!!");
    }
             // init engine here (GLEW was already initiated)
    engine = new Engine(800,600,1);

}
void GPUThread::Start(){



    printf("threaded view setup ok");

    ///init thread here :
    _thread=new boost::thread(boost::ref(*this));

    _thread->join();

}
void GPUThread::Stop(){
    // Signal the thread to stop (thread-safe)
    _mustStopMutex.lock();
    _mustStop=true;
    _mustStopMutex.unlock();

    // Wait for the thread to finish.
    if (_thread!=NULL) _thread->join();

}
// Thread function
void GPUThread::operator () ()
{
    bool mustStop;

    do
    {
        // Display the next animation frame
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop=_mustStop;
        _mustStopMutex.unlock();
    }   while (mustStop==false);

}


void GPUThread::DisplayNextFrame()
{
    engine->Render();   // renders one frame
    ++_frame;           // advance the frame counter so the stop condition below can trigger
    if(_frame == 101){
        _mustStop=true; // stop after ~100 frames
    }
}

GPUThread::~GPUThread(void)
{
    delete _view;
    if(_rc != 0)
    {
        glMultiContext::getInstance().deleteRenderingContext(_rc);
        _rc = 0;
    }
    if(_thread!=NULL)delete _thread;
}

Finally I solved the issues by myself. The first problem was that I called glfwTerminate() after I had made the other device context current; that probably un-binds the new context as well. The second problem was my inexperience with boost threads: I initialized all the rendering-related objects on the wrong thread, because I called the context/engine init procedures before starting the thread, as can be seen in the example above.
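
Putting both fixes together, the structure that works is roughly the sketch below (a reordering of the code above, not a verbatim copy): the affinity context is still created on the main thread, but wglMakeCurrent and all GL/engine initialization are moved into the thread function, since a GL context is only current on the thread that binds it.

// Sketch of the corrected thread body: bind the context and create the
// engine on the thread that actually renders, not in the constructor.
void GPUThread::operator () ()
{
    glMultiContext::getInstance().makeCurrent(_rc);   // current on THIS thread
    engine = new Engine(800, 600, 1);                 // GL objects created on THIS thread

    bool mustStop = false;
    do
    {
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while (!mustStop);

    delete engine;                                    // release GL objects on the same thread
    wglMakeCurrent(NULL, NULL);
}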
