
Can OpenGL be used to draw real-valued triangles into a buffer?

I need to implement an image-reconstruction algorithm that involves drawing triangles into a buffer representing the pixels of an image. Each triangle is assigned some floating-point value that it should be filled with. If triangles are drawn such that they overlap, the values of the overlapping regions must be added together.

Is it possible to accomplish this with OpenGL? I would like to take advantage of the fact that rasterizing triangles is a basic graphics task that can be accelerated on the graphics card. I already have a CPU-only implementation of this algorithm, but it is not fast enough for my purposes, due to the huge number of triangles that need to be drawn.

Specifically my questions are:

  1. Can I draw triangles with a real value using OpenGL? (Or can I come up with a hack using color etc.?)
  2. Can OpenGL add the values where triangles overlap? (Once again I could deal with a hack, like color mixing)
  3. Can I recover the real values for the pixels as an array of floats or similar to be further processed?
  4. Do I have misconceptions about the idea that drawing in OpenGL -> using GPU to draw -> likely faster execution?

Additionally, I would like to run this code in a virtual machine, so getting acceleration to work with OpenGL seems more feasible than rolling my own implementation in something like CUDA, as far as I understand. Is this true?
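To make the desired semantics concrete, here is a minimal CPU sketch of the kind of additive rasterization I mean (a simplified illustration with made-up helper names, not my actual implementation): each triangle splats its value into a single-channel float image, and overlapping triangles accumulate.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>
#include <cassert>

struct Vec2 { float x, y; };

// Signed area test: positive when p is to the left of edge a->b.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Rasterize one triangle into a w*h float image, ADDING its intensity
// wherever a pixel center (x+0.5, y+0.5) falls inside the triangle.
void splatTriangle(std::vector<float>& img, int w, int h,
                   Vec2 v0, Vec2 v1, Vec2 v2, float intensity) {
    float area = edge(v0, v1, v2);
    if (area == 0.0f) return;                      // degenerate triangle
    if (area < 0.0f) std::swap(v1, v2);            // enforce CCW winding
    // Clamp the bounding box to the image.
    int x0 = std::max(0,     (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int x1 = std::min(w - 1, (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
    int y0 = std::max(0,     (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int y1 = std::min(h - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};
            if (edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 &&
                edge(v2, v0, p) >= 0)
                img[y * w + x] += intensity;       // overlaps add up
        }
}
```

This is exactly the per-pixel accumulation I would like the GPU to perform instead.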

EDIT: Is an accumulation buffer an option here?

  1. If 32-bit floats are sufficient then it looks like the answer is yes: http://www.opengl.org/wiki/Image_Format#Required_formats
  2. Even under the old fixed pipeline you could use blending with the GL_FUNC_ADD blend equation, though fragment shaders can surely do it more easily now.
  3. glReadPixels() will get you the data back out of the buffer after drawing.
  4. There are software implementations of OpenGL, but you get to choose when you set up the context. Using the GPU should be much faster than the CPU.
  5. No idea. I've never used OpenGL or CUDA on a VM. I've never used CUDA at all.
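To illustrate points 2 and 3: assuming a core-profile context with a GL_R32F color attachment bound, the additive blend state and the readback look roughly like this (width and height are placeholders for your image size). This is a sketch of the relevant calls, not a complete program.

```cpp
// Point 2: additive blending, dst = src + dst, no alpha weighting.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);   // the default equation, stated for clarity
glBlendFunc(GL_ONE, GL_ONE);

// ... clear to zero and draw all triangles here ...

// Point 3: read the single-channel float buffer back to the CPU.
std::vector<float> pixels(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels.data());
```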

I guess giving pieces of code as an answer wouldn't be appropriate here, as your question is extremely broad. So I'll simply answer your questions individually, with a few hints.

  1. Yes, drawing triangles with OpenGL is a piece of cake. You provide 3 vertices per triangle, and with the proper shaders you can draw triangles, filled or just the edges, whatever you want. You seem to require a large range of values (bigger than [0, 255]), since many triangles may overlap and the value of each may be bigger than one. This is not a problem: you can render into a one-channel framebuffer with 32-bit float precision. In your case a single channel should suffice.
  2. Yes, blending has existed in OpenGL practically forever. So whichever version of OpenGL you choose to use, there will be a way to add up the values of overlapping triangles.
  3. Yes. Depending on how you implement it, you may use glGetTexImage(), glReadPixels(), or something else. However, depending on the size of the buffer you're filling, downloading the full buffer may take a while (a 2000x1000 one-channel 32-bit buffer would be around 4-5 ms). It may be more efficient to do all your processing on the GPU and extract only the few valuable results, instead of transferring everything back for further processing on the CPU.
  4. The execution will undoubtedly be faster. However, downloading data from GPU memory is often not optimized (uploading is). So the time you gain on the processing may be lost on the download. I've never worked with OpenGL in a VM, so the additional loss of performance there is unknown to me.

// Struct definition
struct Triangle {
    float position[2];
    float intensity;
};

// Init
glGenBuffers(1, &m_buffer);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glBufferData(GL_ARRAY_BUFFER,
    triangleVector.size() * sizeof(Triangle),   // size in bytes
    triangleVector.data(),                      // pointer to the data
    GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);
glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
glEnableVertexAttribArray(POSITION);
glVertexAttribPointer(
    POSITION,
    2,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)0);
glEnableVertexAttribArray(INTENSITY);
glVertexAttribPointer(
    INTENSITY,
    1,
    GL_FLOAT,
    GL_FALSE,
    sizeof(Triangle),
    (void*)(sizeof(float) * 2));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_R32F,        // one 32-bit float channel
    width,
    height,
    0,
    GL_RED,
    GL_FLOAT,
    NULL);
glBindTexture(GL_TEXTURE_2D, 0);

glGenFramebuffers(1, &m_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
glFramebufferTexture(
    GL_FRAMEBUFFER,
    GL_COLOR_ATTACHMENT0,
    m_texture,
    0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

After that you just need to write your render function; a simple glDraw*() call should be enough. Just remember to bind your buffers correctly and to enable blending with the proper equation. You might also want to disable anti-aliasing for your case. At first glance I'd say you need an orthographic projection, but I don't have all the elements of your problem, so that's up to you.
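For completeness, a minimal GLSL pair matching the POSITION/INTENSITY attributes above could look like this (a sketch: attribute locations and the projection uniform are assumptions, adapt them to your setup):

```glsl
// --- vertex shader (GLSL 330) ---
#version 330 core
layout(location = 0) in vec2 position;   // POSITION attribute
layout(location = 1) in float intensity; // INTENSITY attribute
uniform mat4 projection;                 // e.g. an orthographic matrix
out float vIntensity;
void main() {
    vIntensity = intensity;
    gl_Position = projection * vec4(position, 0.0, 1.0);
}

// --- fragment shader (GLSL 330) ---
#version 330 core
in float vIntensity;
out float outValue;    // written to the GL_R32F attachment
void main() {
    outValue = vIntensity;   // additive blending sums overlaps
}
```

Since all three vertices of a triangle carry the same intensity, the interpolated value is constant across the triangle.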

Long story short: if you have never worked with OpenGL, the piece of code above will only be useful after you read some documentation/tutorials on OpenGL/GLSL.
