
OpenGL Scale Single Pixel Line

I would like to make a game that is internally 320x240, but renders to the screen at whole number multiples of this (640x480, 960x720, etc). I am going for retro 2D pixel graphics.

I have achieved this by setting the internal resolution via glOrtho():

glOrtho(0, 320, 240, 0, 0, 1);

And then I scale up the output resolution by a factor of 3, like this:

glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);

I draw rectangles like this:

glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();

It works perfectly at 320x240 (not scaled):

[screenshot: rectangles rendered at 320x240]

When I scale up to 960x720, the pixel rendering all works just fine! However it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn on the final 960x720 canvas. The result is 1px lines in a 3px world :(

[screenshot: rectangles rendered at 960x720 with 1px lines]

How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?

There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.

All you are doing is scaling up the coordinates of the primitives you render. So, your code says to render a line from, for example, (20, 20) to (40, 40). And OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).

But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.

Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
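For illustration only (this is the insufficient workaround described above, not a fix), matching the 3x scale factor would look like the following; the lines still rasterize at 960x720, so diagonals and corners stay smooth rather than blocky:

glLineWidth(3.0f);   // 3 window pixels wide, but not one "fat" 320x240 pixel
glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();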

The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw it to the window's actual resolution.

You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
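Here is a minimal sketch of that approach, assuming an OpenGL 3.0+ context (or GL_ARB_framebuffer_object) with the entry points already loaded (e.g. via GLEW); the names fbo, lowResTex, init_fbo and draw_frame are illustrative, not from any particular library:

GLuint fbo = 0, lowResTex = 0;

void init_fbo()
    {
    // 320x240 color target; GL_NEAREST so the upscale stays blocky
    glGenTextures(1, &lowResTex);
    glBindTexture(GL_TEXTURE_2D, lowResTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 320, 240, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // attach the texture as the FBO's color buffer
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lowResTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        { /* handle the error */ }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

void draw_frame()
    {
    // pass 1: render the scene at 320x240 into the FBO
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 320, 240);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... set the 320x240 glOrtho projection and draw the GL_LINE_LOOP rectangles here ...

    // pass 2: draw that texture as a full-screen quad at the window resolution
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, 960, 720);
    glClear(GL_COLOR_BUFFER_BIT);
    // reset to identity so the -1..+1 quad below covers the whole viewport
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lowResTex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(+1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(+1.0f, +1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, +1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    // swap buffers (e.g. SDL_GL_SwapWindow) only after this second pass
    }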

As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture and I do not know of any workaround that works. In such a case, the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course that is much, much slower than direct rendering to texture. So here is the deal:

  1. set low res view

    do not change the resolution of your window, just the glViewport values. Then render your scene in the low res (using just a fraction of the screen space)

  2. copy the rendered screen into a texture

  3. set the target resolution view
  4. render the texture

    do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before!!! Otherwise you would get flickering.

Here is the C++ source for this:

void gl_draw()
    {
    // render resolution and multiplier
    const int xs=320,ys=200,m=2;

    // [low res render pass]
    glViewport(0,0,xs,ys);
    glClearColor(0.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    // 50 random lines
    RandSeed=0x12345678;
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_LINES);
    for (int i=0;i<100;i++)
     glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
    glEnd();

    // [multiplied resolution render pass]
    static bool _init=true;
    static GLuint txrid=0;  // texture id (static so it stays valid across frames)
    BYTE map[xs*ys*3];      // RGB
    // init texture
    if (_init)              // you should also delete the texture on app exit ...
        {
        // create texture
        _init=false;
        glGenTextures(1,&txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D,txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);   // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_REPLACE); // GL_COPY is not a valid env mode; GL_REPLACE outputs the texture color directly
        glDisable(GL_TEXTURE_2D);
        }
    // copy low res screen to CPU memory
    glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);         
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
    // set multiplied resolution view
    glViewport(0,0,m*xs,m*ys);
    glClear(GL_COLOR_BUFFER_BIT);
    // render low res screen as texture
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(hdc);   // swap buffers only here !!!
    }

And preview:

[preview screenshot of the upscaled output]

I tested this on some Intel HD graphics (god knows which version) I had at my disposal and it works (while the standard render-to-texture approaches do not).
