
How to support different OpenGL versions/extensions in C++ font library

I'm planning to rewrite a small C++ OpenGL font library I made a while back using FreeType 2, since I recently discovered the changes in newer OpenGL versions. My code uses immediate mode and some function calls I'm pretty sure are deprecated now, e.g. glLineStipple.

I would very much like to support a range of OpenGL versions, such that the code uses e.g. VBOs when possible and falls back on immediate mode if nothing else is available, and so forth. I'm not sure how to go about it, though. AFAIK, you can't do a compile-time check, since you need a valid OpenGL context, which is only created at runtime. So far, I've come up with the following proposals (with inspiration from other threads/sites):

  • Use GLEW to make runtime checks in the drawing functions and to check for function support (e.g. glLineStipple)

  • Use some #defines and other preprocessor directives that can be specified at compile time to compile different versions that work with different OpenGL versions

  • Compile different versions supporting different OpenGL versions and supply each as a separate download

  • Ship the library with a script (Python/Perl) that checks the OpenGL version on the system (if that is possible/reliable) and makes the appropriate modifications to the source so it fits the user's version of OpenGL

  • Target only newer OpenGL versions and drop support for anything below

I'm probably going to use GLEW anyhow to easily load extensions.
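Since the check has to happen at runtime, my current idea boils down to parsing the GL_VERSION string once a context exists. A rough sketch (parse_gl_version is a made-up helper, not existing code; on a 3.0+ context you could query GL_MAJOR_VERSION/GL_MINOR_VERSION via glGetIntegerv instead):

```cpp
#include <cstdio>
#include <utility>

// Hypothetical helper (not part of the library yet): parse the string
// returned by glGetString(GL_VERSION), e.g. "2.1.2 NVIDIA 195.36.24",
// into a {major, minor} pair. Returns {0, 0} if the string is malformed.
std::pair<int, int> parse_gl_version(const char* version_string)
{
    int major = 0, minor = 0;
    if (version_string == NULL ||
        std::sscanf(version_string, "%d.%d", &major, &minor) != 2)
        return std::make_pair(0, 0);
    return std::make_pair(major, minor);
}
```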

FOLLOW-UP: Based on your very helpful answers, I tried to whip up a few lines based on my old code. Here's a snippet (not tested/finished). I declare the appropriate function pointers in the config header, then, when the library is initialized, I try to get the right function pointers. If VBOs fail (pointers are null), I fall back to display lists (deprecated in 3.0) and finally to vertex arrays. Should I (maybe?) also check for available ARB extensions if, e.g., the VBO functions fail to load, or is that too much work? Would this be a solid approach? Comments are appreciated :)

#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__)
    #define OFL_WINDOWS
    // other stuff...
    #ifndef OFL_USES_GLEW
        // GLEW is not used, so declare the vertex buffer object
        // extension function pointers ourselves
        PFNGLGENBUFFERSPROC          glGenBuffers          = NULL;
        PFNGLBINDBUFFERPROC          glBindBuffer          = NULL;
        PFNGLBUFFERDATAPROC          glBufferData          = NULL;
        PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = NULL;
        PFNGLDELETEBUFFERSPROC       glDeleteBuffers       = NULL;
        PFNGLMULTIDRAWELEMENTSPROC   glMultiDrawElements   = NULL;
        PFNGLBUFFERSUBDATAPROC       glBufferSubData       = NULL;
        PFNGLMAPBUFFERPROC           glMapBuffer           = NULL;
        PFNGLUNMAPBUFFERPROC         glUnmapBuffer         = NULL;
    #else
        // GLEW declares and loads these itself
    #endif
#elif some_other_system

Init function:

#ifdef OFL_WINDOWS
    bool loaded = true;

    // Attempt to load vertex buffer object extensions
    loaded = ((glGenBuffers          = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers"))                   != NULL && loaded);
    loaded = ((glBindBuffer          = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer"))                   != NULL && loaded);
    loaded = ((glBufferData          = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData"))                   != NULL && loaded);
    loaded = ((glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer")) != NULL && loaded);
    loaded = ((glDeleteBuffers       = (PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers"))             != NULL && loaded);
    loaded = ((glMultiDrawElements   = (PFNGLMULTIDRAWELEMENTSPROC)wglGetProcAddress("glMultiDrawElements"))     != NULL && loaded);
    loaded = ((glBufferSubData       = (PFNGLBUFFERSUBDATAPROC)wglGetProcAddress("glBufferSubData"))             != NULL && loaded);
    loaded = ((glMapBuffer           = (PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer"))                     != NULL && loaded);
    loaded = ((glUnmapBuffer         = (PFNGLUNMAPBUFFERPROC)wglGetProcAddress("glUnmapBuffer"))                 != NULL && loaded);

    if (!loaded)
        std::cout << "OFL: Current OpenGL context does not support vertex buffer objects" << std::endl;
    else {
        // A #define cannot record a runtime decision; use a runtime flag
        // (ofl_render_path is an enum variable declared elsewhere in the library)
        ofl_render_path = OFL_VBOS;
        std::cout << "OFL: Loaded vertex buffer object extensions successfully" << std::endl;
        return true;
    }

    if (glMajorVersion >= 3) {
        std::cout << "OFL: Using vertex arrays" << std::endl;
        ofl_render_path = OFL_VERTEX_ARRAYS;
    } else {
        // Display lists were deprecated in 3.0
        // (although still available through the compatibility profile)
        std::cout << "OFL: Using display lists" << std::endl;
        ofl_render_path = OFL_DISPLAY_LISTS;
    }
#elif some_other_system

First of all (and you're going to be safe with this one, because it's supported everywhere): rewrite your font renderer to use vertex arrays. It's only a small step from VAs to VBOs, but VAs are supported everywhere. You only need a small set of extension functions, so it may make sense to do the loading manually rather than depend on GLEW; linking GLEW statically would be huge overkill.

Then put the calls into wrapper functions that you invoke through function pointers, so that you can switch render paths that way. For example, add a function "stipple_it" or so, which internally either calls glLineStipple or builds and sets the appropriate fragment shader.

Do the same for glVertexPointer vs. glVertexAttribPointer.
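A minimal sketch of that wrapper idea (the function and variable names are invented for illustration; the GL calls are stubbed out so the dispatch logic stands on its own):

```cpp
#include <cstddef>
#include <string>

// Illustration of switching render paths through a function pointer.
// The real bodies would contain the GL calls; here they just record
// which path ran so the dispatch itself is visible.
static std::string g_last_path;

static void draw_text_vbo(const char* /*text*/)       { g_last_path = "vbo"; }
static void draw_text_immediate(const char* /*text*/) { g_last_path = "immediate"; }

// All public drawing goes through this pointer.
static void (*ofl_draw_text)(const char*) = NULL;

// Bind the pointer once, after the runtime capability checks.
static void ofl_bind_render_path(bool vbos_supported)
{
    ofl_draw_text = vbos_supported ? draw_text_vbo : draw_text_immediate;
}
```

The same scheme covers the stipple case: one implementation calls glLineStipple, the other binds a fragment shader, and the init code picks one.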

If you do want to make every check by hand, you won't get away from some #defines, because Android/iOS only support OpenGL ES, and there the runtime checks would be different.

The runtime checks are also almost unavoidable, because (from personal experience) there are a lot of caveats with drivers from different hardware vendors (for anything above OpenGL 1.0, of course).

"Target only newer OpenGL versions and drop support for anything below" would be a viable option, since most video cards by ATI/nVidia, and even Intel, support some version of OpenGL 2.0+, which is roughly equivalent to GL ES 2.0.

GLEW is a good way to ease GL extension fetching. Still, there are issues with GL ES on embedded platforms.

Now the loading procedure:

  1. On win32/Linux, just check that each function pointer is not NULL, and use the extension string from GL to find out what is supported on that concrete hardware

  2. The "loading" for iOS/Android/MacOSX would be just storing the pointers, or even a no-op. Android is a different beast: there you have static pointers, but you still need to check the extension string. Even after these checks you might not be sure about some things that are reported as "working" (I'm talking about "noname" Android devices or simple gfx hardware), so you may have to add your own(!) checks based on the name of the video card.

  3. The OSX/iOS OpenGL implementation "just works". If you're running on 10.5, you get GL 2.1; on 10.6, 2.1 plus some extensions that make it almost like 3.1/3.2; on 10.7, a 3.2 core profile. There is no GL 4.0 for Macs yet, but 4.0 is mostly an evolution of 3.2 anyway.
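For point 1, the extension-string check needs whole-token matching, since a plain substring search gives false positives. A sketch (has_extension is a hypothetical helper, not an existing API; on 3.0+ core contexts you would enumerate extensions with glGetStringi instead):

```cpp
#include <cstring>

// Check a space-separated extension list (as returned by
// glGetString(GL_EXTENSIONS) on pre-3.0 contexts) for an exact name.
// A bare strstr would match "GL_ARB_vertex_buffer_object" inside a
// longer extension name, so match whole tokens only.
bool has_extension(const char* extensions, const char* name)
{
    if (extensions == NULL || name == NULL || *name == '\0')
        return false;
    const std::size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != NULL) {
        bool starts = (p == extensions || p[-1] == ' ');
        bool ends   = (p[len] == ' ' || p[len] == '\0');
        if (starts && ends)
            return true;
        p += len;  // keep scanning past this partial match
    }
    return false;
}
```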

If you're interested in my personal opinion: I'm mostly from the "reinvent everything" camp, and over the years we've been using some autogenerated extension loaders.

Most important, you're on the right track: the rewrite to VBOs/VAs/shaders and away from the fixed-function pipeline will give you a major performance boost.
