Render fonts with SDL2, OpenGL ES 2.0 (GLSL 1.0) & FreeType

This is my first post here. I am working on an app intended to run on a Raspberry Pi 3 board. My goal is to draw graphics with the GPU instead of the CPU, keeping the latter free for the processing needs of other parts of the program, such as math calculations, I/O, I2C and virtual serial port communications.

So far I have reached the point where I can use SDL2 & OpenGL ES 2.0 to draw lines with glDrawArrays(), after following (among others) the tutorial paper from: https://keasigmadelta.com/store/gles3sdl2-tutorial/ . I successfully tried that on my Raspberry Pi 3, which, with the GL driver enabled, supports OpenGL ES 2.0 and GLSL 1.0.
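
For context, the line drawing that already works boils down to something like this (a minimal sketch; the attribute location, names and vertex data are illustrative, not the tutorial's exact code):

// Draw one line segment from a small VBO of 2D clip-space points,
// assuming a linked shader program with its position attribute at location 0
GLfloat lineVerts[] = {
    -0.5f, -0.5f,   // segment start
     0.5f,  0.5f,   // segment end
};
GLuint lineVbo;
glGenBuffers(1, &lineVbo);
glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof lineVerts, lineVerts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_LINES, 0, 2);   // 2 vertices -> 1 line segment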

I was experimenting with rendering fonts with SDL2, which provides quite good functions that let you define the color, size and other parameters of the resulting rendering. Rendering fonts with the SDL2 function TTF_RenderText_Blended(...) gave me perfect results with reasonable times and CPU overhead on my Intel Core 2 Quad 9550 @ 2.8 GHz, but I cannot say the same about the Raspberry Pi 3. Using the GPU with glDrawArrays() on my Pi 3 gave me impressive results: roughly 5-10% CPU load with 1000 lines between randomly selected vertices being drawn more than 50 times per second. However, I need to render fonts using the GPU instead of the CPU, because plain SDL2 font rendering on my Pi 3 resulted in 50-60% CPU load, which leaves no headroom for the math calculations and other work.
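
For reference, the CPU-bound path I was measuring looks roughly like this (a minimal sketch, assuming an initialized SDL_Renderer and a TTF_Font already opened with TTF_OpenFont; not my exact code):

SDL_Color white = { 255, 255, 255, 255 };
// FreeType rasterizes the whole string on the CPU into an SDL_Surface
SDL_Surface *surf = TTF_RenderText_Blended(font, "Hello", white);
// The surface is then uploaded as a texture and drawn
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surf);
SDL_Rect dst = { 10, 10, surf->w, surf->h };
SDL_FreeSurface(surf);
SDL_RenderCopy(renderer, texture, NULL, &dst);
SDL_DestroyTexture(texture);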

After searching the internet for 2-3 days, I decided to follow the tutorial at: https://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_01 , which, however, does not use SDL2. I was able to compile the code given below without errors using: g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lSDL2_ttf -lGL -lGLEW -lfreetype -o main . You can leave out some of the options, as they refer to other libraries I use, such as SDL2_ttf.

Please keep in mind that the code already draws lines fine with glDrawArrays(), although that part is not included below.

I know some of you will find some of my comments ridiculous.

All I get out of the following code is a screen filled with the color set by glClearColor(0.5, 0.5, 0.5, 1), i.e. grey. Nothing else happens. For debugging, I placed an SDL_Log() call in render_text(); as you may have noticed, the display() function contains two calls to render_text(). The two SDL_Log() calls print the following:

INFO: Debug info: glyph w: 0, glyph rows: 0
INFO: Debug info: glyph w: 25, glyph rows: 36

There is not much more info I can give you. Can you help me get the font rendering working?

One thing is for sure: I have to sit down and learn some OpenGL ES as well as GLSL.

Vertex shader contents

#version 100

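// coord packs the 2D clip-space position in .xy and the texture coordinates in .zw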
attribute vec4 coord;
varying vec2 texcoord;

void main(void) {
  gl_Position = vec4(coord.xy, 0, 1);
  texcoord = coord.zw;
}

Fragment shader contents

#version 100

#ifdef GL_ES
  precision highp float;
#endif

varying vec2 texcoord;
uniform sampler2D tex;
uniform vec4 color;

void main(void) {
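  // The glyph bitmap has a single channel; use it as alpha and tint by color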
  gl_FragColor = vec4(1, 1, 1, texture2D(tex, texcoord).r) * color;
}

Compiles with:

g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lGL -lGLEW -lfreetype -o main

The source code was tested on Linux Mint 18.3 (Ubuntu 16.04) and is as follows:

// Standard libs
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

// Add -lSDL2
// Found at /usr/local/include/SDL2
#include <SDL2/SDL.h>

// Add -lGL and -lGLEW to compiler
// Found at /usr/include/GL
#include <GL/glew.h>
#include <GL/glu.h>

// Add -lfreetype and -I/usr/include/freetype2 to compiler
// Found at /usr/include/freetype2/freetype/config/
#include <ft2build.h>
#include FT_FREETYPE_H

SDL_Window *window=NULL;
SDL_GLContext openGLContext;
FT_Library ft=NULL;
FT_Face face;
GLuint shaderProg;

typedef struct {
  float position[2];
} Vertex;

// render_text() takes 5 arguments: the string to render, the x and y start coordinates,
// and the x and y scale parameters. The last two should be chosen such that one glyph pixel
// corresponds to one screen pixel (see display() below).
void render_text(const char *text, float x, float y, float sx, float sy) {
  const char *p;

  FT_GlyphSlot g = face->glyph;

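  // Note: this logs the glyph slot *before* any FT_Load_Char() in this call, so the
  // first call prints 0/0 and each later call shows the previous call's last glyph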
  SDL_Log("Debug info: glyph w: %d, glyph rows: %d", g->bitmap.width, g->bitmap.rows);

  for(p = text; *p; p++) {

    // If FT_Load_Char() returns a non-zero value then the glyph in *p could not be loaded
    if(FT_Load_Char(face, *p, FT_LOAD_RENDER))
        continue;

    glTexImage2D(
      GL_TEXTURE_2D,
      0,
      GL_RED,
      g->bitmap.width,
      g->bitmap.rows,
      0,
      GL_RED,
      GL_UNSIGNED_BYTE,
      g->bitmap.buffer
    );

    float x2 = x + g->bitmap_left * sx;
    float y2 = -y - g->bitmap_top * sy;
    float w = g->bitmap.width * sx;
    float h = g->bitmap.rows * sy;

    GLfloat box[4][4] = {
        {x2,     -y2    , 0, 0},
        {x2 + w, -y2    , 1, 0},
        {x2,     -y2 - h, 0, 1},
        {x2 + w, -y2 - h, 1, 1},
    };

    glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

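    // FreeType stores the advance in 1/64ths of a pixel, hence the division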
    x += (g->advance.x/64) * sx;
    y += (g->advance.y/64) * sy;
  }
}


void display(void) {

  // I had to add the next three lines of code because the 1st param to glUniform4fv() was never set up in the Wiki tutorial.
  // After looking at: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGetUniformLocation.xhtml
  // I concluded that I had to use glGetUniformLocation() to get the location, but I had no idea what to pass as its 2nd param.
  // The docs say about the 2nd param: "Points to a null terminated string containing the name of the uniform variable whose location is to be queried."
  // So I picked a name from the fragment shader. I know I sound ridiculous...
  int w=0, h=0;
  SDL_GetWindowSize(window, &w, &h);
  GLint uLocation = glGetUniformLocation(shaderProg, "sample2D");

  // Clear the invisible buffer with the color specified
  glClearColor(0.5, 0.5, 0.5, 1);
  glClear(GL_COLOR_BUFFER_BIT);

  GLfloat black[4] = {0, 0, 0, 1};
  glUniform4fv(uLocation, 1, black);

  float sx = 2.0 / (float)w;
  float sy = 2.0 / (float)h;

  render_text("The Quick Brown Fox Jumps Over The Lazy Dog", -1 + 8 * sx,   1 - 50 * sy, sx, sy);
  render_text("The Misaligned Fox Jumps Over The Lazy Dog", -1 + 8.5 * sx, 1 - 100.5 * sy, sx, sy);

  // I replaced glutSwapBuffers(); with the following
  SDL_GL_SwapWindow(window);
}

void shaderProgDestroy(GLuint shaderProg) {
  glDeleteProgram(shaderProg);
}

/** Destroys a shader.
*/
static void shaderDestroy(GLuint shaderID) {
  glDeleteShader(shaderID);
}

/** Gets the file's length.
*
* @param file the file
*
* @return size_t the file's length in bytes
*/
static size_t fileGetLength(FILE *file) {
  size_t length;
  size_t currPos = ftell(file);
  fseek(file, 0, SEEK_END);
  length = ftell(file);
  // Return the file to its previous position
  fseek(file, currPos, SEEK_SET);
  return length;
}


/** Loads and compiles a shader from a file.
*
* This will print any errors to the console.
*
* @param filename the shader's filename
* @param shaderType the shader type (e.g., GL_VERTEX_SHADER)
*
* @return GLuint the shader's ID, or 0 if failed
*/
static GLuint shaderLoad(const char *filename, GLenum shaderType) {
  FILE *file = fopen(filename, "r");
  if (!file) {
    SDL_Log("Can't open file: %s\n", filename);
    return 0;
  }
  size_t length = fileGetLength(file);
  // Alloc space for the file (plus '\0' termination)
  GLchar *shaderSrc = (GLchar*)calloc(length + 1, 1);
  if (!shaderSrc) {
    SDL_Log("Out of memory when reading file: %s\n", filename);
    fclose(file);
    file = NULL;
    return 0;
  }
  fread(shaderSrc, 1, length, file);
  // Done with the file
  fclose(file);
  file = NULL;

  // Create the shader
  GLuint shader = glCreateShader(shaderType);

  glShaderSource(shader, 1, (const GLchar**)&shaderSrc, NULL);
  free(shaderSrc);
  shaderSrc = NULL;
  // Compile it
  glCompileShader(shader);
  GLint compileSucceeded = GL_FALSE;
  glGetShaderiv(shader, GL_COMPILE_STATUS, &compileSucceeded);
  if (!compileSucceeded) {
    // Compilation failed. Print error info
    SDL_Log("Compilation of shader %s failed:\n", filename);
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    GLchar *errLog = (GLchar*)malloc(logLength);
    if (errLog) {
      glGetShaderInfoLog(shader, logLength, &logLength, errLog);
      SDL_Log("%s\n", errLog);
      free(errLog);
    }
    else {
      SDL_Log("Couldn't get shader log; out of memory\n");
    }
    glDeleteShader(shader);
    shader = 0;
  }
  return shader;
}

GLuint shaderProgLoad(const char *vertFilename, const char *fragFilename) {

  // Load vertex shader file from disk
  GLuint vertShader = shaderLoad(vertFilename, GL_VERTEX_SHADER);
  if (!vertShader) {
    SDL_Log("Couldn't load vertex shader: %s\n", vertFilename);
    return 0;
  }

  // Load fragment shader file from disk
  GLuint fragShader = shaderLoad(fragFilename, GL_FRAGMENT_SHADER);
  if (!fragShader) {
    SDL_Log("Couldn't load fragment shader: %s\n", fragFilename);
    shaderDestroy(vertShader);
    vertShader = 0;
    return 0;
  }

  // Create a shader program out of the two (or more) shaders loaded
  GLuint shaderProg = glCreateProgram();
  if (shaderProg) {
    // Attach the two shaders to the program
    glAttachShader(shaderProg, vertShader);
    glAttachShader(shaderProg, fragShader);
    // Link the two shaders together
    glLinkProgram(shaderProg);

    GLint linkingSucceeded = GL_FALSE;

    // Get a status (true or false) of the linking process
    glGetProgramiv(shaderProg, GL_LINK_STATUS, &linkingSucceeded);

    // Handle the error if linking the two shaders went wrong
    if (!linkingSucceeded) {
      SDL_Log("Linking shader failed (vert. shader: %s, frag. shader: %s\n", vertFilename, fragFilename);
      GLint logLength = 0;
      glGetProgramiv(shaderProg, GL_INFO_LOG_LENGTH, &logLength);
      GLchar *errLog = (GLchar*)malloc(logLength);
      if (errLog) {
        glGetProgramInfoLog(shaderProg, logLength, &logLength, errLog);
        SDL_Log("%s\n", errLog);
        free(errLog);
      }
      else {
        SDL_Log("Couldn't get shader link log; out of memory\n");
      }
      glDeleteProgram(shaderProg);
      shaderProg = 0;
    }
  }
  else {
    SDL_Log("Couldn't create shader program\n");
  }

  // Free resources
  shaderDestroy(vertShader);
  shaderDestroy(fragShader);

  // Return the resulting shader program
  return shaderProg;
}

/** Creates the Vertex Buffer Object (VBO) containing
* the given vertices.
*
* @param vertices pointer to the array of vertices
* @param numVertices the number of vertices in the array
*/
GLuint vboCreate(Vertex *vertices, GLuint numVertices) {
  // Create the Vertex Buffer Object
  GLuint vbo;
  int nBuffers = 1;
  // Create a buffer
  glGenBuffers(nBuffers, &vbo);
  // Make the buffer a VBO buffer
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  // Copy the vertex data into the buffer, then unbind it
  glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  // Check for problems
  GLenum err = glGetError();
  if (err != GL_NO_ERROR) {
    // Failed
    glDeleteBuffers(nBuffers, &vbo);
    SDL_Log("Creating VBO failed, code %u\n", err);
    vbo = 0;
  }
  return vbo;
}


/** Frees the VBO.
*
* @param vbo the VBO's name.
*/
void vboFree(GLuint vbo) {
  glDeleteBuffers(1, &vbo);
}

void freeResources(void){
  shaderProgDestroy(shaderProg);
  SDL_GL_DeleteContext(openGLContext);
  SDL_DestroyWindow(window);
  SDL_Quit();
}


int main(int argc, char* args[]){
    // SDL2 video init
    SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER );

    // Setting openGL attributes
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    // Enable double buffering
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    // Enable hardware acceleration if available
    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
    glewExperimental = GL_TRUE;

    // Get window
    window = SDL_CreateWindow( "Test", SDL_WINDOWPOS_UNDEFINED, 
    SDL_WINDOWPOS_UNDEFINED, 800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    // Get openGL context
    openGLContext = SDL_GL_CreateContext(window);
    // Init glew
    glewInit();

    // ft globally defined with FT_Library ft
    FT_Init_FreeType(&ft);

    // face globally defined with FT_Face face;
    FT_New_Face(ft, "LiberationMono-Bold.ttf", 0, &face);


    // All of the init functions above return normal values; no errors are returned.
    // I have skipped the if conditions for the sake of simplicity.


    // Load vertex & fragment shaders, compile them, link them together, make a program and return it
    shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");
    // Activate the program
    glUseProgram(shaderProg);

    // The code up to this point works fine


    // This is where the code from wikipedia starts
    FT_Set_Pixel_Sizes(face, 0, 48);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

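    // Glyph bitmaps are tightly packed with one byte per pixel, so drop the
    // default 4-byte row alignment when uploading them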
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    GLuint vbo;

    // I set the var attribute_coord myself. Is this right? The code from the Wiki did not initialize this variable anywhere.
    GLuint attribute_coord=0;

    glGenBuffers(1, &vbo);
    glEnableVertexAttribArray(attribute_coord);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(attribute_coord, 4, GL_FLOAT, GL_FALSE, 0, 0);

    display();
    // This is where the code from wikipedia ends

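    // NOTE: this loop never exits, so freeResources() below is never reached as written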
    while (1){

      // Wait
      SDL_Delay(10);
    }


    // A function that frees resources
    freeResources();

    return 0;
}

The issue is simple: you have forgotten to set the uniform variables tex and color (tex is not strictly necessary in your case, because it defaults to 0, which matches texture unit 0). Note that display() queries glGetUniformLocation(shaderProg, "sample2D"): sampler2D is the uniform's type, not its name, so the lookup returns -1, the glUniform4fv() call is silently ignored, and color keeps its default value of (0, 0, 0, 0), drawing everything fully transparent.

Determine the uniform locations of the active program resources tex and color with glGetUniformLocation, after the program has been linked (glLinkProgram). Set the uniforms with glUniform1i and glUniform4fv respectively, after the shader program has been made the current program (glUseProgram):

shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");

GLint tex_loc   = glGetUniformLocation( shaderProg, "tex" );
GLint color_loc = glGetUniformLocation( shaderProg, "color" );

// Activate the program
glUseProgram(shaderProg);

glUniform1i( tex_loc, 0 ); // 0, because the texture is bound to texture unit 0
float col[4] = { 1.0f, 0.0f, 0.0f, 1.0f }; // red and opaque
glUniform4fv( color_loc, 1, col); 
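
As a sanity check you can also verify that the lookups succeeded, since glGetUniformLocation returns -1 for any name that is not an active uniform (a minimal sketch):

if ( tex_loc == -1 || color_loc == -1 ) {
    SDL_Log( "Uniform lookup failed: tex=%d, color=%d", tex_loc, color_loc );
}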
