
Deferred Rendering not Displaying the GBuffer Textures

I'm trying to implement deferred rendering in an engine I'm developing as a personal learning project, and I can't figure out what I'm doing wrong when rendering all the textures in the GBuffer to check whether the implementation is okay.

The thing is that I currently have a framebuffer with 3 color attachments for the different GBuffer textures (color, normal and position), which I initialize as follows:

  glCreateFramebuffers(1, &id);
  glBindFramebuffer(GL_FRAMEBUFFER, id);

  std::vector<uint> textures;
  textures.resize(3);
  glCreateTextures(GL_TEXTURE_2D, 3, textures.data());
  
  for(size_t i = 0; i < 3; ++i)
  {
     glBindTexture(GL_TEXTURE_2D, textures[i]);

     if(i == 0)   // For Color Buffer
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
     else
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);

     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
  }

  GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
  glDrawBuffers((GLsizei)textures.size(), color_buffers);

  uint depth_texture;
  glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
  glBindTexture(GL_TEXTURE_2D, depth_texture);
  glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);

  bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
  ASSERT(fbo_status, "Framebuffer Incomplete!");
  glBindFramebuffer(GL_FRAMEBUFFER, 0);

This does not report any errors, and it seems to work, since the forward renderer's framebuffer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:

  camera_buffer->Bind();
  camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
  camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
  camera_buffer->Unbind();

  for(Entity& entity : scene_entities)
  {
    shader->Bind();

    Texture* texture = entity.GetTexture();
    BindTexture(0, texture);

    shader->SetUniformMat4("u_Model", entity.transform);
    shader->SetUniformInt("u_Albedo", 0);
    shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
    shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);

    glBindVertexArray(entity.VertexArray);
    glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);

    // Shader, VArray and Textures Unbindings
  }

With this code I manage to render the 3 textures created, by using ImGui::Image and switching the texture index between 0, 1 and 2, like so:

ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));

Now, the color texture (at index 0) works perfectly, as the next image shows: (screenshot: color texture)

But when rendering the normal and position textures (indexes 1 and 2), I get no result: (screenshot: position/normal textures)

Does anybody see what I'm doing wrong? I've spent hours and hours on this and I cannot see it. I ran this in RenderDoc and couldn't find anything wrong; the textures displayed in RenderDoc are the same as in the engine.

The vertex shader I use when rendering the entities is the following:

  layout(location = 0) in vec3 a_Position;
  layout(location = 1) in vec2 a_TexCoord;
  layout(location = 2) in vec3 a_Normal;

  out IBlock
  {
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
  } v_VertexData;

  layout(std140, binding = 0) uniform ub_CameraData
  {
    mat4 ViewProjection;
    vec3 CamPosition;
  };

  uniform mat4 u_ViewProjection = mat4(1.0);
  uniform mat4 u_Model = mat4(1.0);

  void main()
  {
    vec4 world_pos = u_Model * vec4(a_Position, 1.0);
    
    v_VertexData.TexCoord = a_TexCoord;
    v_VertexData.FragPos = world_pos.xyz;
    v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;
    
    gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
  }

And the fragment shader is the following; they are both pretty simple:

  layout(location = 0) out vec4 gBuff_Color;
  layout(location = 1) out vec3 gBuff_Normal;
  layout(location = 2) out vec3 gBuff_Position;

  in IBlock
  {
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
  } v_VertexData;

  struct Material
  {
    float Smoothness;
    vec4 AlbedoColor;
  };

  uniform Material u_Material = Material(1.0, vec4(1.0));
  uniform sampler2D u_Albedo, u_Normal;

  void main()
  {
    gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
    gBuff_Normal = normalize(v_VertexData.Normal);
    gBuff_Position = v_VertexData.FragPos;
  }

It is not clear from the question what exactly is happening here, since a lot of GL state - both at the time of rendering into the g-buffer, and at the time the g-buffer textures are rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.

One issue which comes to mind is alpha blending. The per-fragment operations that run after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enabled blending and use a blend function that somehow depends on the source alpha.

If you declare a custom fragment shader output as float, vec2 or vec3, the remaining components stay undefined (an undefined value, not undefined behavior). This does not pose a problem unless some other operation you do depends on those values.
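One defensive option (a sketch, not something the spec requires) is to declare all g-buffer outputs as vec4 and write an explicit alpha, so no attachment content ever depends on an undefined component. Adapted to the fragment shader from the question:

```glsl
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec4 gBuff_Normal;
layout(location = 2) out vec4 gBuff_Position;

void main()
{
    gBuff_Color    = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
    // w = 1.0 keeps a debug visualization opaque even if blending is on
    gBuff_Normal   = vec4(normalize(v_VertexData.Normal), 1.0);
    gBuff_Position = vec4(v_VertexData.FragPos, 1.0);
}
```

The attachments are already GL_RGBA16F, so this costs no extra storage; it only pins down the fourth component.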

What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required to be color-renderable by the spec).

What might happen here is either:

  • Alpha blending is already turned on while rendering into the g-buffer. The fragment shader's alpha output happens to be zero, so the fragments appear 100% transparent and the contents of the texture are not changed.
  • Alpha blending is not used while rendering into the g-buffer, so the correct contents end up in the texture and the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, resulting in a 100% transparent view.

If it is the first case, turn off blending when rendering into the g-buffer - it would not work with deferred shading anyway. You might still run into the second case then.
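In code, that means making sure the geometry pass runs with blending disabled (a fragment, not a complete program; `gbuffer_fbo` is an illustrative name for the FBO created above):

```cpp
// Geometry pass: fill the g-buffer with blending off
glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw scene entities into the g-buffer ...

// Blending can be re-enabled afterwards, e.g. for a UI or transparency pass
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```

Note that a UI library may silently re-enable blending each frame, so the glDisable belongs at the start of the geometry pass, not in one-time setup.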

If it is the second case, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it, and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debug purposes only) that is wrong, and you can fix the visualization instead.
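ImGui itself renders with alpha blending enabled, which is exactly the second case. One way to work around it for debug views (a sketch; it relies on the backend executing ImDrawList::AddCallback, which the standard OpenGL3 backend does) is to toggle blending around the image:

```cpp
// Disable blending only while this debug image is drawn, then restore it.
// The callbacks run inside the ImGui render pass, at draw time.
ImDrawList* draw_list = ImGui::GetWindowDrawList();
draw_list->AddCallback([](const ImDrawList*, const ImDrawCmd*) { glDisable(GL_BLEND); }, nullptr);
ImGui::Image((ImTextureID)(fbo->textures[1]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
draw_list->AddCallback([](const ImDrawList*, const ImDrawCmd*) { glEnable(GL_BLEND); }, nullptr);
```

Alternatively, blit the attachment through a tiny fullscreen shader that forces alpha to 1.0 before handing it to ImGui.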

As a side note: storing the world-space position in the G-Buffer is a huge waste of bandwidth. All you need to reconstruct the world-space position is the depth value and the inverses of your view and projection matrices. Also, storing world-space positions in GL_RGBA16F will very easily run into precision issues once you move your camera away from the world-space origin.

