
OpenGL Compute Shader rendering to 3D texture does not seem to do anything

I have written a small shader to set the color values of a 3D volume.

#version 430

layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout (r8, binding = 0) uniform image3D destTex;

void main() 
{
    imageStore(destTex, ivec3(gl_GlobalInvocationID.xyz), vec4(0.33, 0, 1.0, 0.5));
}

It compiles and links without errors.

The volume is created and the compute shader is dispatched like this:

    generateShader->use();

    glEnable(GL_TEXTURE_3D);
    glGenTextures(1, &densityTex);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_3D, densityTex);

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, chunksize + 1, chunksize + 1, chunksize + 1, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
    glBindImageTexture(0, densityTex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R8);

    glDispatchCompute(chunksize + 1, chunksize + 1, chunksize + 1);
    // make sure writing to image has finished before read
    glMemoryBarrier(GL_ALL_BARRIER_BITS);

    asdf(0); // Here I print the content of my texture.
             // Strangely, it contains the value 205 everywhere, independent of what I set in the shader: vec4(0.33, 0, 1.0, 0.5)

    glBindTexture(GL_TEXTURE_3D, 0);

The method for printing the image data looks like this:

void Chunk::asdf(int slice) {

    GLubyte* image;
    image = new GLubyte[(chunksize + 1) * (chunksize + 1) * (chunksize + 1)];

    glGetTexImage(GL_TEXTURE_3D, 0, GL_R8, GL_UNSIGNED_BYTE, image);
    std::cout << "Reading texels" << std::endl;
    for (int i = 0; i < (chunksize + 1); i++) {
        for (int j = 0; j < (chunksize + 1); j++) {

            int start = (((chunksize + 1) * (chunksize + 1) * slice) + (i * (chunksize + 1)) + j);
            std::cout << "Texel at " << i << " " << j << " " << slice << " has color " << (float)image[start] << std::endl;
        }
    }

    delete[] image;
}

What is wrong with this code? Do I need some special method to retrieve the data after setting it on the GPU? Also, I don't quite understand how the value is set on the GPU: imageStore takes a vec4 value, but the texture only has a single red channel (it makes no difference if I use an RGBA format instead).

edit: It turns out I didn't describe the problem correctly. I wrote a shader to render the content of my 3D texture to the screen. It seems my print function (asdf) does not read the correct values from the texture, because when rendered to the screen the color values are different.

[Image: one slice of the 3D texture rendered to the screen]

As you can see, it still doesn't look quite right, but I assume that is a topic for another question?

glGetTexImage(GL_TEXTURE_3D, 0, GL_R8, GL_UNSIGNED_BYTE, image);

That will not copy any data to image; it will only generate a GL_INVALID_ENUM error. GL_R8 is simply not a valid format parameter here. You most likely mean GL_RED (note that this is the same value you used for glTexImage3D's format parameter). GL_R8 is only an enum for internalFormat and never refers to anything you see on the OpenGL client side (see also this question). Since the call fails, the 205 you print everywhere is most likely just the uninitialized content of your new[]-allocated buffer.
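For illustration, here is a minimal sketch of the corrected readback; it mirrors the code from the question and assumes the texture is still bound to GL_TEXTURE_3D and that chunksize is in scope:

    // Sketch only: assumes densityTex is still bound to GL_TEXTURE_3D and was
    // allocated with internalFormat GL_R8, as in the question.
    GLubyte* image = new GLubyte[(chunksize + 1) * (chunksize + 1) * (chunksize + 1)];

    // format must be a pixel transfer format such as GL_RED,
    // not a sized internal format like GL_R8
    glGetTexImage(GL_TEXTURE_3D, 0, GL_RED, GL_UNSIGNED_BYTE, image);

    // the 0.33 the shader writes should now read back as roughly 0.33 * 255 ≈ 84
    std::cout << "Texel (0,0,0) = " << (int)image[0] << std::endl;

    delete[] image;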

As a more general recommendation, I really advise everyone to use OpenGL's Debug Output feature during development, so that no OpenGL error goes unnoticed.
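If it helps, here is a minimal sketch of such a setup. It assumes a GL 4.3 (or KHR_debug) context created with the debug flag and a function loader already in place; the callback name DebugCallback is arbitrary:

    // Sketch only: assumes a debug-enabled GL 4.3+ context and <iostream>.
    void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id, GLenum severity,
                                GLsizei length, const GLchar* message, const void* userParam)
    {
        std::cerr << "GL debug: " << message << std::endl;
    }

    // call once after context creation
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report errors on the thread/call that caused them
    glDebugMessageCallback(DebugCallback, nullptr);

With this in place, the invalid GL_R8 passed to glGetTexImage would show up immediately as a GL_INVALID_ENUM message instead of failing silently.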
