
Interpret Texture data as Vertex data in Vertex Shader

I'm trying to use the vertex shader to relocate particles (stored in a texture); the updated positions then get passed to the fragment shader and written into a separate texture.

I'm trying to achieve efficient spatial binning entirely on the GPU. This is for an SPH fluid sim.

Is it possible to pass all pixels (texels) of a 2D texture to the vertex shader? Each texel (RGBA) would be reinterpreted as the incoming vertex (XYZW).

It is possible to pass 2D textures to a vertex shader if the hardware supports Vertex Texture Fetch. You can test for this using the following code:

int maxVertexTextureImageUnits;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTextureImageUnits);

If maxVertexTextureImageUnits is zero, then this feature is not supported.

You don't pass the texture in as vertex data directly. Rather, you create a mesh of vertices whose coordinates correspond to the texture coordinates of each texel. In the vertex shader you then use those coordinates to fetch the actual vertex positions (the texel values) that your fragment shader generated, and use them to compute gl_Position.
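For illustration, here is a minimal sketch of such a vertex shader, written as a GLSL 1.20 string; the names u_positions, u_mvp, and a_texcoord are made up for this example, not anything from the question:

/* A minimal sketch, assuming vertex texture fetch is available.
 * u_positions is the texture your fragment shader rendered into;
 * a_texcoord is the per-vertex texture coordinate of one texel. */
const char *vtf_vertex_shader =
    "#version 120\n"
    "attribute vec2 a_texcoord;      // one vertex per texel\n"
    "uniform sampler2D u_positions;  // positions written by the frag shader\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    // Explicit LOD: vertex shaders have no derivatives for mip selection.\n"
    "    vec4 pos = texture2DLod(u_positions, a_texcoord, 0.0);\n"
    "    gl_Position = u_mvp * pos;\n"
    "}\n";

You would draw one point per texel, with a static grid of texture coordinates as the only vertex attribute.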

I have never used this feature personally (it isn't supported on my platform), but I have seen cool particle/fluid demos built on it. I believe support for it is pretty spotty: even on hardware that supports it, the driver may not implement it.

It isn't possible to pretend that a 2D texture is actually vertex data. However, I'm wondering why you're trying to do it that way.

The most efficient way to do this is not to store your particle data as textures, but in buffer objects. You can use buffer objects as source data for your vertex arrays directly.


Given "I'm storing data as textures because the particle locations are the output of a fragment shader," I can advise something more useful: buffer textures . These are 1D textures that store their data in a buffer object. They can still be used as a render target (though obviously you'll need to change how you're rasterizing the data, since they are 1D render targets), but what they render is stored in a buffer object.

That way, you can render to the buffer texture, then bind the same buffer as a vertex array and source your draw call from it.
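A rough sketch of that round trip, assuming GL 3.1-style buffer textures; particle_buf, pos_tex, and num_particles are illustrative names, and the write into the buffer is elided:

GLuint particle_buf, pos_tex;

/* The storage lives in the buffer object, not in the texture. */
glGenBuffers(1, &particle_buf);
glBindBuffer(GL_TEXTURE_BUFFER, particle_buf);
glBufferData(GL_TEXTURE_BUFFER, num_particles * 4 * sizeof(GLfloat),
             NULL, GL_DYNAMIC_COPY);

/* The buffer texture is just a view of that storage. */
glGenTextures(1, &pos_tex);
glBindTexture(GL_TEXTURE_BUFFER, pos_tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, particle_buf);

/* ... write your particle positions into particle_buf here ... */

/* Then source the very same buffer as vertex data. */
glBindBuffer(GL_ARRAY_BUFFER, particle_buf);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_POINTS, 0, num_particles);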

If you absolutely must render to a 2D texture, you can use a slower method: read the texture back into a buffer object bound as a pixel buffer object (PBO). PBOs are primarily for asynchronous pixel transfers, so I would suggest giving the GPU something else to do between the time you perform the glGetTexImage into the buffer and the time you call glDrawArrays to render from it. Otherwise, you'll just stall the GPU.
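A hedged sketch of that slower path; pbo, pos_tex, tex_w, and tex_h are illustrative names:

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, tex_w * tex_h * 4 * sizeof(GLfloat),
             NULL, GL_STREAM_COPY);

/* With a pack buffer bound, the last argument is a byte offset
 * into the buffer rather than a client-memory pointer. */
glBindTexture(GL_TEXTURE_2D, pos_tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, (void *)0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

/* ... give the GPU unrelated work here to hide the transfer ... */

/* Reinterpret the pixel data as a vertex array. */
glBindBuffer(GL_ARRAY_BUFFER, pbo);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (void *)0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_POINTS, 0, tex_w * tex_h);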

Yes, it's easy: just render as many vertices as you have pixels, but don't bind a vertex buffer. Instead, derive the texel corresponding to the current vertex from its vertex ID (gl_VertexID), and render the vertices as points. From that texel coordinate you can fetch the data from the texture (make sure you use texelFetch).
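A minimal sketch of that shader as a GLSL 3.30 string; u_positions, u_tex_width, and u_mvp are illustrative names:

const char *id_fetch_vs =
    "#version 330 core\n"
    "uniform sampler2D u_positions; // one particle position per texel\n"
    "uniform int  u_tex_width;      // texture width, in texels\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    // Derive the texel coordinate from the vertex ID; no attributes needed.\n"
    "    ivec2 tc = ivec2(gl_VertexID % u_tex_width, gl_VertexID / u_tex_width);\n"
    "    vec4 pos = texelFetch(u_positions, tc, 0);\n"
    "    gl_Position = u_mvp * pos;\n"
    "}\n";

You would then draw with glDrawArrays(GL_POINTS, 0, tex_w * tex_h) and no enabled vertex attributes; note that a core profile still requires a vertex array object to be bound, even an empty one.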
