
GLSL: uniform buffer object example

I have an array of GLubyte of variable size. I want to pass it to a fragment shader. I have seen this thread and this thread. So I decided to use uniform buffer objects. But being a newbie in GLSL, I do not know:

1 - If I am going to add this to the fragment shader, how do I pass the size? Should I create a struct?

layout(std140) uniform MyArray
{
    GLubyte myDataArray[size];  // I know GLSL doesn't understand GLubyte
};

2 - How and where in the C++ code do I associate this buffer object?

3 - How do I deal with casting GLubyte to float?

1 - If I am going to add this to the fragment shader, how do I pass the size? Should I create a struct?

Using Uniform Buffers (UB), you cannot do this.

size must be static and known when you link your GLSL program. This means it has to be hard-coded into the actual shader.
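With a UB, the best you can do is hard-code an upper bound, along the lines of the following sketch (the size 256 is an arbitrary assumption; note also that under std140 layout each element of a uint array is padded out to 16 bytes, which is one more reason to pack four bytes into each uint as discussed further down):

```glsl
// Fragment shader: the array length must be a compile-time constant.
const int MAX_SIZE = 256;  // hypothetical, hard-coded upper bound

layout(std140) uniform MyArray
{
    uint myDataArray[MAX_SIZE];  // GLSL has no 8-bit type; use uint
};
```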

The modern way around this is to use a feature from GL 4.3 called Shader Storage Buffers (SSBs).

SSBs can have variable length (the last field can be declared as an unsized array, like myDataArray[] ) and they can also store much more data than UBs.
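A minimal shader-side sketch of such a declaration (the binding index 0 and the block name are assumptions; std430 layout packs a uint array tightly, unlike std140):

```glsl
// Fragment shader; requires GL 4.3 or ARB_shader_storage_buffer_object.
layout(std430, binding = 0) buffer MyArray
{
    uint myDataArray[];  // unsized array: must be the last member
};

void main()
{
    int count = myDataArray.length();  // runtime length of the bound buffer
    // ... use myDataArray[i] for i in [0, count) ...
}
```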

In older versions of GL, you can use a Buffer Texture to pass large amounts of dynamically sized data into a shader, but that is a cheap hack compared to SSBs and you cannot access the data using a nice struct -like interface either.
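As for question 2, the host-side setup for an SSB looks roughly like the following sketch (assuming a GL 4.3+ context is already current, and that the byte data has been packed four-per-uint to match the uint[] declaration on the GLSL side; this is illustrative, not a complete program):

```cpp
// Create and fill the shader storage buffer.
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, dataSizeInBytes, dataPtr, GL_STATIC_DRAW);

// Attach the buffer to the binding point declared in the shader
// (binding = 0 here is an assumption; it must match the GLSL side).
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
```

This is done once at setup time (and again via glBufferData or glBufferSubData whenever the data changes), before issuing draw calls that use the program.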

3 - How do I deal with casting GLubyte to float?

You really would not do this at all; it is considerably more complicated than a simple cast.

The smallest data type you can use in a GLSL data structure is 32 bits. You can pack and unpack smaller pieces of data into a uint if needed, though, using special functions like packUnorm4x8 (...). This was done intentionally, to avoid having to define new data types with smaller sizes.

You can do that even without using any special GLSL functions.

packUnorm4x8 (...) is roughly equivalent to performing the following:

uint packed = 0;
for (int i = 0; i < 4; i++)
  packed |= uint(round(clamp(vec[i], 0.0, 1.0) * 255.0)) << (i * 8);

It takes a 4-component vector of floating-point values in the range [0,1] and does fixed-point arithmetic to pack each of them into an unsigned normalized (unorm) 8-bit integer occupying its own 1/4 of a uint .

Newer versions of GLSL introduce intrinsic functions that do this, but GPUs have actually been doing this sort of conversion for as long as shaders have been around. Any time you read or write a GL_RGBA8 texture from a shader, you are basically packing or unpacking four 8-bit unorms stored in a 32-bit integer.
