
What is the texture sampling precision?

In OpenGL, when sampling a texture, what precision or format is used for the sample location (the texture coordinates)?

To elaborate: when sampling with texture(sampler, vTextureCoordinates) in a shader, with e.g. precision highp float, two 32-bit floats go in. However, is that precision actually used to sample the texture, or is it degraded (e.g. "snapped to fixed point" as in D3D)?


While I am primarily interested in WebGL2, this would also be interesting to know for other OpenGL versions.

My current guess is that it will be truncated to a 16-bit normalized unsigned integer, but I am not sure. Perhaps it is also unspecified, in which case: what can be depended upon?
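
To make the "snapped to fixed point" idea concrete, here is a minimal C sketch (not from the question itself) of the model D3D and Vulkan describe: the integer texel index stays exact, but the sub-texel fraction used for filtering is rounded to a fixed number of bits. The texture width of 4096 and the coordinate value are arbitrary example numbers.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative only: quantize the fractional (sub-texel) part of a
 * texel-space coordinate to `bits` fixed-point bits, keeping the
 * integer texel index exact. */
static double snap_subtexel(double texel_coord, unsigned bits)
{
    double scale = (double)(1u << bits);          /* e.g. 256 for 8 bits   */
    double whole = floor(texel_coord);            /* integer texel index   */
    double frac  = texel_coord - whole;           /* sub-texel position    */
    return whole + floor(frac * scale) / scale;   /* fraction quantized    */
}

int main(void)
{
    /* Sampling a hypothetical 4096-texel-wide texture at normalized u. */
    double u = 0.123456789;
    double texel = u * 4096.0;

    printf("exact texel coordinate : %.9f\n", texel);
    printf("4-bit sub-texel snap   : %.9f\n", snap_subtexel(texel, 4));
    printf("8-bit sub-texel snap   : %.9f\n", snap_subtexel(texel, 8));
    return 0;
}
```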

This is related to my texture-coordinate-inaccuracy question. Now that I have several hints that this degradation might really take place, I can ask about this specific part. Should sampling precision indeed be a 16-bit normalized integer, I could also close that one.

This is a function of the hardware, not of the graphics API commanding that hardware. So it doesn't matter whether you're using D3D, WebGL, Vulkan, or whatever: the precision of texture coordinate sampling is determined by the hardware you're running on.

Most APIs don't actually tell you what this precision is. They will generally require some minimum precision, but hardware can vary.

Vulkan actually allows implementations to tell you the sub-texel precision. The minimum requirement is 4 bits of sub-texel precision (16 values). The Vulkan hardware database shows that hardware varies between 4 and 8, with 8 being 10x more common than 4.
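
For reference, the limit in question is VkPhysicalDeviceLimits::subTexelPrecisionBits. Below is a minimal C sketch of how it can be read with the Vulkan C API; it creates a bare instance, enumerates physical devices, and prints the reported value (error handling kept to a minimum).

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* Create a minimal instance just to enumerate physical devices. */
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                .pApplicationInfo = &app };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS)
        return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);
        /* Spec-required minimum is 4; common hardware reports 8. */
        printf("%s: subTexelPrecisionBits = %u\n",
               props.deviceName, props.limits.subTexelPrecisionBits);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```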

