
GLSL uint_fast64_t type

How can I get an input to the vertex shader of type uint_fast64_t? There is no such type available in the language, so how can I pass it differently?

My code is this:

#version 330 core

#define CHUNK_SIZE 16

#define BLOCK_SIZE_X 0.1
#define BLOCK_SIZE_Y 0.1
#define BLOCK_SIZE_Z 0.1

// input vertex and UV coordinates, different for all executions of this shader
layout(location = 0) in uint_fast64_t vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;

// Output data ; will be interpolated for each fragment.
out vec2 UV;

// model view projection matrix 
uniform mat4 MVP;


int getAxis(uint_fast64_t p, int choice) { // axis: 0=x 1=y 2=z 3=index_x 4=index_z
    switch (choice) {
    case 0:
        return (int)((p>>59 ) & 0xF); //extract the x axis int but i only want 4bits
    case 1:
        return (int)((p>>23 ) & 0xFF);//extract the y axis int but i only want 8bits
    case 2:
        return (int)((p>>55 ) & 0xF);//extract the z axis int but i only want 4bits
    case 3:
        return (int)(p & 0x807FFFFF);//extract the index_x 24bits
    case 4:
        return (int)((p>>32) & 0x807FFFFF);//extract the index_z 24bits
    }
}

void main()
{
    // assign vertex position
    float x = (getAxis(vertexPosition_modelspace,0) + getAxis(vertexPosition_modelspace,3)*CHUNK_SIZE)*BLOCK_SIZE_X;
    float y = getAxis(vertexPosition_modelspace,1)*BLOCK_SIZE_Y;
    float z = (getAxis(vertexPosition_modelspace,2) + getAxis(vertexPosition_modelspace,3)*CHUNK_SIZE)*BLOCK_SIZE_Z;

    gl_Position = MVP * vec4(x,y,z, 1.0);

    // UV of the vertex. No special space for this one.
    UV = vertexUV;
}

The error message I am getting is:

[image: shader compilation error message]

I tried uint64_t instead, but I got the same problem.

Unextended GLSL for OpenGL does not have the ability to directly use 64-bit integer values. And even the fairly widely supported ARB extension that allows the use of 64-bit integers within shaders (ARB_gpu_shader_int64) doesn't actually allow you to use them as vertex shader attributes. That requires an NVIDIA extension (NV_vertex_attrib_integer_64bit) supported only by... NVIDIA.

However, you can send 32-bit integers, and a 64-bit integer is just two 32-bit integers. You can put the 64-bit integers into the buffer and describe them as two 32-bit unsigned integers in your vertex attribute format:

glVertexAttribIFormat(0, 2, GL_UNSIGNED_INT, <byte_offset>);
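
For context, here is a minimal host-side sketch of that setup. The names (vao, vbo), the interleaved Vertex layout, and an already-loaded GL context are assumptions here; also note that glVertexAttribIFormat itself needs GL 4.3 or ARB_vertex_attrib_binding, while the older glVertexAttribIPointer call shown in the comment is the plain GL 3.3 equivalent:

#include <stddef.h> /* offsetof */
#include <stdint.h>
/* Assumes an OpenGL loader header (e.g. glad or GLEW) is already included. */

typedef struct {
    uint64_t position; /* packed 64-bit position, read by the shader as a uvec2 */
    float    uv[2];    /* vertexUV */
} Vertex;

static void setup_vertex_format(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);

    /* Attribute 0: the 64-bit position, described as 2 x GL_UNSIGNED_INT. */
    glBindVertexBuffer(0, vbo, 0, sizeof(Vertex));
    glVertexAttribIFormat(0, 2, GL_UNSIGNED_INT, offsetof(Vertex, position));
    glVertexAttribBinding(0, 0);
    glEnableVertexAttribArray(0);

    /* Attribute 1: the UV pair. */
    glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, offsetof(Vertex, uv));
    glVertexAttribBinding(1, 0);
    glEnableVertexAttribArray(1);

    /* On a plain 3.3 context, the older API does the same for attribute 0:
       glBindBuffer(GL_ARRAY_BUFFER, vbo);
       glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, sizeof(Vertex),
                              (const void*)offsetof(Vertex, position)); */
}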

Your shader will retrieve them as a uvec2 input:

layout(location = 0) in uvec2 vertexPosition_modelspace;

The x component of the vector will hold the first 4 bytes and the y component will hold the second 4 bytes. But since "first" and "second" are determined by your CPU's endianness, you need to know whether your CPU is little-endian or big-endian in order to use them. Since most desktop GL implementations are paired with little-endian CPUs, we'll assume that is the case.

In this case, vertexPosition_modelspace.x contains the low 4 bytes of the 64-bit integer, and vertexPosition_modelspace.y contains the high 4 bytes.
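
In other words, the split is just a shift and a mask on the CPU side. A minimal sketch (the helper names are mine, not part of the answer's code):

#include <stdint.h>

/* On a little-endian CPU, writing `packed` into the vertex buffer means the
   shader sees low_word() in .x and high_word() in .y of the uvec2 attribute. */
static uint32_t low_word(uint64_t packed)  { return (uint32_t)(packed & 0xFFFFFFFFu); }
static uint32_t high_word(uint64_t packed) { return (uint32_t)(packed >> 32); }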

So your code could be adjusted as follows (with some cleanup):

const vec3 BLOCK_SIZE = vec3(0.1, 0.1, 0.1);

//Get the three axes all at once.
uvec3 getAxes(in uvec2 p)
{
  return uvec3(
    (p.y >> 27) & 0xFu,  //x: 4 bits, bits 59..62 of the 64-bit value
    (p.x >> 23) & 0xFFu, //y: 8 bits, bits 23..30
    (p.y >> 23) & 0xFu   //z: 4 bits, bits 55..58
  );
}

//Get the indices
uvec2 getIndices(in uvec2 p)
{
  return p & 0x807FFFFFu; //Performs a component-wise bitwise AND
}

void main()
{
  uvec3 iPos = getAxes(vertexPosition_modelspace);
  uvec2 indices = getIndices(vertexPosition_modelspace);

  vec3 pos = vec3(
    iPos.x + (indices.x * uint(CHUNK_SIZE)),
    iPos.y,
    iPos.z + (indices.x * uint(CHUNK_SIZE)) //You used index 3 in your code, so I used .x here, but I think you meant index 4 (.y).
  );

  pos *= BLOCK_SIZE;

  ...
}
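
Finally, if you want the CPU-side packing to match, a helper along these lines would mirror the shifts and masks the shader expects. This is a hypothetical sketch: the field layout (4-bit x at bit 59, 4-bit z at bit 55, 8-bit y at bit 23, index_x in the low word, index_z from bit 32) is inferred from the question's getAxis(), and pack_vertex is not a name from the original code:

#include <stdint.h>

/* Hypothetical packer for the layout inferred from the question's getAxis(). */
static uint64_t pack_vertex(uint32_t x, uint32_t y, uint32_t z,
                            uint32_t index_x, uint32_t index_z)
{
    uint64_t p = 0;
    p |= (uint64_t)(x & 0xFu)              << 59; /* 4-bit x */
    p |= (uint64_t)(z & 0xFu)              << 55; /* 4-bit z */
    p |= (uint64_t)(index_z & 0x807FFFFFu) << 32; /* index_z */
    p |= (uint64_t)(y & 0xFFu)             << 23; /* 8-bit y */
    p |= (uint64_t)(index_x & 0x807FFFFFu);       /* index_x */
    return p;
}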
