
Passing an arbitrarily sized object to a fragment shader using a UniformBuffer in Glium

My question came up while I was experimenting with a number of different techniques, none of which I have much experience with. Sadly, I don't even know whether I'm making a silly logic mistake, whether I'm misusing the glium crate, whether I'm messing something up in GLSL, and so on. In any case, I managed to start over from scratch with a fresh Rust project and work towards a minimal example that shows my problem; the problem reproduces at least on my machine.

The minimal example ends up being hard to explain on its own, so I'll first build an even smaller example that does what I want, albeit by hacking around and limited to 128 elements (four times 32 bits, one GLSL uvec4). From there, the step to the version where my problem shows up is quite small.

A working version, with a simple uniform and bit-shifting

This program creates a rectangle on screen whose horizontal texture coordinates run from 0.0 to 128.0. It contains a vertex shader for the rectangle, and a fragment shader that uses the texture coordinates to draw vertical stripes across the rectangle: if the texture coordinate (clamped to a uint) is odd, it draws one color; if the texture coordinate is even, it draws another.

// GLIUM, the crate I'll use to do "everything OpenGL"
#[macro_use]
extern crate glium;

// A simple struct to hold the vertices with their texture-coordinates.
// Nothing deviating much from the tutorials/crate-documentation.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


// The vertex shader's source. Does nothing special, except passing the
// texture coordinates along to the fragment shader.
const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;

// The fragment shader. uses the texture coordinates to figure out which color to draw.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140

    in vec2 preserved_tex_coords;
    // FIXME: Hard-coded max number of elements. Replace by uniform buffer object
    uniform uvec4 uniform_data;
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        uint offset_in_vec = tex_x / 32u;
        uint uint_to_sample_from = uniform_data[offset_in_vec];
        bool the_bit = bool((uint_to_sample_from >> tex_x) & 1u);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;

// Logic deciding whether a certain index corresponds with a 'set' bit on an 'unset' one.
// In this case, for the alternating stripes, a trivial odd/even test.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    // Sets up the vertices for a rectangle from -0.9 till 0.9 in both dimensions.
    // Texture coordinates go from 0.0 till 128.0 horizontally, and from 0.0 till
    // 1.0 vertically.
    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    // The rectangle will be drawn as a simple triangle strip using the vertices above.
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    // Compiling the shaders defined statically above.
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();

    // Some hacky bit-shifting to get the 128 alternating bits set up, in four u32's,
    // which glium manages to send across as a uvec4.
    let mut uniform_data = [0u32; 4];
    for idx in 0..128 {
        let single_u32 = &mut uniform_data[idx / 32];
        *single_u32 = *single_u32 >> 1;
        if bit_should_be_set_at(idx) {
            *single_u32 = *single_u32 | (1 << 31);
        }
    }

    // Trivial main loop repeatedly clearing, drawing rectangle, listening for close event.
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: uniform_data },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}

But that's not good enough...

This program works, and shows the rectangle with alternating stripes, but it has the obvious limitation of being restricted to 128 stripes (or 64 stripes, I guess; the other 64 are "the background of the rectangle"). To allow arbitrarily many stripes (or, more generally, to pass arbitrarily much data to a fragment shader), uniform buffer objects can be used, and glium exposes these. The most relevant example in the glium repo sadly doesn't compile on my machine: the GLSL version is unsupported, the buffer keyword is a syntax error in the versions that are supported, compute shaders in general are unsupported (with glium, on my machine), and so are headless render contexts.

A not-so-working version, with a buffer uniform

So, unable to start from that example, I had to start from scratch using the documentation. For the example above, I came up with the following:

// Nothing changed here...
#[macro_use]
extern crate glium;

#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;
// ... up to here.

// The updated fragment shader. This one uses an entire uint per stripe, even though only one
// boolean value is stored in each.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140
    // examples/gpgpu.rs uses
    //     #version 430
    //     buffer layout(std140);
    // but that shader version is not supported by my machine, and the second line is
    // a syntax error in `#version 140`

    in vec2 preserved_tex_coords;

    // Judging from the GLSL standard, this is what I have to write:
    layout(std140) uniform;
    uniform uniform_data {
        // TODO: Still hard-coded max number of elements, but now arbitrary at compile-time.
        uint values[128];
    };
    out vec4 color;

    // This one now becomes much simpler: get the coordinate, clamp to uint, index into
    // uniform using tex_x, cast to bool, choose color.
    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        bool the_bit = bool(values[tex_x]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;


// Mostly copy-paste from glium documentation: define a Data type, which stores u32s,
// make it implement the right traits
struct Data {
    values: [u32],
}

implement_buffer_content!(Data);
implement_uniform_block!(Data, values);


// Same as before
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// Mostly the same as before
fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();


    // Making the UniformBuffer, with room for 128 4-byte objects (which u32s are).
    let mut buffer: glium::uniforms::UniformBuffer<Data> =
              glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 128).unwrap();
    {
        // Loop over all elements in the buffer, setting the 'bit'
        let mut mapping = buffer.map();
        for (idx, val) in mapping.values.iter_mut().enumerate() {
            *val = bit_should_be_set_at(idx) as u32;
            // This _is_ actually executed 128 times, as expected.
        }
    }

    // Iterating again, reading the buffer, reveals the alternating 'bits' are really
    // written to the buffer.

    // This loop is similar to the original one, except that it passes the buffer
    // instead of a [u32; 4].
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: &buffer },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}

I hoped this would produce the same striped rectangle (or give some error, or crash, if what I did was wrong). Instead, it shows the rectangle with its rightmost quarter in solid bright red (i.e. "the bit appeared to be set when the fragment shader read it") and the remaining three quarters in darker red (i.e. "the bit appeared to be unset when the fragment shader read it").

Update since the original posting

I was really stabbing in the dark here, so, thinking it might be a low-level mistake involving memory ordering, endianness, buffer overruns/underruns and so on, I tried various ways of filling "neighbouring" memory locations with easily recognizable bit patterns (e.g. one bit set in every group of three, one set per group of four, two set followed by two unset, etc.). This did not change the output.

One of the obvious ways to get memory "near" uint values[128] is to put it into the Data struct, just in front of values (behind values is not allowed, since Data's values: [u32] is dynamically sized). As stated above, this does not change the output. However, putting a single, correctly filled uvec4 in the uniform_data buffer, and using a main function similar to the first example's, does reproduce the original result. This suggests that glium::uniforms::UniformBuffer<Data> does, in essence, work.
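For concreteness, a sketch of what that "in front of values" variant could look like on the GLSL side (the extra field's name and type are my own illustration; the Rust Data struct would get a matching field placed before values):

    layout(std140) uniform;
    uniform uniform_data {
        uvec4 in_front;      // illustrative: easily recognizable data placed before the array
        uint values[128];    // the dynamically sized array stays last
    };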

I have therefore updated the title to reflect that the problem seems to lie elsewhere.

After Eli's answer

@Eli Friedman's answer helped me move towards a solution, but I'm not there yet.

Allocating and filling a buffer four times as large did indeed change the output, from a quarter-filled rectangle to a completely filled one. Oops, that's not what I wanted. Still, my shader is now reading the right words of memory, and all of those words should be filled with the right bit pattern. Nevertheless, no part of the rectangle became striped. Since bit_should_be_set_at should set every other bit, I came up with the hypothesis that what is happening is the following:

Bits: 1010101010101010101010101010101010101
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set

To test this hypothesis, I changed bit_should_be_set_at to return true at multiples of 3 and 4, and then of 5, 6, 7 and 8 (a sketch of such a variant follows the diagrams below). The results are consistent with my hypothesis:

Bits: 1001001001001001001001001001001001001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000100010001000100010001000100010001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set

Bits: 1000010000100001000010000100001000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating four unset, one set.

Bits: 1000001000001000001000001000001000001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000000100000010000001000000100000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating six unset, one set.

Bits: 1000000010000000100000001000000010000
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then every other bit set.
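
Such a variant would look like this (illustrative only; just the modulus changes between runs):

    // Period-3 variant of the predicate; the 4, 5, 6, 7 and 8 cases are the
    // same function with the modulus swapped out.
    fn bit_should_be_set_at(idx: usize) -> bool {
        idx % 3 == 0
    }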

Does this hypothesis make sense? And either way: does it look like the problem lies in setting up the data (on the Rust side), or in reading it back (on the GLSL side)?

The problem you're running into has to do with how the uniform is allocated. uint values[128]; does not have the memory layout you think it does; it actually has the same memory layout as uint4 values[128]. See https://www.opengl.org/registry/specs/ARB/uniform_buffer_object.txt, sub-section 2.15.3.1.2.
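
As an illustration of that layout rule (this sketch is my own, not part of the answer): under std140 each element of a uint array occupies a 16-byte slot, so one way to line the shader up with the tightly packed u32s written on the Rust side is to declare the block contents as uvec4s and index both the vector and its component. Whether glium's block-layout check then also requires changes on the Rust side of the second example is an assumption left open by this sketch.

// Sketch only: a fragment shader written against the std140 layout described
// in the answer. Each uvec4 occupies exactly one 16-byte slot, so 32 of them
// cover 128 uints without per-element padding.
const STD140_AWARE_FRAGMENT_SHADER: &'static str = r#"
    #version 140

    in vec2 preserved_tex_coords;

    layout(std140) uniform;
    uniform uniform_data {
        uvec4 values[32];   // 32 * 4 = 128 uints, tightly packed within each uvec4
    };
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        // Pick the right uvec4 first, then the component inside it.
        bool the_bit = bool(values[tex_x / 4u][tex_x % 4u]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;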
