
GLSL: Object translation with fragment shader

As shown in the following figure, I'm trying to render the outlines by drawing the object two more times: shifted 1 pixel to the left and 1 pixel to the right.

[Figure: the same shape drawn three times, offset left and right]

But I don't know whether this should be done in the vertex shader or the fragment shader.

Is it possible to move vertices (pixels) in the fragment shader?

If not, should I calculate the screen-space coordinates of the vertices every frame?

Once you are in the fragment shader, the output position has already been fixed by the rasterization process.

With traditional fragment shader outputs, the answer is a clear and resounding NO. The fragment shader cannot decide which pixel it wants to render. A fixed-function step (rasterization) between the vertex shader and the fragment shader determines which fragments are covered by a primitive. The fragment shader is then invoked for each of these fragments. It gets to decide the values (colors, etc.) written to the output buffers at this fragment position, or it can decide to write nothing at all (discard). But it does not get to change the position.

The following are some options that come to mind.

Images

There is a feature in OpenGL 4.2 and later that adds new options in this area: images. You can bind textures as images, and then write to them in shader code using the built-in imageStore() function. This function takes coordinates as well as values as parameters, so you can write values to arbitrary positions within an image.
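A rough (untested) sketch of what that could look like in a fragment shader, with the image binding and format chosen for illustration; note that image stores have no ordering guarantees between fragments, so a real implementation needs appropriate memory barriers between passes:

```glsl
#version 420 core

// Image bound with glBindImageTexture at unit 0 (illustrative binding/format).
layout(binding = 0, rgba8) uniform writeonly image2D OutlineImage;

out vec4 FragColor;

void main() {
    ivec2 pos = ivec2(gl_FragCoord.xy);

    // Normal output for the primary rendering of this fragment.
    FragColor = vec4(1.0);

    // Additionally write black one pixel to the left and right of this
    // fragment, into the separate outline image.
    imageStore(OutlineImage, pos + ivec2(-1, 0), vec4(0.0, 0.0, 0.0, 1.0));
    imageStore(OutlineImage, pos + ivec2( 1, 0), vec4(0.0, 0.0, 0.0, 1.0));
}
```

The outline image would then be composited under the primary output in a later pass.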

Using this, you could use an image for your output instead of a traditional fragment shader output, and write multiple values to it. Or use a hybrid, where you still use the fragment shader output for your primary rendering, write the shadow part to an image, and then combine the two with an additional rendering pass.

Multiple Draw Calls

With more traditional features, you would typically have to render your geometry multiple times, using depth or stencil tests to combine the primary rendering with the shadow effect. For example, using the depth test, you could render the shape once in its original color, and then render it two more times with a slight left/right offset and a slightly increased depth, so that the shadow is behind the original shape.
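A minimal vertex shader for this approach might take the per-pass offset as a uniform (uniform names here are illustrative). The application would draw the geometry once with a zero offset, then twice more with the x component set to minus/plus one pixel in NDC units (2.0 / viewport width) and a small depth bias in z:

```glsl
#version 330 core

uniform mat4 MVP;     // combined model-view-projection matrix
uniform vec3 Offset;  // per-pass NDC offset: (0,0,0), (-px,0,bias), (+px,0,bias)

in vec3 Position;

void main() {
    vec4 clipPos = MVP * vec4(Position, 1.0);

    // Apply the offset after projection, scaled by w so that it survives
    // the perspective divide as a fixed screen-space shift.
    clipPos.xyz += Offset * clipPos.w;

    gl_Position = clipPos;
}
```

The two offset passes would also set a uniform (or separate shader) to output black instead of the object's color.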

Geometry Shaders

I believe you could use a geometry shader to generate the 3 instances of each primitive. Each primitive is still rendered 3 times, but you avoid having to actually make 3 different draw calls.

Image Post-Processing

To achieve the effect you want, you could render the whole thing without shadows into an FBO, which produces the frame in a texture. Then you make another draw pass where you draw a window-sized quad, and sample from the texture containing your frame. You sample the texture 3 times, and combine the 3 results to produce the shadow effect.

Just to sketch this (code completely untested): if you use a texture with an alpha component as your render target, you can check the alpha value to see whether a given pixel was hit during rendering.

// Texture produced as output from the original render pass.
uniform sampler2D Tex;
// Offset of one pixel in texture coordinates; should be
// 1.0 / width of the render target.
uniform float PixelOffset;
// Incoming texture coordinate from rendering a window-sized quad.
in vec2 TexCoord;
// Output.
out vec4 FragColor;

void main() {
    vec4 centerColor = texture(Tex, TexCoord);
    vec4 leftColor = texture(Tex, vec2(TexCoord.s - PixelOffset, TexCoord.t));
    vec4 rightColor = texture(Tex, vec2(TexCoord.s + PixelOffset, TexCoord.t));

    if (centerColor.a > 0.0) {
        // Fragment was rendered, use its color as output.
        FragColor = centerColor;
    } else if (leftColor.a + rightColor.a > 0.0) {
        // Fragment is within 1 pixel left/right of rendered fragment,
        // color it black.
        FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        // Neither rendered nor in shadow. Set output to background color,
        // or discard it. This would be for white background.
        FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
}

Conclusion/Recommendation

Intuitively, I like the Image Post-Processing approach myself. I would probably try that first. I think the next most elegant solution is replicating the primitives with a geometry shader.

Since the pixel's location is already determined by the time you are inside the fragment shader, that's not an option. The vertex shader can't help you either, because it can only output a single vertex for every incoming vertex.

The geometry shader stage, however, can emit multiple vertices for each incoming primitive. This allows you to clone two extra copies of the geometry, each translated one pixel to the left or right of the original.
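A hedged sketch of such a geometry shader (untested; the uniform name is illustrative). It emits each input triangle three times: two outline copies shifted one pixel left and right, then the original on top. In practice you would also pass a flag or color to the fragment shader so the shifted copies render black:

```glsl
#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

// One-pixel offset in NDC units: 2.0 / viewport width.
uniform float PixelOffset;

// Emit a copy of the input triangle shifted horizontally by dx (NDC units).
void emitShifted(float dx) {
    for (int i = 0; i < 3; ++i) {
        vec4 p = gl_in[i].gl_Position;
        p.x += dx * p.w;  // shift in clip space, scaled by w to survive the divide
        gl_Position = p;
        EmitVertex();
    }
    EndPrimitive();
}

void main() {
    emitShifted(-PixelOffset);  // left outline copy
    emitShifted( PixelOffset);  // right outline copy
    emitShifted(0.0);           // original shape
}
```

With a small depth bias added to the outline copies, the depth test keeps them behind the original, as in the multiple-draw-calls approach but in a single draw call.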

This resource has some detailed modern examples: https://open.gl/geometry
