
GLSL: Object translation with fragment shader

As shown in the figure below, I'm trying to draw outlines by rendering the object two more times: shifted 1 pixel to the left and 1 pixel to the right.

[Figure: three overlapping copies of the shape]

But I don't know whether this should be done in the vertex shader or the fragment shader.

Is it possible to move vertices (pixels) in the fragment shader?

If not, should I compute the screen-space coordinates of the vertices every frame?


With traditional fragment shader outputs, the answer is a clear and resounding NO. The fragment shader cannot decide which pixel it wants to render. A fixed-function step (rasterization) between the vertex shader and the fragment shader determines which fragments are covered by a primitive. The fragment shader then gets invoked for each of these fragments. It gets to decide the values (colors, etc.) written to output buffers at this fragment position, or it can decide to not write anything at all (discard). But it does not get to change the position.

The following are some options that come to mind.

Images

There is a feature in OpenGL 4.2 and later that adds new options in this area: images. You can bind textures as images, and then write to them in shader code using the built-in imageStore() function. This function takes coordinates as well as values as parameters, so you can write values to arbitrary positions within an image.

Using this, you could use an image for your output instead of a traditional fragment shader output, and write multiple values to it. Or use a hybrid, where you still use the fragment shader output for your primary rendering, write the shadow part to an image, and then combine the two with an additional rendering pass.
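For illustration, here is a minimal, untested sketch of the hybrid idea against OpenGL 4.2 / GLSL 4.20: the fragment shader keeps its normal color output and additionally marks the texels one pixel to the left and right of the current fragment in a separate image. The names ShadowImage and ObjectColor are invented for this sketch; image binding on the application side, synchronization (glMemoryBarrier) and the later combine pass are left out.

#version 420 core

// Image bound by the application with glBindImageTexture to unit 0
// (assumed setup, not shown). Used here as a scratch "shadow mask".
layout(binding = 0, rgba8) uniform writeonly image2D ShadowImage;

// Flat object color, just for the sketch.
uniform vec4 ObjectColor;

out vec4 FragColor;

void main() {
    // Normal output at the fixed fragment position.
    FragColor = ObjectColor;

    // Arbitrary-position writes: mark the texels directly left and right
    // of this fragment as shadowed. A later pass combines this image
    // with the main rendering.
    ivec2 p = ivec2(gl_FragCoord.xy);
    imageStore(ShadowImage, p + ivec2(-1, 0), vec4(0.0, 0.0, 0.0, 1.0));
    imageStore(ShadowImage, p + ivec2( 1, 0), vec4(0.0, 0.0, 0.0, 1.0));
}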

Multiple Draw Calls

With more traditional features, you would typically have to render your geometry multiple times, using depth or stencil tests to combine the primary rendering with the shadow effect. For example, using the depth test, you could render the shape once in its original color, and then render it two more times with a slight left/right offset, and also slightly increased depth, so that the shadow is behind the original shape.
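Only the shader side of this multi-pass approach is sketched below (untested); the application would issue three draw calls, changing the uniforms between them. MVP, OffsetPixels, ViewportSize and DepthBias are assumed uniform names, not anything from existing code.

#version 330 core

layout(location = 0) in vec3 Position;

uniform mat4  MVP;          // combined model-view-projection matrix
uniform vec2  OffsetPixels; // per-pass offset: (0,0), (-1,0) or (1,0)
uniform vec2  ViewportSize; // render target size in pixels
uniform float DepthBias;    // 0.0 for the main pass, small positive for shadows

void main() {
    vec4 clipPos = MVP * vec4(Position, 1.0);

    // One pixel is 2.0 / ViewportSize in NDC; multiply by w so the shift
    // survives the perspective divide. The depth bias pushes the shadow
    // copies slightly behind the original shape.
    clipPos.xy += OffsetPixels * (2.0 / ViewportSize) * clipPos.w;
    clipPos.z  += DepthBias * clipPos.w;

    gl_Position = clipPos;
}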

Geometry Shaders

I believe you could use a geometry shader to generate the 3 instances of each primitive. So each primitive is still rendered 3 times, but you avoid having to actually make 3 different draw calls.
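A minimal, untested sketch of such a geometry shader is below. It assumes a uniform PixelOffsetNDC holding one pixel expressed in normalized device coordinates (2.0 / viewport width); forwarding of colors and other per-vertex attributes is omitted for brevity.

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

// One pixel in NDC, i.e. 2.0 / viewport width (assumed uniform name).
uniform float PixelOffsetNDC;

void emitShifted(float shift) {
    for (int i = 0; i < 3; ++i) {
        vec4 pos = gl_in[i].gl_Position;
        // Scale by w so the shift is still one pixel after the perspective divide.
        pos.x += shift * pos.w;
        gl_Position = pos;
        EmitVertex();
    }
    EndPrimitive();
}

void main() {
    emitShifted(-PixelOffsetNDC); // copy shifted one pixel left
    emitShifted( PixelOffsetNDC); // copy shifted one pixel right
    emitShifted(0.0);             // original primitive
}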

Image Post-Processing

To achieve the effect you want, you could render the whole thing without shadows into an FBO, which produces the frame in a texture. Then you make another draw pass where you draw a window sized quad, and sample from the texture containing your frame. You sample the texture 3 times, and combine the 3 results to produce the shadow effect.

Just to sketch this (code completely untested). If you use a texture with an alpha component as your render target, you can check the alpha value to see if a given pixel was hit during rendering.

#version 330 core

// Texture produced as output from the original render pass.
uniform sampler2D Tex;
// Offset to add one pixel to texture coordinates, should be
// 1.0 / width of render target.
uniform float PixelOffset;
// Incoming texture coordinate from rendering window sized quad.
in vec2 TexCoord;
// Output.
out vec4 FragColor;

void main() {
    vec4 centerColor = texture(Tex, TexCoord);
    vec4 leftColor = texture(Tex, vec2(TexCoord.s - PixelOffset, TexCoord.t));
    vec4 rightColor = texture(Tex, vec2(TexCoord.s + PixelOffset, TexCoord.t));

    if (centerColor.a > 0.0) {
        // Fragment was rendered, use its color as output.
        FragColor = centerColor;
    } else if (leftColor.a + rightColor.a > 0.0) {
        // Fragment is within 1 pixel left/right of rendered fragment,
        // color it black.
        FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        // Neither rendered nor in shadow. Set output to background color,
        // or discard it. This would be for white background.
        FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
}

Conclusion/Recommendation

Intuitively, I like the Image Post-Processing approach myself. I would probably try that first. I think the next most elegant solution is replicating the primitives with a geometry shader.

Since the pixel's location is already determined by the time you are inside the fragment shader, that's not an option. The vertex shader can't help you either, because it can only push a single output vertex for every incoming vertex.

The geometry shader stage, however, runs once per incoming primitive and can emit multiple output primitives. This lets you emit two extra copies of each primitive, each translated one pixel to the left or right of the original.

This resource has some detailed modern examples: https://open.gl/geometry
