

GLSL shader for texture 'smoke' effect

I've looked around and haven't found anything relevant. I'm trying to create a shader to give a texture a smoke effect animation like here:

[example animation]

Not asking for a complete/full solution (although that would be awesome), but any pointers towards where I can get started to achieve this effect would be welcome. Would I need to have the vertices for the drawing, or is this possible if I only have the texture?

Modelling smoke with a fluid simulation isn't simple and can be very slow for a detailed simulation. Using noise to add finer details can be a fair bit faster. If this is the direction you want to head in, this answer has some good links to The Little Grasshopper. If you have a texture, use it to initialize the smoke density (or spawn particles, for that matter) and run the simulation. If you start with vector data, and want the animation to trail along the curve as in your example, it gets more complex. Perhaps draw the curve over the top of the smoke simulation, gradually drawing less of it and drawing the erased bits as density into the simulation. Spawning particles along its length and using "noise based particles" as linked above sounds like a good alternative too.

Still, it sounds like you're after something a bit simpler. I've created a short demo on Shadertoy, just using Perlin noise for animated turbulence on a texture. It doesn't require any intermediate texture storage or state information other than a global time.

https://www.shadertoy.com/view/Mtf3R7

The idea started with trying to create streaks of smoke that blur and grow with time. Start with a curve, sum/average colour along it, and then make it longer to make the smoke appear to move. Rather than adding points to the curve over time to make it longer, the curve has a fixed number of points and their spacing increases with time.

To create a random curve, Perlin noise is sampled recursively, providing offsets to each point in turn.


Using mipmapping, the samples towards the end of the curve can cover a larger area and make the smoke appear to blur into nothing, just as your image does. However, since this is a gather operation, the end of the smoke curve is actually the start (hence the steps-i below).

//p = uv coord, o = random offset for per-pixel noise variation, t = time
//perlin() and texCol() are helper functions from the linked Shadertoy
//(texCol() samples the texture, with a mipmap bias as its second argument)
vec3 smoke(vec2 p, vec2 o, float t)
{
    const int steps = 10;
    vec3 col = vec3(0.0);
    for (int i = 1; i < steps; ++i)
    {
        //step along a random path that grows in size with time
        p += perlin(p + o) * t * 0.002;
        p.y -= t * 0.003; //drift upwards

        //sample colour at each point, using mipmaps for blur
        col += texCol(p, float(steps-i) * t * 0.3);
    }
    return col.xyz / float(steps);
}

As always with these effects, you can spend hours playing with constants to get it to look that tiny bit better. I've used a linearly changing value for the mipmap bias as the second argument to texCol(), which I'm sure could be improved. Also, averaging a number of smoke() calls with varying o will give a smoother result.
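
To illustrate, here is a minimal sketch of that averaging as a Shadertoy-style entry point (mainImage, iResolution and iTime are Shadertoy's standard inputs; the per-pass offsets and the number of passes are arbitrary, and the linked demo may scale time differently):

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    const int passes = 4;
    vec3 col = vec3(0.0);
    for (int i = 0; i < passes; ++i)
    {
        //a different offset per pass sends each smoke trail down a different path
        vec2 o = vec2(float(i) * 17.0, float(i) * 43.0);
        col += smoke(uv, o, iTime);
    }
    fragColor = vec4(col / float(passes), 1.0);
}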

[ EDIT ] If you want the smoke to animate along a curve with this method, I'd use a second texture that stores a "time offset" to delay the simulation for certain pixels. Then draw the curve with a gradient along it, so the end of the curve takes a little while to start animating. Since it's a gather operation, you should draw much fatter lines into this time-offset texture, as it's the pixels around them which will gather colour. Unfortunately this will break when parts of the curve are too close together or intersect.
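
A rough sketch of that idea inside the fragment shader (the time-offset texture u_timeOffsetTex, its red channel and the maxDelay scale are assumptions, not part of the demo):

//look up a painted per-pixel delay and hold the simulation until it has elapsed
float timeOffset = texture2D(u_timeOffsetTex, uv).r * maxDelay;
float localTime = max(t - timeOffset, 0.0);
vec3 col = smoke(uv, o, localTime);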

In the example pictured it appears as if they have the vertices. Possibly the "drawing" of the flower shape was recorded and then played back continuously. Then the effect hits the vertices based on a time offset from when they were drawn. The effect there appears to mostly be a motion blur.

So to replicate this effect you would need the vertices. See how the top of the flower starts to disappear before the bottom? If you look closely you'll see that the blur effect timing actually follows the path of the flower around counter-clockwise. Even on the first frame of your GIF you can see that the end of the flower shape is a brighter yellow than the beginning.

The angle of the motion blur also appears to change over time, from being more left-oriented to being more up-oriented.

And the brightness of the segment is also changing over time, starting with the yellowish color and ending either black or transparent.

What I can't tell from this is whether the effect is additive, meaning that they're applying the effect to the whole frame and then to the results of that effect each frame, or whether it's being recreated each frame. If it's recreated each frame you'd be able to run the effect in reverse and have the image appear.

If you are wanting this effect on a bitmapped texture instead of a line object, that's also doable, although the approach would be different.

Let's start with the line object and assume you have the vertices. The way I would approach it is to add a percentage of decay as an attribute to the vertex data. Then each frame that you render, you'd first update the decay percentage based on the time for that vertex. Stagger them slightly.

Then the shader would draw the line segment using a motion blur shader where the amount of motion blur, the angle of the blur, and the color of the segment are controlled by a varying that is assigned from the decay attribute. I haven't tested this shader; treat it like pseudocode. But I'd approach it this way... Vertex shader:

 uniform mat4 u_modelViewProjectionMatrix;
 uniform float maxBlurSizeConstant;  // experiment with this value; it depends on the scale of the render

 attribute vec3 a_vertexPosition;
 attribute vec2 a_vertexTexCoord0;
 attribute float a_decay;

 varying float v_decay;
 varying vec2 v_fragmentTexCoord0;
 varying vec2 v_texCoord1;
 varying vec2 v_texCoord2;
 varying vec2 v_texCoord3;
 varying vec2 v_texCoord4;
 varying vec2 v_texCoordM1;
 varying vec2 v_texCoordM2;
 varying vec2 v_texCoordM3;
 varying vec2 v_texCoordM4;

 void main()
 {
    gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);

    v_decay = a_decay;

    float angle = 2.8 - a_decay * 0.8;  // just an example of angles

    vec2 tOffset = vec2(cos(angle),sin(angle)) * maxBlurSizeConstant * a_decay;

    v_fragmentTexCoord0 = a_vertexTexCoord0;

    v_texCoordM1 = a_vertexTexCoord0 - tOffset;
    v_texCoordM2 = a_vertexTexCoord0 - 2.0 * tOffset;
    v_texCoordM3 = a_vertexTexCoord0 - 3.0 * tOffset;
    v_texCoordM4 = a_vertexTexCoord0 - 4.0 * tOffset;
    v_texCoord1 = a_vertexTexCoord0 + tOffset;
    v_texCoord2 = a_vertexTexCoord0 + 2.0 * tOffset;
    v_texCoord3 = a_vertexTexCoord0 + 3.0 * tOffset;
    v_texCoord4 = a_vertexTexCoord0 + 4.0 * tOffset;
 }

Fragment shader:

 precision mediump float;  // ES 2.0 fragment shaders need a default float precision

 uniform sampler2D u_textureSampler;

 varying float v_decay;
 varying vec2 v_fragmentTexCoord0;
 varying vec2 v_texCoord1;
 varying vec2 v_texCoord2;
 varying vec2 v_texCoord3;
 varying vec2 v_texCoord4;
 varying vec2 v_texCoordM1;
 varying vec2 v_texCoordM2;
 varying vec2 v_texCoordM3;
 varying vec2 v_texCoordM4;

 void main()
 {
     lowp vec4 fragmentColor = texture2D(u_textureSampler, v_fragmentTexCoord0) * 0.18;

     fragmentColor += texture2D(u_textureSampler, v_texCoordM1) * 0.15;
     fragmentColor += texture2D(u_textureSampler, v_texCoordM2) * 0.12;
     fragmentColor += texture2D(u_textureSampler, v_texCoordM3) * 0.09;
     fragmentColor += texture2D(u_textureSampler, v_texCoordM4) * 0.05;
     fragmentColor += texture2D(u_textureSampler, v_texCoord1) * 0.15;
     fragmentColor += texture2D(u_textureSampler, v_texCoord2) * 0.12;
     fragmentColor += texture2D(u_textureSampler, v_texCoord3) * 0.09;
     fragmentColor += texture2D(u_textureSampler, v_texCoord4) * 0.05;

     gl_FragColor = vec4(fragmentColor.rgb, fragmentColor.a * v_decay);
 }

Of course the trick is in varying the decay amount per vertex based on a slight offset in time.

If you want to do the same with a sprite, you're going to do something very similar, except that the difference in decay between the vertices would have to be tuned to look right, as there are only 4 vertices.

SORRY - EDIT

Sorry... The above shader blurs the incoming texture. It doesn't necessarily blur the color of the line being drawn. This might or might not be what you want to do. But again, without knowing more of what you are actually trying to accomplish, it's difficult to give you a perfect answer. I get the feeling you'd rather do this on a sprite anyway than on a line-vertex-based object. So no, you can't copy and paste this shader into your code as is. But it shows the concept of how you'd do what you're looking to do, especially if you're doing it on a texture instead of on a vertex-based line.

Also, the above shader isn't complete. For example, it doesn't expand the geometry to allow the blur to get beyond the bounds of the texture, and it grabs texture info from outside the area where the sprite sits in the sprite sheet. To fix this you'd have to start with a bounding box larger than the sprite and shrink the sprite in the vertex shader to the right size, and you'd have to avoid grabbing texels from the sprite sheet beyond the bounds of the sprite. There are ways of doing this without having to include a bunch of white space around the sprite in the sprite sheet.

Update

On second look, it might be particle based. If it is, they again have all the vertices, but as particle locations. I sort of prefer the idea that it's line segments because I don't see any gaps. So if it is particles, there are a lot of them and they're tightly placed. The particles are still decaying, cascading from the top petal around to the last. Even if it's line segments, you could treat the vertices as particles to apply the wind and gravity.

As for how the smoke effect works, check out this helper app by 71 squared: https://71squared.com/particledesigner

The way it works is that you buy the Mac app to design and save your particle. Then you go to their GitHub and get the iOS code. But this particular code creates a particle emitter; doing a shape out of particles would be different code. The evolution of the particles, however, is the same.

OpenGL ES suggests that your target platform may not have the computational power to do a real smoke simulation (and if it does, it would consume quite a bit of power, which is undesirable on a device like a phone).

However, your target device will definitely have the power to create a fake texture-space effect which looks good enough to be convincing.

First look at the animation you posted. The flower is blurring and fading, there is a sideways motion to the left ("wind") and an upwards motion of the smoke. Thus, what is primarily needed is ping-ponging between two textures, sampling each fragment at the fragment's location offset by a vector pointing downwards and to the right (you only have gather available, not scatter).
There is no texelFetchOffset or similar function in ES 2.0, so you'll have to use plain old texture2D and do the vector add yourself, but that shouldn't be a lot of trouble. Note that since you need to use texture2D anyway, you won't need to worry about gl_FragCoord either. Have the interpolator give you the correct texture coordinate (simply set the texcoord of the quad's vertices to 0 on one end and to 1 on the other).
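
A minimal vertex shader for that full-screen quad might look like this (attribute and varying names are assumptions); the interpolated texcoord then does all the per-fragment addressing, so gl_FragCoord isn't needed:

 attribute vec2 a_position;   //quad corners in clip space, -1..1
 attribute vec2 a_texCoord;   //0 at one edge of the quad, 1 at the other

 varying vec2 v_texCoord;

 void main()
 {
     v_texCoord = a_texCoord;
     gl_Position = vec4(a_position, 0.0, 1.0);
 }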

To get the blur effect, randomize the offset vector (e.g. by adding another random vector with a much smaller magnitude, so the "overall direction" remains the same). To get the fade effect, either multiply alpha by an attenuation factor (such as 0.95) or do the same with the color (which will give you "black" rather than "transparent", but depending on whether or not you want premultiplied alpha, that may be the correct thing).
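
Putting those pieces together, a rough ES 2.0 fragment shader for one ping-pong pass could look like the following (the uniform names, the hash-based jitter and all of the constants are assumptions to be tuned):

 precision mediump float;

 uniform sampler2D u_previousFrame;  //the texture rendered by the previous pass
 uniform float u_time;

 varying vec2 v_texCoord;

 //cheap pseudo-random 2D vector in roughly [-0.5, 0.5], standing in for real noise
 vec2 rand2(vec2 p)
 {
     return fract(sin(vec2(dot(p, vec2(127.1, 311.7)),
                           dot(p, vec2(269.5, 183.3)))) * 43758.5453) - 0.5;
 }

 void main()
 {
     //gather from below and to the right so the smoke drifts up and to the left
     vec2 offset = vec2(0.003, -0.004);
     //a small random perturbation blurs the result while keeping the overall direction
     offset += rand2(v_texCoord + u_time) * 0.002;

     vec4 prev = texture2D(u_previousFrame, v_texCoord + offset);
     //fade by attenuating alpha each frame (attenuate the colour instead for premultiplied alpha)
     gl_FragColor = vec4(prev.rgb, prev.a * 0.95);
 }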

Alternatively, you could implement the blur and fade effect by generating mipmaps first (gradually faded towards transparent), and using the optional bias value in texture2D, slightly increasing the bias as time progresses. That will be lower quality (possibly with visible box artefacts), but it allows you to preprocess much of the calculation ahead of time and has a much more cache-friendly access pattern.
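
That variant only needs a couple of lines in the fragment shader (the uniform names and the bias scale are assumptions):

 //sample with a mipmap bias that grows over time, so progressively blurrier,
 //pre-faded mip levels are used and the smoke dissolves away
 gl_FragColor = texture2D(u_smokeTexture, v_texCoord, u_time * 0.5);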
