
Convert ShaderToy to fragment shader

I came across several shaders on ShaderToy and have not had success converting them into a format that can be used on a mobile device, such as a .fsh file.

I have this shader, and I want to be able to use it on a mobile device.

I know that I need to make modifications to the iXXXX variables and change mainImage to main().

Does anyone know how I can do this? I am unable to find any resources on how to do this, and have never encountered it myself.

float noise(vec2 p)
{
    float sample = texture2D(iChannel1,vec2(1.,2.*cos(iGlobalTime))*iGlobalTime*8. + p*1.).x;
    sample *= sample;
    return sample;
}

float onOff(float a, float b, float c)
{
    return step(c, sin(iGlobalTime + a*cos(iGlobalTime*b)));
}

float ramp(float y, float start, float end)
{
    float inside = step(start,y) - step(end,y);
    float fact = (y-start)/(end-start)*inside;
    return (1.-fact) * inside;

}

float stripes(vec2 uv)
{

    float noi = noise(uv*vec2(0.5,1.) + vec2(1.,3.));
    return ramp(mod(uv.y*4. + iGlobalTime/2.+sin(iGlobalTime + sin(iGlobalTime*0.63)),1.),0.5,0.6)*noi;
}

vec3 getVideo(vec2 uv)
{
    vec2 look = uv;
    float window = 1./(1.+20.*(look.y-mod(iGlobalTime/4.,1.))*(look.y-mod(iGlobalTime/4.,1.)));
    look.x = look.x + sin(look.y*10. + iGlobalTime)/50.*onOff(4.,4.,.3)*(1.+cos(iGlobalTime*80.))*window;
    float vShift = 0.4*onOff(2.,3.,.9)*(sin(iGlobalTime)*sin(iGlobalTime*20.) + 
                                         (0.5 + 0.1*sin(iGlobalTime*200.)*cos(iGlobalTime)));
    look.y = mod(look.y + vShift, 1.);
    vec3 video = vec3(texture2D(iChannel0,look));
    return video;
}

vec2 screenDistort(vec2 uv)
{
    uv -= vec2(.5,.5);
    uv = uv*1.2*(1./1.2+2.*uv.x*uv.x*uv.y*uv.y);
    uv += vec2(.5,.5);
    return uv;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    uv = screenDistort(uv);
    vec3 video = getVideo(uv);
    float vigAmt = 3.+.3*sin(iGlobalTime + 5.*cos(iGlobalTime*5.));
    float vignette = (1.-vigAmt*(uv.y-.5)*(uv.y-.5))*(1.-vigAmt*(uv.x-.5)*(uv.x-.5));

    video += stripes(uv);
    video += noise(uv*2.)/2.;
    video *= vignette;
    video *= (12.+mod(uv.y*30.+iGlobalTime,1.))/13.;

    fragColor = vec4(video,1.0);
}
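Before porting the shader, it helps to check what the helper math actually does. The following Python sketch (an illustration only, not shader code) mirrors the `ramp`, `screenDistort`, and vignette logic from the listing above and evaluates a few values:

```python
def step(edge, x):
    # GLSL step(): 0.0 when x < edge, else 1.0
    return 1.0 if x >= edge else 0.0

def ramp(y, start, end):
    # 1 at 'start', fading linearly to 0 at 'end', 0 outside the band
    inside = step(start, y) - step(end, y)
    fact = (y - start) / (end - start) * inside
    return (1.0 - fact) * inside

def screen_distort(u, v):
    # CRT-style barrel bulge: the center stays put, corners push outward
    u, v = u - 0.5, v - 0.5
    factor = 1.2 * (1.0 / 1.2 + 2.0 * u * u * v * v)
    return u * factor + 0.5, v * factor + 0.5

def vignette(x, y, vig_amt=3.0):
    # quadratic darkening away from the screen center; vigAmt is held at
    # its base value here (the shader wobbles it slightly over time)
    return (1.0 - vig_amt * (y - 0.5) ** 2) * (1.0 - vig_amt * (x - 0.5) ** 2)

print(ramp(0.5, 0.5, 0.6))       # 1.0 - start of the band
print(ramp(0.7, 0.5, 0.6))       # 0.0 - outside the band
print(screen_distort(0.5, 0.5))  # (0.5, 0.5) - the center is a fixed point
print(vignette(0.0, 0.0))        # 0.0625 - corners are heavily darkened
```

The corners of the unit square get pushed outside [0, 1] by `screen_distort`, which is what produces the black curved border of the fake CRT.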

I've written main() and included the SpriteKit equivalents of the ShaderToy variables at the bottom of my answer.

Setup

To apply a shader to your node, you need to tell SpriteKit to attach the shader code in the .fsh file to the SKSpriteNode.

  1. Create an empty text file ending in .fsh for the shader code.

The swizzle

shader1.fsh

void main() {

    vec4 val = texture2D(_texture, v_tex_coord);
    vec4 grad = texture2D(u_gradient, v_tex_coord);

    if (val.a < 0.1 && grad.r < 1.0 && grad.a > 0.8) {
        vec2 uv = gl_FragCoord.xy / u_sprite_size.xy;
        uv = screenDistort(uv);
        vec3 video = getVideo(uv);
        float vigAmt = 3.+.3*sin(u_time + 5.*cos(u_time*5.));
        float vignette = (1.-vigAmt*(uv.y-.5)*(uv.y-.5))*(1.-vigAmt*(uv.x-.5)*(uv.x-.5));

        video += stripes(uv);
        video += noise(uv*2.)/2.;
        video *= vignette;
        video *= (12.+mod(uv.y*30.+u_time,1.))/13.;

        gl_FragColor = vec4(video,1.0);
    } else {
        gl_FragColor = val;
    }

} // end of main()
  2. Next, attach the shader in SpriteKit.

shader1.swift

let sprite = self.childNode(withName: "targetSprite") as! SKSpriteNode
let shader = SKShader(fileNamed: "shader1.fsh")
sprite.shader = shader

Explanation

  • The shader turns every pixel the color of the effect (screenDistort(uv)).
  • main() is the entry point.
  • gl_FragColor is the return value.
  • This code is executed for each pixel of the image.
  • When the code executes, it tells each pixel that its color should be the color of the effect. The vec4() call holds the r, g, b, a values.
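The per-pixel model in the bullets above can be mimicked outside of GLSL: conceptually, the "shader" is just a function the GPU calls once per pixel, returning r, g, b, a. A toy Python sketch of that model (hypothetical, for illustration only):

```python
def shade(x, y, width, height):
    # toy "fragment shader": a horizontal gradient, returned as (r, g, b, a)
    u = x / width
    return (u, u, u, 1.0)

# the "GPU" loop: run the shader once per pixel of a 4x1 image
width, height = 4, 1
image = [shade(x, y, width, height) for y in range(height) for x in range(width)]
print(image[0])   # leftmost pixel: (0.0, 0.0, 0.0, 1.0) - black
print(image[-1])  # rightmost pixel: (0.75, 0.75, 0.75, 1.0) - near white
```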

ShaderToy variable names -> SpriteKit variable names

iGlobalTime -> u_time

iResolution -> u_sprite_size

fragCoord.xy -> gl_FragCoord.xy

iChannelX -> SKUniform named "iChannelX" containing an SKTexture

fragColor -> gl_FragColor

Since you have the SpriteKit equivalent variables, you can now easily convert the remaining functions above main().

float noise {}

float onOff {}

float ramp {}

float stripes {}

vec3 getVideo {}

vec2 screenDistort {}
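Because the remaining conversion is mostly mechanical renaming, it can even be scripted. The sketch below is a hypothetical Python helper (not part of any official toolchain) that applies the variable mapping above; note that iChannelX textures cannot be handled by renaming, since they must be wired up from the Swift side as SKUniforms:

```python
import re

# ShaderToy name -> SpriteKit name; iChannelX is deliberately left out,
# since textures must be attached from Swift as SKUniforms instead
RENAMES = {
    "iGlobalTime": "u_time",
    "iResolution": "u_sprite_size",
    "fragCoord.xy": "gl_FragCoord.xy",
    "fragColor": "gl_FragColor",
}

def convert(source):
    for old, new in RENAMES.items():
        source = source.replace(old, new)
    # the entry point changes from mainImage(out vec4, in vec2) to main()
    source = re.sub(r"void\s+mainImage\s*\([^)]*\)", "void main()", source)
    return source

print(convert("void mainImage( out vec4 fragColor, in vec2 fragCoord )"))
# -> void main()
```

This is only a rough text substitution; bare uses of fragCoord (without .xy) and any iChannel lookups still need hand editing.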

Theory

Q. Why does main() contain texture2D and u_gradient, v_tex_coord?

A. SpriteKit uses textures and uv coordinates.

UV mapping

UV mapping is the 3D modeling process of projecting a 2D image onto a 3D model's surface for texture mapping.

UV coordinates

When texturing a mesh, you need a way to tell OpenGL which part of the image should be used for each triangle. This is done with UV coordinates. Each vertex can have, in addition to its position, a pair of floats, U and V. These coordinates are used to access and distort the texture.
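The uv normalization at the top of mainImage is exactly this idea: dividing the pixel coordinate by the resolution maps every pixel into the unit UV square, regardless of screen size. In Python terms:

```python
def to_uv(frag_x, frag_y, res_w, res_h):
    # pixel coordinates -> normalized UV coordinates in [0, 1]
    return frag_x / res_w, frag_y / res_h

print(to_uv(960, 540, 1920, 1080))  # (0.5, 0.5) - the center pixel
```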

SKShader Class Reference

OpenGL ES for iOS

Best Practices for Shaders

WWDC Session 606 - What's New in SpriteKit - Shaders, Lighting, Shadows

This works for me in the Unity3D engine.

// Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'


Shader"ShaderMan/Clip"{
Properties{
_MainTex("MainTex", 2D) = "white"{}
_SecondTex("_SecondTex",2D) = "white"{}
}
SubShader{
Pass{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
struct appdata{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
uniform sampler2D  _MainTex;
uniform fixed4     fragColor;
uniform fixed      iChannelTime[4];// channel playback time (in seconds)
uniform fixed3     iChannelResolution[4];// channel resolution (in pixels)
uniform fixed4     iMouse;// mouse pixel coords. xy: current (if MLB down), zw: click
uniform fixed4     iDate;// (year, month, day, time in seconds)
uniform fixed      iSampleRate;// sound sample rate (i.e., 44100)
sampler2D _SecondTex;

struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
float4 screenCoord : TEXCOORD1;
};

v2f vert(appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv;
o.screenCoord.xy = ComputeScreenPos(o.vertex);
return o;
}
fixed noise(fixed2 p)
{
    fixed sample = tex2D(_SecondTex,fixed2(1.,2.*cos(_Time.y))*_Time.y*8. + p*1.).x;
    sample  = mul(    sample ,sample);
    return sample;
}

fixed onOff(fixed a, fixed b, fixed c)
{
    return step(c, sin(_Time.y + a*cos(_Time.y*b)));
}

fixed ramp(fixed y, fixed start, fixed end)
{
    fixed inside = step(start,y) - step(end,y);
    fixed fact = (y-start)/(end-start)*inside;
    return (1.-fact) * inside;

}

fixed stripes(fixed2 uv)
{

    fixed noi = noise(uv*fixed2(0.5,1.) + fixed2(1.,3.));
    return ramp(fmod(uv.y*4. + _Time.y/2.+sin(_Time.y + sin(_Time.y*0.63)),1.),0.5,0.6)*noi;
}

fixed3 getVideo(fixed2 uv)
{
    fixed2 look = uv;
    fixed window = 1./(1.+20.*(look.y-fmod(_Time.y/4.,1.))*(look.y-fmod(_Time.y/4.,1.)));
    look.x = look.x + sin(look.y*10. + _Time.y)/50.*onOff(4.,4.,.3)*(1.+cos(_Time.y*80.))*window;
    fixed vShift = 0.4*onOff(2.,3.,.9)*(sin(_Time.y)*sin(_Time.y*20.) + 
                                         (0.5 + 0.1*sin(_Time.y*200.)*cos(_Time.y)));
    look.y = fmod(look.y + vShift, 1.);
    fixed3 video = fixed3(tex2D(_MainTex,look).xyz);
    return video;
}

fixed2 screenDistort(fixed2 uv)
{
    uv -= fixed2(.5,.5);
    uv = uv*1.2*(1./1.2+2.*uv.x*uv.x*uv.y*uv.y);
    uv += fixed2(.5,.5);
    return uv;
}

fixed4 frag(v2f i) : SV_Target
{
    fixed2 uv = i.uv;
    uv = screenDistort(uv);
    fixed3 video = getVideo(uv);
    fixed vigAmt = 3.+.3*sin(_Time.y + 5.*cos(_Time.y*5.));
    fixed vignette = (1.-vigAmt*(uv.y-.5)*(uv.y-.5))*(1.-vigAmt*(uv.x-.5)*(uv.x-.5));

    video += stripes(uv);
    video += noise(uv*2.)/2.;
    video = mul(video, vignette);
    video = mul(video, (12.+fmod(uv.y*30.+_Time.y,1.))/13.);

    return fixed4(video,1.0);
}
ENDCG
}
}
}
