
Why does HLSL have semantics?

In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?

Example: GLSL

vertex shader

varying vec4 foo;
varying vec4 bar;

void main() {
  ...
  foo = ...
  bar = ...
}

fragment shader

varying vec4 foo;
varying vec4 bar;

void main() {
  gl_FragColor = foo * bar;
}

Example: HLSL

vertex shader

struct VS_OUTPUT
{
   float4  foo : TEXCOORD3;
   float4  bar : COLOR2;
};

VS_OUTPUT whatever()
{
  VS_OUTPUT output;  // "out" is a reserved keyword in HLSL

  output.foo = ...
  output.bar = ...

  return output;
}

pixel shader

void main(float4 foo : TEXCOORD3,
          float4 bar : COLOR2) : COLOR
{
    return foo * bar;
}

I see how foo and bar in VS_OUTPUT get connected to foo and bar in main in the fragment shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?

Is there some more concrete advantage to manually specifying semantics, or is it just left over from the assembly-language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?

I get that semantics can imply meaning; there's no meaning to foo or bar. But they can also obscure meaning just as well if foo is not a texture coordinate and bar is not a color. We don't put semantics on C#, C++, or JavaScript variables, so why are they needed in HLSL?

Put simply, old GLSL used varying variables matched by name between stages (note that varying is now deprecated).

An obvious benefit of semantics: you don't need the same variable names between stages. The DirectX pipeline does its matching via semantics instead of variable names, and rearranges the data as long as you have a compatible layout.

If you rename foo to foo2, you would need to replace that name in potentially all of your shaders (and subsequent ones). With semantics you don't need to.

Also, since you don't need an exact match, semantics allow an easier separation between shader stages.

For example:

you can have a vertex shader like this:

struct vsInput
{
    float4 PosO  : POSITION;
    float3 Norm  : NORMAL;
    float4 TexCd : TEXCOORD0;
};

struct vsOut
{
    float4 PosWVP   : SV_POSITION;
    float4 TexCd    : TEXCOORD0;
    float3 NormView : NORMAL;
};

vsOut VS(vsInput input)
{
    // Do your processing here
}

And a pixel shader like this:

struct psInput
{
    float4 PosWVP: SV_POSITION;
    float4 TexCd: TEXCOORD0;
};

Since the vertex shader output provides all the inputs that the pixel shader needs, this is perfectly valid. The normals will simply be ignored and not provided to your pixel shader.

But you can then swap in a new pixel shader that does need normals, without needing another vertex shader implementation. You can also swap only the pixel shader, hence saving some API calls (the Separate Shader Objects extension is the OpenGL equivalent).
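As a sketch of that point, here is a hypothetical second pixel shader (PS_Lit is an assumed name, and the light direction is hard-coded purely for illustration) that consumes the NORMAL output the first pixel shader ignored. The same vertex shader above feeds both pixel shaders unchanged:

```hlsl
// Hypothetical pixel shader that does consume the normal.
// Matching is by semantic, so the same VS() output works for both
// this shader and the psInput version that omits NORMAL.
struct psInputLit
{
    float4 PosWVP   : SV_POSITION;
    float4 TexCd    : TEXCOORD0;
    float3 NormView : NORMAL;   // now consumed instead of ignored
};

float4 PS_Lit(psInputLit input) : SV_Target
{
    // Simple N.L diffuse term with an arbitrary light direction
    float3 lightDir = normalize(float3(0.5, 1.0, 0.25));
    float  diffuse  = saturate(dot(normalize(input.NormView), lightDir));
    return float4(diffuse.xxx, 1.0);
}
```

Because matching is done by semantic rather than by name or declaration order, swapping between the two pixel shaders requires no change to the vertex shader.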

So in some ways semantics give you a way to encapsulate the inputs/outputs between your stages, and since you reference other languages, they would be roughly equivalent to using properties/setters/pointer addresses...

Speed-wise, there's no difference based on naming (you can name semantics pretty much any way you want, except for the system ones, of course). A different layout position does imply a hit, though (the pipeline will reorganize the data for you, but it will also issue a warning, at least in DirectX).

OpenGL also provides layout qualifiers, which are roughly equivalent (technically a bit different, but following more or less the same concept).
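A minimal sketch of the GLSL equivalent (assuming GLSL 4.10+ or the ARB_separate_shader_objects extension, which is what makes location-based interface matching available; the variable names here are made up):

```glsl
// Vertex shader: outputs identified by location, not by name
layout(location = 0) out vec4 vFoo;
layout(location = 1) out vec4 vBar;

// Fragment shader: names may differ freely, only the locations must match
layout(location = 0) in vec4 anyNameFoo;
layout(location = 1) in vec4 anyNameBar;
```

As with HLSL semantics, renaming a variable in one stage no longer breaks the link between stages, since matching is done by location rather than by name.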

Hope that helps.

As Catflier has mentioned above, using semantics can help decouple the shaders.

However, there are some downsides to semantics that I can think of:

  1. The number of semantics is limited by the DirectX version you are using, so you might run out.

  2. Semantic names can be a little misleading. You will see this kind of variable a lot (look at the posWorld variable):

    struct v2f
    {
        float4 position : POSITION;
        float4 posWorld : TEXCOORD0;
    };

You can see that the posWorld variable and the TEXCOORD0 semantic are not related at all, but that is fine, because we do not need to pass a texture coordinate through TEXCOORD0. However, this is likely to cause some confusion for people who have just picked up the shading language.

I much prefer writing : NAME after a variable to writing layout(location = N) before it. Also, you don't need to update the HLSL shader if you change the order of the vertex inputs, unlike in Vulkan.
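To sketch that last point (reusing the vsInput struct from the first answer): in Direct3D the input assembler matches vertex buffer elements to struct fields by semantic name, so reordering the members is harmless, whereas in Vulkan GLSL the explicit location numbers would have to be kept in sync with the pipeline's vertex input description.

```hlsl
// Reordered version of vsInput: matching is by semantic name, so each
// field still receives the same vertex attribute as before -- no shader
// or input-layout update is required.
struct vsInput
{
    float3 Norm : NORMAL;      // moved first; still receives the normal
    float4 PosO : POSITION;    // still receives the position
};
```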
