[Question] Why can a position be assigned to a TEXCOORD?

Discussion in 'Shaders' started by GiangPuzzle, Oct 10, 2021.

  1. GiangPuzzle

     Joined: Mar 11, 2021
     Posts: 9
    I am learning to write an unlit shader. I have this code:

    Code (CSharp):

        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            float4 position : TEXCOORD1;
            float4 vertex : SV_POSITION;
        };

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = v.uv;
            o.position = v.vertex; // local-space position passed through TEXCOORD1
            return o;
        }

        fixed4 frag (v2f i) : SV_Target
        {
            // 1.0 outside the 0.25 radius, 0.0 inside it
            float inCircle = step(0.25, length(i.position.xy));
            // note: must be half4(1, 1, 1, 1); a bare (1, 1, 1, 1) is the
            // comma operator and evaluates to a scalar
            half4 col = half4(1, 1, 1, 1) * inCircle;
            return col;
        }
        ENDCG
    Why can o.position, which is TEXCOORD1, be assigned from v.vertex, which is POSITION?
    I thought the vertex data includes the UVs, right?
     
  2. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    TEXCOORD# semantics are used to pass arbitrary data between the vertex and fragment shaders. They're named "texcoord" because when the semantics were first thought up, no one imagined anything but texture coordinates (aka UVs) and colors being passed from the vertex shader to the fragment shader. But there's nothing actually special about the TEXCOORD# semantics, unlike SV_POSITION, so they can be used to pass any kind of data you want.

    And v.vertex has the local-space vertex position data in it.
     
  3. GiangPuzzle

     Joined: Mar 11, 2021
     Posts: 9
    Thanks bgolus, but I applied this shader to a Quad, so it has 4 vertices: (0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0). So why does o.position have many more values than 4?
     
  4. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    Due to how GPUs use matrix transforms, the vertex position is a float4 with a w of 1.0. When you're using a 3D scale, rotation, and translation transform, you need a 4x4 matrix, and you need to multiply it by a float4 with a w of 1.0 to have it apply the translation; a w of 0.0 only applies the scale and rotation. For legacy reasons, from when GPUs didn't use "shaders" as we think of them today, that means the vertex data needs to have the position stored as a float4 with the w of 1.0 already set. Today there's no real reason for it, as the shader can override the w value to be whatever is needed.
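    The w = 1.0 vs. w = 0.0 distinction can be sketched outside a shader entirely. This is a plain-Python illustration (the translation offset (2, 3, 4) is an arbitrary example, not from the thread): multiplying the same 4x4 translation matrix by a vector with w = 1.0 picks up the translation, while w = 0.0 ignores it.

    ```python
    def transform(matrix, vec4):
        """Multiply a 4x4 matrix (row-major list of rows) by a 4-component vector."""
        return [sum(row[i] * vec4[i] for i in range(4)) for row in matrix]

    # Hypothetical 4x4 matrix translating by (2, 3, 4); last column holds the offset.
    translate = [
        [1.0, 0.0, 0.0, 2.0],
        [0.0, 1.0, 0.0, 3.0],
        [0.0, 0.0, 1.0, 4.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

    point     = [0.5, -0.5, 0.0, 1.0]  # w = 1.0: a position
    direction = [0.5, -0.5, 0.0, 0.0]  # w = 0.0: a direction

    print(transform(translate, point))      # [2.5, 2.5, 4.0, 1.0] -- translated
    print(transform(translate, direction))  # [0.5, -0.5, 0.0, 0.0] -- unchanged
    ```

    The offset lands in the result only when it gets multiplied by a non-zero w, which is exactly why vertex positions carry w = 1.0.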

    There's no need for it to be a float4 when being passed to the fragment shader, since only the x and y values are used here.

    Also, the default quad's vertex positions are (-0.5, -0.5, 0.0, 1.0), (-0.5, 0.5, 0.0, 1.0), (0.5, 0.5, 0.0, 1.0), (0.5, -0.5, 0.0, 1.0), which is why the fragment shader draws a circle and not a quarter slice of a circle.
     
  5. GiangPuzzle

     Joined: Mar 11, 2021
     Posts: 9
    I'm sorry, but I don't understand. If o.position = v.vertex, then o.position should only have 4 values: (-0.5, -0.5, 0.0, 1.0), (-0.5, 0.5, 0.0, 1.0), (0.5, 0.5, 0.0, 1.0), (0.5, -0.5, 0.0, 1.0).

    In this line: float inCircle = step(0.25, length(i.position.xy)); if there are only 4 positions, the shader should only draw 4 vertices, but it draws a circle, as if i.position.xy takes a different value for every pixel, like the UVs do.
     
  6. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    Unity's default quad mesh has 4 vertices. The v.vertex values for those 4 vertices are as I stated above, and the v.uv values are (0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0). Those are the values the vertex shader sees as inputs and passes on to the fragment shader.

    The fragment shader never sees those exact values. Instead it sees interpolated values depending on where within each triangle it runs. Using the barycentric coordinate (proportionally, how close that position on the triangle is to each of the 3 vertices that make up that triangle), the fragment shader gets a blend of the 3 values output by the vertex shader. It doesn't matter if the values come from UVs, or the vertex position, or the vertex color, or are calculated by the vertex shader some other way. They're all treated the same way ... they get interpolated and passed on to the fragment shader.
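    That blending can be sketched in plain Python (a hand-rolled sketch of what the rasterizer's interpolator does, not real GPU code). The vertex positions and UVs below are one triangle of the default quad, as given earlier in the thread; the fragment sampled here sits at the triangle's centroid, so each vertex gets equal weight.

    ```python
    def interpolate(values, bary):
        """Blend three per-vertex tuples by barycentric weights (w0, w1, w2)."""
        return tuple(
            sum(w * v[i] for w, v in zip(bary, values))
            for i in range(len(values[0]))
        )

    # One triangle of Unity's default quad: local positions and UVs per vertex.
    positions = [(-0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]
    uvs       = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

    # A fragment at the centroid: equal weight to each of the 3 vertices.
    bary = (1 / 3, 1 / 3, 1 / 3)

    print(interpolate(positions, bary))  # interpolated local position
    print(interpolate(uvs, bary))        # interpolated UV, blended the same way
    ```

    Positions and UVs go through the identical weighted sum, which is why a position passed through TEXCOORD1 varies smoothly per pixel just like a UV does.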

    The only thing this isn't true for is SV_Position, which is the clip space (aka projective screen space) position of those 4 vertices. GPUs don't draw quads, only triangles, so the quad mesh is two triangles. Normally you'd be drawing a square on screen, but instead you're getting the local position and clipping the area outside of a radius from the center of the quad.
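    The per-pixel math from the original fragment shader can be mirrored on the CPU (a plain-Python sketch of the thread's frag function, sampling two points instead of every pixel):

    ```python
    def step(edge, x):
        """HLSL-style step: 1.0 when x >= edge, else 0.0."""
        return 1.0 if x >= edge else 0.0

    def frag(x, y):
        """Mirror of the thread's fragment shader, taking the interpolated
        local position: 1.0 (white) outside radius 0.25, 0.0 (black) inside."""
        length_xy = (x * x + y * y) ** 0.5
        return step(0.25, length_xy)

    print(frag(0.0, 0.0))  # quad center: inside the radius  -> 0.0
    print(frag(0.4, 0.4))  # near a corner: outside the radius -> 1.0
    ```

    Because the interpolated local position sweeps continuously across the quad's surface, this threshold traces out a circle rather than just marking 4 points.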
     
  7. GiangPuzzle

     Joined: Mar 11, 2021
     Posts: 9
    Thank you very much, I understand it now.