
Surface shader vertex unwrap

Discussion in 'Shaders' started by fra3point, Nov 20, 2019.

  1. fra3point

     Joined: Aug 20, 2012
     Posts: 269
    Hello, I'm working on an unwrap surface shader but I'm confused about how the vertex shader modifier works.

    I know how to create a basic unwrap shader which displays the full-screen uv map of an object:

    Code (CSharp):
    Shader "Custom/Unwrap" {
        SubShader {
            Pass {
                Lighting Off
                Cull Off

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct v2f {
                    float4 pos : SV_POSITION;
                };

                struct appdata {
                    float2 uv : TEXCOORD0;
                };

                v2f vert(appdata v) {
                    v2f o;
                    o.pos = float4(v.uv.x * 2.0 - 1.0, v.uv.y * 2.0 - 1.0, 1.0, 1.0);
                    return o;
                }

                fixed4 frag(v2f i) : SV_Target {
                    return float4(0.4, 0.5, 1, 1);
                }
                ENDCG
            }
        }
    }

    [Attached image: upload_2019-11-20_18-27-30.png]

    And I'm trying to use this technique with surface shaders:

    Code (CSharp):
    Shader "Custom/UnwrapSurf"
    {
        Properties
        {
            _Color ("Color", Color) = (1,1,1,1)
            _MainTex ("Albedo (RGB)", 2D) = "white" {}
            _BumpMap ("Normals", 2D) = "white" {}
            _Glossiness ("Smoothness", Range(0,1)) = 0.5
            _Metallic ("Metallic", Range(0,1)) = 0.0
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 200

            CGPROGRAM
            #pragma surface surf Standard fullforwardshadows vertex:vert
            #pragma target 3.0

            sampler2D _MainTex;
            sampler2D _BumpMap;

            struct Input
            {
                float2 uv_MainTex;
                float2 uv_BumpMap;
            };

            half _Glossiness;
            half _Metallic;
            fixed4 _Color;

            void vert (inout appdata_full v)
            {
                v.vertex.xyz = float3(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0);
            }

            void surf (Input IN, inout SurfaceOutputStandard o)
            {
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
                o.Albedo = c.rgb;
                o.Metallic = _Metallic;
                o.Smoothness = _Glossiness;
                o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }

    [Attached image: upload_2019-11-20_18-58-18.png]

    As you can see, the vertex function didn't work as expected. In fact, the unwrapped positions still seem to be treated as model space coordinates rather than ending up laid out in UV/screen space. There must be something wrong in the vert function:

    Code (CSharp):
    void vert (inout appdata_full v)
    {
        v.vertex.xyz = float3(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0);
    }


    Does anyone know how to solve this?
     
  2. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    In a vertex fragment shader, the vertex function's output is the clip space position, which is what the GPU uses to determine the screen space position. Normally you'd take the object space vertex positions and convert them into clip space using the UnityObjectToClipPos() function, whereas the trick here is that you're converting the UVs directly into a clip space position.

    In a Surface Shader, the vertex function just lets you modify the object space vertex data. The generated code's next step after your custom vertex function is to immediately apply UnityObjectToClipPos() to the v.vertex value. To do what you're looking to do with a Surface Shader, you'd have to calculate, in object space, a position that ends up at the clip space position you want. Unfortunately that's easier said than done, as Unity does not provide the necessary matrices to transform from clip space back to object space. There's no easy solution to this apart from calculating the inverse projection matrix in the shader manually (which there are no built-in functions for), or doing it in C# and passing the matrix to the shader.
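
    To make the ordering concrete, the vertex program the Surface Shader compiler emits looks roughly like this (a simplified sketch, not the exact generated source; the struct and field names just follow the generated code's conventions):
    Code (csharp):
    v2f_surf vert_surf (appdata_full v)
    {
        v2f_surf o;
        UNITY_INITIALIZE_OUTPUT(v2f_surf, o);
        vert (v);                                // the custom vertex modifier runs first...
        o.pos = UnityObjectToClipPos(v.vertex);  // ...then the (modified) object space position goes straight to clip space
        // ...UVs, normals, and lighting data get packed after this
        return o;
    }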
     
    fra3point likes this.
  3. fra3point

     Joined: Aug 20, 2012
     Posts: 269
    Hello,

    thanks for your answer. I'm interested in the "inverse projection matrix" solution, it's fine for me to compute it in a C# script and pass it to the shader.
    If I understand what you're saying, in my custom vertex function I have to "prepare" the v.vertex value so that when the shader applies UnityObjectToClipPos() to it, the resulting value is the clip space position. Am I correct?

    I'm still not sure about a couple of things:

    1) Is it Camera.projectionMatrix that needs to be inverted, or is it something else?
    2) How do I use the inverse projection matrix to calculate the clip space position in object space? Can I simply do something like this?

    Code (CSharp):
    v.vertex = mul(INV_PROJ_MATRIX, v.vertex);
    Thanks,
    Francesco
     
  4. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    Yes. You have to apply the inverse matrix operations to basically "undo" what UnityObjectToClipPos() is going to do to it, so the resulting position is the one you already calculated. Really you want to calculate the inverse of the UNITY_MATRIX_VP (view projection) matrix so you can apply exactly the inverse operations, since that function does this:
    Code (csharp):
    inline float4 UnityObjectToClipPos(in float3 pos)
    {
        // More efficient than computing M*VP matrix product
        return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
    }
    Note that the projection matrix actually used on the GPU isn't necessarily the same as Camera.projectionMatrix, which is why the code below goes through GL.GetGPUProjectionMatrix(). You also need the world to camera (view) matrix, which you can get from Camera.worldToCameraMatrix.

    So in C# you need to do this:
    Code (csharp):
    // get GPU projection matrix
    Matrix4x4 projMatrix = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);

    // get GPU view projection matrix
    Matrix4x4 viewProjMatrix = projMatrix * cam.worldToCameraMatrix;

    // get inverse VP matrix
    Matrix4x4 inverseViewProjMatrix = viewProjMatrix.inverse;
    You can then set that matrix on the material directly, or as a shader global.
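
    For example, a small script along these lines would keep the matrix updated every frame (the class name and the _inverseViewProjMatrix property name here are just illustrative; the shader reads whatever name you declare):
    Code (csharp):
    using UnityEngine;

    [ExecuteInEditMode]
    public class SetInverseViewProjMatrix : MonoBehaviour
    {
        public Camera cam;

        void LateUpdate()
        {
            if (cam == null) return;

            // GPU view projection matrix, then inverted, exactly as above
            Matrix4x4 projMatrix = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
            Matrix4x4 inverseViewProjMatrix = (projMatrix * cam.worldToCameraMatrix).inverse;

            // set it as a global so any material can read it
            Shader.SetGlobalMatrix("_inverseViewProjMatrix", inverseViewProjMatrix);
        }
    }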

    In the Surface Shader you'll then want to do this:
    Code (csharp):
    float4 clipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0, 1.0);
    v.vertex = mul(unity_WorldToObject, mul(_inverseViewProjMatrix, clipPos));
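
    Putting it together, the vertex modifier in the Surface Shader above ends up roughly like this (a sketch; the float4x4 _inverseViewProjMatrix has to be declared inside the CGPROGRAM block so it picks up the value set from C#):
    Code (csharp):
    float4x4 _inverseViewProjMatrix;

    void vert (inout appdata_full v)
    {
        // the clip space position we actually want, built from the mesh UVs
        float4 clipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0, 1.0);

        // run it backwards through the inverse VP and world-to-object matrices,
        // so the UnityObjectToClipPos() applied by the generated code lands on clipPos again
        v.vertex = mul(unity_WorldToObject, mul(_inverseViewProjMatrix, clipPos));
    }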
     
    fra3point likes this.
  5. fra3point

     Joined: Aug 20, 2012
     Posts: 269
    This solution works perfectly! Thank you so much for these tips, your help saved my night! :)
     
  6. fra3point

     Joined: Aug 20, 2012
     Posts: 269
    @bgolus

    As you can see in the following image, the shader outputs vertices in the correct screen position only when the camera projection is set to orthographic. It isn't a problem for me to use orthographic projection, but I'm wondering why it doesn't work with perspective projection.

    [Attached image: upload_2019-11-21_18-47-31.png]

    However, the real problem is that no lights are rendered on the model's surface when unwrapped (and I actually need them). I suppose lighting is computed after the vertices have already been modified.

    Is there a Surface Shader-based solution or a workaround for this?
    Even thinking about a fragment/vertex solution, the vertex shader will always change the vertex positions before the fragment shader runs, and this would mess up the per-pixel lighting calculations... The only thing I can imagine to solve this is to compute both the world space and clip space positions in the vertex shader (in the classic way), then unwrap the vertices and finally compute the lighting using the world space positions.

    Would this be a good/correct approach?

    Thanks,
    Francesco
     
    Last edited: Nov 21, 2019
  7. bgolus

     Joined: Dec 7, 2012
     Posts: 12,352
    Yeah, I don't think you can do this in a surface shader while still retaining the proper world position data. Since you can only modify the object space vertex positions, and that same data is used to derive the world space position, you're dead in the water.

    You'll have to modify the generated shader code directly rather than trying to stay within a Surface Shader.
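
    In a hand-written vertex fragment shader, the idea from the previous post (keep the real world space data for lighting, but output the unwrapped position to the rasterizer) would look roughly like this sketch; the fragment below only does a simple Lambert term from _WorldSpaceLightPos0 as an illustration, real lighting would be more involved:
    Code (csharp):
    struct v2f {
        float4 pos : SV_POSITION;      // unwrapped clip space position
        float3 worldPos : TEXCOORD0;   // real world space position, kept for lighting
        float3 worldNormal : TEXCOORD1;
    };

    v2f vert (appdata_full v)
    {
        v2f o;
        // compute the world space data from the original vertex, the classic way
        o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        // but rasterize the triangle at its UV layout position
        o.pos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 0.5, 1.0);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // simple per-pixel Lambert using the interpolated world space data,
        // just to show the lighting can still come from the original surface
        float3 lightDir = normalize(_WorldSpaceLightPos0.xyz);
        float ndotl = saturate(dot(normalize(i.worldNormal), lightDir));
        return fixed4(ndotl.xxx, 1);
    }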

    As for why it only works with an orthographic camera, I'm not totally sure. It might be some floating point math problem causing the object to get clipped... you could try:
    float4 clipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 0.5, 1.0);
     
    fra3point likes this.