
Normal Map behavior in VR

Discussion in 'Shaders' started by Manello, Jun 16, 2020.

  1. Manello

    Manello

    Joined:
    Dec 14, 2014
    Posts:
    12
    After fiddling around for a while with existing outline shaders, I had to realize that many of them don't work in VR, no matter whether you use multi-pass or single-pass rendering.

    In general, my shader is supposed to display outlines, but currently I am having a problem with the _CameraDepthNormalsTexture. Depending on how I move the VR headset, it generates normal map values in some places and not in others. The problem is that the areas where no normal map values are generated (which appear blue) are within the player's vision.

    Here is a video showing the problem:


    So:
    1. Yes, I know the colors are not correct because I am drawing the vector values instead of the color map. That shouldn't matter here, as it still shows that no normal map values are available in the blue areas.
    2. Why does every pixel in the preview window have its correct normal value, while the values are horribly wrong when I actually use them?
    3. In my geometry function I am using index 0 for each vertex in a triangle, though I know I should use the correct index (0-2). But if I use any index other than 0, I get a very weird projection of the normal map on my objects.
    Code (CSharp):
    o.scrPos = IN[0].scrPos;
    Here is the full shader code. As you can see, it is currently bypassing all of its purposes and only displays colors for the normal vectors.


    Code (CSharp):
    Shader "Custom/Geometry/Wireframe"
    {
        Properties
        {
            [PowerSlider(3.0)]
            _WireframeVal("Wireframe width", Range(0., 3.0)) = 1.0
            _LineColor("Front color", color) = (1., 1., 1., 1.)
            _FaceColor("Face color", color) = (0., 0., 0., 1.)
        }
        SubShader
        {
            Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }
            Pass
            {
                Cull Back
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma geometry geom

                #include "UnityCG.cginc"

                fixed _WireframeVal;

                struct v2g {
                    float4 worldPos : SV_POSITION;
                    float4 scrPos : TEXCOORD1;
                };

                struct g2f {
                    float4 pos : SV_POSITION;
                    float4 scrPos : TEXCOORD1;
                    float3 bary : TEXCOORD0;
                };

                v2g vert(appdata_base v) {
                    v2g o;
                    o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                    o.scrPos = ComputeScreenPos(o.worldPos);
                    return o;
                }

                [maxvertexcount(3)]
                void geom(triangle v2g IN[3], inout TriangleStream<g2f> triStream) {
                    float3 param = float3(0., 0., 0.);

                    float EdgeA = length(IN[0].worldPos - IN[1].worldPos);
                    float EdgeB = length(IN[1].worldPos - IN[2].worldPos);
                    float EdgeC = length(IN[2].worldPos - IN[0].worldPos);

                    if (EdgeA > EdgeB && EdgeA > EdgeC)
                        param.y = 1.;
                    else if (EdgeB > EdgeC && EdgeB > EdgeA)
                        param.x = 1.;
                    else
                        param.z = 1.;

                    g2f o;
                    o.pos = mul(UNITY_MATRIX_VP, IN[0].worldPos);
                    o.bary = float3(1., 0., 0.) + param;
                    o.scrPos = IN[0].scrPos;
                    triStream.Append(o);

                    o.pos = mul(UNITY_MATRIX_VP, IN[1].worldPos);
                    o.bary = float3(0., 0., 1.) + param;
                    o.scrPos = IN[0].scrPos;    // NOTE: WRONG IN index but works on the correct faces??!?!??
                    triStream.Append(o);

                    o.pos = mul(UNITY_MATRIX_VP, IN[2].worldPos);
                    o.bary = float3(0., 1., 0.) + param;
                    o.scrPos = IN[0].scrPos;    // NOTE: WRONG IN index but works on the correct faces??!?!??
                    triStream.Append(o);
                }

                fixed4 _LineColor;
                fixed4 _FaceColor;
                sampler2D _CameraDepthNormalsTexture;

                fixed4 frag(g2f i) : SV_Target
                {
                    // Check if there is a difference in normals
                    //bool drawMe = false;
                    float3 normalThis;
                    float depthThis;
                    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.scrPos.xy), depthThis, normalThis);

                    // Debug output: early return, so only the decoded normals are shown
                    return float4(normalThis, 0);
                    //return float4(i.scrPos.z, 0, 0, 0.);
                    //return float4(normalThis, 0.);

                    // Unreachable wireframe code below
                    float3 bary_grad = fwidth(i.bary); // change over 1 pixel
                    float3 bary = i.bary / bary_grad;  // scale barycentrics

                    float edge = smoothstep(0.0, _WireframeVal, min(bary.x, min(bary.y, bary.z)));

                    return lerp(_LineColor, _FaceColor, edge);
                }
                ENDCG
            }
        }
    }
    Using Unity 2019.3
     
    Last edited: Jun 16, 2020
  2. Manello

    Manello

    Joined:
    Dec 14, 2014
    Posts:
    12
    Update: The same problem persists in any non-VR project.
    I have tried different setups now, but no matter what I do, I can't get a proper normal map texture going :/

    I know it is probably linked to bad calculation of the coordinates for the normal texture, but I have tried most of the ways to calculate them that I found on Google, and nothing works correctly.
     
  3. Namey5

    Namey5

    Joined:
    Jul 5, 2013
    Posts:
    188
    Well, for one, you aren't sampling the screen-space texture properly to account for perspective distortion:

    Code (CSharp):
    // This
    tex2D (_CameraDepthNormalsTexture, i.scrPos.xy)

    // Should be this
    tex2D (_CameraDepthNormalsTexture, i.scrPos.xy / i.scrPos.w)
    and two, you aren't calculating the screen position properly in the first place:

    Code (CSharp):
    // ComputeScreenPos takes the clip-space position as input, not the world position
    o.scrPos = ComputeScreenPos (o.worldPos);

    // Should be
    o.scrPos = ComputeScreenPos (UnityObjectToClipPos (v.vertex));
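    Putting both fixes together, here is a minimal sketch of the corrected vertex and fragment stages (reusing the struct and texture names from the shader above; a sketch, not tested in VR):

    Code (CSharp):
    v2g vert (appdata_base v) {
        v2g o;
        o.worldPos = mul (unity_ObjectToWorld, v.vertex);
        // ComputeScreenPos expects the clip-space position
        o.scrPos = ComputeScreenPos (UnityObjectToClipPos (v.vertex));
        return o;
    }

    fixed4 frag (g2f i) : SV_Target {
        float3 normalThis;
        float depthThis;
        // The perspective divide must happen per fragment, after interpolation
        float2 screenUV = i.scrPos.xy / i.scrPos.w;
        DecodeDepthNormal (tex2D (_CameraDepthNormalsTexture, screenUV), depthThis, normalThis);
        return float4 (normalThis, 0.);
    }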
     
    Manello likes this.
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    I wrote up a similar response and then forgot to actually post it, so thank you @Namey5 for actually responding!

    I did want to go over a little bit of why the above changes are necessary though.

    First, ComputeScreenPos. The input needs to be the clip space position, as @Namey5 already mentioned, and it outputs a float4 because all of the components are actually important.

    The clip space position is what the shader stage before the fragment stage needs to output as it's already the screen space position. However it's stored in a -w to w range, where the w value is the clip space w, which is also the view space depth. The reason for this is to correct for perspective distortion. Interpolating the xy screen space position alone with a perspective projection would cause weird distortions with the values seen by the fragment shader. For example, perspective correct texture mapping works because of how the clip space position is defined. Without it things go back to looking like a PS1 game.

    The ComputeScreenPos function isn't touching the z or w, but is rescaling the xy from a -w to w range to a 0.0 to w range, and also correcting for the cases where Unity is flipping the projection, which it does because OpenGL is weird and Unity tries to make all other graphics APIs match OpenGL's weirdness for consistency.

    But you want a 0.0 to 1.0 range for the screen space UVs, so you need to divide them by the w before you use them. And because of perspective distortion you need to do this in the fragment shader and not in the vertex shader.
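    As a sketch of what that looks like in the fragment shader (the tex2Dproj variant is an equivalent convenience; it performs the same divide internally):

    Code (CSharp):
    // Explicit perspective divide: 0..w range -> 0..1 UVs
    float2 uv = i.scrPos.xy / i.scrPos.w;
    fixed4 encoded = tex2D (_CameraDepthNormalsTexture, uv);

    // Equivalent: tex2Dproj divides .xy by .w for you
    fixed4 encoded2 = tex2Dproj (_CameraDepthNormalsTexture, i.scrPos);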


    Now you might ask, "why don't I just use the clip space position in the fragment shader?" This is actually an option, but the SV_POSITION is transformed by the GPU before the fragment shader gets it. This means it's no longer in a -w to w range; instead it's a pixel position coordinate, the pixel coordinates the GPU calculated from the clip space position. This is equivalent to the fragment stage input semantic VPOS, or gl_FragCoord in OpenGL. You have to divide that by the screen resolution, and possibly flip the Y axis still, to get usable UVs. But it is an option.
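    For illustration, a hedged sketch of that alternative (assumes the pass is compiled with #pragma target 3.0; UNITY_VPOS_TYPE and _ScreenParams come from UnityCG.cginc):

    Code (CSharp):
    // VPOS arrives as pixel coordinates, not clip space
    fixed4 frag (UNITY_VPOS_TYPE vpos : VPOS) : SV_Target {
        // Divide by the screen resolution to get 0..1 UVs
        float2 uv = vpos.xy / _ScreenParams.xy;
        float3 n;
        float d;
        DecodeDepthNormal (tex2D (_CameraDepthNormalsTexture, uv), d, n);
        return float4 (n, 0.);
    }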
     
    Namey5 likes this.
  5. Fressno

    Fressno

    Joined:
    Mar 31, 2015
    Posts:
    185
    Hey.
    I'm having a similar problem with my models not showing any normal height in them. I'm only using normal maps in HDRP VR.
    Any idea what I should do?
    Do I have to create a shader from scratch, or could I use an existing HDRP shader and just tweak it/check some boxes and it might work?
    I need some pointers.
    Everything looks flat in VR.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    You're talking about something completely unrelated to what this thread was discussing.

    Normal maps have no height. That's the whole trick for how they work. They have no height, they just modify the lighting.

    I'm guessing you're using normal maps on objects that have relatively low triangle counts or otherwise large flat surfaces. In VR, the fact that normal maps don't actually have any height, and that flat surfaces are just flat, becomes obvious. The solution is to use models that aren't as low poly, or not to fake as much detail on large flat surfaces.

    For HDRP you can also try providing a height map, if you have one for the content, and using one of the displacement options the HDRP Lit shader has. Though displacement won't always work well with all content.
     
    asimdeyaf likes this.
  7. Fressno

    Fressno

    Joined:
    Mar 31, 2015
    Posts:
    185

    Thanks for clearing that up. Yes, I know the normal map doesn't actually make anything pop out "physically", but it looks like it does, even though it just bends the light.
    I was just generally talking about the normal map effect of making something look like it has height or depth. Thanks for the tip btw, I'll try it out.
     
  8. UWU420Games

    UWU420Games

    Joined:
    May 1, 2023
    Posts:
    1
    Normal maps still don't respect stereoscopy in VR. I wish there was a way to render normals for each eye. If anyone knows one, please post it here.