Understanding WorldSpaceViewDir - incorrect/weird values

Discussion in 'Shaders' started by AntonDelta, Apr 24, 2022.

  1. AntonDelta

    I have been trying for the last two days to create an underwater shader. The idea is to draw over the skybox with a calculated fog color that lightens when approaching the surface, thus revealing the skybox.

    upload_2022-4-24_23-53-41.png
    The surface line is an imaginary, hardcoded one. I must calculate the "depth", or the length of the vector going from the camera to each pixel, in order to determine how far away the light is and to adjust the fog. This algorithm should only be used when the real depth value for that pixel is infinite. Otherwise there are objects in the viewport, and for those the real depth should be used to calculate the color.

    The way I do this is very simple: I take the view direction for a particular pixel, make the y component a unit (by multiplying the entire vector by the inverse of y), and then multiply the vector by the distance to the surface. This should scale the entire vector, and its length should give me the depth to the surface.

    I am using URP with a Blit script I found online, and I run this shader over the entire screen.
    Unity 2020.3.30f1

    Code (CSharp):
    Shader "Unlit/TestUnlitShader"
    {
        Properties
        {
            _MainTex("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float3 view_dir : TEXCOORD1;
                };

                // From the docs I can see this will be automatically populated with the depth texture
                // BUT remember to tell it to do so... in the UniversalRenderPipelineAsset check Depth Texture under General
                sampler2D _CameraDepthTexture;
                sampler2D _MainTex;
                float4 _MainTex_ST;

                v2f vert(appdata v) {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.uv = TRANSFORM_TEX(v.uv, _MainTex);

                    o.view_dir = WorldSpaceViewDir(v.vertex);
                    //o.view_dir = normalize(mul(unity_ObjectToWorld, v.vertex));
                    //o.view_dir = normalize(ObjSpaceViewDir(v.vertex));

                    //float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz;
                    //o.view_dir = worldPos;// -_WorldSpaceCameraPos;
                    return o;
                }

                fixed4 frag(v2f i) : SV_Target{
                    // Sample the color texture
                    fixed4 col = tex2D(_MainTex, i.uv);

                    // God response: https://answers.unity.com/questions/877170/render-scene-depth-to-a-texture.html
                    float depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, i.uv));
                    float worldDepth = LinearEyeDepth(depth);    // Real z value away from camera
                    depth = pow(Linear01Depth(depth), 1.0f);

                    // Constants
                    float fog_start = 0;
                    float fog_end = 10;

                    // Red, Green, Blue
                    fixed4 ocean_color = fixed4(0, 0.486, 0.905, 0);
                    float ocean_surface = 20;
                    float3 dir = normalize(i.view_dir);

                    // Run only for skybox
                    if (depth == 1) {
                        if (dir.y == 0) dir.y = 0.0001;

                        // Make the y component to be of size 1
                        dir = mul(dir, 1.0f / dir.y);

                        // Multiply the "unit" y axis by the distance to the surface.
                        // This will also extend the other components
                        dir = mul(dir, ocean_surface - _WorldSpaceCameraPos.y);

                        // pow for debug adjustments
                        worldDepth = pow(length(dir), 1);

                        // For debugging
                        fixed4 debug_color = fixed4(1, 0, 0, 0);
                        if (worldDepth < 5) {
                            return debug_color * (worldDepth / 5);
                        }
                        return debug_color;
                    }

                    float fogVar = saturate(1.0 - (fog_end - worldDepth) / (fog_end - fog_start));
                    return lerp(col, ocean_color, fogVar);
                }
                ENDCG
            }
        }
    }
    What seems to be happening is that all pixels composing the skybox receive the same color. If I use the second variant (commented out) for calculating the view direction, then I get a mix of colors, but the calculated depth is completely incorrect. Even the furthest point on the skybox is barely red and is within the 5 units of distance used for the debugging part.
    Also, any movement of the camera (in the editor) will drastically change the depth of the pixels. I compared the depth I calculated with the depth of an object at the exact surface level, and from that test I can see my depth values are wrong.

    upload_2022-4-25_0-9-57.png
    (Here I am using the unity_ObjectToWorld method. The black part is pixels that are close by. The red that can be seen in the distance shows where the 5 units of depth end, when in fact those pixels are very far away.)


    But it just seems the values are crazy! In one instance I colored red all the pixels whose direction vector's y component was negative. This vector is supposed to be relative to the camera's position, and even though I don't fully understand the mechanism, I cannot understand why all the pixels were red when the CAMERA was at around y=0. The vector is relative; it shouldn't matter where the camera is.
     
    Last edited: Apr 25, 2022
  2. bgolus

    The problem you're likely running into is that, depending on how you're doing your "blit", neither ObjectToViewPos nor unity_ObjectToWorld is a valid thing to use. Those work if you're rendering … an object … in the scene. If you're rendering a shader via a Blit() there's no "object", there isn't even necessarily "a scene" anymore. Just whatever render texture Unity still has after rendering everything else, which it's passing to the blit, and whatever the blit is rendering to. Most of the matrices related to rendering have been cleared out or are set for something unrelated to the blit draw you're doing.

    But there are still some things left. You should look into how to reconstruct world space in a post process, which is what you’re doing here.
     
  3. AntonDelta

    That makes a lot of sense! Explains everything. Do I have access to _WorldSpaceCameraPos, or at least _CameraDepthTexture?
    Perhaps I am in luck and the matrices for the camera are still loaded... can I use unity_CameraToWorld?

    I looked closer at the blit code and I saw that I can pass values to the shader from within it. Would camera.cameraToWorldMatrix do the job (as a replacement for unity_ObjectToWorld)? From a previous issue (I think you wrote the answer) I read that this matrix is named incorrectly and doesn't do what the name suggests. Are there alternatives?
     
    Last edited: Apr 27, 2022
  4. bgolus

    _WorldSpaceCameraPos, _CameraDepthTexture, unity_CameraInvProjection, and unity_CameraToWorld are all available, and you'll need the latter two matrices to get the world space position from the depth buffer.

    I have an example shader here that may be useful to you. The shader is showing off ways to calculate the world normal from the depth texture, but to do that it also has to calculate the view space position from the depth.
    https://gist.github.com/bgolus/a07ed65602c009d5e2f753826e8078a0

    The main functions that would be of interest to you are the viewSpacePosAtScreenUV and viewSpacePosAtPixelPosition functions.

    Code (csharp):
    UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
    float4 _CameraDepthTexture_TexelSize;
    float getRawDepth(float2 uv) { return SAMPLE_DEPTH_TEXTURE_LOD(_CameraDepthTexture, float4(uv, 0.0, 0.0)); }

    // inspired by keijiro's depth inverse projection
    // https://github.com/keijiro/DepthInverseProjection
    // constructs view space ray at the far clip plane from the screen uv
    // then multiplies that ray by the linear 01 depth
    float3 viewSpacePosAtScreenUV(float2 uv)
    {
        float3 viewSpaceRay = mul(unity_CameraInvProjection, float4(uv * 2.0 - 1.0, 1.0, 1.0) * _ProjectionParams.z);
        float rawDepth = getRawDepth(uv);
        return viewSpaceRay * Linear01Depth(rawDepth);
    }

    float3 viewSpacePosAtPixelPosition(float2 vpos)
    {
        float2 uv = vpos * _CameraDepthTexture_TexelSize.xy;
        return viewSpacePosAtScreenUV(uv);
    }
    From that you should be able to do this to get the world position:
    Code (csharp):
    float3 viewPos = viewSpacePosAtScreenUV(i.uv);
    float3 worldPos = mul(unity_CameraToWorld, float4(viewPos.xy, -viewPos.z, 1.0)).xyz;
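
    And just to sketch how that might plug back into what you were originally trying to do (purely illustrative, reusing the ocean_surface value from your first shader, and only meaningful for rays that actually head toward the surface):
    Code (csharp):
    // illustration only: camera relative ray for this pixel, built from worldPos
    float3 offset = worldPos - _WorldSpaceCameraPos;
    float3 dir = normalize(offset);
    float distToPixel = length(offset);

    // distance from the camera to the flat ocean surface along this ray
    // (the height difference and dir.y need the same sign for this to come out positive)
    float distToSurface = (ocean_surface - _WorldSpaceCameraPos.y) / dir.y;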
     
  5. AntonDelta

    Brilliant! Well, I got it working, did some tests, and initially everything looked good. But then I tried to do the inverse of what I wanted initially: draw the underwater objects as seen from above the surface. So two colors, a deep ocean color and a shallow one. Then I take the difference between the real depth of the object and the distance to the surface for that pixel.

    But I seem to get a cone... a frustum in the shape of a cone. It is as if the objects' depth shrinks towards the sides of the frustum (or my calculations return a bigger distance to the surface than they should).

    So again the plan:
    upload_2022-5-14_0-45-46.png

    And this is what I get:
    upload_2022-5-14_0-46-18.png

    So it works in the middle: the underwater objects are darker, and the ones closer to the surface are lighter.
    But when I get far away, the sides get corrupted.

    The code:
    Code (CSharp):
    fixed4 frag(v2f i) : SV_Target{
        // Sample the color texture
        fixed4 col = tex2D(_MainTex, i.uv);

        float3 viewPixelPos = viewSpacePosAtScreenUV(i.uv);
        float3 worldPixelPos = mul(unity_CameraToWorld, float4(viewPixelPos.xy, -viewPixelPos.z, 1.0)).xyz;
        float3 localCameraPixelPos = worldPixelPos - _WorldSpaceCameraPos;

        // God response: https://answers.unity.com/questions/877170/render-scene-depth-to-a-texture.html
        float logarithmic_depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, i.uv));
        float depth = Linear01Depth(logarithmic_depth);
        float worldDepth = LinearEyeDepth(logarithmic_depth);    // Real z value away from camera
        float3 dir = normalize(localCameraPixelPos);

        // We are above water
        if (_WorldSpaceCameraPos.y >= _OceanSurface) {
            if (dir.y < 0) {
                dir = mul(dir, 1.0f / dir.y);
                dir = mul(dir, _WorldSpaceCameraPos.y - _OceanSurface);

                float underwater_depth = worldDepth - length(dir);

                // This checks if the object sampled is outside the water/above surface
                if (underwater_depth < 0) return col;

                // Superimpose fog
                float fog_start = 0;
                float fog_end = 10;
                float fog_var = saturate(1.0 - (fog_end - underwater_depth) / (fog_end - fog_start));
                fixed4 fog_color = lerp(_OceanShallowColor, _OceanDeepColor, fog_var);

                return fog_color;
            }
        }
        return col;
    }
    I have a feeling it's got something to do with the near plane and the difference between LinearEyeDepth and viewSpacePos().

    So it seems this happens:
    upload_2022-5-14_1-19-25.png
     
    Last edited: May 13, 2022
  6. bgolus

    Yep. Your intuition is working.

    Though your diagram is kind of backwards. The depth is the flat dotted line, and the distance is a "curve". A normalized ray vector has a distance of 1 from the camera, but a varying depth depending on what direction that ray is pointing in. You're kind of trying to account for that by dividing the dir by dir.y, but that's only correct if your camera is pointed straight down, which it's not. So you're taking a camera depth value and the world y axis and assuming they're on the same axis, which they're not.
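
    Just to make the axes explicit, here's a tiny sketch (not code from your shader; camForwardWS is simply a stand-in for the camera's world space forward axis):
    Code (csharp):
    // illustration only: eye depth of a point 'dist' units along a normalized
    // world space ray 'dir' from the camera, given the camera's forward axis
    float EyeDepthAlongRay(float3 dir, float3 camForwardWS, float dist)
    {
        // depth is the projection of the camera-to-point offset onto the forward axis
        return dist * dot(dir, camForwardWS);
    }

    // dividing by dir.y instead of using dot(dir, camForwardWS) implicitly assumes
    // camForwardWS == float3(0, -1, 0), i.e. a camera pointed straight down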

    There's no need to do that extra code to get the worldDepth, because the shader already has it. It's -viewPixelPos.z. The "worldDepth" is a world unit scale depth, but not a world space depth. View space is a world unit scale space, so it would be less confusingly named viewDepth, or cameraDepth, or eyeDepth. In fact you're doing a lot of work to recalculate values you already have in viewPixelPos and worldPixelPos.



    So ... going back to what you're trying to do.
    Code (csharp):
    float underwater_depth = worldDepth - length(dir);

    worldDepth and dir aren't on the same axis, so this is roughly meaningless. It kind of works out in the cases where they happen to be close, and is wrong everywhere else.

    What you really want is this:
    Code (csharp):
    // world y depth for how far underwater the surface is
    float underwater_world_depth = _OceanSurface - worldPixelPos.y;

    // if above water, skip
    if (underwater_world_depth < 0) return col;

    // do world space view direction correction to convert the world y axis depth to camera relative distance
    float water_to_underwater_world_distance = underwater_world_depth / -dir.y;
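
    And then the fog would use that distance instead of the old value, something along these lines (just a sketch reusing the constants and colors you already have):
    Code (csharp):
    // sketch: feed the camera relative distance into the existing fog
    float fog_start = 0;
    float fog_end = 10;
    float fog_var = saturate(1.0 - (fog_end - water_to_underwater_world_distance) / (fog_end - fog_start));
    return lerp(_OceanShallowColor, _OceanDeepColor, fog_var);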
     
  7. AntonDelta

    Thanks a lot, it worked (I'd get you a beer if I weren't a broke student ;) ).
    But 1) I don't quite understand what you said about the difference between the worldDepth as calculated and the depth to the surface as calculated with length(dir). (I understand the `underwater_world_depth / -dir.y` trick, which is mostly what I tried with the initial problem, only more elegant.) Could you tell me more exactly how different they are?

    2) Just checking, but from the code I can see that worldDepth, or viewPixelPos.z, actually starts from 0, which is the near plane and not the true center of the camera. This means these values are not a proper space (no center)... or are they?

    3) `-viewPixelPos.z` is negated. Is it because `unity_CameraInvProjection` returns negative z values because the camera Z axis is reversed?

    I think you meant the issue with worldDepth and dir was this (assume the two rays are the same one):
    upload_2022-5-15_2-0-25.png

    I actually tried to compensate for the "unknown formula" by calculating that small length (from the camera to the near plane), but it didn't quite fix the output (perhaps I coded it wrong).
     
  8. bgolus

    So, I’ll go backwards through your questions.

    3) Yes, view space is -Z forward in the unity_CameraInvProjection matrix. This is also the case for UNITY_MATRIX_V, as -Z forward is the standard for OpenGL rendering (and Unity kept this for Direct3D and other APIs).

    2) Not true. They both start from the camera origin, not the near plane.

    1) The main problem in your code is length(dir). The length of that vector is the distance from the camera. It's not the view depth of that vector. So worldDepth - length(dir) is kind of nonsense math.
     
  9. bgolus

    Here's a visual representation of depth and distance to a point on the surface of a sphere.

    The depth is always a "distance" along the camera forward vector. It's the distance from the camera to a plane parallel to the near plane that touches the surface.

    The distance is just that, the distance from the camera to that point.
    upload_2022-5-15_1-40-10.png

    In your original code you're doing the correct steps to modify the dir from a world view direction vector to be a point on the surface of the water. The length of that gets you the distance from the camera to the water surface. But worldDepth isn't a distance, it's still a depth along the camera's forward vector.
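
    In terms of values your shader already has (illustration only), the two quantities and the relation between them look like this:
    Code (csharp):
    // viewPixelPos is the view space position from viewSpacePosAtScreenUV()
    float eyeDepth = -viewPixelPos.z;      // depth: measured along the camera forward axis
    float dist = length(viewPixelPos);     // distance: straight line from the camera to the point

    // the two only match when the ray points exactly along the camera forward axis;
    // in general dist = eyeDepth / dot(normalize(viewPixelPos), float3(0, 0, -1))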