What is "eye space" in Unity shaders? (NB: it's not view space)

Discussion in 'Shaders' started by a436t4ataf, Dec 21, 2019.

  1. a436t4ataf
     Joined: May 19, 2013
     Posts: 1,933
    ...because after lots of pain and suffering, I finally went and measured everything in the shader pipeline. The Unity docs on camera depth textures (https://docs.unity3d.com/Manual/SL-DepthTextures.html) state:

    "LinearEyeDepth(i): given high precision value from depth texture i, returns corresponding eye space depth."

    Coming from many years of OpenGL, when I read "eye space" I assume it means "view space".

    However, as far as I can tell deductively, they actually return a value relative to the near plane of the current camera. This makes very little difference at distances of a few meters or more (Unity's default near plane is 0.3m), but makes a huge difference when you have objects close to the viewer - e.g. anything that's right in front of the player, occupying a large amount of the screen.

    (For a long time I'd been wondering why my depth values looked almost-but-not-quite right, triple- and quadruple-checking all my math. I'd rewritten the projection, sampling, and distance calculations over and over, using different Unity magic functions and features, and kept getting the exact same "almost, but not quite" correct data.)
     
    DragonCoder, A132LW and bobbaluba like this.
  2. wwaero
     Joined: Feb 18, 2020
     Posts: 42
    Did you figure out how to get proper depth values?
     
  3. a436t4ataf
     Joined: May 19, 2013
     Posts: 1,933
    I went with the guess I described above (adding the camera's near-plane distance) - it seems to work correctly - but I have no idea whether this is deliberate on Unity's part or a bug.
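
    Roughly, the adjustment boils down to something like this inside a fragment shader that samples the camera depth texture (a minimal sketch of the guess, not a recommendation; it assumes i.projPos is a screen position from ComputeScreenPos, and that the offset is exactly the camera's near plane, which Unity exposes as _ProjectionParams.y):
    Code (csharp):
    // Raw depth sampled from the camera depth texture
    float raw = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos));

    // Documented conversion: "eye space" depth
    float eyeDepth = LinearEyeDepth(raw);

    // The adjustment that made my numbers line up: treat the value as relative to
    // the near plane and add the near-plane distance back.
    // (_ProjectionParams.y = near plane, _ProjectionParams.z = far plane)
    float adjusted = eyeDepth + _ProjectionParams.y;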
     
  4. bgolus
     Joined: Dec 7, 2012
     Posts: 12,354
    LinearEyeDepth(depthTexture) is absolutely the same as view space depth, within floating point precision error at least. You can test with this shader, which compares the linear eye depth recovered from the depth texture against the view depth interpolated from the vertex shader (which is what the COMPUTE_EYEDEPTH macro calculates). To see anything but black, you'll have to push the Depth Difference Scale up to its maximum of 100000.
    Code (CSharp):
    Shader "Unlit/EyeVsViewDepth"
    {
        Properties
        {
            [PowerSlider(2.0)] _DiffScale("Depth Difference Scale", Range(1,100000)) = 1
        }
        SubShader
        {
            Tags { "Queue"="Geometry" }
            LOD 100

            Pass
            {
                Tags { "LightMode" = "ForwardBase" }

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float4 projPos : TEXCOORD0;
                };

                UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
                float _DiffScale;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.projPos = ComputeScreenPos(o.vertex);
                    COMPUTE_EYEDEPTH(o.projPos.z);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // raw depth from the depth texture
                    float depthZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos));

                    // linear eye depth recovered from the depth texture
                    float sceneZ = LinearEyeDepth(depthZ);

                    // linear eye depth from the vertex shader
                    float fragZ = i.projPos.z;

                    // difference between sceneZ and fragZ
                    float diff = sceneZ - fragZ;

                    return float4(
                        saturate(-diff * _DiffScale), // red if fragZ is closer than sceneZ
                        saturate( diff * _DiffScale), // green if sceneZ is closer than fragZ
                        0.0, 1.0);
                }
                ENDCG
            }
        }

        FallBack "VertexLit"
    }
    The COMPUTE_EYEDEPTH macro is defined as this:
    Code (csharp):
    #define COMPUTE_EYEDEPTH(o) o = -UnityObjectToViewPos( v.vertex ).z

    inline float3 UnityObjectToViewPos( in float3 pos )
    {
        return mul(UNITY_MATRIX_V, mul(unity_ObjectToWorld, float4(pos, 1.0))).xyz;
    }
    Unity uses the standard OpenGL view space convention, meaning -Z is forward, hence the negation in the macro. The macro simply converts the object space vertex position to world space and then to view space, and takes the negated Z.

    Even the COMPUTE_DEPTH_01 macro and the Linear01Depth(depthTexture) function should match nearly perfectly. The only situation I know of where Unity's code doesn't account for the near plane is the UNITY_Z_0_FAR_FROM_CLIPSPACE macro used for fog, and even then only in the specific situation of OpenGL using a reversed Z depth ... which AFAIK it never does ... so it's never actually a problem.
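
    For reference, both conversions boil down to the _ZBufferParams constants. The built-in pipeline's UnityCG.cginc defines them roughly like this (a sketch from memory, so treat the exact form as an approximation):
    Code (csharp):
    // _ZBufferParams is set up by Unity so these map the raw, non-linear
    // depth buffer value z to linear values:
    inline float LinearEyeDepth( float z )   // camera-relative depth in world units
    {
        return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
    }

    inline float Linear01Depth( float z )    // 0..1, i.e. eye depth divided by the far plane distance
    {
        return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
    }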

    If you're having a problem with the values not matching, something else must be off.
     
    Last edited: Nov 12, 2020
  5. a436t4ataf
     Joined: May 19, 2013
     Posts: 1,933
    That's what I expected - and when I went digging in the code I could only find exactly what you pasted above - but measurably, when I added the camera's near-plane distance the calculations were exactly correct, and without it they weren't. I got this down to about five lines of code at one point while I thought I was going insane :).

    I haven't touched that code in > 6 months - it worked with the above adjustment, so I just shrugged and moved on. I couldn't find *any documentation* from Unity explicitly defining these terms, so I figured it was a dead-end.

    If @wwaero is seeing similar offsets to the data, maybe we can narrow down what we've done differently that's causing this?
     
  6. bgolus
     Joined: Dec 7, 2012
     Posts: 12,354
    What platform? On Windows using Direct3D 11 and a handful of consoles, I've never seen them not match.
     
  7. a436t4ataf
     Joined: May 19, 2013
     Posts: 1,933
    I was only testing on Windows, I'm pretty sure (95%) on D3D11, with Unity 2018 and 2019. It really surprised me, and I spent many days narrowing it down to this one discrepancy - I even created new projects and copy/pasted complete examples from other people's depth-buffer tutorials on the web, and theirs had the exact same problem.

    So I ended up thinking it was either a driver bug or a "by design" feature of Unity. The latter seemed more likely (my hardware was nothing exotic - NVIDIA GTX 10xx cards).

    ...it's still possible that it was something hilariously simple, like me having some code somewhere that munged the buffer, but ... the fact that I could show the same problem with 3rd party examples eventually convinced me otherwise.

    (I'm not currently working on that project, and it would take me a lot of time to dig back into it, otherwise I'd go back and try to rebuild the shortest example I made before)
     
  8. Darkgaze
     Joined: Apr 3, 2017
     Posts: 397
    Just for anybody looking into this: some notes after testing in Shader Graph, since there's no specific documentation anywhere in the Unity manual...
    (Warning: this can differ between render pipelines, so the following is for HDRP only - specifically because HDRP does camera-relative rendering, i.e. it represents everything as seen from the camera to avoid precision errors.)

    View space (for example, using a Position node with the space set to "View") returns the position of that interpolated vertex in meters from the camera position (not taking the near plane into account). But be careful: this uses the OpenGL convention, so -Z is forward.

    Eye space (for example, when you sample the Scene Depth node set to Eye) returns the depth in meters of the opaque objects rendered in the scene, and it is expressed exactly the same way as View space (I think they should use the same name instead of confusing people with "eye" and "view"...). The near plane has no effect here.

    BUT if you switch the Scene Depth node's sampling from Eye to Linear01, you get the distance normalized from 0 to 1, starting at the CAMERA position and ending at the far plane. Confusing as it is, I'd expect 0 to be at the near plane... Changing the near plane changes nothing, but changing the far plane does.

    Camera space is, in general, normalized from 0 to 1 inside the frustum: 0 at the near plane and 1 at the far plane, with x and y normalized 0 to 1 across the width and height of the frustum. But I haven't tested this in HDRP, so I'm not 100% sure about it.
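
    Putting those observations into code form: a hypothetical Custom Function body (the names and signature are mine, not Shader Graph's) relating the quantities above, assuming a perspective camera and the HDRP behaviour described in this post:
    Code (csharp):
    // ViewPos  = Position node set to View space (-Z forward)
    // EyeDepth = Scene Depth node, Eye sampling (meters; the near plane has no effect)
    // FarPlane = the camera's far clip distance
    void DepthRelationships_float(float3 ViewPos, float EyeDepth, float FarPlane,
                                  out float EyeFromView, out float Linear01FromEye)
    {
        EyeFromView     = -ViewPos.z;          // should match EyeDepth for the surface being shaded
        Linear01FromEye = EyeDepth / FarPlane; // should match the Linear01 sampling mode (0 at the camera, 1 at the far plane)
    }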
     
    DragonCoder and a436t4ataf like this.
  9. bgolus
     Joined: Dec 7, 2012
     Posts: 12,354
    In Unity terminology, Camera space and View space differ by Z-axis convention and that's it: View space is -Z forward, as you noted above, and Camera space is +Z forward.

    In HDRP, "camera-relative space" is simply called World Space, which is why there's also an Absolute World Space, which corresponds to the Unity scene's world position.

    The whole "Eye Space" thing seems to come from legacy OpenGL naming conventions, circa early 2000s, and continues to live on within Unity's shader code even into the HDRP.
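
    To make the sign conventions concrete, a minimal sketch in built-in/URP style HLSL (worldPos here is an assumed world space position, not a built-in variable):
    Code (csharp):
    // View space: OpenGL convention, -Z forward, so points in front of the camera have a negative z.
    float3 viewPos = mul(UNITY_MATRIX_V, float4(worldPos, 1.0)).xyz;

    // "Camera space" / eye depth: the same axis flipped to +Z forward, so depth is positive in front of the camera.
    float eyeDepth = -viewPos.z;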
     
    FriedrichHumboldt likes this.
  10. TheCelt
     Joined: Feb 27, 2013
     Posts: 742
    So if HDRP is relative to the camera, what is depth relative to in URP with eye space? I assumed eye space was also relative to the camera, so it returns the distance from the camera to the opaque objects in the scene?
     
  11. bgolus
     Joined: Dec 7, 2012
     Posts: 12,354
    That depends on which depth you're reading about. Raw Z depth is something else entirely. It's not strictly in any of the spaces listed above, but rather in normalized clip space: for perspective camera views it's a non-linear value going from 1.0 at the near plane to 0.0 at the far plane - at least on anything not using OpenGL. OpenGL's raw depth isn't even in clip space; it's kind of its own thing. It closely matches the raw depth of other APIs, but it doesn't match OpenGL clip space, which is different from every other API.

    LinearEyeDepth and Linear01Depth are indeed camera-relative depth, though, because URP and the built-in rendering path - along with all real-time rendering - ultimately transform everything to be relative to the camera view.

    The difference is that HDRP sends positions to the GPU already relative to the camera position, whereas most others (including URP and the built-in pipeline) send positions in world space. The benefit is higher precision: the further away from (0,0,0) you get, the less precision floating point numbers have, and the more artifacts you end up getting in vertex positions.
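
    As a rough illustration of how the raw value relates to the linear ones (built-in pipeline style HLSL; uv here is an assumed screen-space UV, and _CameraDepthTexture is assumed to be declared as in the shader earlier in the thread):
    Code (csharp):
    float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);

    #if defined(UNITY_REVERSED_Z)
        // D3D11/12, Metal, Vulkan, consoles: raw is 1.0 at the near plane, 0.0 at the far plane
    #else
        // OpenGL-style platforms: raw is 0.0 at the near plane, 1.0 at the far plane
    #endif

    float eyeDepth = LinearEyeDepth(raw); // camera-relative depth in world units, always positive
    float lin01    = Linear01Depth(raw);  // 0..1 from the camera out to the far plane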
     
  12. TheCelt
     Joined: Feb 27, 2013
     Posts: 742
    So in Shader Graph, with the depth node, if you choose eye space units (I presume "eye" means camera space), is this always a positive value, or is it platform dependent, where sometimes it might use negative Z? Or does Shader Graph return consistent results on our behalf, without us worrying about the platform?
     
  13. bgolus
     Joined: Dec 7, 2012
     Posts: 12,354
    The Scene Depth node set to "Eye" (or "Linear 01") will always be a positive value, regardless of the platform or rendering pipeline you've chosen. 0.0 will always be at the camera, and it's also never visible, since anything closer than the camera's near plane gets clipped. Values for "Eye" depth will also always be in world space units - just remember that depth and distance are not the same thing.

    Technically the "Raw" option will also always be a positive value between 0.0 and 1.0, but the platform determines whether 0.0 is near or far.
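
    To illustrate the depth-vs-distance point, a hedged sketch in built-in pipeline style HLSL (rawDepth and viewDirVS are assumed inputs: the sampled depth texture value and a normalized view-space direction from the camera to the fragment):
    Code (csharp):
    // Eye depth is measured along the camera's forward axis, not along the ray to the surface.
    float eyeDepth = LinearEyeDepth(rawDepth);               // planar, "Eye" style depth
    float cosAngle = dot(viewDirVS, float3(0.0, 0.0, -1.0)); // view space is -Z forward
    float dist     = eyeDepth / cosAngle;                    // Euclidean distance from the camera to the surface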
     
    TheCelt likes this.