Custom depth read when projecting decals - precision issue?

Discussion in 'Shaders' started by Thomas-Mountainborn, Nov 16, 2017.

  1. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    I'm creating a bit of simulation software for tank cleaning applications. The idea is that a heat map is generated, showing where the tank is being cleaned.



    It's important that the recessed area (a manhole) shows up as not being cleaned properly, because of the angle it is being hit at. This works more or less fine, but the projected dots which form the line are cut off about halfway when performing the depth check:



    This appears to be related to the near and far plane of the camera. In the above two pictures, the near plane starts at 0.01. If I bump it up, say to 0.1, the dots are projected fully, creating a continuous line, but there is also quite a bit of leaking going on in the depth test. The top of the recessed area should not have been painted:



    Currently, I'm doing the depth check as follows:

    - The vertices are transformed into projected space in the vertex shader:
    o.projSpace = mul(_ProjectorMVP, v.vertex);

    - The uv coordinates are calculated from these coordinates in the fragment shader:
    float2 projUV = i.projSpace.xy / i.projSpace.w + float2(0.5,0.5);

    - The depth texture is sampled, and on DirectX the z direction is reversed:
    float texDepth = tex2D(_ProjectorDepthTex, projUV);
    texDepth = 1.0f - texDepth;


    - The texture sample is clamped to the projection space:
    float clampToProjection = step(0, projUV.x) * step(projUV.x, 1) * step(0, projUV.y) * step(projUV.y, 1);
    texDepth *= clampToProjection;


    - The projection space z value is converted into the same range as the depth texture:
    float currentDepth = i.projSpace.z;
    //params.y = near clip, params.z = far clip, params.w = 1 / far clip
    currentDepth = (1 / currentDepth - 1 / _ProjectorParams.y) / (_ProjectorParams.w - 1 / _ProjectorParams.y);


    - Finally, the depth check is performed:
    // Depth check passes if own depth is closer than the z buffer. Check that w is equal to or greater than 0 - anything less is facing away from the projector and will create artefacts.
    float depthPass = step(0, i.projSpace.w) * step(currentDepth, texDepth);
    return clampToProjection * depthPass * tex;



    Am I doing something wrong in the depth check, or is there some other reason why the depth test ends up either too strict or too loose? Thanks!
     
  2. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    This is what it looks like without the depth test: nice, smooth lines.



    I've done some more testing: I changed the fragment shader to convert the depth lookup into eye space instead of converting the projected vertex position into the non-linear 0-1 range, thinking my calculations might've been wrong, but the result is exactly the same.

    Code (csharp):
    fixed4 frag (v2f i) : SV_Target
    {
        float isPerspective = abs(_PaintProjectorP[3][2]);
        float2 projUV = lerp(i.projSpace.xy / 2 + float2(0.5, 0.5), i.projSpace.xy / i.projSpace.w + float2(0.5, 0.5), isPerspective);
        float4 tex = tex2D(_ProjectedTex, projUV) * _Color;

        // Sample the depth texture.
        float texDepth = tex2D(_ProjectorDepthTex, projUV);
        // Clamp the depth texture to the projection.
        float clampToProjection = step(0, projUV.x) * step(projUV.x, 1) * step(0, projUV.y) * step(projUV.y, 1);
        texDepth *= clampToProjection;
        // Flip z direction on DirectX.
        #if defined(UNITY_REVERSED_Z)
        texDepth = 1.0 - texDepth;
        #endif

        // Linearize the sample, converting it into the same space (eye space) as LinearEyeDepth does.
        // https://gist.github.com/hecomi/9580605 https://docs.unity3d.com/Manual/SL-UnityShaderVariables.html
        float eyeSpaceZ = 1.0 / (((1.0 - _ProjectorParams.z / _ProjectorParams.y) / _ProjectorParams.z) * texDepth + ((_ProjectorParams.z / _ProjectorParams.y) / _ProjectorParams.z));

        // Depth check passes if own depth is closer than the z buffer. Check that w is equal to or greater than 0 - anything less is facing away from the projector and will create artefacts.
        float depthPass = step(0, i.projSpace.w) * step(i.projSpace.z, eyeSpaceZ);

        return clampToProjection * depthPass * tex;
    }
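    As a cross-check only (not part of the shader above): when sampling the rendering camera's own depth texture, UnityCG.cginc's LinearEyeDepth does this same linearization via _ZBufferParams, which already accounts for the platform's depth direction, so no manual 1 - z flip is needed there. The manual expansion is used here because the projector has its own near/far planes.

    Code (csharp):
    // Sketch, assuming screen-space UVs in 'uv' (illustrative, not from the shader above).
    // _ZBufferParams (see SL-UnityShaderVariables): x = 1 - far/near, y = far/near, z = x/far, w = y/far
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    float eyeDepth = LinearEyeDepth(rawDepth); // = 1.0 / (_ZBufferParams.z * rawDepth + _ZBufferParams.w)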
     
    Last edited: Nov 22, 2017
  3. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Gonna go ahead and give this a little bump - still stumped on this issue.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    The questions I have are:

    How are you creating the projector depth texture? Is it in projection space, or linear depth, or maybe linear distance? Is it normalized depth? Without knowing how they're generated, I can't know for sure that's not the problem.

    Do the projector UVs and the depth texture use the same space? I'm not talking about whether the projector's projection matrix is the same as what's used to make the depth texture, though that's a good question. I'm asking if the resulting projUVs are flipped from what the depth texture is. There are lots of weird cases where render textures are upside-down from what you might expect, and Unity may silently flip a camera's projection matrix on you, further adding issues. I would render out the depth texture as the projector's output and make sure it looks right.

    Has the projPos.z been properly normalized? By that I mean did you ever do projPos.z / projPos.w in the fragment shader? This goes a bit into how the depth texture was created, but it may be needed as well - otherwise that value won't necessarily match up.
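    For illustration, that normalization is just the perspective divide applied to the interpolated value in the fragment shader (a sketch, using the projSpace name from the posts above):

    Code (csharp):
    // After the divide, normalizedProj.z is in the same non-linear range the depth
    // buffer uses (0..1 on D3D; -1..1 NDC on OpenGL before the viewport remap).
    float3 normalizedProj = i.projSpace.xyz / i.projSpace.w;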
     
  5. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    Thanks for your interest!

    The depth texture is just the regular depth map created by Unity. The projector script is combined with a camera, which renders to the depth buffer of a render texture.

    Code (csharp):
    void Awake()
    {
        //..
        _depthTarget = new RenderTexture(256, 256, 24, RenderTextureFormat.Depth, RenderTextureReadWrite.Default) { anisoLevel = 0 };
        _colorTarget = new RenderTexture(256, 256, 0);

        _camera = GetComponent<Camera>();
        _camera.depthTextureMode = DepthTextureMode.Depth;
        _camera.SetTargetBuffers(_colorTarget.colorBuffer, _depthTarget.depthBuffer);
        _camera.enabled = false;
    }
    The projection matrix of the camera is passed into the shader, which uses it to transform the verts into projection space.

    Code (csharp):
    // Called every frame.
    // Render the camera to update the depth texture.
    _camera.Render();

    // Update the paint material.
    _paintMaterial.SetTexture(PaintDepthTextureName, _depthTarget);
    // Set M, V and P matrices on the shader - the GPU will multiply them faster than the CPU, even though setting more properties also incurs a cost.
    _paintMaterial.SetMatrix(ProjectorMatrixNamePrefix + "M", Target.transform.localToWorldMatrix);
    _paintMaterial.SetMatrix(ProjectorMatrixNamePrefix + "V", _camera.worldToCameraMatrix);
    _paintMaterial.SetMatrix(ProjectorMatrixNamePrefix + "P", _camera.projectionMatrix);
    _paintMaterial.SetColor("_Color", Color);
    _paintMaterial.SetVector(ProjectorParamsName, new Vector4(1, _camera.nearClipPlane, _camera.farClipPlane, 1f / _camera.farClipPlane));
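    For reference, the single _ProjectorMVP matrix from the first post could instead be built in one go on the C# side (a sketch, using the fields from the snippet above):

    Code (csharp):
    // Combine model, view and projection on the CPU and set a single matrix.
    Matrix4x4 projectorMVP = _camera.projectionMatrix * _camera.worldToCameraMatrix * Target.transform.localToWorldMatrix;
    _paintMaterial.SetMatrix("_ProjectorMVP", projectorMVP);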
    Code (csharp):
    v2f vert (appdata v)
    {
        v2f o;
        // Position vertices in clip space using uv coordinates, so the result can be rendered to a render texture and be displayed by a regular shader.
        o.vertex.xy = (v.uv.xy * 2) - float2(1,0);
        // Flip Y on DirectX.
        if (_ProjectionParams.x < 0)
            o.vertex.y = 1 - o.vertex.y;
        o.vertex.z = 0;
        o.vertex.w = 1;

        // Transform the vertex into projector space for painting.
        o.projSpace = mul(mul(_PaintProjectorP, mul(_PaintProjectorV, _PaintProjectorM)), v.vertex);

        return o;
    }
    The full fragment shader is posted above. I'm not normalizing projSpace.z - I believe that transforming the 0-1 depth value into view space puts it in the same space as projSpace? (adding the perspective divide there just results in fully continuous lines again)

    I'm not sure if the depth texture is being filled / accessed correctly though. When I try to display the depth render target, it's black. Probably related to this. However, the depth test does seem to work as displayed in the OP, just not entirely correct.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    The depth test is doing something, that’s for sure.

    Projection space and view space are very different. View / eye space is linear depth. View depth is calculated as mul(MV, v.vertex).z rather than mul(MVP, v.vertex).z.

    Projection space is also what the depth texture is going to be in, as will projSpace.z / projSpace.w be. In fact, the proj.z / proj.w value is explicitly what gets written to the depth buffer.

    So right now you're comparing unnormalized projection space depth with linear eye depth and getting something that works kind of by luck.
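    In shader terms, using the matrix names from the earlier posts, that difference looks something like this (a sketch, not code from the thread; viewDepth is an extra interpolator added only for illustration):

    Code (csharp):
    // Vertex shader: two different "depths" for the same vertex.
    float4 viewPos = mul(mul(_PaintProjectorV, _PaintProjectorM), v.vertex);
    o.viewDepth = -viewPos.z;                     // linear eye/view depth (view space looks down -z, so negate)
    o.projSpace = mul(_PaintProjectorP, viewPos); // clip space; projSpace.z / projSpace.w is what the depth buffer stores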
     
  7. Thomas-Mountainborn

    Thomas-Mountainborn

    Joined:
    Jun 11, 2015
    Posts:
    501
    So in theory, this should be the correct way?

    Code (csharp):
    float2 projUV = i.projSpace.xy / i.projSpace.w + float2(0.5, 0.5);
    float texDepth = tex2D(_ProjectorDepthTex, projUV);
    #if defined(UNITY_REVERSED_Z)
    texDepth = 1.0 - texDepth;
    #endif
    float depthPass = step(0, i.projSpace.w) * step(i.projSpace.z / i.projSpace.w, texDepth);
    However, it's just continuous lines again. It's hard to debug without being able to tell what's in the depth texture. I don't really understand why my attempt in the OP is yielding some kind of expected result - I'm only using the depth buffer of the render target, but I'm reading the color value of it in the shader, which is entirely black since it's not being rendered to.

    Maybe I have to render the depth manually into a color render target using a replacement shader?
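    If it does come to the manual route, a minimal depth-writing shader could look something like the sketch below (illustrative name, not from the thread; it writes linear 0-1 depth into the colour target so it can be inspected and sampled like any other texture, e.g. via Camera.RenderWithShader or SetReplacementShader):

    Code (csharp):
    Shader "Hidden/LinearDepthToColor" // illustrative name
    {
        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float depth : TEXCOORD0;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    // Linear eye depth scaled to 0-1 by the far plane (UnityCG macro).
                    o.depth = COMPUTE_DEPTH_01;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Write the depth into the colour target so it is easy to visualise and sample.
                    return i.depth;
                }
                ENDCG
            }
        }
    }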
     
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    When I last attempted to copy the depth in the manner you have, I never got anything at all to work. But clearly something is in that depth texture. I would try rendering out abs(texDepth), as it's possible the values in it are negative for some reason. Or don't use step, but saturate(abs(texDepth - i.projSpace.z)), and see what you get.
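    In the fragment shader from post #2, those debug outputs would just replace the final return for a moment (temporary lines, not a fix):

    Code (csharp):
    // Option 1: visualise the raw depth sample (abs in case the values come out negative).
    //return abs(texDepth);
    // Option 2: visualise how far apart the two compared depth values actually are.
    return saturate(abs(texDepth - i.projSpace.z));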

    I would also try to have only a few projectors in your scene at a time, so you can see what's actually happening vs. what you have now, where you're trying to figure it out from the amalgamation of several overlapping projections.

    Shaders can be quite frustrating to debug. Unity's Frame Debugger and RenderDoc are the tools I personally use, and others use Visual Studio.

    https://docs.unity3d.com/Manual/FrameDebugger.html
    https://docs.unity3d.com/Manual/RenderDocIntegration.html
    https://docs.unity3d.com/Manual/SL-DebuggingD3D11ShadersWithVS.html


    One last wrinkle I'll throw at you. The matrix in camera.projectionMatrix is not the matrix the camera uses when rendering. You need to pass that matrix through another function if you want it to match up exactly.
    https://docs.unity3d.com/ScriptReference/GL.GetGPUProjectionMatrix.html
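    Applied to the C# from post #5, that would be something along these lines (a sketch; the second argument is true because the projector camera renders into a render texture):

    Code (csharp):
    // Convert to the projection matrix the GPU actually uses before sending it to the shader.
    Matrix4x4 gpuProjection = GL.GetGPUProjectionMatrix(_camera.projectionMatrix, true);
    _paintMaterial.SetMatrix(ProjectorMatrixNamePrefix + "P", gpuProjection);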