
Depth Pass is offset in Game Camera but not Scene Camera

Discussion in 'Shaders' started by Max_Chaos, Jul 24, 2020.

  1. Max_Chaos

    Joined:
    Dec 18, 2013
    Posts:
    12
    Hi All

    I am writing a volume shader and need help getting the correct Z-depth from the screen position, so that when the ray marching through the volume reaches another object at the same Z-depth, it stops marching and returns.

    The shader is attached to the game object (a cube) and is ray marching a torus at the moment.

    Z-depth mapping works great in the scene view, but in the game view it is offset in the y direction for some reason.

    As you can tell, I have tried everything, and this code works the best so far.

    Code (CSharp):
    float _depth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition)).r);
    Any help would be much appreciated. Even if I just have to manually offset the depth texture somehow, that will do.

    Code (CSharp):
    void frag(v2f i, out fixed4 color : SV_TARGET, out float depth : SV_Depth) //fixed4 frag (v2f i) : SV_Target
    {
        //float nonLinearDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        //float depth = LinearEyeDepth(nonLinearDepth);
        float viewLength = length(i.viewVector);
        float3 rayDir = i.viewVector / viewLength;

        float2 textureCoordinate = i.screenPosition.xy / i.screenPosition.w;
        float aspect = _ScreenParams.x / _ScreenParams.y;
        textureCoordinate.x = textureCoordinate.x * aspect;
        textureCoordinate = TRANSFORM_TEX(textureCoordinate, _MainTex);

        // Depth and cloud container intersection info:
        //float nonlin_depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.screenPosition);
        //textureCoordinate = TRANSFORM_TEX(textureCoordinate, _CameraDepthTexture);
        //float _depth = LinearEyeDepth(nonlin_depth) * viewLength;
        float _depth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition + depthOffset)).r);
        //float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition / i.screenPosition.w));
        //textureCoordinate = TRANSFORM_TEX(textureCoordinate, _CameraDepthTexture);
        //float corrawz = tex2D(_CameraDepthTexture, textureCoordinate);
        //float _depth = LinearEyeDepth(rawZ);
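
    For clarity, the comparison I'm trying to get right is roughly this (just a sketch of the idea, not my actual shader; sceneDepth, boundsExit and stepSize are placeholder names):

    Code (CSharp):
    // Sketch only: stop the march once the ray has travelled as far as the
    // depth buffer says the nearest opaque surface is.
    float sceneDepth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition)).r);
    // Note: depending on how the ray direction is set up, this may need to be
    // scaled (e.g. by the view vector length) so it measures distance along the
    // ray rather than along the camera's forward axis.

    float distanceMarched = 0;
    while (distanceMarched < boundsExit)      // boundsExit = far side of the volume
    {
        if (distanceMarched > sceneDepth)     // another surface is in front of this sample
            break;                            // stop marching and return what we have

        // ... sample / accumulate the volume here ...
        distanceMarched += stepSize;
    }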
    Depth01.PNG Depth02.PNG
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    How are you calculating the screen position?
     
  3. Max_Chaos

    Joined:
    Dec 18, 2013
    Posts:
    12
    Code (CSharp):
    v2f object;
    object.vertex = UnityObjectToClipPos(v.vertex);

    //Mesh UV's
    object.uv = TRANSFORM_TEX(v.uv, _MainTex);

    object.rayOrigin = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1));
    object.hitPos = v.vertex;

    //Tri Planar UV's????
    float3 viewVector = mul(unity_CameraInvProjection, float4(v.uv * 2 - 1, 0, -1));
    object.viewVector = mul(unity_CameraToWorld, float4(viewVector, 0));

    //Screen Space UV's
    object.screenPosition = ComputeScreenPos(object.vertex);
    return object;
     
  4. Max_Chaos

    Joined:
    Dec 18, 2013
    Posts:
    12
    This code gave me the same result: it works fine in the scene view, but has the same y offset in the game camera.

    Code (CSharp):
    float2 textureCoordinate = i.screenPosition.xy / i.screenPosition.w;
    float corrawz = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, textureCoordinate);
    float _depth = LinearEyeDepth(corrawz);
    This worked perfectly for mapping color to the object, but I can't get it to do the same for depth.

    Code (CSharp):
    float2 textureCoordinate = i.screenPosition.xy / i.screenPosition.w;
    float aspect = _ScreenParams.x / _ScreenParams.y;
    textureCoordinate.x = textureCoordinate.x * aspect;
    textureCoordinate = TRANSFORM_TEX(textureCoordinate, _MainTex);

    fixed4 col = tex2D(_MainTex, textureCoordinate);
     
  5. Max_Chaos

    Joined:
    Dec 18, 2013
    Posts:
    12
    Here is the ray marching controller and shader. Just add the shader (via a material) and the controller to a normal cube in a new scene with a light. Add another cube with the default material and a floor, and see if it works for you.

    I'm using 2019.4.3f
     

    Attached Files:

  6. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    So it matters significantly whether you’re planning on doing this via a Blit or as an object rendered in the world. If you’re using a Blit, you have to reconstruct the view direction from the inverse projection matrix and sample the depth texture using the vertex UVs. If you’re doing this as a mesh rendered in the world, then you’ll want to get the view vector from the actual vertex positions and sample the depth texture with the screen space position you’re already using.

    As for why it’s offset / upside down: try outputting the raw depth, sampled with the screen position, as the shader’s color output, and I suspect you’ll find it’s correct ... again, as long as you’re doing this on an object rendered in the world, and not as a Blit.

    If you look at the built in shaders that use the depth texture, like the particle shaders, you’ll see they’re not doing anything more.
    https://github.com/TwoTailsGames/Un...ultResourcesExtra/Particle Alpha Blend.shader
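
    Paraphrasing that file from memory (so treat this as a sketch and check the link for the exact code), the soft particle part boils down to roughly:

    Code (CSharp):
    // vertex shader
    o.projPos = ComputeScreenPos(o.vertex);
    COMPUTE_EYEDEPTH(o.projPos.z);

    // fragment shader
    float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)));
    float partZ = i.projPos.z;
    i.color.a *= saturate(_InvFade * (sceneZ - partZ)); // fade out as the particle nears the scene depth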

    My guess is the real problem is with your view vector being calculated incorrectly. If you use the camera projection matrix (unity_CameraProjection), that may indeed be flipped vs the actual projection matrix used for rendering (UNITY_MATRIX_P). And when rendering an object in the scene, there’s no reason to use the inverse camera projection matrix to calculate the view vector, since the object exists in 3D space. In fact there’s even a built in function for getting the object space view dir!
    o.viewVector = ObjSpaceViewDir(v.vertex);

    Also, random aside, the “o” in a vertex shader is usually short for “output”, not “object”.
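
    Put together, for an object rendered in the world that would look something like this (an untested sketch; the v2f fields screenPosition, viewVector and rayOrigin are assumed names, not taken from your shader):

    Code (CSharp):
    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        // screen space position for sampling the camera depth texture
        o.screenPosition = ComputeScreenPos(o.vertex);
        // object space view direction (points from the vertex towards the camera)
        o.viewVector = ObjSpaceViewDir(v.vertex);
        // camera position in object space, used as the ray origin
        o.rayOrigin = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1)).xyz;
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // perspective-correct depth texture sample
        float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition)));

        // march direction: flip ObjSpaceViewDir so it points away from the camera
        float3 rayDir = normalize(-i.viewVector);

        // ... ray march from i.rayOrigin along rayDir, stopping once the marched
        // distance (converted to eye depth) exceeds sceneDepth ...
        float d = frac(sceneDepth); // debug: visualize the sampled depth
        return fixed4(d, d, d, 1);
    }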
     
  7. Max_Chaos

    Joined:
    Dec 18, 2013
    Posts:
    12
    Thanks bgolus. I am using the shader on an object, so this is very helpful. I managed to get it working, and also to pass a 3D texture to the object that scales and rotates with it.

    float4 sampledColor = ControlTexture.SampleLevel(samplerControlTexture, rayPos * scale + texture_offset, 0);
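
    In case it helps anyone later, the declarations that go with that line are roughly the following (ControlTexture, scale and texture_offset are my own names; rayPos is the object space ray position from the march):

    Code (CSharp):
    // Because rayPos is in object space, sampling the 3D texture with it means the
    // texture scales and rotates with the object automatically.
    Texture3D<float4> ControlTexture;
    SamplerState samplerControlTexture; // Unity binds this sampler to ControlTexture by naming convention
    float scale;
    float3 texture_offset;

    float4 SampleVolume(float3 rayPos)
    {
        return ControlTexture.SampleLevel(samplerControlTexture, rayPos * scale + texture_offset, 0);
    }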

    DepthFixed.PNG
     
  8. Tomyn

    Joined:
    Jan 19, 2018
    Posts:
    5
    Apologies for digging up an old topic. I've run into a similar issue: my depth texture looks fine in the editor, but once the project is built it appears offset in the y direction on some devices. After some debugging I found that when I remove the use of the depth texture from the effect, everything renders properly on all devices.

    My camera captures depth with a replacement shader into a target texture, which is then assigned as a global render texture:
    Code (CSharp):
    using UnityEngine;

    [ExecuteInEditMode]
    public class DepthPostProcessing : MonoBehaviour
    {
        public Shader cameraDepthPass;
        [SerializeField] private Material postProcessMaterial;
        [SerializeField] private int downResFactor = 1;

        private string _globalTextureName = "_DepthRenderTex";
        private Camera cam;

        private void OnEnable()
        {
            if (cameraDepthPass != null)
            {
                GetComponent<Camera>().SetReplacementShader(cameraDepthPass, "");
            }

            GenerateRT();
        }

        void OnDisable()
        {
            GetComponent<Camera>().ResetReplacementShader();
        }

        void GenerateRT()
        {
            cam = GetComponent<Camera>();

            if (cam.targetTexture != null)
            {
                RenderTexture temp = cam.targetTexture;

                cam.targetTexture = null;
                DestroyImmediate(temp, true);
            }

            cam.targetTexture = new RenderTexture(cam.pixelWidth >> downResFactor, cam.pixelHeight >> downResFactor, 16);
            cam.targetTexture.filterMode = FilterMode.Bilinear;

            Shader.SetGlobalTexture(_globalTextureName, cam.targetTexture);
        }
    }
    This is the replacement shader:
    Code (CSharp):
    Shader "CameraDepthPass"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags
            {
                "RenderType"="Opaque"
            }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float depth : DEPTH;
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.depth = -mul(UNITY_MATRIX_MV, v.vertex).z * _ProjectionParams.w;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    float invert = 1 - i.depth;

                    return fixed4(invert.rrr, 1);
                }
                ENDCG
            }
        }
    }
    Then I access it from another shader and sample the texture to apply a distortion to the outer edge of the mesh captured within the render texture:
    Code (CSharp):
    Shader "Depth Edge Distortion Shader"
    {
        Properties
        {
            [NoScaleOffset] _MainTex ("Mask Texture", 2D) = "white" {}
            _Sensitivity("Sensitivity", Vector) = (1,1,1,1)
            _EdgeTex("Edge Texture", 2D) = "white" {}
            _EdgeTilingOffset("Edge Tiling Offset", Vector) = (1,1,0,0)
            _FoldAmount("Fold Amount", Range(0,1)) = 1
            _DynamicDistanceAmount("Dynamic Distance Amount", Range(0,1)) = 0
        }
        SubShader
        {
            Tags
            {
                "RenderType"="Transparent"
                "Queue"="Transparent"
            }

            Cull Off
            Lighting Off
            ZWrite Off
            ZTest Always

            Blend SrcAlpha OneMinusSrcAlpha

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma shader_feature DEPTH

                #include "UnityCG.cginc"
                #include "HLSLSupport.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv[5] : TEXCOORD0;
                    float2 screenPos : TEXCOORD5;
                    float2 edgeScreenUV : TEXCOORD6;
                    float3 viewVector : TEXCOORD7;
                    float4 vertex : SV_POSITION;
                };

                uniform sampler2D _DepthRenderTex;
                float4 _DepthRenderTex_TexelSize;

                uniform sampler2D _ColorRenderTex;
                sampler2D _EdgeTex;

                half4 _Sensitivity;
                float4 _EdgeTilingOffset;
                float _FoldAmount;
                float _DynamicDistanceAmount;

                v2f vert(appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);

                    half2 uv = v.uv;
                    o.uv[0] = uv;

                    #if UNITY_UV_STARTS_AT_TOP
                    if (_DepthRenderTex_TexelSize.y < 0)
                        uv.y = 1 - uv.y;
                    #endif

                    // used for detecting the amount of the edge we want to distort
                    float dynamicDistance = lerp(0.5, 1.5, _DynamicDistanceAmount);

                    o.uv[1] = uv + _DepthRenderTex_TexelSize.xy * half2(1, 1) * dynamicDistance;
                    o.uv[2] = uv + _DepthRenderTex_TexelSize.xy * half2(-1, -1) * dynamicDistance;
                    o.uv[3] = uv + _DepthRenderTex_TexelSize.xy * half2(-1, 1) * dynamicDistance;
                    o.uv[4] = uv + _DepthRenderTex_TexelSize.xy * half2(1, -1) * dynamicDistance;

                    o.screenPos = ((o.vertex.xy / o.vertex.w) + 1) * 0.5;

                    // tiling for the edge distortion texture
                    o.edgeScreenUV = o.screenPos * _EdgeTilingOffset.xy + float2(_EdgeTilingOffset.zw * 1.3);

                    return o;
                }

                half CheckSame(half4 center, half4 sample, float2 screenUVs, sampler2D edgeTex)
                {
                    half edgeTexture = tex2D(edgeTex, screenUVs).r;

                    half2 centerNormal = center.xy;
                    float centerDepth = DecodeFloatRG(center.zw);
                    half2 sampleNormal = sample.xy;
                    float sampleDepth = DecodeFloatRG(sample.zw);

                    // difference in normals
                    // do not bother decoding normals - there's no need here
                    half2 diffNormal = abs(centerNormal - sampleNormal) * _Sensitivity.x;
                    int isSameNormal = (diffNormal.x + diffNormal.y) < 0.1;
                    // difference in depth
                    float diffDepth = abs(centerDepth - sampleDepth) * _Sensitivity.y;
                    // scale the required threshold by the distance
                    int isSameDepth = diffDepth < 0.1 * centerDepth;

                    // return:
                    // 1 - if normals and depth are similar enough
                    // 0 - otherwise
                    float result = isSameNormal * isSameDepth ? 1.0 : 0.0;
                    return smoothstep(result, 1, edgeTexture);
                }

                half4 frag(v2f i) : SV_Target
                {
                    half4 colorRT = tex2D(_ColorRenderTex, i.screenPos);

                    half4 sample1 = tex2D(_DepthRenderTex, i.uv[1]);
                    sample1 = lerp(sample1, ceil(sample1), _FoldAmount);
                    half4 sample2 = tex2D(_DepthRenderTex, i.uv[2]);
                    sample2 = lerp(sample2, ceil(sample2), _FoldAmount);
                    half4 sample3 = tex2D(_DepthRenderTex, i.uv[3]);
                    sample3 = lerp(sample3, ceil(sample3), _FoldAmount);
                    half4 sample4 = tex2D(_DepthRenderTex, i.uv[4]);
                    sample4 = lerp(sample4, ceil(sample4), _FoldAmount);

                    //edge detection
                    half depthEdge = 1.0;

                    depthEdge *= CheckSame(sample1, sample2, i.edgeScreenUV, _EdgeTex);
                    depthEdge *= CheckSame(sample3, sample4, i.edgeScreenUV, _EdgeTex);

                    colorRT.a -= depthEdge;

                    return float4(colorRT);
                }
                ENDCG
            }
        }
    }
    After reading your suggestion, I figure my problem has something to do with not blitting to the screen but instead capturing the depth of objects in the scene incorrectly. Admittedly, I don't fully understand how I would use ObjSpaceViewDir in this setup, or whether that is even the appropriate solution in this case. Could I get some help understanding where I went wrong? Any advice would be greatly appreciated.