
Resolved How to make a 3D GPU View Cone in HDRP?

Discussion in 'High Definition Render Pipeline' started by John_Leorid, May 1, 2021.

  1. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
    To show you what I mean:



    https://github.com/EntroPi-Games/GPU-Line-of-Sight

The GitHub page leads to a version that is only compatible with the standard (built-in) render pipeline, and it would be quite cool if the same result could be achieved in HDRP.

The logic behind this is probably quite straightforward: render the scene from the perspective of the enemy,
then have a post-processing effect which checks the depth of each point on screen and overlaps it with the depth texture of the enemy.

So for each pixel on the player camera's screen ->
get depth -> get the world position via camera.localToWorld -> transform the point into the enemy's local space (camera.worldToLocal) -> get the enemy camera's depth at that screen position -> compare the two points; if the player-cam point is further away than the enemy-cam point, do not render it.
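For illustration, here is roughly that test on the CPU side for a single point - a hedged sketch, assuming you have a readable copy of the enemy camera's depth as linear 0-1 values (all names here are made up for the sketch):

Code (CSharp):
// Sketch: is 'worldPos' visible to 'enemyCam'?
// 'enemyLinearDepth' is assumed to hold linear depth (0 = near plane, 1 = far plane).
bool EnemyCanSee(Camera enemyCam, Vector3 worldPos, float[,] enemyLinearDepth)
{
    // x,y are 0-1 viewport coordinates, z is the view-space distance in world units
    Vector3 vp = enemyCam.WorldToViewportPoint(worldPos);
    if (vp.z < 0f || vp.x < 0f || vp.x > 1f || vp.y < 0f || vp.y > 1f)
        return false; // behind the enemy or outside its frustum

    int x = Mathf.Clamp((int)(vp.x * enemyLinearDepth.GetLength(0)), 0, enemyLinearDepth.GetLength(0) - 1);
    int y = Mathf.Clamp((int)(vp.y * enemyLinearDepth.GetLength(1)), 0, enemyLinearDepth.GetLength(1) - 1);
    float seenDistance = Mathf.Lerp(enemyCam.nearClipPlane, enemyCam.farClipPlane, enemyLinearDepth[x, y]);

    // if the surface the enemy sees at that pixel is closer than our point, the view is blocked
    return vp.z <= seenDistance + 0.05f; // small bias against self-occlusion
}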

BUT, I have no idea how I can get just the depth texture from a camera. Rendering everything seems very performance-heavy, even when disabling almost everything in the camera's "Custom Frame Settings" and adjusting the clipping planes drastically (40m = enemy view distance = far clipping plane).

I tried adjusting a spotlight heavily, but it didn't work out. There is always some kind of falloff, and walls are lit too; I only want "light" (color) on floors.

Any idea how I could pull off some kind of custom shadow mapping to achieve the effect?

Performance is key here - and I have no idea how I could achieve good results on the CPU, since the world is 3D with steps, ramps and so on, so I can't just raycast on one plane and then generate a mesh.

Edit:
A Custom Pass allows rendering only the depth, as can be seen in the CustomPassExamples DepthCapture sample, but in terms of performance this still seems pretty horrible..
     
    Last edited: May 2, 2021
  2. Olmi

    Olmi

    Joined:
    Nov 29, 2012
    Posts:
    1,553
Maybe you could approach this as a 2D problem, if you only care about the floor? You could generate a 2D mask from the shapes that make up the obstacles (pillars etc.) and then use that mask to cast shadows. Perform the shadow rendering in a compute shader on the GPU so that you end up with a shadow texture, which you could then somehow map back to your game world. With this kind of texture approach you could just project the shadows back down vertically without caring about elevation. Of course this too can create all sorts of artifacts. I know this is a somewhat high-level, sketch-like idea, but I did some tests a few years ago when I tried to make a 2D lighting system without using any geometry, only textures.
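The C# dispatch side of such a setup could look roughly like this - just a sketch; the compute shader itself (the "CSMain" kernel) and all the property names here are hypothetical:

Code (CSharp):
using UnityEngine;

public class ShadowMaskRenderer : MonoBehaviour
{
    public ComputeShader shadowShader;  // hypothetical shadow-casting compute shader
    public Texture2D obstacleMask;      // the 2D obstacle mask mentioned above
    RenderTexture _shadowMask;

    void Start()
    {
        // target texture the compute shader writes the shadows into
        _shadowMask = new RenderTexture(512, 512, 0) { enableRandomWrite = true };
        _shadowMask.Create();
    }

    public void Render(Vector4[] lightPositions)
    {
        int kernel = shadowShader.FindKernel("CSMain");
        shadowShader.SetTexture(kernel, "_ObstacleMask", obstacleMask);
        shadowShader.SetTexture(kernel, "_ShadowMask", _shadowMask);
        shadowShader.SetVectorArray("_Lights", lightPositions);
        shadowShader.SetInt("_LightCount", lightPositions.Length);
        // one thread per pixel, 8x8 thread groups assumed in the kernel
        shadowShader.Dispatch(kernel, _shadowMask.width / 8, _shadowMask.height / 8, 1);
    }
}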

Here's one screenshot of my old test. It renders 1-16 light sources with a compute shader and generates the texture shadows from a mask it is given (the "environment" you see in my screenshot). In this shot there are just two lights, the green and blue ones.

    20210501_shadows.png
     
  3. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
If my game had a flat floor, this would most likely work.
With a flat floor I could just do raycasting and mesh generation using the Job System, only recalculating when needed - that would probably satisfy my performance requirements.
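(For reference, that flat-floor fallback is basically a fan of raycasts turned into a mesh - a rough sketch, with illustrative names:)

Code (CSharp):
// Sketch: cast a fan of rays on one plane and collect the hit points,
// which can then be triangulated into a flat view-cone mesh.
Vector3[] BuildFanPoints(Vector3 eye, Vector3 forward, float viewAngle, float range, int rayCount)
{
    var points = new Vector3[rayCount + 1];
    points[0] = eye; // apex of the cone
    for (int i = 0; i < rayCount; i++)
    {
        float angle = Mathf.Lerp(-viewAngle * 0.5f, viewAngle * 0.5f, i / (rayCount - 1f));
        Vector3 dir = Quaternion.Euler(0f, angle, 0f) * forward;
        points[i + 1] = Physics.Raycast(eye, dir, out RaycastHit hit, range)
            ? hit.point          // blocked: cone ends at the obstacle
            : eye + dir * range; // free: cone ends at max view distance
    }
    return points;
}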

Unfortunately that's not the case; we have many different heights, connected by stairs, ramps and elevators.
And the player should know when there is a blind spot right below an enemy that he can use to hide.
     
  4. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    265
I'd be interested to know a bit more about this. The depth capture example only renders the objects in your scene from another point of view into the depth buffer and nothing more, so you should get exactly the same performance as the depth prepass of HDRP, or as adding a shadow-casting spot light to your scene.
     
  5. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
I downloaded the newest version of the same examples just now, and the frame drop is only 10fps (not tested in a build), which seems fine.
The old one had a frame drop of ~35fps: without the custom pass and the two special cameras, just adding a normal camera to the given example scene, I had ~70fps - switching back, it dropped to ~30fps.

    Guess I'll just try it, but this seems quite above my skill level.

    That's what I got so far:
    upload_2021-5-4_15-34-34.png

Hurdle Nr. 1: accessing scene normals from within the decal shader (or any shader).
Something like the SceneDepth node, just for normals - a "SceneNormals" node - doesn't exist. I know the camera renders a normal texture (which is needed for AO and contact shadows), but I have absolutely no idea how I could get it from within a Shader Graph.
Use case: only drawing on the floor, not on walls (the pillar).

Hurdle Nr. 2: the "custom shadow mapping".
Transforming the depth texture values into something that can be compared to screen depth - which is what brought me here in the first place.
I have the texture, rendered from the view of the enemy, and it is linked to the shader. I know I probably have to do something with the transform matrix of the enemy camera, which I can pass to the shader via code. But I have no idea how to convert a pixel in the depth texture to a 3D point / Vector3.
And how to go about it so I get the correct point - because I am viewing the object (the view cone mesh, just a decal projector cube) from the position of the player.
So for each pixel the player sees, I have to do some transforming to check whether the enemy can see the same point in space (Vector3).
Use case: not drawing the view cone where the enemy can't see (because his view is blocked by an object)
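For what it's worth, the pixel-to-Vector3 direction can be sketched on the CPU like this, under the (big) assumption that the stored depth is linear between the near and far planes - real hardware depth buffers are non-linear:

Code (CSharp):
// Sketch: convert a depth-texture pixel back to a world-space point.
// u,v are 0-1 coordinates in the enemy's depth texture, depth01 the stored value.
Vector3 DepthPixelToWorld(Camera enemyCam, float u, float v, float depth01)
{
    // ViewportToWorldPoint expects x,y in 0-1 and z as the distance from the camera
    float dist = Mathf.Lerp(enemyCam.nearClipPlane, enemyCam.farClipPlane, depth01);
    return enemyCam.ViewportToWorldPoint(new Vector3(u, v, dist));
}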
     
  6. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
OK, after I had no luck getting the normal vector, I took on "Hurdle Nr. 2" first:
for each pixel, getting the world position, then using the VP (view-projection) matrix to get screen coordinates and depth.

In this picture I am looking through the eyes of the enemy, setting a gradient for X values (Y looks the same, just vertical).
    upload_2021-5-5_14-3-28.png

Green should be a very small stripe at the very left of the screen, and purple likewise on the right side of the screen.
Turns out the projection matrix is not doing anything - I can replace it with the identity matrix and the result is exactly the same.

    Here is my code for setting the matrix:


Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[ExecuteAlways]
public class SetMatrix : MonoBehaviour
{
    [SerializeField] DecalProjector _renderer;
    [SerializeField] Camera _cam;

    // will be set to true when disabled in the inspector
    // used to see if the code is executing
    [SerializeField] bool _checkActive;

    // expose matrices in the inspector
    [SerializeField] Matrix4x4 _currentMatrix;

    [SerializeField] Matrix4x4 V;
    [SerializeField] Matrix4x4 P;
    [SerializeField] Matrix4x4 VP;

    void Update()
    {
        if (!_renderer || !_cam) return;

        V = _cam.worldToCameraMatrix;
        P = _cam.projectionMatrix;
        P = GL.GetGPUProjectionMatrix(P, false);
        //Matrix4x4 P = Matrix4x4.identity;
        VP = P * V; // view-projection matrix

        _currentMatrix = VP;

        _renderer.material.SetMatrix("_EnemyCamViewMatrix", V);
        _renderer.material.SetMatrix("_EnemyCamProjectionMatrix", P);

        _checkActive = true;
    }
}
    And here is the ShaderGraph so far:
     


  7. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
    Further explanation:

As I couldn't really debug what's happening in the shader code, I wrote the entire thing, with all the matrix multiplications, in C# - and got it working within a few hours, including some debug drawing.

    upload_2021-5-5_14-10-52.png

Now all I want to do is the exact same thing in shader code, but MultiplyPoint() does not exist there - and its documentation says "MultiplyPoint is slower, but can handle projective transformations as well."

Seems like Matrix4x4 * Vector4 does not return the same result when dealing with perspective matrices - so in shader code, it looks like the perspective part of the matrix multiplication does not work.

    Any help would be highly appreciated.

    (also on the scene normal topic)
     
  8. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
    Progress.
    upload_2021-5-6_0-57-4.png

After 2 days of fiddling with matrices, I finally found the solution.
upload_2021-5-6_0-58-3.png
Dividing by "w".
It seems that if you multiply by the VP matrix (from MVP - the model matrix is not needed, as the position is already in world space), you have to divide the result by "w" (in Shader Graph: "A").
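Spelled out in C#, the difference looks like this (VP as in the SetMatrix script above; worldPos is any world-space position):

Code (CSharp):
// The raw multiply, as a shader does it - needs the perspective divide:
Vector4 clip = VP * new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);
Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w; // the "divide by w"

// MultiplyPoint does the same divide internally:
Vector3 ndc2 = VP.MultiplyPoint(worldPos); // identical result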

    Now the only thing left is excluding the vertical faces - so I need world normals for each screen position.

    And then checking the performance when 5-10 of these vision cones are visible at the same time.
     
    Olmi and mgear like this.
  9. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
    Got it
    upload_2021-5-6_15-17-18.png

Just using the Angle Fade on decals worked for excluding vertical faces.

Performance seems fine, as I only recalculate the depth texture when the enemy moves.

    And to make life easier for everyone, I decided to put the whole thing on Github, free to use for everyone.

    Enjoy:

    https://github.com/leorid/Unity-HDRP-GPU-View-Cone
     
  10. alex22121991

    alex22121991

    Joined:
    Jul 31, 2017
    Posts:
    1
    Very nice package. Helped me a lot and saved me a lot of time.

    Do you guys have any idea how to interact with a cone like this? Like Mouse over, Click, Drag events..
     
  11. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
If your colliders are set up correctly, you can just use raycasts (see the sketch after this list).

1) The first raycast is just ScreenPointToRay() to get the world point under the mouse.

2) Then search through all enemies and find those who could theoretically see the point: within the view radius, and
Vector3.Angle(transform.forward, floorHitPoint - eyeTransform.position) < viewAngle

3) Then, for all those enemies, cast a ray from the hit point towards their eyes, or vice versa.
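Sketched out ('Enemy' with EyeTransform/ViewAngle/ViewDistance, and 'obstacleMask', are placeholders for your own data):

Code (CSharp):
void CheckMouseOverCones(Camera cam, Enemy[] enemies, LayerMask obstacleMask)
{
    if (!Physics.Raycast(cam.ScreenPointToRay(Input.mousePosition), out RaycastHit hit))
        return; // 1) nothing under the mouse

    foreach (Enemy enemy in enemies)
    {
        Vector3 toPoint = hit.point - enemy.EyeTransform.position;
        if (toPoint.magnitude > enemy.ViewDistance) continue;                     // 2) radius
        if (Vector3.Angle(enemy.EyeTransform.forward, toPoint) > enemy.ViewAngle) // 2) angle
            continue;

        // 3) line-of-sight check from the hit point towards the eyes
        if (!Physics.Linecast(hit.point, enemy.EyeTransform.position, obstacleMask))
        {
            // the point under the mouse is inside this enemy's view cone
        }
    }
}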

If your colliders are not set up correctly, you still do everything up to point 2). But then - I don't know if that's possible - you could recreate the inverse projection matrix from the shader (without the division, as Unity handles that for you, so just Matrix4x4.MultiplyPoint, if I remember correctly) and then check the pixel on the depth texture; the red channel should tell you how far away the enemy can see there, as a 0-1 value, where 1 is the camera near plane and 0 is the camera far plane, I think.
     
    Last edited: Dec 3, 2022
    alex22121991 likes this.
  12. nicoSdal

    nicoSdal

    Joined:
    Sep 16, 2022
    Posts:
    3
That's quite an interesting topic.
Awesome work, guys :)

I'm also playing around with the above-mentioned package, and now I am trying to draw an outline around the view cones. But I couldn't work it out so far.

My approaches:
1. I tried to draw a second cone (slightly bigger than the original) and put it "under" the original cone, so only the outline of the bigger cone is visible, which looks like the outline of the original cone.
   -> The problems with this solution: I can't scale the whole cone up congruently, and the resulting cone starts from the same point as the original.
2. I tried to implement an edge-detection algorithm (like a Sobel filter): I tried to "print" the resulting image of the LOS mask to a render texture and then calculate the edges of the triangle on the render texture with the Sobel filter.
3. I thought about using some edge-detection math to calculate the points on the edges based on their vector directions, but I don't know exactly which variables of the LOS mask shader to use for this calculation.

Has anyone here tried something similar and made it work?
     
  13. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
Do you want an outline around objects inside the cone, or just a border around the cone?
The border should be quite simple: just add the math for it to the view-cone shader (and a color field, of course).

To draw an outline around objects, you should probably do this from C# and not on the GPU. Someone above asked how to detect objects inside the view cone - just use that, then apply the outline.

Or do you need something different? Then please elaborate (a picture would help a lot in this case).
     
    nicoSdal likes this.
  14. nicoSdal

    nicoSdal

    Joined:
    Sep 16, 2022
    Posts:
    3
Thank you for your reply :)


upload_2023-1-23_9-53-32.jpeg

This would be the desired outcome, so I "just" need a border around the cone.
But here is where I'm having trouble: I don't understand the mathematical operations needed to calculate the outline in this case.


Code (CSharp):
Shader "Hidden/Line Of Sight Mask"
{
    CGINCLUDE

        #include "/LOSInclude.cginc"

        // Samplers
        uniform sampler2D _SourceDepthTex;
        uniform sampler2D _CameraDepthNormalsTexture;

        // For fast world space reconstruction
        uniform float4x4 _FrustumRays;
        uniform float4x4 _FrustumOrigins;
        uniform float4x4 _SourceWorldProj;
        uniform float4x4 _WorldToCameraMatrix;

        uniform float4 _SourceInfo; // xyz = source position, w = source far plane
        uniform float4 _ColorMask;
        uniform float4 _Settings; // x = distance fade, y = edge fade, z = min variance, w = backface fade
        uniform float4 _Flags; // x = clamp out of bound pixels, y = include / exclude out of bound pixels, z = invert mask, w = exclude backfaces
        uniform float4 _MainTex_TexelSize;

        v2f_img_ray Vert( appdata_img v )
        {
            v2f_img_ray o;
            int index = v.vertex.z;
            v.vertex.z = 0.0f;

            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            o.uv = v.texcoord.xy;

    #if UNITY_UV_STARTS_AT_TOP
            if (_MainTex_TexelSize.y < 0)
                o.uv.y = 1-o.uv.y;
    #endif

            o.interpolatedRay = _FrustumRays[index];
            o.interpolatedRay.w = index;

            o.interpolatedOrigin = _FrustumOrigins[index];
            o.interpolatedOrigin.w = index;

            return o;
        }

        float CalculateBackfaceFade(float4 pixelWorldPos, float3 pixelViewNormals)
        {
            float3 directionWorld = normalize(pixelWorldPos - _SourceInfo.xyz);
            float3 directionView = mul((float3x3)_WorldToCameraMatrix, directionWorld);

            float backfaceFade = dot(directionView, pixelViewNormals);
            backfaceFade = smoothstep(0, -_Settings.w, backfaceFade);

            return backfaceFade;
        }

        float CalculateVisibility(float4 pixelWorldPos, float3 pixelViewNormals)
        {
            // Calculate distance to source in range [0 - far plane]
            float sourceDistance = distance(pixelWorldPos.xyz, _SourceInfo.xyz);

            // Convert world space to LOS cam depth texture UVs
            float4 sourcePos = mul(_SourceWorldProj, pixelWorldPos);
            float3 sourceNDC = sourcePos.xyz / sourcePos.w;

            // Clip pixels outside of source
            clip(max(min(sourcePos.w, 1 - abs(sourceNDC.x)), _Flags.z - 0.5));

            // Convert from NDC to UV
            float2 sourceUV = sourceNDC.xy;
            sourceUV *= 0.5f;
            sourceUV += 0.5f;

            // VSM
            float2 moments = tex2D(_SourceDepthTex, sourceUV).rg;
            float visible = ChebyshevUpperBound(moments, sourceDistance, _Settings.z);

            // Backface fade
            float backfaceFade = CalculateBackfaceFade(pixelWorldPos, pixelViewNormals);
            visible *= lerp(1, backfaceFade, _Flags.w);

            // Handle vertical out of bound pixels
            visible += _Flags.x * _Flags.y * (1 - step(abs(sourceNDC.y), 1.0));
            visible = saturate(visible);

            // Ignore pixels behind source
            visible *= step(-sourcePos.w, 0);

            // Calculate fading
            float edgeFade = CalculateFade(abs(sourceNDC.x), _Settings.y);
            float distanceFade = CalculateFade(sourceDistance / _SourceInfo.w, _Settings.x);

            // Apply fading
            visible *= distanceFade;
            visible *= edgeFade;

            return visible;
        }

        float4 GenerateMask(float visible)
        {
            // Invert visibility if needed
            if (_Flags.z > 0.0)
            {
                visible = 1 - visible;
            }

            // Apply mask color
            float4 mainColor = visible * _ColorMask;

            return mainColor;
        }

        half4 Frag (v2f_img_ray i) : COLOR
        {
            float4 normalDepth = SampleAndDecodeDepthNormal(_CameraDepthNormalsTexture, i.uv);
            float4 positionWorld = DepthToWorldPosition(normalDepth.w, i.interpolatedRay, i.interpolatedOrigin);
            float visible = CalculateVisibility(positionWorld, normalDepth.xyz);

            return GenerateMask(visible);
        }

    ENDCG

    SubShader
    {
        Pass
        {
            ZTest Always
            ZWrite Off
            Cull Off
            Blend One One

            Fog { Mode off }

            CGPROGRAM

            #pragma vertex Vert
            #pragma fragment Frag
            #pragma fragmentoption ARB_precision_hint_nicest
            #pragma exclude_renderers flash
            #pragma target 3.0

            ENDCG
        }
    }

    Fallback off
}
This is how the cone is drawn in the mentioned project.
Would it be possible to take the resulting value of GenerateMask(visible) and calculate the outline from there?
If yes, what would be the best approach?

Sorry if these questions seem dumb, but I am really struggling with this.
     
  15. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
Well, that's a lot of shader code xD

You are generating the cone based on the camera; in my original shader (made in Shader Graph), IIRC I had to set an "angle" value that had to match the camera cone.

I don't know if there is a better way to do this - there are probably quite a few ways - but I would write the view-space position into the vertices, so you have values from 0-1 there, representing the position on your enemy camera. Draw an outline where this value is 0-0.1 or 0.9-1.
Then the only remaining thing is the arc at the end of your cone, and I have no idea how it is calculated. I don't see any "discard(vecMagnitude > 50)", so I don't understand how the arc is even created. But there I would do the same, just drawing pixels where the value is 50-50.1.
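In pseudo-C#, the whole edge test would be something like this (the 0.1 border width and 50m range from above; u and dist being the interpolated view-space values):

Code (CSharp):
// draw the outline color where the pixel is near the cone's
// left/right edges or near the end arc
bool onEdge = u < 0.1f || u > 0.9f || (dist >= 50f && dist <= 50.1f);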

Everything else seems way more complicated: sampling neighbour pixels (which isn't really possible, or at least is very expensive), doing it in post-processing (which would probably also draw an outline behind your enemies or behind anything blocking the cone), or drawing it based on stencil masks... all of that seems way more complicated than passing one more float to your vertices (or two, if the arc is based on the camera pitch angle - then it should be the view-space Y coordinate).

I'm not a shader expert at all - I'm just messing around from time to time. But yeah, that's how I would approach it.
     
  16. nicoSdal

    nicoSdal

    Joined:
    Sep 16, 2022
    Posts:
    3
Thanks again Hannibal, you are giving me a lot of insights :)

Yes, it is a lot of shader code :D I am still wondering if this is the right package to build on :D

I'll try your approach and give you feedback.

And don't be so modest, you are really appreciated.
     
    John_Leorid likes this.
  17. deXter_969

    deXter_969

    Joined:
    Nov 17, 2018
    Posts:
    4
If one had to add two layers of cones (far and near zones), would we have to use two CCTVs and then figure out the player's position relative to these two cameras, OR is there a better way, like using a circular texture with two layers? Also, is there a simple way to shade the shadow in a different color, or does that require deep knowledge of shaders?
     
  18. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
Should be doable in the (decal) shader. You already have the world position of every pixel, so all you need to do is add
if (distance(pixelPos, camPos) < farDistance)
and then use style 1 or style 2.
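As C#-flavored pseudocode for that Shader Graph logic (all names here are placeholders):

Code (CSharp):
// Sketch: pick one of two "styles" depending on the distance to the enemy camera
Color ShadeCone(Vector3 pixelWorldPos, Vector3 enemyCamPos,
                float farDistance, Color nearStyle, Color farStyle)
{
    float d = Vector3.Distance(pixelWorldPos, enemyCamPos);
    return d < farDistance ? nearStyle : farStyle;
}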

That's what shaders are all about, right? Defining the shading (the reaction to light, including shadows) of surfaces.
I think I used an unlit decal shader. Unlit shaders don't receive any light information AFAIK, so you'd have to switch the shader type (just create a new one of the correct type and copy/paste all nodes) and then react to shadows and lights however you want.
     
    deXter_969 likes this.
  19. StrangeWays777

    StrangeWays777

    Joined:
    Jul 21, 2018
    Posts:
    31
Would this be possible in URP? Awesome work btw, very interesting. You don't know how hard it was to find this - I can't find anything else that comes close, and it's exactly what I need.
     
  20. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
As URP supports decals now, it should be possible in URP, yes; back in May 2021 that wasn't the case. The main difference between the HDRP and URP versions should be the component that gets the depth texture from the enemy's view. Everything else should be pretty much the same.
     
    StrangeWays777 likes this.
  21. StrangeWays777

    StrangeWays777

    Joined:
    Jul 21, 2018
    Posts:
    31
Thank you for the quick response. I think it will be possible, but I am struggling to find any decent documentation (or anything on the forums) about getting the depth texture of a camera in URP. Apparently you have to enable Depth Texture on your URP asset, which I have done, and the cameras should then automatically create depth textures, which can be accessed via a global shader property called _CameraDepthTexture - but how on earth you are supposed to use that to get the depth texture of a specific camera, I have no idea.

EDIT: Actually, I may have found a solution, or at least a good starting point.
Looks like you need to use a custom Renderer Feature and a Render Pass.
I guess it's time for me to learn how to use these.
https://twitter.com/rn49rn49/status/1180115225829203969
     
    Last edited: Jul 5, 2023
  22. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
I just quickly searched a bit myself, and I also couldn't find anyone doing this.

Though I think just setting the output texture and adjusting a few other settings should actually work fine.

upload_2023-7-6_0-25-21.png

I haven't tested this in the context of the view cone project, and it seems like it renders the colored image to the render texture while (maybe?) also saving depth into the render texture - but all in all it should output the depth texture into the render texture, just in another layer of it.

But yeah, maybe I'm on the wrong track and it really works with Renderer Features. I think in the HDRP version I used some kind of custom pass on each camera? It's been a while since I've looked at it.
     
    StrangeWays777 likes this.
  23. StrangeWays777

    StrangeWays777

    Joined:
    Jul 21, 2018
    Posts:
    31
Thanks again. I tried what you suggested, but it just resulted in a regular image of what the camera sees, with no depth data. After a lot of trial and error, I found a way to get the depth textures of specific cameras. It's a bit awkward, but it saved me from having to write a bunch of render passes (which I can't for the life of me figure out because of the poor documentation at the moment), and I'm happy with the method I found - I think I actually prefer it - though I'll admit I am concerned it might not be as optimal as getting the depth texture from a render pass, and I'll explain why soon. Regardless, it will do for now, until we have proper documentation on Render Passes and Scriptable Renderer Features that I can actually learn from.

So my plan was to get a specific camera's depth texture from _CameraDepthTexture. The problem was that it's a global shader property, so it always returns the depth texture of the last camera that was rendered. So I thought about copying the texture from that property right after rendering each camera I want a depth texture for. I had to find a way to do that now that OnPostRender doesn't get called in URP, so I created a CameraManager script that provides the URP equivalents of OnPreRender and OnPostRender by hooking into the RenderPipelineManager events.

    Here's the CameraManager script. Put it on one object in your scene, like a GameController object or something.
Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering;

public class CameraManager : MonoBehaviour
{
    void OnEnable()
    {
        RenderPipelineManager.beginCameraRendering += PreRender;
        RenderPipelineManager.endCameraRendering += PostRender;
    }

    private void PreRender(ScriptableRenderContext _context, Camera _camera)
    {
        if (_camera.TryGetComponent<CameraRenderControl>(out CameraRenderControl _cameraRenderControl))
        {
            _cameraRenderControl.PreRender(_context, _camera);
        }
    }

    private void PostRender(ScriptableRenderContext _context, Camera _camera)
    {
        if (_camera.TryGetComponent<CameraRenderControl>(out CameraRenderControl _cameraRenderControl))
        {
            _cameraRenderControl.PostRender(_context, _camera);
        }
    }

    private void OnDisable()
    {
        RenderPipelineManager.beginCameraRendering -= PreRender;
        RenderPipelineManager.endCameraRendering -= PostRender;
    }
}
I then created a script called CameraRenderControl (put this on all the cameras you want depth textures for) that handles the PreRender and PostRender calls. This is where the magic happens. It's worth noting that a camera's depth texture doesn't seem to be created if the camera isn't currently rendering to a display or to a render texture, and the only way I could find around that is to render the cameras I want depth textures from to a render texture. (If there's a better way, I'd like to know, because I imagine this will affect performance if you are rendering a ton of cameras to their own render textures. It's OK for my particular use case, because the only FOV I'm interested in drawing is the player's, but I imagine other users will have other use cases, and I'm sure I will too in the future - so something to keep in mind.)

Code (CSharp):
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CameraRenderControl : MonoBehaviour
{
    public Texture depthTexture;

    public List<GameObject> thingsToHide = new List<GameObject>();
    private List<GameObject> hiddenThings = new List<GameObject>();

    [Obsolete]
    public void PreRender(ScriptableRenderContext _context, Camera _camera)
    {
        // Do stuff here before the render, i.e. you could hide things specifically from this camera
        foreach (GameObject _thingToHide in thingsToHide)
        {
            _thingToHide.SetActive(false);
            hiddenThings.Add(_thingToHide);
        }

        // Manual render (this might actually not be necessary, but good to know how to do)
        UniversalRenderPipeline.RenderSingleCamera(_context, _camera); // NOTE: This is the obsolete method;
                                                                       // the one below is the new broken one, use this until Unity fixes it

        //UniversalRenderPipeline.SubmitRenderRequest<UniversalAdditionalCameraData>(_camera, _camera.GetUniversalAdditionalCameraData());
    }

    public void PostRender(ScriptableRenderContext _context, Camera _camera)
    {
        // Get the camera's depth texture (must be rendering to a used display OR a render texture)
        Texture _camDepthTexture = Shader.GetGlobalTexture("_CameraDepthTexture");
        if (!depthTexture) depthTexture = new Texture2D(_camDepthTexture.width, _camDepthTexture.height, TextureFormat.RFloat, false);
        Graphics.CopyTexture(_camDepthTexture, depthTexture);

        // Reactivate the hidden things after the render
        foreach (GameObject _hiddenThing in hiddenThings)
        {
            _hiddenThing.SetActive(true);
        }
        hiddenThings.Clear();
    }
}
Now this script has a public reference to a usable depth texture which we can feed into a shader!
So now that's out of the way, I will move on to the next step and see if I can get this thing working.

EDIT (I don't want to needlessly bump the post and anger the mods lol): I have it working; there are just a few things I need to clean up. Currently I'm updating the material properties (the matrix and the depth texture) from an Update method in another script, and it's causing some latency, so I think I'd be better off updating the material properties from PreRender - but other than that it's perfect! I can't thank you enough.

One last thing I need to figure out is how to make a fog of war using this decal. I either need to find a way to render only the decals to a camera, so I can use the render texture as a mask for a cutout shader (not sure this is possible, because I'm sure decals are applied retrospectively, so if you don't render anything else there's nothing for the decals to render onto), or I need to give them alpha, figure out how to stop the overlapping parts from looking darker, and make the environmental lighting a little darker, which should produce a similar effect. Not sure if either of these is possible, but I will keep trying.
     
    Last edited: Jul 7, 2023
  24. John_Leorid

    John_Leorid

    Joined:
    Nov 5, 2012
    Posts:
    651
You don't actually need the CameraManager. ^^

Instead you can subscribe to the static event in the CameraRenderControl script itself and add a simple check:
Code (CSharp):
public void PostRender(ScriptableRenderContext _context, Camera _camera)
{
    if (_camera.transform != transform) return;

    // your code here
}
(So you get the event on all cameras with the script, but only execute it on the cam that is currently rendering. A simple comparison per cam isn't really a performance concern, considering how expensive the actual render process is.)
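A minimal self-subscribing version could look like this (a sketch, untested):

Code (CSharp):
using UnityEngine;
using UnityEngine.Rendering;

public class CameraRenderControl : MonoBehaviour
{
    void OnEnable()  { RenderPipelineManager.endCameraRendering += PostRender; }
    void OnDisable() { RenderPipelineManager.endCameraRendering -= PostRender; }

    void PostRender(ScriptableRenderContext context, Camera cam)
    {
        if (cam.transform != transform) return; // only react to this camera

        // copy the depth texture here, as in your PostRender above
    }
}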

Also, I think you don't have to use "_CameraDepthTexture", because the depth texture is saved into the RenderTexture on render -> https://docs.unity3d.com/ScriptReference/RenderTexture-depthBuffer.html
I think you'd just have to read the contents of the RenderTexture's depth buffer and write it into the RenderTexture itself.

But I haven't used this feature yet. As I said earlier, I am not a pro with GPU/shader things.

Unfortunately I am busy making a YouTube video (about my event system, for writing scalable code), working my day job, and working on my game, and my whole weekend is full of friends' birthdays and other events, so I have no time to really look into it.
Though I'd like to provide URP support for my GitHub package, and maybe turn it into an actual package... we'll see... some day.
     
    Last edited: Jul 7, 2023