
Question Rendering a Depth-Only Pass on a Separate Camera

Discussion in 'General Graphics' started by PeterSmithQUT, Sep 22, 2021.

  1. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    9
    I've been attempting to have an additional camera render a depth texture while still having other cameras rendering normally.

    The use case here is robotic simulation: a LiDAR or sonar can be simulated either with physics raycasts or by sampling a depth texture, and the latter is what I'm trying to implement.

    I have been able to create a Camera rendering into a Render Texture in HDRP, where the Render Texture uses GraphicsFormat.DepthAuto, and this works, but the problem is one of optimization. Using the profiler, I have found that the Render Pipeline renders the entire colour image, including shadows, transparent objects and post-processing, only for this additional data to be discarded.
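For anyone trying to reproduce this starting point, here is a minimal sketch of the setup described above. The class name, resolution and Inspector wiring are my own illustration, and GraphicsFormat.DepthAuto has been deprecated in newer Unity versions, so adjust to your release:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Rendering;

    public class DepthCameraSetup : MonoBehaviour
    {
        public Camera depthCamera; // the extra camera, assigned in the Inspector

        void Start()
        {
            // A render texture using a depth graphics format, as in the
            // original post, so the camera's output can be sampled as depth.
            var depthTexture = new RenderTexture(512, 512, 24, GraphicsFormat.DepthAuto);
            depthTexture.Create();
            depthCamera.targetTexture = depthTexture;
        }
    }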



    So far I've investigated three ways to optimize this, all of which have run into blockers, so if I could get some help that would be great:
    1. Create a CustomPass with the injection point 'After Opaque Depth and Normal', which gives me access to the CustomPassContext, but I couldn't find a way to tell the render to stop after that point.
    2. Have a different render pipeline per camera. If you can provide information about how to do this (and whether it's even possible), that would be great. As far as I can tell, there's only one Render Pipeline Asset, which is set per project in the Quality Settings.
    3. Have a render pipeline extend the HDRP pipeline, allowing me to modify it to work differently for different cameras. I'm not sure how to extend HDRP, so some tips on doing that would be great.
    Also, if there are any other suggestions for stopping the pipeline after depth has been rendered, but only for specific cameras, that would be good too.
     

    Attached Files:

  2. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    9
    So it seems you can set 'Custom Frame Settings' on a per-camera basis, which I can use to disable a lot of the rendering overhead I was looking to avoid, such as shadows, post-processing, transparents, distortion, fog and more. It seems to still be rendering a colour image, but the GPU load has decreased substantially.



    This seems to have moved the problem to the CPU, where culling and the initialisation of the Render Graph now take a lot of time, causing a stall on the Render Thread while it waits for these tasks to complete.
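For reference, those same per-camera overrides can also be applied from script via HDAdditionalCameraData. This is a sketch based on HDRP's FrameSettings API; the component lookup and the exact set of fields disabled are my choices, so check the names against your HDRP version:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering.HighDefinition;

    public class DepthCameraFrameSettings : MonoBehaviour
    {
        void Start()
        {
            var hdData = GetComponent<HDAdditionalCameraData>();
            hdData.customRenderingSettings = true;

            // Mark each field as overridden, then disable it for this camera only.
            var fields = new[]
            {
                FrameSettingsField.ShadowMaps,
                FrameSettingsField.Postprocess,
                FrameSettingsField.TransparentObjects,
                FrameSettingsField.Distortion,
                FrameSettingsField.AtmosphericScattering // fog
            };
            foreach (var field in fields)
            {
                hdData.renderingPathCustomFrameSettingsOverrideMask.mask[(uint)field] = true;
                hdData.renderingPathCustomFrameSettings.SetEnabled(field, false);
            }
        }
    }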

     

    Attached Files:

  3. Julien_Unity

    Julien_Unity

    Unity Technologies

    Joined:
    Nov 17, 2015
    Posts:
    72
    Hi,

    Is the camera you use to generate the depth texture seeing the same scene as the regular camera? If so, there are a couple of ways you can access the depth texture: either by using the AOV APIs as described here https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@13.1/manual/AOVs.html, or by using
    requestGraphicsBuffer and GetGraphicsBuffers on the HDAdditionalCameraData component.
    If you want another camera with custom rendering, then you can try using the customRender API on the same component. This basically allows you to write your own mini SRP within HDRP by bypassing all normal HDRP rendering for this camera.
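A sketch of the first option, adapted from the AOV manual page linked above. The buffer size and the empty callback body are placeholders; names may differ slightly between HDRP package versions:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;
    using UnityEngine.Rendering.HighDefinition.Attributes;

    public class DepthAOV : MonoBehaviour
    {
        RTHandle m_DepthRT;

        void Start()
        {
            m_DepthRT = RTHandles.Alloc(512, 512); // output buffer for the AOV

            var aovRequest = AOVRequest.NewDefault();
            aovRequest.SetFullscreenOutput(DebugFullScreen.Depth);

            var builder = new AOVRequestBuilder();
            builder.Add(aovRequest,
                bufferId => m_DepthRT,             // allocator for the AOV buffer
                null,                              // no light filtering
                new[] { AOVBuffers.DepthStencil }, // we only want depth
                (cmd, textures, properties) =>
                {
                    // textures[0] now holds the camera depth; copy or read it back here.
                });

            GetComponent<HDAdditionalCameraData>().SetAOVRequests(builder.Build());
        }
    }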

    Hope this helps.
     
    LooperVFX and PeterSmithQUT like this.
  4. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    9
    Hi @Julien_Unity
    Sounds like exactly what I'm looking for! Would you happen to have any documentation on this? Googling "unity customRender API" doesn't yield any results of the same name.
     
  5. Julien_Unity

    Julien_Unity

    Unity Technologies

    Joined:
    Nov 17, 2015
    Posts:
    72
  6. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    9
    Hi @Julien_Unity

    I have implemented customRender with a ScriptableRenderContext on the HDAdditionalCameraData and it seems to have worked rather well for what I was trying to achieve! Thanks!



    As you can see, the actual time spent rendering is now really low so that's awesome!

    I still have some improvements to make on the CullScriptable side, mostly on the terrain, but that's another issue.

    For those wondering, there's a very useful tutorial by CatlikeCoding on creating a custom scriptable render pipeline which I used and can be found here.

    An additional note for this specific case: I only needed to render opaques from the GBuffer, and the draw settings were as follows:

    Code (CSharp):
    SortingSettings sortingSettings = new SortingSettings()
    {
        criteria = SortingCriteria.CommonOpaque
    };
    DrawingSettings drawingSettings = new DrawingSettings(HDShaderPassNames.s_GBufferName, sortingSettings);
    FilteringSettings filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
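For completeness, here is roughly how settings like those can plug into the customRender callback Julien mentioned. This is my own sketch rather than the poster's exact code; the class name and structure are illustrative, and the culling step is the part the poster was still optimising:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public class DepthOnlyCustomRender : MonoBehaviour
    {
        void OnEnable()
        {
            GetComponent<HDAdditionalCameraData>().customRender += Render;
        }

        void OnDisable()
        {
            GetComponent<HDAdditionalCameraData>().customRender -= Render;
        }

        static void Render(ScriptableRenderContext context, HDCamera hdCamera)
        {
            var camera = hdCamera.camera;
            if (!camera.TryGetCullingParameters(out var cullingParams))
                return;
            CullingResults cullingResults = context.Cull(ref cullingParams);

            context.SetupCameraProperties(camera);

            // Draw only opaques, using the GBuffer pass, as in the post above.
            var sortingSettings = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
            var drawingSettings = new DrawingSettings(HDShaderPassNames.s_GBufferName, sortingSettings);
            var filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
            context.DrawRenderers(cullingResults, ref drawingSettings, ref filteringSettings);

            context.Submit();
        }
    }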
     

    Attached Files:

    MaxWitsch and PutridEx like this.
  7. alexandre-fiset

    alexandre-fiset

    Joined:
    Mar 19, 2012
    Posts:
    715
    I stumbled upon this thread, which helped us cut the cost of our snow trail depth camera rendering in half, so I'm thankful for that!

    Still, we think 2 ms is quite heavy for such a simple task. The timeline view displays things we do not need, so I'd like to know if anyone here knows how to reduce this overhead even more. Our camera is orthographic and renders only a few spheres to a depth texture that is used to displace our snow meshes, so:
    1. We do not need anything related to post-processing
    2. We do not need LODs
    3. We do not even need any form of culling

    upload_2023-3-29_11-28-57.png

    This is a lot of wasted CPU time for something really basic.
     
  8. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    333
    Is there a good reason that you're not manually rendering to that texture with a CommandBuffer? Does a camera need to be involved?
     
  9. MaxWitsch

    MaxWitsch

    Joined:
    Jul 6, 2015
    Posts:
    114
    When you want a RenderTexture rendered from above with only depth, for example, you mainly need a camera for the culling results.
    Overriding the camera matrix without a camera is easy.
     
  10. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    333
    In this case, they apparently don't want culling and are only rendering the depths of a few spheres. This whole thing could be a single draw call from a CommandBuffer.

    If I were implementing this, I probably wouldn't even bother with sphere meshes. An instanced circle mesh would do. Either precompute the depth in a texture on the circle, or compute or approximate it directly in the fragment shader. We should probably be dealing with fractions of a millisecond here.
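A sketch of that CommandBuffer idea, with no Camera involved. Everything here is illustrative: the render texture, circle mesh, material (which would need max blending, e.g. BlendOp Max in its shader) and the hard-coded orthographic bounds are all assumptions, and the view-matrix construction is simplified:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class SnowDepthRenderer : MonoBehaviour
    {
        public RenderTexture snowDepthRT;    // depth target that displaces the snow
        public Mesh circleMesh;              // instanced circle/quad
        public Material depthMaterial;       // writes depth, e.g. with BlendOp Max
        public Matrix4x4[] instanceMatrices; // one per character/animal
        public int instanceCount;

        void LateUpdate()
        {
            var cmd = new CommandBuffer { name = "Snow trail depth" };
            cmd.SetRenderTarget(snowDepthRT);
            cmd.ClearRenderTarget(true, true, Color.black);

            // Top-down orthographic view/projection, overriding the matrices
            // directly instead of using a Camera.
            var view = Matrix4x4.TRS(transform.position,
                Quaternion.LookRotation(Vector3.down), Vector3.one).inverse;
            var proj = Matrix4x4.Ortho(-10f, 10f, -10f, 10f, 0.1f, 50f);
            cmd.SetViewProjectionMatrices(view, proj);

            // One instanced draw call for all spheres/circles.
            cmd.DrawMeshInstanced(circleMesh, 0, depthMaterial, 0,
                instanceMatrices, instanceCount);

            Graphics.ExecuteCommandBuffer(cmd);
            cmd.Release();
        }
    }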
     
  11. MaxWitsch

    MaxWitsch

    Joined:
    Jul 6, 2015
    Posts:
    114
    You could also use a custom RenderTexture with an SDF shader.
    I think this would be the simplest solution.
     
  12. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    333
    An SDF seems way overcomplicated and slow. You simply need to draw this sprite (with max blending and a scale factor for depth) for each sphere.

    But this thread is getting dragged way off topic now, so @alexandre-fiset you should make a new thread if you want to discuss this further, or a mod could split this discussion.

    upload_2023-3-30_20-43-4.png
     
    Last edited: Mar 30, 2023
  13. alexandre-fiset

    alexandre-fiset

    Joined:
    Mar 19, 2012
    Posts:
    715
    @SamOld I wouldn't even know where to start to do such a thing :(. The concept sounds like it could work. We'd use a maximum of 11 circles like that (main character, a sled, 6 dogs, plus up to 4 wild animals rendered on screen at once). I'll spend some time reading about command buffers and see if that could be implemented relatively quickly.
     
  14. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    333
    It should be fairly easy. Give it a go, and if you have trouble with it make a dedicated thread and tag me, and I'll see if I can help out.
     
    Last edited: Mar 30, 2023
  15. alexandre-fiset

    alexandre-fiset

    Joined:
    Mar 19, 2012
    Posts:
    715
    My colleague created this thread, but he finally figured it out. His system is now capable of drawing instanced spheres without using any camera, which saves between 1 ms and 2 ms of CPU time, depending on the platform.

    Thanks for pointing us in that direction!
     
    Gasimo and SamOld like this.