Help Wanted Rendering a Depth Only Pass on a Separate Camera

Discussion in 'General Graphics' started by PeterSmithQUT, Sep 22, 2021.

  1. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    6
    I've been attempting to have an additional camera render a depth texture while still having other cameras rendering normally.

    The use case here is robotics simulation: a LiDAR or sonar can be simulated either with physics raycasts or by sampling a depth texture, and the latter is what I'm trying to implement.

    I have been able to create a Camera rendering into a Render Texture in HDRP, where the Render Texture uses GraphicsFormat.DepthAuto, and this works. The problem is one of optimization: using the profiler, I found that the render pipeline renders the entire colour image, including shadows, transparent objects and post-processing, only for this additional data to be discarded.
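For reference, a minimal sketch of the kind of setup described above. This is my own illustration, not the poster's code; note that in recent Unity versions a depth-only target can also be expressed with `GraphicsFormat.None` for colour plus an explicit `depthStencilFormat`, which is an assumption about the Unity version in use:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// Sketch: give a secondary camera a depth-only render target for LiDAR sampling.
public class DepthCaptureSetup : MonoBehaviour
{
    public Camera depthCamera; // the extra camera used for depth sampling
    RenderTexture depthRT;

    void Start()
    {
        // No colour format, only a 32-bit depth buffer is stored.
        depthRT = new RenderTexture(512, 512, 0)
        {
            graphicsFormat = GraphicsFormat.None,
            depthStencilFormat = GraphicsFormat.D32_SFloat
        };
        depthRT.Create();
        depthCamera.targetTexture = depthRT;
    }

    void OnDestroy()
    {
        if (depthRT != null)
            depthRT.Release();
    }
}
```

Even with a depth-only target like this, HDRP still runs its full frame for the camera, which is the overhead discussed below.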



    So far I've investigated three ways to optimize this, all of which I've run into blockers with, so if I could get some help that would be great:
    1. Create a CustomPass with the injection point 'After Opaque Depth and Normal', which gives me access to the CustomPassContext, but I couldn't find a way to tell the render to stop after this point.
    2. Have a different render pipeline per camera. If you can provide information about how to do this (and whether it's even possible), that would be great. As far as I can tell, there's only one Render Pipeline Asset, which is set per project in the Quality Settings.
    3. Have a render pipeline extend the HDRP pipeline, allowing me to modify it to work differently for different cameras. I'm not sure how to extend HDRP, so some tips on doing that would be great.
    Also, if there are any other suggestions for stopping the pipeline after depth has been rendered, but only for specific cameras, those would be welcome too.
     

    Attached Files:

  2. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    6
    So it seems you can set 'Custom Frame Settings' on a per-camera basis, which I can use to disable a lot of the rendering overhead I was looking to avoid, such as shadows, post-processing, transparents, distortion, fog and more. It seems to still render a colour image, but the GPU load has decreased substantially.
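For anyone wanting to drive those same frame settings from script rather than the inspector, a sketch along these lines should work. The specific `FrameSettingsField` names are assumptions on my part and should be checked against the HDRP version in use:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Sketch: mirror the per-camera 'Custom Frame Settings' checkbox from code,
// disabling expensive features on the depth camera.
[RequireComponent(typeof(HDAdditionalCameraData))]
public class DepthCameraFrameSettings : MonoBehaviour
{
    void Start()
    {
        var data = GetComponent<HDAdditionalCameraData>();
        data.customRenderingSettings = true;

        // Mark the fields as overridden, then turn them off.
        var mask = data.renderingPathCustomFrameSettingsOverrideMask;
        mask.mask[(uint)FrameSettingsField.ShadowMaps] = true;
        mask.mask[(uint)FrameSettingsField.Postprocess] = true;
        mask.mask[(uint)FrameSettingsField.TransparentObjects] = true;
        data.renderingPathCustomFrameSettingsOverrideMask = mask;

        data.renderingPathCustomFrameSettings.SetEnabled(FrameSettingsField.ShadowMaps, false);
        data.renderingPathCustomFrameSettings.SetEnabled(FrameSettingsField.Postprocess, false);
        data.renderingPathCustomFrameSettings.SetEnabled(FrameSettingsField.TransparentObjects, false);
    }
}
```

This only trims work inside HDRP's normal frame; as noted below, culling and render-graph setup on the CPU still run per camera.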



    This seems to have moved the problem to the CPU, where the culling and the initialisation of the Render Graph now take a lot of time, causing a stall on the Render Thread while it waits for these tasks to complete.

     

    Attached Files:

  3. Julien_Unity

    Julien_Unity

    Unity Technologies

    Joined:
    Nov 17, 2015
    Posts:
    44
    Hi,

    Is the camera you use to generate the depth texture seeing the same scene as the regular camera? If so, then there are a couple of ways you can access the depth texture: either by using the AOV APIs as described here https://docs.unity3d.com/Packages/com.unity.render-pipelines.high-definition@13.1/manual/AOVs.html or by using
    requestGraphicsBuffer and GetGraphicsBuffers on the HDAdditionalCameraData component.
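A rough sketch of the AOV approach, adapted from the manual page linked above; the exact builder arguments and buffer sizes here are my assumptions, so treat this as a starting point rather than a verified implementation:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;
using UnityEngine.Rendering.HighDefinition.Attributes;

// Sketch: request the depth AOV from a camera via the AOV APIs.
[RequireComponent(typeof(HDAdditionalCameraData))]
public class DepthAOV : MonoBehaviour
{
    RTHandle depthBuffer;

    void Start()
    {
        depthBuffer = RTHandles.Alloc(512, 512);

        var requests = new AOVRequestBuilder().Add(
            AOVRequest.NewDefault(),
            bufferId => depthBuffer,                 // allocator for the output buffer
            null,                                    // no custom light list
            new[] { AOVBuffers.DepthStencil },       // ask for depth
            (cmd, buffers, properties) =>
            {
                // buffers[0] now holds the depth for this frame;
                // copy it out or read it back here.
            }).Build();

        GetComponent<HDAdditionalCameraData>().SetAOVRequests(requests);
    }
}
```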
    If you want another camera with fully custom rendering, then you can try using the customRender API on the same component. This basically allows you to write your own mini SRP within HDRP, bypassing all normal HDRP rendering for this camera.
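A minimal sketch of what hooking `customRender` on `HDAdditionalCameraData` could look like for a depth-only camera. The `"DepthOnly"` shader pass tag and the render-state setup are assumptions; a real implementation would also need to bind the depth target and handle clearing:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch: bypass normal HDRP rendering for this camera and draw only
// the depth pass of opaque objects.
[RequireComponent(typeof(HDAdditionalCameraData))]
public class DepthOnlyCustomRender : MonoBehaviour
{
    void OnEnable()
    {
        GetComponent<HDAdditionalCameraData>().customRender += RenderDepthOnly;
    }

    void OnDisable()
    {
        GetComponent<HDAdditionalCameraData>().customRender -= RenderDepthOnly;
    }

    void RenderDepthOnly(ScriptableRenderContext context, HDCamera hdCamera)
    {
        if (!hdCamera.camera.TryGetCullingParameters(out var cullingParams))
            return;
        var cullResults = context.Cull(ref cullingParams);

        var sortingSettings = new SortingSettings(hdCamera.camera);
        var drawSettings = new DrawingSettings(new ShaderTagId("DepthOnly"), sortingSettings);
        var filterSettings = new FilteringSettings(RenderQueueRange.opaque);

        context.DrawRenderers(cullResults, ref drawSettings, ref filterSettings);
        context.Submit();
    }
}
```

Because the event replaces the whole HDRP frame for that camera, none of the colour, shadow or post-processing work runs, which addresses the original overhead concern.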

    Hope this helps.
     
    PeterSmithQUT likes this.
  4. PeterSmithQUT

    PeterSmithQUT

    Joined:
    Sep 22, 2021
    Posts:
    6
    Hi @Julien_Unity
    Sounds like exactly what I'm looking for! Would you happen to have any documentation on this? Googling "unity customRender API" doesn't yield any results of the same name.
     
  5. Julien_Unity

    Julien_Unity

    Unity Technologies

    Joined:
    Nov 17, 2015
    Posts:
    44