I've been trying to have an additional camera render a depth texture while the other cameras render normally. The use case is robotic simulation: a LiDAR or sonar can be simulated either with physics raycasts or by sampling a depth texture, and the latter is what I'm implementing. I have a camera rendering into a Render Texture in HDRP, with the Render Texture using GraphicsFormat.DepthAuto, and this works. The problem is one of optimization: the profiler shows the render pipeline producing the entire colour image, including shadows, transparent objects and post-processing, only for everything except the depth to be discarded.

So far I've investigated three ways to optimize this, and I've run into a blocker with each, so any help would be great:

1. Create a CustomPass with the injection point "After Opaque Depth and Normal", which gives me access to the CustomPassContext, but I couldn't find a way to tell the render to stop after this point (a rough sketch of the pass is at the end of this post).
2. Use a different render pipeline per camera. If this is even possible, information on how to do it would be great; as far as I can tell there is only one Render Pipeline Asset, set per project in the Quality Settings.
3. Write a render pipeline that extends HDRP, so I can make it behave differently for different cameras. I'm not sure how to extend HDRP, so some tips on doing that would also be welcome.

If there are any other suggestions for stopping the pipeline after depth has been rendered, but only for specific cameras, I'd be glad to hear those too.
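For reference, here is a minimal sketch of the custom pass from option 1 (class and field names are just placeholders, not my exact code). The injection point is set to "After Opaque Depth and Normal" on the Custom Pass Volume, and the pass copies the camera depth into a render texture; what I can't find is any call at this point to skip the rest of the frame for this camera.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Placeholder sketch. The Custom Pass Volume using this pass has its
// injection point set to CustomPassInjectionPoint.AfterOpaqueDepthAndNormal.
class DepthCapturePass : CustomPass
{
    // Depth-only RenderTexture that the LiDAR/sonar code samples later.
    // Must match the camera's resolution and a compatible depth format
    // for CopyTexture to succeed.
    public RenderTexture targetDepth;

    protected override void Execute(CustomPassContext ctx)
    {
        if (targetDepth == null)
            return;

        // Depth is already available here via ctx.cameraDepthBuffer,
        // so copying it out works fine...
        ctx.cmd.CopyTexture(ctx.cameraDepthBuffer, targetDepth);

        // ...but I can't find anything here that tells HDRP to abort the
        // remainder of the frame (lighting, transparents, post-processing)
        // for this camera, which is what I'm after.
    }
}
```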