Rendering using another camera's depth buffer

Discussion in 'General Graphics' started by LukePeek, Sep 23, 2019.

  1. LukePeek

    Joined:
    Nov 29, 2013
    Posts:
    38
    Hey! I've been struggling for a while to work around the notorious issue of blurry particles when using Depth of Field, and I feel like I'm very close to a solution (although I've felt like that quite a few times).

    One point I can't figure out, if it's even possible, is how to use a depth buffer (which I have managed to save to a render texture) when rendering a different camera from the one that created the depth texture.

    A few answers have mentioned using
    Camera.SetTargetBuffers
    , but this doesn't seem to tell the second camera to USE the depth texture; it tells it that's where it should save its own depth buffer?

    It seems like there's no way for one camera to use the depth buffer of another camera, other than in a shader? Which I don't really want to do, because that would mean a custom particle shader that uses the depth buffer to cull. Is there no way to do this JUST with cameras?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    No, it absolutely tells it to use that depth buffer. However, unless you change the camera's clear flags, it'll clear the buffer before it's used. Make sure you set the camera's Clear Flags to Don't Clear, and manually clear the color buffer you're using if you need to. Any other setting will clear the depth buffer before rendering, including any you set via SetTargetBuffers.
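
    For reference, a minimal sketch of that setup (untested here; vfxCamera, vfxRT, and depthRT are placeholder names for your own objects, not Unity API):

    Code (CSharp):
    // Don't Clear, so the borrowed depth buffer survives into the render.
    vfxCamera.clearFlags = CameraClearFlags.Nothing;
    vfxCamera.SetTargetBuffers(vfxRT.colorBuffer, depthRT.depthBuffer);

    // Clear only the color buffer by hand, leaving the depth intact.
    Graphics.SetRenderTarget(vfxRT.colorBuffer, depthRT.depthBuffer);
    GL.Clear(false, true, Color.clear); // clearDepth: false, clearColor: true
    vfxCamera.Render();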
     
  3. LukePeek

    Joined:
    Nov 29, 2013
    Posts:
    38
    Thanks @bgolus! Good to know it does do that, I must just have the setup totally wrong somewhere.

    I can render the depth of my main camera fine, using OnRenderImage:

    Code (CSharp):
    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination); // Render the camera normally
        Graphics.Blit(source, DepthRenderTexture, Material); // Render the depth texture. Material references a mat with a depth-only shader.
    }
    Then for my second camera, the one I want to use that depth on, I try to render to a render texture with something like the following. RenderVFXCamera is called every frame. I can look at the depth render texture and see the depth fine, but the VFX render texture is always black.

    Code (CSharp):
    private void RenderVFXCamera()
    {
        VFXCamera.SetTargetBuffers(VFXRenderTexture.colorBuffer, DepthRenderTexture.depthBuffer);
        Graphics.SetRenderTarget(VFXRenderTexture);
        GL.Clear(false, true, Color.clear);
        VFXCamera.Render();
    }
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    I'd be uncomfortable calling Graphics.SetRenderTarget after using Camera.SetTargetBuffers. Technically it should be fine, though it also doesn't explain what's going on.

    Some other thoughts: why are you blitting the source image to copy the depth buffer? Why not just use the source image's depth buffer, as well as render the VFXCamera and do the composite in that OnRenderImage? i.e.:
    Code (csharp):
    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.SetRenderTarget(VFXRenderTexture);
        GL.Clear(false, true, Color.clear);
        Graphics.SetRenderTarget(null);

        VFXCamera.SetTargetBuffers(VFXRenderTexture.colorBuffer, source.depthBuffer);
        VFXCamera.Render();

        // presumably you have to composite the vfx cam's output back into the main image?
        // and presumably you've already assigned the VFXRenderTarget as a texture for the composite material

        Graphics.Blit(source, destination, compositeMat);
    }
    One other thing: the source image may not even have a valid depth buffer anymore by the time you get to OnRenderImage. If you're using MSAA, the source image is a resolved non-AA render texture, which AFAIK no longer has a depth buffer. For it to work with MSAA you'd need to set a render texture on the main camera too and use the depth buffer from that.
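
    Roughly like this (a sketch, untested; mainRT and the other names are placeholders for objects you create and manage yourself):

    Code (csharp):
    // Render the main camera into an explicit (non-MSAA) render texture so
    // its depth buffer stays available, then borrow that depth for the VFX cam.
    mainRT = new RenderTexture(Screen.width, Screen.height, 24);
    mainCamera.targetTexture = mainRT;

    // ... later, after the main camera has rendered:
    VFXCamera.SetTargetBuffers(VFXRenderTexture.colorBuffer, mainRT.depthBuffer);
    VFXCamera.Render();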
     
    Last edited: Sep 25, 2019
  5. LukePeek

    Joined:
    Nov 29, 2013
    Posts:
    38
    Honestly, I didn't realise you could do
    source.depthBuffer
    . Everything I had seen and read (the Unity docs and other people trying to solve similar problems) said the only way to get the depth buffer output is via a shader using
    _CameraDepthTexture
    .

    I'll give those suggestions a try and report back, thank you!

    And yes, the VFXRenderTarget is supposed to be output to a render texture that then gets composited back as part of a custom effect in the post-processing stack, applied after the depth of field effect.
     
  6. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    As per my above caveat, it's entirely possible you can't. If the depth copy shader you're using is sampling a depth texture, then it's cheating. The _CameraDepthTexture is not the depth buffer. However, in the past I've been able to make use of the stencil buffer during OnRenderImage in some specific scenarios, and the stencil buffer is the same buffer as the depth buffer.

    I have written stuff that uses Blit() to fill in the depth buffer of a separate render texture, so it's a viable option. I posted an example here:
    https://forum.unity.com/threads/getting-a-pixel-look-in-3d.625321/
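
    The C# side of that is roughly this (a sketch; it assumes a material whose shader has ZWrite On and outputs depth via SV_Depth, which isn't shown here, and depthWriteMat / targetRT / sourceDepthTex are placeholder names):

    Code (csharp):
    // Bind the target's color and depth buffers, then Blit with a
    // depth-writing material; a plain color-only shader won't touch depth.
    Graphics.SetRenderTarget(targetRT.colorBuffer, targetRT.depthBuffer);
    Graphics.Blit(sourceDepthTex, depthWriteMat);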
     
  7. LukePeek

    Joined:
    Nov 29, 2013
    Posts:
    38
    Thanks for the help @bgolus, but it still doesn't want to use that depth. I have no idea why; there must be something else I've done in there that's cancelling it out. SetTargetBuffers() does take source.depthBuffer perfectly fine, no errors, but the rendered VFXCamera still acts as if there is no depth information.

    Everything renders, everything composites as it should. It's just that nothing on the VFX cam gets culled.

    I've tried a few scenarios using the depth texture, but I think that would involve a custom particle shader. I was hoping to use the standard one and have the camera do the culling, rather than a custom particle shader that culls using the global depth render texture.
     
  8. transporter_gate_studios

    Joined:
    Oct 17, 2016
    Posts:
    219
    So, I have a similar issue and I'm wondering if anyone has an answer. I'm running URP, purely because many shaders I purchased only work in URP. I'm trying to get a 3D skybox working by using camera stacking (it seems to be the only option), and I can't get depth-based rendering to work. Many people have said this is because you must clear the depth of the overlay cameras in order to composite meshes that would normally occlude each other.

    From what I understand, this breaks all depth-based shading: fog doesn't work, transparents don't work, etc. Some people suggested making some kind of copy of the base camera's depth buffer and feeding it to the overlay camera; this seems like the only solution. I tried using stencils, but again the depth buffer creates a problem with the transparents. Is there a way to fix this problem in URP, or would I need to start my game over again in the standard pipeline?
     