(Solved) Blit a downsampled render texture with the camera main texture

Discussion in 'Shaders' started by bitinn, Jun 7, 2018.

  1. bitinn

    Hi all,

    Say we have a volumetric fog image effect that uses the camera depth texture to place fog in front of and behind opaque objects, and we are downsampling it for performance reasons.

    Now we need to blit it back over the main texture, so that the fog is downsampled but other objects remain full-res.

    Questions:

    - I know we need to use a depth texture to blend the low-res fog with the full-res objects, but is the camera depth texture alone enough? It seems I need to compute a depth texture for my low-res scene to make sure fog in front of objects is preserved, right?

    - Or is there an alternative I am not seeing?

    Thx!
     
  2. bgolus

    You can use the _CameraDepthTexture by itself, or calculate a lower-res version that matches your low-res target. The usual solution is to down-res the texture storing the min, max, average, or point sampled depth (with different visual artifacts for each option). Usually point sampled depth and then bilinear upsampling is the most straightforward approach. See Jason Booth's offscreen particles:
    https://assetstore.unity.com/packages/tools/particles-effects/off-screen-particles-46208
    https://github.com/slipster216/OffScreenParticleRendering
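
    The point sampled downsample pass is nearly trivial. Rough sketch of the idea (mine, not the asset's actual code; assumes UnityCG's image effect helpers, and that you Blit this into a low-res RenderTexture):

        #include "UnityCG.cginc"

        sampler2D _CameraDepthTexture;

        float4 frag (v2f_img i) : SV_Target
        {
            // One sample per low-res pixel: whichever full-res texel lands on
            // this pixel centre. No filtering, so every stored depth is a depth
            // that actually exists in the scene (averaging can invent depths
            // that lie on no surface).
            float raw = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
            return Linear01Depth(raw);
        }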

    Using max depth may produce slightly less offensive artifacts depending on the effect. See this GPU Gems article on the topic for an example:
    https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch23.html
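
    The max depth variant just gathers the full-res footprint of each low-res pixel and keeps the farthest sample. Another rough sketch of mine, assuming a half-res target:

        #include "UnityCG.cginc"

        sampler2D _CameraDepthTexture;
        float4 _CameraDepthTexture_TexelSize; // xy = 1 / full-res width & height

        float4 frag (v2f_img i) : SV_Target
        {
            // Max of the 2x2 full-res footprint of this half-res pixel.
            // Linearize first so "max" reliably means "farthest" regardless
            // of the platform's reversed-Z convention.
            float2 t = _CameraDepthTexture_TexelSize.xy * 0.5;
            float d0 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2(-t.x, -t.y)));
            float d1 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2( t.x, -t.y)));
            float d2 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2(-t.x,  t.y)));
            float d3 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2( t.x,  t.y)));
            return max(max(d0, d1), max(d2, d3));
        }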

    Or you can go crazy: instead of comparing against a depth buffer in your low-res pass, render out the transparency's depth as part of the low-res offscreen target. Destiny stores out a min & max depth (a variance depth map, aka VDM) to composite with the full-resolution depth.
    http://advances.realtimerendering.com/s2013/Tatarchuk-Destiny-SIGGRAPH2013.pdf (page 122 or so)
    Also see the follow-up here, around page 63, where they go into more detail and talk about how it didn't work as described in the first paper.
    http://advances.realtimerendering.com/destiny/i3d_2015/I3D_Tatarchuk_keynote_2015_for_web.pdf

    I've also seen people store out only a max depth and then use depth-aware upsampling. In that GPU Gems article they suggest rendering at full resolution just around the edge discontinuities using a stencil mask.



    Short version: lots of alternatives, some complex, some simple. Try the simple case first and try the others if you don't like the results.
     
    Last edited: Jun 7, 2018
  3. bitinn

    @bgolus Thx!

    A question specifically about depth: without generating the fog's depth, how can the shader know when to render the low-res, upscaled fog pixel in front of the full-res source pixel?

    EDIT: Specifically this quote in GPU Gems:

    We only know the depth of the fog during the ray marching step; after that it's lost, and we only have a low-res fog texture.

    (I am not talking about the ray march + test against scene depth part; that's done. I am talking about testing against the scene depth again now that we have the low-res fog texture.)
     
    Last edited: Jun 8, 2018
  4. bitinn

    Oh wait, I think I've got it now: I don't need the depth texture, I just need to make my low-res fog texture transparent/semi-transparent where the source texture is opaque, so my final blit is a simple alpha blend...
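
    i.e. the final blit would be something like this sketch (just to illustrate what I mean; _FogTex is a stand-in name for my low-res fog RT with alpha, sampled bilinearly so the upscale is free):

        #include "UnityCG.cginc"

        sampler2D _MainTex; // full-res scene
        sampler2D _FogTex;  // low-res fog RGBA, bilinear filtering

        float4 frag (v2f_img i) : SV_Target
        {
            float4 scene = tex2D(_MainTex, i.uv);
            float4 fog   = tex2D(_FogTex, i.uv); // bilinear tap = cheap upsample
            // Alpha was written during the ray march (0 where geometry sits
            // in front of the fog), so the composite is a plain "over" blend.
            return float4(lerp(scene.rgb, fog.rgb, fog.a), scene.a);
        }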
     
  5. bgolus

    Yep, that's the simplest solution. It works pretty well most of the time, but it can introduce haloing and obvious resolution-drop artifacts, like the comparison image (c) on the GPU Gems page shows.

    Or page 136 of the 2013 Destiny talk.

    You might look at that first asset I posted (or the GitHub source), which does an additional upsampling pass that compares the downsampled depth with the full-resolution depth to pick a better low-res sample location. It's actually a version of the "depth aware upsampling" technique I mentioned at the end of my post.
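
    The core of that upsampling pass boils down to something like this (a compressed sketch of the idea, not the actual source; _LowDepthTex and _FogTex are stand-in names for the low-res depth and fog targets):

        #include "UnityCG.cginc"

        sampler2D _CameraDepthTexture; // full-res scene depth
        sampler2D _LowDepthTex;        // low-res depth from the downsample pass
        sampler2D _FogTex;             // low-res fog colour
        float4 _LowDepthTex_TexelSize; // xy = 1 / low-res width & height

        float4 frag (v2f_img i) : SV_Target
        {
            float sceneD = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));

            // Check the four low-res texels around this full-res pixel and keep
            // the one whose stored depth best matches the full-res depth, so the
            // fog doesn't bleed across silhouettes.
            float2 offsets[4] = { float2(-0.5, -0.5), float2(0.5, -0.5),
                                  float2(-0.5,  0.5), float2(0.5,  0.5) };
            float bestErr = 1e10;
            float2 bestUV = i.uv;
            for (int k = 0; k < 4; k++)
            {
                float2 uv  = i.uv + offsets[k] * _LowDepthTex_TexelSize.xy;
                float  err = abs(tex2D(_LowDepthTex, uv).r - sceneD);
                if (err < bestErr) { bestErr = err; bestUV = uv; }
            }
            return tex2D(_FogTex, bestUV);
        }

    A common refinement is to fall back to a plain bilinear tap when all four depths are close to the scene depth, so flat regions keep the smooth upsample.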
     
  6. bitinn

    A follow-up question, if you don't mind: after trying a few tricks, I've decided Destiny's approach of storing depth for the low-res alpha is my best bet for a clean edge when blending.

    But I am not quite sure what depth value I should be storing.

    - I intend to compare it to the linearized value of the camera depth texture.
    - Given I am ray marching, I already know the sample position in world space (and the distance we travelled).
    - So I think I can calculate this linear depth value directly, without going through matrix reprojections and Linear01Depth.

    Is it as simple as: currentDepth / (_ProjectionParams.z - _ProjectionParams.y)?

    Basically, I am trying to understand what exactly Unity stores in the depth texture (z buffer), so that I can do it myself.

    (The best resources I can find on this are the Unity 5.5 upgrade guide and Linear01Depth itself, which talk about inverting depth for better depth accuracy, but I still don't fully understand what projection I should use in my case.)
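
    My current guess, as a sketch (this assumes I'm right that the depth texture stores non-linear hardware Z and that Linear01Depth maps it to eye depth divided by the far plane; please correct me if not):

        #include "UnityCG.cginc"

        // Guess: eye depth is the distance along the camera forward axis
        // (NOT the distance travelled along the ray), divided by the far
        // plane alone -- so by far, not (far - near).
        float MarchDepth01(float3 worldPos)
        {
            // View-space z is negative in front of the camera, hence the minus.
            float eyeDepth = -mul(UNITY_MATRIX_V, float4(worldPos, 1.0)).z;
            return eyeDepth / _ProjectionParams.z; // _ProjectionParams.z = far plane
        }

    If that's right, MarchDepth01(worldPos) should be directly comparable to Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenUV)).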
     
    Last edited: Jun 13, 2018
  7. bitinn

    Also, the depth-aware sampling (where we shift the UV slightly) is pretty good; I have seen it used in other fog shaders. But I am just interested in seeing if Destiny's approach might work better for me.
     
  8. bgolus

    In the 2013 Destiny paper they're using this specifically with particles. I'm guessing they're rendering out a premultiplied color buffer and two depth buffers, one set to BlendOp Min and the other BlendOp Max. For particles this makes sense, as they're cards with actual depth. For ray marched volumes or fog this approach isn't really useful, as the min and max are effectively the min and max of the full resolution depth.

    If you read the 2015 Destiny paper you'll see they ended up not using the 2013 approach at all, and instead store the min and max depth from the full resolution depth buffer and render out two color buffers, one at each depth. This is more doable for ray marched fog: set up two low resolution color targets, march through until you reach the first depth and write that as the first target's color, then continue marching & accumulating until you hit the second.
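
    In sketch form (my rough reading of the talk, not Destiny's actual code; MarchFog and the texture names here are hypothetical):

        #include "UnityCG.cginc"

        sampler2D _MinMaxDepthTex; // low-res min/max of the full-res depth (r = min, g = max)

        // Hypothetical helper: your existing ray march, clamped to accumulate
        // fog only between two linear 0-1 depths along this pixel's view ray.
        float4 MarchFog(float2 uv, float d0, float d1)
        {
            // ... app specific ...
            return 0;
        }

        struct FragOut
        {
            float4 nearFog : SV_Target0; // everything in front of the min depth
            float4 farFog  : SV_Target1; // everything between min and max depth
        };

        FragOut frag (v2f_img i)
        {
            float2 minMax = tex2D(_MinMaxDepthTex, i.uv).rg;

            FragOut o;
            o.nearFog = MarchFog(i.uv, 0.0,      minMax.x); // camera -> nearest depth
            o.farFog  = MarchFog(i.uv, minMax.x, minMax.y); // nearest -> farthest depth
            return o;
        }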
     
  9. bitinn

    That's interesting. I read the 2015 paper and thought they were combining the two approaches; I didn't realize they completely gave up the Gaussian blend.

    So the gist of the 2015 approach is: downsample the depth texture storing min/max, then march to both depths keeping both color results, and blend them somehow when merging with the scene.
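
    If I sketch my guess at the blend (entirely a guess at the "somehow" part, with stand-in names for the two layers and the min/max texture):

        #include "UnityCG.cginc"

        sampler2D _MainTex;            // full-res scene
        sampler2D _NearFogTex;         // layer marched up to the min depth
        sampler2D _FarFogTex;          // layer marched from min to max depth
        sampler2D _MinMaxDepthTex;     // low-res min/max linear depth (r = min, g = max)
        sampler2D _CameraDepthTexture; // full-res scene depth

        float4 frag (v2f_img i) : SV_Target
        {
            float  sceneD = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
            float2 minMax = tex2D(_MinMaxDepthTex, i.uv).rg;

            // How far between the stored min and max this full-res pixel sits:
            // 0 = at the nearest depth, 1 = at the farthest.
            float w = saturate((sceneD - minMax.x) / max(minMax.y - minMax.x, 1e-5));

            float4 scene   = tex2D(_MainTex, i.uv);
            float4 farFog  = tex2D(_FarFogTex, i.uv) * w; // fade in only the fog in front of this pixel
            float4 nearFog = tex2D(_NearFogTex, i.uv);    // always in front of the scene

            float3 col = lerp(scene.rgb, farFog.rgb, farFog.a); // far layer first...
            col        = lerp(col, nearFog.rgb, nearFog.a);     // ...then near layer on top
            return float4(col, scene.a);
        }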

    I will give this a go, as my current problem really comes down to low-res fog in front of full-res objects showing artifacts (the fog behind is more or less solved via depth-aware sampling).

    [Screenshot: low-res fog in front of full-res geometry showing edge artifacts]
     
  10. bgolus

    Ah, yeah. For that, the only real solution is to use a softer intersection, or maybe a blur during the composite (but that's hard to do while still keeping the depth-resolved edges clean).
     
  11. brn

    I wish I'd found this post earlier. I've just independently tried doing something similar to the Destiny 2013 approach, only to find some very similar shortfalls. It seems I've spent a whole heap of time reinventing a square wheel :D :| :(