[SOLVED] Zelda BotW Terrain Intersection Blending

Discussion in 'Shaders' started by flogelz, Oct 6, 2019.

  1. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    I want to implement some asset blending into my terrain shader and I really like the approach of the technique used in Zelda, like @emilmeiton explained here: https://forums.tigsource.com/index.php?topic=67068.0

    Instead of letting every object read from the terrain individually, the terrain itself gets rendered after all the objects in the scene, using the soft-particle depth intersection to blend. Now I have two questions:

    1. Rendering the whole terrain in forward (because of the transparency) would not be ideal. So is there a way to extract the depth texture at a point where everything except the terrain has been rendered, for example via a command buffer? Or is there another way to blend the terrain (but still as an opaque object, to avoid forward)?

    2. From a performance standpoint, I'm questioning whether this is even worth it. Because if the terrain gets rendered last, no occlusion/culling of pixels happens until the terrain is drawn-
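    For context, the soft-particle style blend amounts to comparing the terrain fragment's depth against a depth texture rendered beforehand. A minimal fragment-shader sketch of that comparison (texture and property names like _SceneDepthTex and _BlendDistance are placeholders I made up, not built-ins):

```hlsl
// Assumed: a depth texture rendered beforehand that contains everything
// except the terrain (_SceneDepthTex is a placeholder name, set globally).
sampler2D _MainTex;
UNITY_DECLARE_DEPTH_TEXTURE(_SceneDepthTex);
float _BlendDistance; // hypothetical fade width in eye-space units

float4 frag (v2f i) : SV_Target
{
    // Eye-space depth of whatever was rendered before the terrain at this pixel
    float sceneZ = LinearEyeDepth(
        SAMPLE_DEPTH_TEXTURE_PROJ(_SceneDepthTex, UNITY_PROJ_COORD(i.screenPos)));
    // Eye-space depth of this terrain fragment (screenPos from ComputeScreenPos)
    float terrainZ = i.screenPos.w;

    // 0 right at the intersection, 1 once _BlendDistance away from it
    float fade = saturate((sceneZ - terrainZ) / _BlendDistance);

    float4 col = tex2D(_MainTex, i.uv);
    col.a *= fade; // fade the terrain out where it meets another surface
    return col;
}
```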
     
  2. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Ok, I thought about this all day, and after some research I noticed that in-game, the shadows disappear under the terrain. The blending at object intersections lets you see through the terrain onto the part of the intersecting object beneath it, and this is a big thing. Because if the terrain were a transparent object that cast shadows, all objects would become shadowed as soon as they were underground.

    I later remembered that a similar effect happens with deferred decals. By modifying the GBuffer using the finalgbuffer modifier, I should be able to make the terrain see-through while still letting it render in deferred! The shadows disappearing under the terrain is perfect too, since they would look wrong anyway.

    Once I get depth intersection on opaque objects working, I should be good to go-
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    11,918
    No. You render everything you want the terrain to have a soft transition with into a depth texture beforehand. When using deferred there's no way to read back or extract the depth buffer mid-stream. To use the built-in camera depth texture you have to wait until Unity resolves the depth buffer to the depth texture itself at the end of the GBuffer passes.

    Technically you could render the terrain as a depth-only pass to solve the overdraw problem. The issue is that doing this in the built-in rendering paths may be non-trivial. Ideally you'd be able to use something like commandBuffer.DrawRenderer(terrainRenderer, depthOnlyMat) at the start of the GBuffer pass to fill it out. I have no idea if that would actually work though.
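    That idea would look roughly like the following (an untested sketch, as said above it may or may not actually work; depthOnlyMat is assumed to be a material whose shader writes only depth):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Untested sketch: draw the terrain depth-only at the start of the GBuffer
// pass to prime the depth buffer and cut down terrain overdraw.
[RequireComponent(typeof(Camera))]
public class TerrainDepthPrepass : MonoBehaviour
{
    public Renderer terrainRenderer; // the terrain's renderer
    public Material depthOnlyMat;    // assumed depth-only shader

    CommandBuffer cmd;

    void OnEnable()
    {
        cmd = new CommandBuffer { name = "Terrain depth prepass" };
        cmd.DrawRenderer(terrainRenderer, depthOnlyMat);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeGBuffer, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeGBuffer, cmd);
    }
}
```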

    For rendering the terrain itself, there's nothing stopping you from using a custom shader with blending enabled. Being in the opaque queue or deferred doesn't prevent that; it's just not something usually done. Look at some examples of deferred mesh decals, like in this thread. If it's not immediately obvious, these are decals made from mesh strips that sit very close to the surface and then use alpha blending to mix with the existing GBuffer.


    Now this isn't using any kind of soft blending, but that could be added with the separate depth render pass I mentioned. That said, I'm pretty sure BotW renders the terrain as a forward pass separate from the deferred shading used by everything else, so it can just directly sample the depth from that (yeah, really).
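    For illustration, the mesh-decal trick boils down to enabling blending on the deferred pass in ShaderLab. A sketch of the relevant render state only (not a complete shader; in surface shaders the finalgbuffer modifier gives the same kind of per-GBuffer control):

```shaderlab
Pass
{
    Tags { "LightMode" = "Deferred" }
    // Mix this pass's output into the existing GBuffer contents
    // instead of overwriting them.
    Blend SrcAlpha OneMinusSrcAlpha
    ZWrite Off
    // ... CGPROGRAM fragment shader writing to the GBuffer render targets
}
```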
     
    flogelz and MadeFromPolygons like this.
  4. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Ah ok, what a bummer. I also tried a multi-camera setup, where the first camera renders every other object while the second renders the terrain. This way I can get the depth texture of everything, to do the blending on an opaque object. But because it's a multi-cam setup, shadows don't affect the second layer, and that's not ideal either. (Getting the depth texture on an opaque object really doesn't work, and when it does, the texture lags one frame behind the camera movement. Which is kind of logical, since it gets created after the opaque objects anyway.)

    Sounds interesting for sure, but does it write the depth by itself (just by rendering), or would I have to output the depth from the shader?

    Actually, that's how I imagined this in my second post! Blending with the finalgbuffer function. And it works- It's just that not being able to grab the depth makes it impossible to get a mask for blending. This whole thing would be solvable with that mask-

    The reason I don't think they use forward is that the terrain casts/receives shadows, but where the intersections are, you can see the meshes underneath. And those aren't shadowed at all. Exactly this effect happens when making a deferred decal completely see-through: it removes all shadows behind it. A forward object would make everything behind it shadowed instead.

    Here are some examples, but maybe I missed something:

    Deferred Blending Ref_1.png

    Deferred Blending Ref_2.png

    Deferred Blending Ref_3.png
    (Also there's this line, which could either be a shadow artifact of the terrain or maybe AO? Not sure)
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    11,918
    You are correct, it is not using a forward pass to render the terrain. It's rendering everything in the scene, then the terrain afterwards, both into the same GBuffers.

    Let's just say I ... looked into it.
    upload_2019-10-8_10-39-21.png

    For some it appears to be AO; for others it appears to be something in the art, made assuming the "real" intersection (like the second image). They do the AO purely from the depth buffer, and that still has a hard edge.
     
    flogelz and neoshaman like this.
  6. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,329
    Maybe this is a case for a custom SRP?
     
    flogelz likes this.
  7. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Wouldn't I be able to render the terrain via a command buffer after the depth resolve stage? The shader would then have to modify the GBuffers and the depth buffer (which are the only textures needed in deferred to make everything else happen, right?). Because I could sample the just-created depth texture, blending would also work. I don't have much experience with command buffers though, and I'm not sure how to compute depth in a shader and write it into the depth buffer correctly-

    (And also thanks for the image! This is literally the first time seeing the normal buffer of the game, after looking for it for 1 year:'D)

    Probably yes, but command buffers could solve this, I guess?
     
    Last edited: Oct 9, 2019
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    11,918
    Not while keeping it deferred, no. Between the initial rendering of the GBuffers and the creation of the camera depth texture, the rest of the GBuffers are unbound as render targets and set up as global texture properties. This means there's no easy way to render to those textures again. By the time the depth texture exists, the only render target bound is the "emission" GBuffer, which is used as the final frame buffer. Technically you could render to them one at a time, but that's super inefficient. Plus you'd still need to update the depth texture again after that to ensure the deferred lighting works properly, which is another wrinkle. Unity still doesn't seem to expose a way to resolve the depth from a depth buffer to a texture the way the built-in rendering paths do.

    Yeah, definitely custom SRP territory.
     
    Last edited: Oct 9, 2019
    flogelz likes this.
  9. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Aww man. I thought about this over the last few days and tried out some stuff, but I didn't really find a solution. I read about a depth prepass, which could maybe solve this, but rendering everything twice just for some small object blending really isn't a great deal. Also, UE4 has a feature called pixel depth offset, and I'm not sure whether it's something different that could be used in Unity too, or just their name for the camera's depth texture.

    Just out of interest, would this be possible in an SRP? Or are there any other good terrain blending techniques somebody knows of?
     
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,329
    Try looking at URP render passes before going too deep into SRP, but if you go that way, the Catlike Coding tutorials are a good intro. You just have to keep in mind that they're based on an older implementation, which means the principles stay the same but the API is different.
     
    flogelz likes this.
  11. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Ok, so a quick update, because I solved the problem and made it work in deferred! I experimented a lot, and none of the attempts really worked until I tried this one:

    So basically nothing changes from the base setup: the terrain shader renders after everything else in the world (but still in the opaque queue, preferably using alpha test). The terrain shader uses the approach from the thread @bgolus mentioned earlier, which allows us to modify the GBuffer directly. With this we can blend objects into each other by blending them into the GBuffer.
    The problem then was getting access to the depth buffer for blending, which frankly isn't really possible, except maybe with a custom render pipeline, which I can't use because I want to stay on the standard pipeline for several reasons. In the end I solved it with a script that detects nearby objects that should be blended into the terrain and feeds them into a command buffer, which renders only their depth via a material and sets the texture globally in the "before GBuffer" stage.

    This way I now have the depth information needed for blending, while still keeping everything inside the deferred render path! Finally solved! Thanks for all the help guys!

    Terrain_pic_3.png
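    A rough sketch of what that setup might look like (my reconstruction from the description above, not the actual script; _BlendDepthTex and depthOnlyMat are made-up names):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Reconstruction of the described approach: render the depth of nearby
// blendable objects into a texture before the GBuffer pass and expose it
// globally so the terrain shader can sample it for the intersection mask.
[RequireComponent(typeof(Camera))]
public class BlendDepthCapture : MonoBehaviour
{
    public Renderer[] blendableObjects; // filled by some proximity check
    public Material depthOnlyMat;       // assumed shader that outputs only depth

    CommandBuffer cmd;
    RenderTexture depthRT;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        depthRT = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 24,
                                    RenderTextureFormat.Depth);

        cmd = new CommandBuffer { name = "Blendable object depth" };
        cmd.SetRenderTarget(depthRT);
        cmd.ClearRenderTarget(true, true, Color.clear);
        foreach (var r in blendableObjects)
            cmd.DrawRenderer(r, depthOnlyMat);
        // Make the result visible to the terrain shader as a global texture.
        cmd.SetGlobalTexture("_BlendDepthTex", depthRT);

        cam.AddCommandBuffer(CameraEvent.BeforeGBuffer, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeGBuffer, cmd);
        depthRT.Release();
    }
}
```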
     
    chingwa, Reanimate_L, jRocket and 4 others like this.
  12. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,706
    @flogelz so the terrain acted as a decal?
     
    flogelz likes this.
  13. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    Basically yes.
     
  14. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,706
    And there's no fillrate overhead for that?
     
    flogelz likes this.
  15. flogelz

    flogelz

    Joined:
    Aug 10, 2018
    Posts:
    131
    I'm not sure about the original game, but at least in my case, since the terrain gets rendered after everything else, overdraw may become a problem. I could imagine that they draw a second version of the terrain beforehand, slightly shrunk down, so that objects still get culled through the depth test but are drawn deep enough for the blending to occur. But this is just a quick assumption on my part-
     
    Alic likes this.
  16. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,763
    @flogelz This looks very promising. Are you using this method on a Unity Terrain with a custom shader...? Or are you using a mesh-based terrain with a custom shader instead?
     
  17. LeifStro

    LeifStro

    Joined:
    Mar 23, 2015
    Posts:
    18
    @flogelz @chingwa @neoshaman this technique is awesome for terrain. The only problem is that the depth buffer now has no information for the terrain, so any effects that require the depth buffer to display over the terrain are going to have issues. Any workaround for this? I.e., is it possible to add the terrain to the depth buffer after it's drawn?
     
  18. MrPawolo

    MrPawolo

    Joined:
    Mar 29, 2020
    Posts:
    2
    Hi. I found another solution that works in URP in forward. Basically I made opaque shaders behave like transparent ones, with some changes. I will show the important steps to reproduce it with Amplify Shader.
    Blendable objects need to be drawn in "AlphaTest + 1" (they basically need to be drawn after the objects you want to blend with)
    upload_2022-7-30_14-44-3.png

    then blend mode

    upload_2022-7-30_14-45-11.png

    and you need to include one file "Packages/com.unity.render-pipelines.universal/Editor/ShaderGraph/Includes/ShaderPass.hlsl"
    upload_2022-7-30_14-46-12.png

    then you need to create a custom function with a float3 in and a float3 out for the vertex offset:

        #if (SHADERPASS == SHADERPASS_FORWARD)
            return FragPos;
        #elif SHADERPASS == SHADERPASS_SHADOWCASTER
            return Shadow;
        #elif SHADERPASS == SHADERPASS_DEPTHNORMALS || SHADERPASS == SHADERPASS_DEPTHNORMALSONLY
            return Shadow;
        #else
            return DephPos;
        #endif

    this is the most important part of the shader
    upload_2022-7-30_14-51-26.png

    and this is the result
    upload_2022-7-30_14-52-32.png with the settings:
    blending 0.1
    offset -0.1
    NormalOffset -0.05 // so that the shadows don't produce artifacts in the blend zone

    In 2021.3.5 and URP 12.1.7 it has problems with SSAO in DepthNormals mode; in Depth mode it works ok. I don't know what the performance impact is.
    One important note: it seems to use the last frame's color for the blend, at least it looks like that. And it doesn't work in VR in single pass instanced mode.
     
    Noogy, tmonestudio, rostykul and 2 others like this.
  19. MrPawolo

    MrPawolo

    Joined:
    Mar 29, 2020
    Posts:
    2
    This is the shader source file if anyone wants it :3
     

    Attached Files:

    lilacsky824 and Radivarig like this.
  20. rostykul

    rostykul

    Joined:
    Oct 9, 2018
    Posts:
    21
    I have found the same to be true in my experimentation. The same can be done with Shader Graph + a Render Objects feature: one feature pass for the offset depth write, then a transparent feature pass to feather those edges. Any feature with DepthNormals mode will not work, however; with that mode, something strange happens with how depth is written. I hope Unity fixes that one day :confused: