
Question: Is it possible to detect the mesh of the object that is currently being rendered by a shader?

Discussion in 'Shaders' started by ScottySR, Jan 14, 2022.

  1. ScottySR

    ScottySR

    Joined:
    Sep 29, 2018
    Posts:
    48
    I'm trying to improve a water shader that applies an effect to submerged objects. However, there is a problem with how it is currently written. I have an idea for making it look more consistent, but I'm not sure if this is something that can be done with a shader.

    The following picture shows what the current shader is doing (note that the submerged object is drawn into a depth texture, which is how the distance is obtained):
    water1.png

    The green lines show what the current version measures to determine the strength of the effect. As you can see, the problem is that when the angle of the camera changes, the lengths of the measured distances also change, which makes the effect look off. From a low angle the effect doesn't reach very deep, but from a more top-down view it goes much deeper.

    Now here is my improvement idea:
    water2.png

    This would make the effect consistent regardless of where the camera is. But is it even possible to get the distance from a point sampled from the depth texture straight up to where it hits the water geometry (or up to a max distance)? It should also ignore any geometry that may be in between.

    If this is not possible, are there any other possibilities to make something like this?
     
  2. fleity

    fleity

    Joined:
    Oct 13, 2015
    Posts:
    345
    mh
    short answer: no, it's probably not worth it.
    longer answer(s?): in a certain way we are constrained to the pixel we want to render. Sampling a different pixel is possible; it would be nice to know the needed offset up front in order to avoid additional samples, but that is not a hard requirement...
    (thought experiments, I don't know if any of this is a good idea)
    Something along the lines of raymarching / raytracing against the depth buffer could do what you want. Maybe look up SSRTGI; this will be complicated, but the technique could be useful in this scenario.

    Using the world position of the fragment, one could sample the depth buffer multiple times and walk towards the same horizontal position... but that implies many, many texture samples.
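    Roughly, the walking idea could look like this (just a sketch; _StepCount and _StepSize are made-up properties, and it assumes you already reconstructed the world position of the submerged point and that whatever you march against is actually present in the depth texture):

    Code (CSharp):
        sampler2D_float _CameraDepthTexture;
        int _StepCount;  // hypothetical: number of upward steps
        float _StepSize; // hypothetical: world-space metres per step

        // Walk straight up from a submerged world-space point, re-projecting
        // each step into screen space and re-sampling the depth buffer, until
        // the geometry stored in the buffer is in front of the walked point.
        float walkUpDistance(float3 startPos)
        {
            float travelled = 0.0;
            [loop]
            for (int i = 0; i < _StepCount; i++)
            {
                float3 p = startPos + float3(0.0, travelled, 0.0);
                float4 clipPos = mul(UNITY_MATRIX_VP, float4(p, 1.0));
                float2 uv = (clipPos.xy / clipPos.w) * 0.5 + 0.5; // may need a y flip on some platforms
                float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));
                float pointDepth = -mul(UNITY_MATRIX_V, float4(p, 1.0)).z; // eye depth of the walked point
                if (sceneDepth < pointDepth)
                    break; // something in the buffer occludes this point; stop
                travelled += _StepSize;
            }
            return travelled;
        }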

    Or use another buffer: render depth straight from the top, and figure out how to sample that texture from the water shader (transform the position using the top-down camera's projection matrix).
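    The shader side of that second idea could be as simple as this (again a sketch; _TopDownDepth, _TopDownVP, _TopDownNear and _TopDownFar are made-up globals a script would set via Shader.SetGlobalTexture / SetGlobalMatrix / SetGlobalFloat from an orthographic camera looking straight down):

    Code (CSharp):
        sampler2D_float _TopDownDepth; // depth rendered by the top-down camera
        float4x4 _TopDownVP;           // that camera's view-projection matrix
        float _TopDownNear;
        float _TopDownFar;

        // Look up the top-down depth under a world-space position.
        float topDownDepthAt(float3 worldPos)
        {
            float4 clipPos = mul(_TopDownVP, float4(worldPos, 1.0));
            float2 uv = clipPos.xy * 0.5 + 0.5; // orthographic, so no perspective divide
            float raw = SAMPLE_DEPTH_TEXTURE(_TopDownDepth, uv);
            // With an orthographic camera the depth value is linear between the
            // near and far planes (ignoring the reversed-Z convention here).
            return lerp(_TopDownNear, _TopDownFar, raw);
        }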
     
  3. ScottySR

    ScottySR

    Joined:
    Sep 29, 2018
    Posts:
    48
    I did some more research on this, and it seems that the currently rendered object's position is available as a parameter. I can also calculate the wave phases (which are already used by the fragment shader portion) to get the y position of the water surface at given xz coordinates. Figuring out the world coordinates of the sampled point from the depth texture is the last thing I'd need: I can use those xz coordinates to find where to calculate the wave phases, and subtract the point's y position from the water surface's to get the distance I need. To my knowledge there are some built-in functions that allow you to convert screen space coordinates to world space. This is mostly theoretical, so it still may not be possible. I find there is very little documentation on shaders (especially built-in functions), and I constantly keep finding new shader features that Unity's documentation has failed to mention (not that I know what most of those features even do, because a lot of them are not documented anywhere, which is kind of disappointing).
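    For reference, the screen-to-world step I mean would look something like this in the built-in pipeline (a sketch based on the usual scaled-view-ray trick, rather than a specific built-in helper; it assumes the surface shader's Input struct declares float3 worldPos and float4 screenPos):

    Code (CSharp):
        // Eye depth of the point stored in the depth texture
        float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
        float eyeDepth = LinearEyeDepth(rawDepth);

        // IN.screenPos.w is the eye depth of the water fragment itself, so
        // dividing by it rescales the camera-to-fragment ray to unit eye depth;
        // multiplying by eyeDepth then lands on the point the depth texture saw.
        float3 ray = (IN.worldPos - _WorldSpaceCameraPos) / IN.screenPos.w;
        float3 depthSampleWorldPos = _WorldSpaceCameraPos + ray * eyeDepth;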
     
  4. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,546
    You can calculate the world-space position using that depth buffer value and then use that position to sample from your wave height map, assuming you've made it a global property and included any offsets needed to translate a world-space position to a local position on the map.
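    Something like this, for example (a sketch; _WaveHeightMap, _WaveMapOrigin and _WaveMapSize are made-up names you'd set from a script with Shader.SetGlobalTexture / Shader.SetGlobalVector):

    Code (CSharp):
        sampler2D _WaveHeightMap; // global heightmap of the water surface
        float2 _WaveMapOrigin;    // world-space XZ of the map's corner
        float2 _WaveMapSize;      // world-space XZ extent the map covers

        // Water surface height at a given world-space position.
        float waveHeightAt(float3 worldPos)
        {
            float2 uv = (worldPos.xz - _WaveMapOrigin) / _WaveMapSize;
            return tex2D(_WaveHeightMap, uv).r;
        }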

    But I would also ask why you feel this is necessary, since the original behaviour is the physically accurate one: the further the light has to travel through the water to reach your eye, the more faded it should look.

    If my eyes are only a centimeter under the water but the cube is hundreds of feet away, I'm probably not going to be able to see the cube, even though the cube is near the surface as well. But in your example you essentially want it to be visible even in that case.

    I could imagine perhaps blending the two behaviours and just using this as an additional level of artistic control though.
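    For example (a sketch; fadeAlongView would be the original view-distance value, fadeVertical the straight-up one, and _DepthBlend a made-up 0-1 material property):

    Code (CSharp):
        // Blend the physically based view-distance fade with the vertical-depth
        // fade, using _DepthBlend as an artistic control between the two.
        float fade = lerp(fadeAlongView, fadeVertical, saturate(_DepthBlend));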
     
  5. ScottySR

    ScottySR

    Joined:
    Sep 29, 2018
    Posts:
    48
    The original behavior doesn't work well with the waves. The effect sort of goes away as it is obscured by a wave, which looks strange (maybe because the water is not transparent?). The second method attempts to get around this so that waves don't obscure the effect but still affect how deep you can see. Also, another reason for doing this is simply to learn about the different things you can do with shaders, even if it doesn't end up looking any better than the original.
     
  6. ScottySR

    ScottySR

    Joined:
    Sep 29, 2018
    Posts:
    48
    Can someone explain why the camera's height relative to the water causes the depth effect to change? Based on the second picture I added to the first post and my understanding of the math, this should be correct. The calculation of the depth sample's position might be what's wrong, but I can't figure out why. If you know how to fix this, let me know.

    Code (CSharp):
        void surf(Input IN, inout SurfaceOutput o)
        {
            // Get depth from the depth texture
            float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos));
            float depth = LinearEyeDepth(rawDepth);

            // Reconstruct the world position of the point sampled from the depth
            // texture. Note: IN.viewDir can't be used for this; it is normalized
            // (and not in world space once o.Normal is written), and LinearEyeDepth
            // returns distance along the camera's forward axis, not along the view
            // ray. That mismatch is what makes the result change with the camera
            // angle. Instead, scale the camera-to-fragment ray: IN.screenPos.w is
            // the eye depth of the water fragment itself, so viewRay * depth lands
            // on the sampled point. (Requires float3 worldPos in the Input struct.)
            float3 viewRay = (IN.worldPos - _WorldSpaceCameraPos) / IN.screenPos.w;
            float3 depthSamplePos = _WorldSpaceCameraPos + viewRay * depth;

            // Calculate wave phases at the depth sample's XZ position
            float phase0 = _WaveAmp * sin((_Time.y * _WaveSpeed) + (depthSamplePos.x * _WaveOffset) + (depthSamplePos.z * _WaveOffset));
            float phase0_1 = _WaveAmp * cos((_Time.y * _WaveSpeed) - (depthSamplePos.x * -_WaveOffset) - (depthSamplePos.z * -_WaveOffset));
            float offset = (phase0 + phase0_1) * _WaveAmp;

            // Calculate the water height at the depth sample's XZ position
            float3 waterSurface = float3(depthSamplePos.x, IN.objPos.y + offset, depthSamplePos.z);

            float fade = 1.0;
            if (rawDepth > 0)
            {
                // Vertical distance from the sampled point up to the water surface
                // (surface minus point, so submerged points give a positive value)
                fade = saturate(_Softness * (waterSurface.y - depthSamplePos.y));
            }

            if (fade < _FadeLimit)
            {
                // o.Albedo is assumed to have been set earlier in the shader
                o.Albedo = o.Albedo.rgb * fade + _BlendColour.rgb * (1 - fade);
            }
        }
        ENDCG
    }
    Fallback "Diffuse"
}