
Refraction effect - how to apply it to my grab pass

Discussion in 'Shaders' started by TheCelt, Feb 10, 2019.

  1. TheCelt

    Joined:
    Feb 27, 2013
    Posts:
    741
    Hi

    I am confused about how to refract the scene behind an object when looking through it.

    I have the grab pass texture, but I don't know how to offset it by the refraction so that the texture distorts correctly.

    How do you use the world-space refraction vector R to offset the UVs of a sampler2D so that it distorts correctly? I'm having a hard time understanding how to do this in code.

    This is where I am at:

    Code (csharp):

    //fragment shader
    // note: refract() expects the ratio of indices n1/n2 (e.g. 1.0/1.333 going from air into water), not the raw index
    float3 refractionDirection = refract(-i.viewDir, i.normal, _RefractionIndex); // (camera to frag, normal, refraction index)
    float3 refractionPos = i.vertObjPos.xyz - (refractionDirection * _RefractionFactor); // offset the object-space position along the refracted ray
    float4 refractionClipPos = UnityObjectToClipPos(refractionPos); // object to clip space
    float4 refractionScreenPos = ComputeGrabScreenPos(refractionClipPos); // convert to UVs on the grab pass texture

    float3 refractColor = float3(1, 1, 1);
    if (waterDepth >= 0)
    {
        refractColor = tex2Dproj(_BackgroundTexture, UNITY_PROJ_COORD(refractionScreenPos)).rgb;
    }

    return fixed4(col * refractColor, alpha);
    But I am getting some wacky results and I'm not sure how to fix it.
    Refraction values: _RefractionIndex = 1.333



    Visual of the current effect: https://i.imgur.com/8iWdIrM.gif
     
    Last edited: Feb 11, 2019
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Using a refraction to offset a world or local space position for this may seem like the correct and logical path, but it’s missing one key thing.

    It’s wrong.

    I suspect you might already know this, but let’s dive in a bit.

    Let’s step back and look at this from a ray-traced perspective. A ray direction is refracted by the angle of incidence and the index of refraction. That’s all good, and the refract function with those inputs, assuming they’re all in object space, will give you the new ray direction. But here’s where things fall down. That ray should continue out until it hits something, not travel some arbitrary distance and then change direction to that new position’s view ray. Plus, by using object space, the object’s scale will affect the distance and direction too.

    So you could do all of this in world space instead, and then the offset would be at least consistent, but that doesn’t solve the ray direction going “wrong”.

    But of course you can’t really trace along the refracted ray direction*; most of the time the ray will be heading off screen, so there’s nothing to sample. And this is just supposed to be an approximation anyway, right? The problem with doing this in either object or world space is that there’s a not insignificant chance the resulting offset position is outside the screen bounds too. That is indeed what looks to be happening in your example above. I’m going to guess you have an object that’s being scaled, so the refraction factor is pushing way outside the bounds of screen space.

    So, what can you do? A few things. The first is to do all of this in world space so that object scale doesn’t have any impact. The second is to calculate the direction in world space, but apply it as an offset direction in screen UV space. Why? Because you will likely need to clamp it, or fade it out when the offset goes off screen. Alternatively, you could fade to a reflection probe sample when the ray heads toward the camera or the offset goes off screen.
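
    Roughly, that might look like the following fragment-shader sketch (untested; _RefractionStrength is a placeholder property, and i.worldNormal, i.worldViewDir, and i.grabPos are assumed interpolators, with i.grabPos coming from ComputeGrabScreenPos):

    Code (CSharp):

    // Refract in world space, but apply the result as a screen-UV offset,
    // fading the offset out as the sample point nears the screen edge.
    float3 viewDir = normalize(i.worldViewDir); // surface to camera
    float3 refractDir = refract(-viewDir, normalize(i.worldNormal), 1.0 / 1.333);

    // Take the view-space xy of the refracted direction as a UV offset.
    float2 offset = mul((float3x3)UNITY_MATRIX_V, refractDir).xy * _RefractionStrength;

    float4 grabPos = i.grabPos;
    grabPos.xy += offset * grabPos.w; // offset the projective UVs

    // Fade back to the unoffset UVs near the screen edges.
    float2 uv = grabPos.xy / grabPos.w;
    float edgeFade = saturate(10.0 * (0.5 - max(abs(uv.x - 0.5), abs(uv.y - 0.5))));
    grabPos.xy = lerp(i.grabPos.xy, grabPos.xy, edgeFade);

    float3 refractColor = tex2Dproj(_BackgroundTexture, UNITY_PROJ_COORD(grabPos)).rgb;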

    Really this usually gets solved in a super hacky way: convert the surface normal direction into screen UV space, multiply it by some small scalar, and be done with it. No real refraction at all.

    This old GPU Gems article skips even bothering to convert to screen space and just uses the normal map as is.
    https://developer.nvidia.com/gpugems/GPUGems2/gpugems2_chapter19.html
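
    In code, that version is only a few lines (a sketch, assuming a _BumpMap normal map and a _Distortion strength property, both placeholder names):

    Code (CSharp):

    // The common hack: nudge the grab-pass UVs by the tangent-space normal.
    // No physical refraction at all, just a small screen-space distortion.
    float3 tangentNormal = UnpackNormal(tex2D(_BumpMap, i.uv));
    float4 grabPos = i.grabPos; // from ComputeGrabScreenPos in the vertex shader
    grabPos.xy += tangentNormal.xy * _Distortion * grabPos.w;
    float3 refractColor = tex2Dproj(_BackgroundTexture, UNITY_PROJ_COORD(grabPos)).rgb;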

    * Actually, you can; this is what screen space reflections do.
     
  3. TheCelt

    Joined:
    Feb 27, 2013
    Posts:
    741
    Hi, thanks for the reply.

    I have the GPU Gems article bookmarked; that was my backup plan in the event this ray-casting approach failed.

    I have made much better progress which you can see here: https://i.imgur.com/EetnZic.gif

    The problem is it's applying refraction to the scene behind the water even where it's above the water line. I tried doing a depth comparison to skip whatever is above the water, but it doesn't quite work.

    So someone suggested (though it was not easy to understand) making a render texture of only what's under the water, and then somehow putting the depth values in the alpha channel to get rid of the depth buffer entirely.

    The issue is I am not even sure how I would render a texture of only the objects under the water, including objects partially through the water surface. I haven't found much info on it for Unity, or at least on the technique they were trying to describe. I am also not sure how it works if the water has animated waves.

    Perhaps you know what they were talking about with that?


    Also, in case you are wondering why I am still adamantly going down this route, it's because of this WebGL demo that used the same technique: http://madebyevan.com/webgl-water/

    They used the same approach and the refraction looks damn good in my opinion, but sadly they didn't do a write-up on it, other than for the caustics effect.

    Current code I have:


    Code (CSharp):

    //vertex shader
    float3 worldNormal = UnityObjectToWorldNormal(v.normal);
    float3 objToEye = normalize(WorldSpaceViewDir(v.vertex)); // vertex to camera in world space; refract() expects normalized inputs
    float3 refraction = normalize(refract(-objToEye, worldNormal, 1.0 / _RefractIndex));
    float3 objRefraction = mul((float3x3)unity_WorldToObject, refraction) * _RefractDistance; // direction back to object space, scaled
    float4 newvertex = UnityObjectToClipPos(float4(objRefraction, v.vertex.w));

    o.refractuv = ComputeGrabScreenPos(newvertex);
    COMPUTE_EYEDEPTH(o.refractuv.z); // stash the surface's linear eye depth in z (tex2Dproj only uses xy and w)
    And


    Code (CSharp):

    //fragment shader
    float sceneDepth = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.refractuv)).r; // sample the depth texture
    sceneDepth = LinearEyeDepth(sceneDepth); // from non-linear device depth to linear eye depth

    float waterDepth = (sceneDepth - i.refractuv.z) / _FadeFactor; // how far behind the water surface the scene is
    float uvDepth = saturate(waterDepth);

    fixed3 col = tex2D(_WaterDepth, float2(uvDepth * _ColorRange, 1)).rgb;
    half alpha = tex2D(_WaterDepth, float2(uvDepth, 0)).r;
    alpha = saturate(alpha + _MinimumAlpha);

    // note: LinearEyeDepth never returns a negative value, so this test never passes;
    // testing waterDepth < 0 is likely what was intended for rejecting above-water samples
    if (sceneDepth < 0) return fixed4(col, alpha);
    float3 refractColor = tex2Dproj(_BackgroundTexture, UNITY_PROJ_COORD(i.refractuv)).rgb;

    return fixed4(col * refractColor, alpha);
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Like this?
    https://catlikecoding.com/unity/tutorials/flow/looking-through-water/

    It’s not perfect, but it works well enough that most people won’t notice.

    Sure. It requires rendering with a custom projection matrix that places the near plane at the water surface. Real-time planar reflections work the same way, just with the projection flipped. I think Half-Life 2: Lost Coast was one of the first games to use this technique. It’s pretty much never used anymore because depth rejection is good enough. At most, games make a copy of the screen buffer with the above-water area blacked out to do stuff like blurs. See this page:
    https://eidosmontreal.com/en/news/hitman-ocean-technology
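
    That near-plane trick is Eric Lengyel’s oblique frustum clipping; in Unity you’d normally just build the matrix on the CPU with Camera.CalculateObliqueMatrix(clipPlane), where clipPlane is the water plane in camera space. The underlying math is short though; here it is written out as a shader-style sketch (OpenGL-style clip conventions):

    Code (CSharp):

    // Bend the near plane of a projection matrix onto an arbitrary camera-space
    // plane (Lengyel's oblique clipping). Normally done on the CPU.
    float4x4 MakeObliqueProjection(float4x4 proj, float4 clipPlane)
    {
        float4 q = float4(
            (sign(clipPlane.x) + proj[0][2]) / proj[0][0],
            (sign(clipPlane.y) + proj[1][2]) / proj[1][1],
            -1.0,
            (1.0 + proj[2][2]) / proj[2][3]);
        // Scale the plane so it maps onto the near clip plane.
        float4 c = clipPlane * (2.0 / dot(clipPlane, q));
        proj[2] = float4(c.x, c.y, c.z + 1.0, c.w); // replace the third row
        return proj;
    }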

    Except that’s doing actual, full-on raytracing. Raytracing a sphere and a box in a shader is fairly straightforward and cheap, so he can do real refraction and then follow the ray until it intersects the wall or the sphere, otherwise falling back to a cubemap. The entire scene’s contents fit into the shader itself. You’re not raytracing; you’re doing a screen grab and displacing the UV sample.
    https://github.com/evanw/webgl-water/blob/master/renderer.js
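
    For a sense of how cheap that kind of analytic raytracing is, a ray-sphere intersection is just a quadratic (a generic sketch, not Evan’s actual code):

    Code (CSharp):

    // Analytic ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    // Returns the nearest positive hit distance, or -1 if the ray misses.
    float RaySphere(float3 rayOrigin, float3 rayDir, float3 center, float radius)
    {
        float3 oc = rayOrigin - center;
        float b = dot(oc, rayDir); // rayDir assumed normalized
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0) return -1.0; // ray misses the sphere
        float t = -b - sqrt(disc); // nearest root
        return t > 0.0 ? t : -b + sqrt(disc); // use the far root if we start inside
    }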
     
  5. TheCelt

    Joined:
    Feb 27, 2013
    Posts:
    741

    Thanks for the reply.

    Regarding this part, I am not sure why, but my depth rejection does not work at all. As you can see in my second post, I do skip where the depth value is negative, and if you look at the gif I still get refraction occurring behind my object, as if I am not doing any form of checking. So I am confused as to why this is the general approach yet I can't get mine to work at all.
     
  6. kripto289

    Joined:
    Feb 21, 2013
    Posts:
    501
    Hello,
    It makes no sense to create a new topic, so I'll ask here.
    Is it possible to use the physical height instead of _RefractDistance?
    I don't want to use raymarching to find the new depth. Is there any other simple way to get the new depth?
    (I need to find the length of the red ray in the attached screenshot.)
     

    Attached Files:

  7. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Nope.

    I mean, you could do the naive screen-space offset, then get the depth at that offset, but that wouldn’t get you the depth of an actual refracted ray. You have to raymarch the depth buffer (à la screen space reflections), do actual proper raytracing of some other representation of the scene, or re-render the underwater area using some kind of skewed projection matrix that mimics the refraction.
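
    Raymarching the depth buffer along the refracted ray might look roughly like this (an untested sketch with a fixed step count and no hit refinement; refractDirWS, _Steps, and _StepSize are placeholder names):

    Code (CSharp):

    // March along the refracted ray in world space, projecting each sample
    // into screen space and comparing against the camera depth texture.
    float3 rayPos = i.worldPos;
    float3 rayStep = refractDirWS * _StepSize; // refracted ray direction, world space

    float2 hitUV = float2(-1, -1);
    [loop]
    for (int s = 0; s < _Steps; s++)
    {
        rayPos += rayStep;
        float4 clipPos = mul(UNITY_MATRIX_VP, float4(rayPos, 1.0));
        float4 screenPos = ComputeScreenPos(clipPos);
        float2 uv = screenPos.xy / screenPos.w;
        if (any(uv < 0) || any(uv > 1)) break; // ray left the screen

        float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_LOD(_CameraDepthTexture, float4(uv, 0, 0)));
        float rayDepth = -mul(UNITY_MATRIX_V, float4(rayPos, 1.0)).z; // eye-space depth of the ray
        if (rayDepth > sceneDepth) { hitUV = uv; break; } // the ray passed behind the depth buffer
    }
    // hitUV now holds the refracted sample UV, or -1 if no hit was found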
     
  8. a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,924
    I've been thinking about this, and made a quick demo to check. If the background topology is simple (e.g. a plane with known orientation), then straightforward refract + sample grabpass works fine, modulo (as you point out) the obvious problem that as you get close to the edge of the screen and/or tilt the camera relative to the glass/water/whatever surface (increasing refraction angles) you sample off the edge of your grabpass.
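
    For the simple-plane case that is just: refract the view ray, intersect it with the known plane, and project the hit point back to screen UVs to sample the grab pass. A rough sketch, assuming a horizontal floor at world height _FloorHeight (a placeholder property):

    Code (CSharp):

    // Refract the view ray and intersect it with a known horizontal plane,
    // then project the hit point to screen space and sample the grab pass there.
    float3 viewDir = normalize(i.worldPos - _WorldSpaceCameraPos);
    float3 refractDir = refract(viewDir, normalize(i.worldNormal), 1.0 / 1.333);

    // Ray-plane intersection with the plane y = _FloorHeight.
    float t = (_FloorHeight - i.worldPos.y) / refractDir.y;
    float3 hitPos = i.worldPos + refractDir * t;

    // Project the hit position into grab-pass UVs and sample.
    float4 clipPos = mul(UNITY_MATRIX_VP, float4(hitPos, 1.0));
    float4 grabPos = ComputeGrabScreenPos(clipPos);
    float3 refracted = tex2Dproj(_BackgroundTexture, UNITY_PROJ_COORD(grabPos)).rgb;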

    It looks great, up until that point - much better than standard refraction hacks (visibly, obviously, painfully better - shows how ugly the "fudge the offset a little" approaches really are :)).

    So ... I was wondering: what if you simply insert a pre-render of the scene in which you shift the camera frustum so that clip-space is expanded, and then sample that instead of the normal grab-pass?

    Off the top of my head, this would require maintaining FOV (otherwise your coords will be incompatible) and pulling the camera backwards.

    Net effect is that your grabbed texture would be lower texel res than 1:1 pixels, but for most refraction effects you're going to do some blurring/downsampling anyway, so that might not be a problem (and if you really care, you could render the pre-pass at higher res, although that's likely to really start hurting framerate (*))

    * - and then I thought: but you probably could filter out everything except the background and underwater objects when doing this pass, so the render cost could still be low even at eg 2x higher resolution.
     
  9. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    It’s a pretty common suggestion in academic papers on any kind of screen-space effect to use a projection that’s larger than the main view to cover the areas off screen. It’s a band-aid though, especially for things like refraction where the ray might land way off screen. Using a pre-rendered image is also not uncommon; it’s more often used for faking interior spaces inside windows, like the various interior mapping techniques. For water it’d be a bit more difficult unless it’s a very limited area or a fixed camera view. I mean, once you go the route of pre-rendering it, you could render out a much wider view than you would otherwise need, and as long as your camera doesn’t shift too much it’d work pretty well.
     
  10. a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,924
    So I tried it out (briefly). Here we have:

    1. "none": fake refractions (simply jittering where we sample the grab pass)
    2. "low": real refractions, but when the light bends off the edge of the screen you get texture-clamp artifacts
    3. "high": real refractions, with a second camera pre-generating a second grab pass, and the GrabScreenPos altered by w to make the main camera sample from the second camera when it goes outside the range of its own grab pass's valid data


    NB: because the second cam has a different orientation of its near/far planes (I rotated the cam rather than repositioning it; on a first attempt, rotation is only one line of code), the bottom edge of the refractions is warped, but the fact that the human eye is confused by refraction anyway goes a long way towards hiding this artifact. It might even be good enough for some games.

    I also had an attempt at lining up the second cam at the same origin but using an offset projection matrix, to see how good or bad that would look, but I couldn't get my math right ... probably worth trying, to get rid of the subtle distortion.