Seeking a background fade out for the frustum border.

Discussion in 'Shaders' started by DanVioletSagmiller, Aug 14, 2019.

  1. DanVioletSagmiller

    Joined:
    Aug 26, 2010
    Posts:
    203
    I have a WebGL build that shows in a page. That page's background could be white, black, or some other solid color. I'm looking for a shader that I can apply to certain objects (floor/walls) that will fade to the background color at the edge of the frustum. The objects of focus should still go all the way to the edges; without that requirement, I could just use post-processing to generically overwrite the edges. And maybe I can still approach this through post-processing.

    But ultimately, I want a way to still show items in their proper setting, but have them fade into the page, with the exception of the object of focus. E.g. if a sword is shown, I might have it on a wall over a fireplace. I would want the fireplace to fade to a white page background, but if the sword reaches the edge, it will simply be cut off.

    I could use two cameras with different views, and have a UI layer with the fade-out border, then a transparent image over that showing the items of focus. The items would live in the same space, but the layer filters would keep them rendered separately.

    I'm wondering if there is a shader that already does similar effects though.

    - Thanks.
     
  2. Namey5

    Joined:
    Jul 5, 2013
    Posts:
    188
    Not entirely sure what you mean by 'edge of the frustum'. If you mean fade near the edges of the camera frustum, then all you would need to do is fade to the background colour as the NDC-relative position reaches its extremes [-1,1]. For example, the following is a basic shader that supports a main texture and fading parameters:

    Code (CSharp):
    Shader "Unlit/FrustumFade"
    {
        Properties
        {
            _Color ("Background Colour", Color) = (1,1,1,1)
            _MainTex ("Texture", 2D) = "white" {}
            _Fade ("Fade Range", Range (0,1)) = 0.6
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                // make fog work
                #pragma multi_compile_fog

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    // full float4 so the clip-space w is still available in the fragment shader
                    float4 pos : TEXCOORD1;
                    UNITY_FOG_COORDS(2)
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;

                fixed4 _Color;
                half _Fade;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                    // pass the raw clip-space position along; the divide by w happens per-pixel
                    o.pos = o.vertex;
                    UNITY_TRANSFER_FOG(o,o.vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    fixed4 col = tex2D (_MainTex, i.uv);
                    UNITY_APPLY_FOG(i.fogCoord, col);

                    // perspective divide, then mirror into the positive quadrant
                    i.pos.xyz = abs (i.pos.xyz / i.pos.w);
                    half fade = smoothstep (_Fade, 1.0, max (max (i.pos.x, i.pos.y), i.pos.z));

                    return lerp (col, _Color, fade);
                }
                ENDCG
            }
        }
    }
     
    Last edited: Aug 18, 2019
  3. iSinner

    Joined:
    Dec 5, 2013
    Posts:
    201
    What is stored in each value of o.vertex?
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Don’t divide by w in the vertex shader; you need to do this in the fragment shader, or it’ll lead to significant distortion. Also Z differs between OpenGL and everything else, so you’ll need to account for that. In OpenGL it’s a -w to w range, in everything else it’s a w to zero range.

    Also, either way, after the divide by w the x and y will be in a -1 to 1 range, so you presumably want the abs of that in the fragment shader, otherwise it’ll only fade out correctly in one corner of the view.

    o.vertex is the homogeneous clip space position. It can be thought of as a “frustum space” position, but the xy values aren’t in a -1 to 1 range. They’re in a -w to w range, and the w is the view space depth. The divide by w is known as the “perspective divide”, as this position format corrects for the perspective distortion of linearly interpolated values... the distortion you’ll see if you divide by w in the vertex shader.

    Passing “o.vertex” in both o.vertex and o.pos separately might seem redundant, but the SV_POSITION value gets transformed by the GPU and isn’t the same data by the time it reaches the fragment shader.
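    For illustration, a rough fragment-shader sketch of that approach applied to the shader above (it reuses the _MainTex/_Color/_Fade properties and the v2f layout from that post; the UNITY_REVERSED_Z branch is one way of handling the platform difference in the z range, and it only fades toward the far plane, not the near plane):

    Code (CSharp):
    // Sketch only - names come from Namey5's shader above.
    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 col = tex2D (_MainTex, i.uv);

        // perspective divide per-pixel, after interpolation
        float3 ndc = i.pos.xyz / i.pos.w;
        // xy run -1..1 across the screen, so take the abs to fade at every edge
        float2 xyDist = abs (ndc.xy);

    #if defined(UNITY_REVERSED_Z)
        float zDist = 1.0 - ndc.z;       // D3D-style: z/w runs from 1 (near) to 0 (far)
    #else
        float zDist = ndc.z * 0.5 + 0.5; // OpenGL-style: z/w runs from -1 (near) to 1 (far)
    #endif

        half fade = smoothstep (_Fade, 1.0, max (max (xyDist.x, xyDist.y), zDist));
        return lerp (col, _Color, fade);
    }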
     
    iSinner likes this.
  5. Namey5

    Joined:
    Jul 5, 2013
    Posts:
    188
    Yeah, that was my bad. I tend to write these answers in fragments while switching back and forth into Unity, so I end up rewording the answer and accidentally leaving in some of the old wording (hence the 'clip space'). I also wasn't really thinking when I went for the vertex shader optimisation of the fade parameter and just assumed it would interpolate fine (even though clip depth isn't linear). I wasn't sure if we needed a fade at the camera's near plane, so I left out the near check. I've updated my original answer.
     
  6. iSinner

    Joined:
    Dec 5, 2013
    Posts:
    201
    If xy is from -w to w, and w is the distance from the far plane (correct?), then what is in the z? It seems like xyw accounts for everything we need.
     
  7. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Not correct.
    The clip space value of w is not affected by the near or far planes, or even anything about the perspective*. It's the -z position of the vertex in view space. The GPU view space matrix has the z axis inverted compared to the Unity scene coordinate system, so the w value would be identical to the z position shown when you have a transform as a child of the camera game object.

    As for what the clip space z is, that was covered in the first paragraph of my reply above.
    * If the projection matrix is orthographic, w is always 1. But for any perspective projection matrix where the focus point is the camera (which is true for any perspective projection matrix generated by Unity itself) the above is true.
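    A quick way to see that relationship in a vertex shader (just a sketch using Unity's built-in matrices):

    Code (CSharp):
    // Sketch: clip-space w vs. view-space z for Unity's perspective projection.
    float4 viewPos = mul (UNITY_MATRIX_MV, v.vertex);   // camera-relative (view space) position
    float4 clipPos = mul (UNITY_MATRIX_P, viewPos);     // equivalent to UnityObjectToClipPos(v.vertex)
    // clipPos.w == -viewPos.z, i.e. the depth in front of the camera,
    // the same z you'd read off a transform parented to the camera.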
     
    iSinner likes this.
  8. iSinner

    Joined:
    Dec 5, 2013
    Posts:
    201
    For perspective:
    xy in clip space is in a -w to w range
    w in clip space is the -z vertex position from view space
    z in clip space is in a -w to w range (in OpenGL)

    Is that correct?

    If yes, then what are the limits of the -w..w range? Because if the w in clip space depends on the vertex position in view space, then isn't w limited by the frustum's near and far planes?

    I'm trying to understand it, but I need to double-check it back and forth so that I'm sure I interpreted it correctly and didn't miss anything.
     
  9. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Technically there are no “limits” to w, outside of floating point precision. The near and far planes control where the GPU clips when rasterizing, but you can have an object 10000 units away from the camera with a far clip of 100, and the vertex shader will still see a w of 10000, since nothing gets clipped until after the vertex shader runs.
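    So if you did want a normalized value out of it (e.g. a depth-based fade), you'd pick your own range rather than rely on any built-in limit. A made-up sketch (_FadeStart and _FadeEnd are hypothetical properties, not from the shader above):

    Code (CSharp):
    // Sketch: w is just the view-space depth and grows without limit,
    // so any 0-1 fade based on it needs an explicitly chosen range.
    float viewDepth = i.pos.w;
    half depthFade = saturate ((viewDepth - _FadeStart) / (_FadeEnd - _FadeStart));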
     
    iSinner likes this.
  10. iSinner

    Joined:
    Dec 5, 2013
    Posts:
    201
    How would you normalize a value on an undefined range then?
    You say it gets clipped after the vertex shader, so in the fragment shader w can have a maximum and minimum, but what is it in the vertex shader then? It has to have a max and min in order to normalize by it.
     
  11. Namey5

    Joined:
    Jul 5, 2013
    Posts:
    188
    w is the range definition. The other components of the vector are normalized relative to the w component, whereas the w component after normalization is simply 1 (a number divided by itself will always be 1). You can think of the w component less as a distance value, and more as a perspective value - its purpose (in an incredibly simplified and not very correct manner) is to decrease the size of objects on screen as they get further away from the camera during the normalization process. There are many resources out there that explain clip space and its relationship with NDC:

    https://learnopengl.com/Getting-started/Coordinate-Systems
    https://answers.unity.com/questions/1443941/shaders-what-is-clip-space.html
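    As a concrete example with made-up numbers:

    Code (CSharp):
    // Made-up example: a vertex 10 units in front of the camera, so clip-space w = 10.
    float4 clipPos = float4 (5.0, -2.5, 9.8, 10.0);
    float3 ndc = clipPos.xyz / clipPos.w;   // = (0.5, -0.25, 0.98) -> within the visible range
    // and clipPos.w / clipPos.w == 1, which is why w itself isn't something you normalize.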
     
    bgolus likes this.
  12. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    You don't normalize w; it is the normalizing term in itself. And you don't ever want to "normalize" it in the vertex shader.

    The xyzw values of clip space are all in a range defined by the current w value itself. So you "normalize" the value by dividing by the w, resulting in a -1 to 1 range for all on screen positions. The key is on screen positions.

    The vertex shader isn't confined by the frustum of the current projection matrix. That is to say, the vertex shader runs on all vertices, not just the ones that end up visible on screen. The vertex shader is actually part of how the GPU determines if something is visible on screen or not, by transforming the vertex position into the clip space position. So you'll have clip space positions in the vertex shader that are far, far outside the frustum bounds, and that's okay. A vertex that is 300 units above the camera and far out of view may still be part of a triangle that is in view, so it still needs to be calculated. Most real time rendering engines use CPU-side frustum culling to skip rendering of objects that are fully outside the view frustum, to avoid processing meshes none of whose triangles could possibly be seen, but you can't do that on a per-vertex level. That vertex that's 300 units out of view may be part of a triangle that's 1000 units across and whose dead center you're looking at. Thus all 3 vertices of that triangle aren't anywhere near that "-w to w" range, but are still needed.

    As for why you don't want to do the normalization in the vertex shader, it's because the values are linearly interpolated in screen space. If you pass already-normalized positions they don't interpolate properly, which is the entire point of passing the full float4 to begin with.

    Here's an easy example. Take a shader that just renders a texture using the normalized xy clip space positions as its UVs. First try dividing by w in the vertex shader. If the object is something like a view facing quad, it'll look perfectly fine.
    [Image: view-facing quad, divide by w done in the vertex shader - looks fine]

    But try taking that quad and rotating it so it's not facing the view and the texture will start to warp wildly.
    [Image: the same quad rotated away from the view, divide still in the vertex shader - the texture warps]

    But if you do the divide in the fragment shader, after the interpolation, everything is correct.
    [Image: rotated quad, divide by w done in the fragment shader - correct result]

    If you look closely at those last two images you'll notice the UV position at each vertex is the same spot on the texture, but everything in between is wrong. This is because the interpolated values aren't correctly taking the perspective into account when you do the divide in the vertex shader.

    Code (csharp):
    Shader "Custom/Perspective Divide Test"
    {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            [KeywordEnum(Vertex, Fragment)] _Do_Perspective_Divide_In ("Do Perspective Divide in:", Float) = 0
        }
        SubShader {
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"
                #pragma shader_feature _ _DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT

                struct v2f {
                    float4 pos : SV_Position;
                    float4 screenPos : TEXCOORD0;
                };

                sampler2D _MainTex;

                v2f vert(appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.screenPos = o.pos;

                #if !defined(_DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT)
                    o.screenPos /= o.screenPos.w;
                #endif

                    return o;
                }

                half4 frag(v2f i) : SV_Target
                {
                #if defined(_DO_PERSPECTIVE_DIVIDE_IN_FRAGMENT)
                    i.screenPos /= i.screenPos.w;
                #endif

                    return tex2D(_MainTex, i.screenPos.xy);
                }
                ENDCG
            }
        }
    }
     
    Namey5 likes this.