How to find pixel to world unit ratio per fragment?

Discussion in 'Shaders' started by Deleted User, Sep 21, 2017.

  1. Deleted User (Guest)

    How can I calculate the pixel to world unit ratio per fragment? At first I thought it could be as simple as calculating the camera frustum plane size in world units at the current depth and then using the screen size in pixels to figure it out, but that may be a naive approach, since I'm sure things like camera FOV and screen aspect ratio need to be factored in.

    What I'm trying to do with this ratio is normalize the scale of my displacement shader: the further a surface is from the camera, the weaker the distortion should be, because the surface covers fewer pixels in screen space.

    Thanks!
     
  2. bgolus (Joined: Dec 7, 2012, Posts: 12,339)

    There are several ways to do this. The easiest way is probably to use unity_CameraProjection, _ScreenParams, and view space depth.

    Something like this:

    float viewDepth = -UnityObjectToViewPos(v.vertex);
    float pixelToWorldScale = viewDepth * unity_CameraProjection._m00 * _ScreenParams.x;

    I think that's right, but I haven't tested it. If you use that value to scale a texture UV, for example, the texture should stay at a constant on-screen size regardless of distance. For a pixel displacement you'll want to divide the displacement amount by that pixel-to-world scale.
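
    For example, a minimal sketch of that last point (illustrative only; "displacementUV" is just a placeholder for whatever offset your shader produces):

    // Illustrative only: weaken a screen-space displacement with distance
    // by dividing by the pixel-to-world scale computed above.
    float2 scaledDisplacement = displacementUV / pixelToWorldScale;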
     
  3. Deleted User (Guest)

    @bgolus Hmm, it doesn't seem to be working as expected; it looks like it's constantly returning a value of 1 (or higher?).

    Here's what I have in my shader:

    Code (CSharp):

    void vert(inout appdata_full v, out Input o) {
        UNITY_INITIALIZE_OUTPUT(Input, o);
        COMPUTE_EYEDEPTH(o.eyeDepth);
    }

    /* surf program... */
    float depth = i.eyeDepth;
    float pixelToWorldScale = depth * unity_CameraProjection._m00 * _ScreenParams.x;
    o.Emission = pixelToWorldScale;
    // ...
    I didn't use your code for getting "viewDepth" because it won't work as written: it's trying to assign a float3 value to a float.
     
  4. bgolus

    Yep, that was a good catch. I missed the .z at the end of that line; otherwise it's exactly what the COMPUTE_EYEDEPTH macro does.
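
    For reference, a rough sketch of what that macro boils down to (check UnityCG.cginc in your Unity version to confirm the exact definition):

    // COMPUTE_EYEDEPTH(o.eyeDepth) is roughly equivalent to:
    float viewDepth = -UnityObjectToViewPos(v.vertex).z; // positive depth along the camera's forward axis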

    Yep, it will be a very large number. I also had it a little wrong with respect to aspect ratio, beyond the obvious issue you noticed. Here's a shader that uses the code to scale a texture's UVs by distance so it's always 1 screen pixel per texel.

    Code (CSharp):

    Shader "Unlit/ViewDistScale"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                // make fog work
                #pragma multi_compile_fog

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float2 uv : TEXCOORD0;
                    UNITY_FOG_COORDS(1)
                    float4 vertex : SV_POSITION;
                    float eyeDepth : TEXCOORD2;
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;
                float4 _MainTex_TexelSize;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    UNITY_TRANSFER_FOG(o,o.vertex);

                    COMPUTE_EYEDEPTH(o.eyeDepth);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    float depth = i.eyeDepth;
                    float pixelToWorldScale = depth * unity_CameraProjection._m11 * (_MainTex_TexelSize.z / _ScreenParams.x);

                    i.uv -= 0.5;
                    i.uv = TRANSFORM_TEX(i.uv, _MainTex);
                    i.uv /= pixelToWorldScale;
                    i.uv += 0.5;

                    // sample the texture
                    fixed4 col = tex2D(_MainTex, i.uv);
                    // apply fog
                    UNITY_APPLY_FOG(i.fogCoord, col);
                    return col;
                }
                ENDCG
            }
        }
    }
     
  5. bgolus

    Try this

    float worldUnitsPerPixel = depth * unity_CameraProjection._m11 / _ScreenParams.x;
     
  6. Deleted User (Guest)

    @bgolus Yeah, that seems right, because the higher the screen resolution, the smaller worldUnitsPerPixel should get. But it must be either a huge number or a really small (<1) number, because it smears the UVs. :(

    EDIT: Here's my code:

    Code (CSharp):

    float worldUnitsPerPixel = depth * unity_CameraProjection._m11 / _ScreenParams.x;
    refractUVs /= worldUnitsPerPixel;
    sceneUVs += refractUVs;
     
    Last edited by a moderator: Sep 22, 2017
  7. Deleted User (Guest)

    OK, so it's actually a really small number, so I need to somehow remap it between 0 and 1 and multiply my refractUVs by it, OR remap it to a range from 1 to some number and divide my UVs by it. But if it represents units per pixel, then the upper limit would be infinity, right? Not sure how to scale it...
     
  8. Deleted User (Guest)

    OK, I've been thinking on it. Is the solution that I need to find the rate of change of worldUnitsPerPixel as depth changes and then somehow apply that to my displacement UV scaling?
     
  9. Deleted User (Guest)

    @bgolus While messing with this scaling problem I noticed that your formula doesn't seem to factor in the camera FOV? I expected worldUnitsPerPixel to be lower near the edge of the screen at high FOVs (150, for example), but it seems to be the same for every fragment.

    EDIT: Oops, I should have said it does factor in FOV, but not the curvature of the lens?
     
    Last edited by a moderator: Sep 22, 2017
  10. bgolus

    So you need to think about what the various values mean.
    What do the UVs represent? The UV for the grab pass is a 0 to 1 range from one side of the screen to the other.
    What does a displacement of "1" mean? Is that one pixel, or 1 screen width, or 1 world space unit? What does that mean for being scaled by distance? By world space per pixel? Do the pixels even matter if you're already in UV space?

    The unity_CameraProjection matrix is there to factor in the camera FOV. The thing is, real-time rendering uses a flat projection; there's no distortion in the corners like a real camera with a spherical lens would have. The "worldUnitsPerPixel" is constant at a fixed depth. Note that I'm using the term depth here deliberately: depth and distance are different things.
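
    To make that last distinction concrete, here's a minimal sketch (not from the original post), assuming a standard vertex shader with access to v.vertex:

    // Depth vs. distance in view space:
    float3 viewPos  = UnityObjectToViewPos(v.vertex); // view space position
    float viewDepth = -viewPos.z;                     // depth: distance along the camera's forward axis (what COMPUTE_EYEDEPTH gives)
    float viewDist  = length(viewPos);                // distance: straight-line distance from the camera to the vertex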
     
  11. Deleted User (Guest)

    @bgolus I did a bit of research, and now I understand that Unity only has a flat projection, and that actually creates distortion near the edges at high FOV values. So if that's true, and since you mentioned distance vs depth, does that mean I should actually use the distance from the fragment in view space to where it gets projected on the near clipping plane, instead of depth? If that's even possible, I mean.

    I also understand what you're saying w/r/t the displacement UVs: I need to decide what a displacement of "1" means. I don't know how to express it other than this: if I have a sphere with a diameter of 1 world unit, then a pixel with a displacement of (1, 0) should sample the pixel 1.66 world units to its right (relative to the camera). Here's a picture of what I mean:

    [attached diagram not shown]

    So I would need to figure out how much to adjust the current sceneUV based on the worldUnitsPerPixel and the screen resolution, right? Then I guess multiply it by 1.66? I'm going to need to work through this one...
     
  12. bgolus

    No, depth is the correct thing to use. However, I assume you're trying to replicate refraction, in which case the offset grab-pass approach is all a big hack anyway, so using distance instead might give you results you find more pleasing.

    The screen resolution isn't a factor at all, then. You're dealing with UVs and world space; the actual pixel count never needs to be thought about, apart from correcting for aspect ratio in the offset.

    Code (CSharp):
    1. // displacement offset direction
    2. float2 offsetVector = normalize(viewNormal).xy; // or however you're calculating this
    3. offsetVector.y *= _ProjectionParams.x; // note, y might be upside down from view space to UVs, this should flip it?
    4.  
    5. // get world space offset in UV space
    6. float2 refractUVOffset = (offsetVector * offsetWorldDistance) / (depth * unity_CameraProjection._m11);
    7.  
    8. // correct for aspect ratio
    9. refractUVOffset.x *= _ScreenParams.x / _ScreenParams.y; // might have this backwards, or should be refractUVOffset.y? I can never remember.
    10.  
    11. // add offset to grab tex UVs
    12. sceneUVs += refractUVOffset;
    The _ProjectionParams.x line might need to be replaced with this instead; I can never remember which to use until I try it.

    #if UNITY_UV_STARTS_AT_TOP
    offsetVector.y = -offsetVector.y;
    #endif
     
  13. Deleted User (Guest)

    @bgolus So taking your advice, specifically the line
    Code (CSharp):
    float2 refractUVOffset = (offsetVector * offsetWorldDistance) / (depth * unity_CameraProjection._m11);
    gives me the desired behavior if I'm at roughly 16:9 and at 60 FOV (Editor default). Unfortunately changing the width or the FOV causes the refractUVOffset amount to be too large or too small.

    I think I'm missing something that relates the FOV to the screen aspect ratio? I need to somehow normalize the scaling so that at any width/FOV I get the same effect as at 16:9 and 60 degrees.
     
  14. bgolus

    That's line 9, though I did have it wrong: it should be refractUVOffset.y, not .x.
     
  15. bgolus

    Code (CSharp):

    Shader "Unlit/GrabDistortion"
    {
        Properties
        {
            _Distortion ("World Distortion", Float) = 1
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" "RenderType"="Transparent" }
            LOD 100

            GrabPass {}

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    half3 normal : NORMAL;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    half3 viewNormal : NORMAL;
                    float4 grabUV : TEXCOORD0;
                };

                float _Distortion;

                sampler2D _GrabTexture;
                half4 _GrabTexture_TexelSize;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.grabUV = ComputeGrabScreenPos(o.vertex);
                    COMPUTE_EYEDEPTH(o.grabUV.z);
                    o.viewNormal = mul(UNITY_MATRIX_IT_MV, float4(v.normal,0)).xyz;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    float2 grabUV = i.grabUV.xy / i.grabUV.w;

                    // displacement offset direction
                    float2 offsetVector = normalize(i.viewNormal).xy;
                    offsetVector.y *= -_ProjectionParams.x;

                    // get world space offset in UV space
                    float2 refractUVOffset = (offsetVector * _Distortion) / (i.grabUV.z * unity_CameraProjection._m11);

                    // correct for aspect ratio
                    refractUVOffset.y *= _ScreenParams.x / _ScreenParams.y;

                    // add offset to grab tex UVs
                    grabUV += refractUVOffset;

                    fixed4 col = tex2D(_GrabTexture, grabUV);

                    return col;
                }
                ENDCG
            }
        }
    }
     
  16. Deleted User (Guest)

    @bgolus Hmm, but my code that had issues with changes in width and FOV already had aspect ratio correction like yours:

    Code (CSharp):

    // ...

    float2 aspectCorrection = float2(_ScreenParams.g / _ScreenParams.r, 1.0);

    // Apply aspect ratio correction
    refractUVs *= aspectCorrection;

    // Apply depth correction
    refractUVs = (refractUVs * 1.66) / (depth * unity_CameraProjection._m11);

    // Apply refractUVs to sceneUVs
    sceneUVs += refractUVs;

    // ...
     
  17. Deleted User (Guest)

    @bgolus OK, I think I figured out what's missing. refractUVOffset.y *= _ScreenParams.x / _ScreenParams.y; turns the RenderTexture from a square into the proper shape of the screen, so the displacement is correctly proportioned.

    However, it doesn't account for how the FOV affects the scale of objects in that RenderTexture. Changing the aspect ratio in the Game view doesn't seem to affect the perspective; it sort of just crops the image. So I don't think I need to relate FOV to aspect ratio in this additional correction, but then again I don't really know how FOV affects the projection in Unity.

    So to move forward I think I need to understand: what exactly does unity_CameraProjection._m11 represent?
     
  18. Deleted User (Guest)

    Wow, OK, I think I figured it out:

    Code (CSharp):

    refractUVs /= depth * float2(unity_CameraProjection._m00, unity_CameraProjection._m11);
    refractUVs *= pow(float2(unity_CameraProjection._m00, unity_CameraProjection._m11), 2);
    Two things, though:
    1. I don't understand why I needed to square the projection values; I just did it on a hunch and it seems to work.
    2. It's stretching the displacement out vertically. I think I might need to use _m00 or _m11, but not both?
     
  19. bgolus

    Honestly, whenever I'm doing something like this I have to spend some time remembering what it all means too.

    However, the basic answer is that _m00 and _m11 are the parts of the projection matrix responsible for the horizontal and vertical FOV. They are not the FOV in and of themselves, but they hold some useful values derived from the FOV.

    Let's step back a little and think about what you're trying to do. You want to know how big 1 world unit is in screen space at a given depth. To know that, you need to know the FOV. If you know the FOV and the depth, you can find out how wide the view is with some basic trig. This is the TOA of your old right-triangle SOH-CAH-TOA trig.

    tan(angle) = opposite / adjacent

    We want the width (opposite) of the screen at a given depth (adjacent) with a specific FOV (angle), so we can solve that with:

    tan(FOV / 2) * depth * 2 = screen width

    Once you have that screen width, 1 / (screen width) gives you the size of 1 world unit in screen space, at least along one axis. The problem is that the FOV is only correct for either the width or the height, and you have to know which. Also, tan() is kind of expensive in a shader.

    Luckily the _m00 and _m11 components of the projection matrix are equivalent to 1 / tan(FOV / 2) for the horizontal (_m00) and vertical (_m11) FOVs.
    https://www.scratchapixel.com/lesso.../building-basic-perspective-projection-matrix

    In that link they describe it as 1 / tan((FOV / 2) * (pi / 180)) because they're converting from degrees to radians, which I'm glossing over. They're also using a square aspect ratio in that example, so the FOV is equal for the horizontal and vertical axes.

    So that's the basics. Now for answering your most recent questions ... I have no idea what that second line is actually doing, or why it would work for you. However, it might be stretching because _m11 might be negative (Unity does lots of odd stuff with flipping the projection to deal with different platforms), and the power of 2 makes that value positive. That's just a guess, though.
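
    To make the relationship concrete, here's a small illustrative sketch (not from the original posts); "depth" stands for the view-space depth used throughout this thread, the other names are just placeholders, and it ignores the platform-dependent sign flips mentioned above:

    // unity_CameraProjection._m11 == 1 / tan(verticalFOV / 2), so the vertical FOV can be recovered with:
    float vertFOV = 2.0 * atan(1.0 / unity_CameraProjection._m11); // in radians

    // Frustum height in world units at a given view-space depth, written two equivalent ways:
    float frustumHeight    = 2.0 * depth * tan(vertFOV * 0.5);
    float frustumHeightAlt = 2.0 * depth / unity_CameraProjection._m11;

    // The size of 1 world unit as a fraction of the screen height (i.e. in UV space) at that depth:
    float worldUnitInScreenUV = 1.0 / frustumHeight;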
     