Force low precision float on Unity Editor

Discussion in 'Shaders' started by Olivier356, May 12, 2019.

  1. Olivier356

    Olivier356

    Joined:
    Sep 14, 2015
    Posts:
    12
    Hi,

    Is it possible to force the fragment shader to use 8-bit and 16-bit precision floats (and even 24) in the Unity Editor?
    I would like to emulate a low-end mobile GPU, because some of my users are reporting pixelated textures on a big mesh that uses world-space texture coordinates, and I would like to find a way to fix this.
    It would also make it easier to optimize shaders for mobile.

    The problem is, on PC fixed and half always use a 32-bit representation.

    Thanks in advance.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    This is a hardware thing, and not easily emulated, because different hardware implements fixed and half differently. Also, fixed isn't an 8-bit floating point format; a lot of mobile GPUs use the same 16-bit floating point representation as half, but others use between 10 and 13 bits.

    That said, the cause is pretty simple. Most mobile devices use a half for their UVs. This is a problem for world-space texturing, as it doesn't take long before the precision is less than the resolution of the texture.
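    As a rough illustration of the scale of the problem, here is a small Python sketch using the standard library's half-precision `struct` format (`'e'`) as a stand-in for GPU fp16; the 512-texel texture size is just an assumption for the example:

```python
import struct

def fp16(x):
    # Round-trip a Python float through IEEE 754 half precision.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# One texel of a 512-pixel texture is 1/512 ~= 0.00195 in UV space.
texel = 1.0 / 512.0

# Near the origin fp16 can still tell adjacent texels apart,
# but a few dozen units out the step between representable
# values is already bigger than a texel:
for base in (1.0, 64.0, 512.0, 4096.0):
    print(base, fp16(base + texel) - fp16(base))
```

    At 1.0 the difference survives; at 64.0 and beyond it collapses to 0.0, i.e. adjacent texels become indistinguishable in the UVs.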
     
    Last edited: May 12, 2019
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    So, my previous post didn't talk about solutions. Solutions for precision-related issues usually boil down to keeping things close to the origin (that includes the UVs, the mesh's own pivot, and the world origin, depending on how you're getting the UVs), or liberal use of frac(). Neither of those will really help here, since having a large mesh means it kind of has to be far from the origin, and using frac() in the shader will either not show any difference if done in the fragment shader, or cause all sorts of weird problems if done in the vertex shader.

    One solution would be to cut up your mesh into smaller chunks and use object space UVs or actual UVs rather than world space, which limits the UV range and avoids the precision issues. But that's annoying and removes the whole benefit of using world space UVs.

    Another is to move the camera back to the world origin when you move too far, and move the scene with it, but that might cause the world space UVs to obviously pop, and requires changes to a lot of systems.


    However you could do something similar in the shader. You could move the UVs so they're always close to the camera. The trick would be to only move them so they still tile as if they're centered at the world origin, but as an offset from the camera. You can't do this in the fragment shader or surf function since the UVs are already quantized at that point from the precision loss during the vertex interpolation. It has to be done in the vertex shader.

    If the UVs are being scaled in whole integers, you can get away with something like this:

    o.CamRelativeWorldUVW = (worldPos.xyz - floor(_WorldSpaceCameraPos.xyz)) * _TexScale;

    Otherwise you need to work out how much to offset the position while keeping the tiling the same. That looks something like this:

    o.CamRelativeWorldUVW = (worldPos.xyz * _TexScale) - floor(_WorldSpaceCameraPos.xyz * _TexScale);
     
    Invertex likes this.
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    Here's your basic world UV shader on a plane placed at 75000, 0, 75000 in the editor. At this position, just rotating the camera causes the camera's position to jitter and fly around randomly due to floating point precision issues. This is an insane distance to be from the world origin in any situation.
    [screenshot: upload_2019-5-13_0-10-41.png]
    The behavior between mobile 16 bit and desktop 32 bit floating point looks a little different*, but it's still pretty clear something odd is happening to this texture. It's very noisy, and the actual texture is not. For example, here's what it should look like.
    [screenshot: upload_2019-5-13_0-13-24.png]
    Compression artifacts, sure, but the bilinear interpolation is smooth.

    That is the same plane at the same position, just using the last line in the previous post instead of:
    o.CamRelativeWorldUVW = worldPos.xyz * _TexScale;

    Code (CSharp):
    Shader "Unlit/Camera Relative World UVW Triplanar"
    {
        Properties
        {
            [NoScaleOffset] _MainTex ("Texture", 2D) = "white" {}
            _TexScale ("Texture Scale", Float) = 1
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float3 normal : NORMAL;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float3 CamRelativeWorldUVW : TEXCOORD0;
                    float3 normal : TEXCOORD1;
                };

                sampler2D _MainTex;
                float _TexScale;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.normal = UnityObjectToWorldNormal(v.normal);

                    float4 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0));

                    // "the usual way"
                    // o.CamRelativeWorldUVW = worldPos.xyz * _TexScale;

                    // no issues even at extreme distances from world center
                    o.CamRelativeWorldUVW = (worldPos.xyz * _TexScale) - floor(_WorldSpaceCameraPos.xyz * _TexScale);

                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    float3 blendNormal = pow(i.normal, 4);
                    blendNormal /= dot(blendNormal, float3(1,1,1));

                    fixed4 colX = tex2D(_MainTex, i.CamRelativeWorldUVW.yz);
                    fixed4 colY = tex2D(_MainTex, i.CamRelativeWorldUVW.xz);
                    fixed4 colZ = tex2D(_MainTex, i.CamRelativeWorldUVW.xy);

                    fixed4 col = colX * blendNormal.x + colY * blendNormal.y + colZ * blendNormal.z;

                    return col;
                }
                ENDCG
            }
        }
    }
    * edit: Minor note about mobile vs desktop. On some mobile platforms the texture sampling hardware itself is limited to 16-bit floating point UVs, so the limited precision ends up appearing as blocky, pixelated artifacts when the UV range grows too large. The values are cleanly quantized. You can get similar artifacts by adding a large value to the final UV in the fragment shader, but usefully accurate conversion from 32 bit to 16 bit is more involved.

    Add 8192 to your UV in the tex2D() function and watch your texture magically become point filtered!
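    You can check that desktop 32-bit float arithmetic behaves this way with a quick Python sketch (`struct`'s `'f'` format round-trips a value through fp32; the UV value is arbitrary):

```python
import struct

def fp32(x):
    # Round-trip a Python double through IEEE 754 single precision.
    return struct.unpack('<f', struct.pack('<f', x))[0]

# At 8192 = 2^13 a 32-bit float steps in units of 2^(13-23) = 1/1024,
# so a UV offset by 8192 can only land on a 1/1024 grid:
uv = 0.123456
quantized = fp32(8192.0 + uv) - 8192.0
print(uv, quantized, quantized * 1024)  # snaps to 126/1024 exactly
```

    For a texture 1024 texels or larger, every sample now lands on a texel-grid step, which is exactly the point-filtered look described above.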

    The artifacts in the example above are due to precision issues happening at multiple places in the chain: the world positions, the interpolation between 3 precision-limited values, and even the clip-space vertex positions themselves, due to Unity using an intermediate world position in the calculations rather than a single MVP matrix. The noise comes predominantly from the interpolation though. Vertex interpolation is always noisy, it's just usually not obvious because the noise is hidden within the relatively high precision of the usual floating point values. See:
    https://forum.unity.com/threads/using-2-textures-and-mask-from-vertices.625861/#post-4193557
     
    Last edited: May 13, 2019
    SugoiDev likes this.
  5. Olivier356

    Olivier356

    Joined:
    Sep 14, 2015
    Posts:
    12
    Hi bgolus and thanks for your detailed answer.

    It seems the problem is not only during the texel fetch; the problematic GPU is a Mali-400 MP2, where every float computation seems to be done in fp16 in the fragment shader stage.

    I understand the problem and I know what I have to do to resolve it, but it's pretty annoying not being able to check for myself whether it's really solved.

    Seeing your answers, I realise there is no way to emulate this in Unity, so I'm planning to wrap the critical computations in a 16-bit rounding function to emulate the behavior manually and check whether my changes will solve the problem on this GPU.

    By the way, what do you think of "#pragma fragmentoption ARB_precision_hint_nicest"? Do you think it can solve my problem? Is there a risk of my material not showing up at all if I use this flag on a device that doesn't support it, or will it just be ignored?

    Thanks !
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    Some mobile GPUs use 16 bit UVs in the sampler hardware, so forcing it to use 32 bit precision in the fragment shader with the #pragma still won't fix it. Using that on hardware that defaults to 16 bit math also comes with a massive performance hit, so I'd avoid it.
     
  7. Olivier356

    Olivier356

    Joined:
    Sep 14, 2015
    Posts:
    12
    Yes, but for this part of the problem I can use frac().

    And yes, I guess it's at least twice as expensive for the GPU to handle.
     
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    No, you can't. The interpolated vertex value you get in the fragment shader is already quantized, so while calling frac() will move it to a 0.0 to 1.0 range, it will still be quantized. You can't get back precision you already lost. That's why you need to move where "zero" is to be close to the camera, like in the example shader above.
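    A quick way to convince yourself of this, again using Python's half-precision `struct` format as a stand-in for an fp16 interpolator (the UV value 300.1234 is arbitrary):

```python
import math
import struct

def fp16(x):
    # Round-trip through IEEE 754 half precision, like an fp16 interpolator.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Around 300 the fp16 grid spacing is 0.25, so the fractional detail
# is gone before frac() ever runs:
uv = fp16(300.1234)
print(uv)                    # 300.0
print(uv - math.floor(uv))   # 0.0 -- frac() can't restore the lost 0.1234
```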
     
  9. Olivier356

    Olivier356

    Joined:
    Sep 14, 2015
    Posts:
    12
    Oh, you mean the UVs are also passed from vert to frag as fp16? Indeed, I thought it was fp32.

    But this is also not a big problem in my case, since the UVs passed to the fragment shader represent a rotated unit quad. The problematic scaling happens in the fragment shader.

    Here's a screenshot showing the final render; the object in the bottom-left corner is a supernova (a 2D ball of fire expanding continuously). It's actually a quad, oriented toward the player to limit the number of fragments to be rendered. I need these scaled UVs for texturing, but I also use them to color the object (shading a wave effect at the circle's edge).

     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,825
    Depends on the device. Most will do 32 bit, if you're using float2 and not half2 for the TEXCOORD, but some devices will still go "oh, they shouldn't ever need more than 16 bit precision" and do that instead even when you've specified 32 bit. Other devices might pass the data in 32 bit and immediately convert it to 16 bit on the first use, doing all math operations in 16 bit. Mobile is a huge pain.

    To be fair, most of this is limited to GLES 2.0 devices, though I believe some early GLES 3.0 devices might have these kinds of behavior.

    You can always try using frac() and see if it works.
     
    Last edited: May 17, 2019
  11. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    791
    Mali-400 is one of two mobile GPUs out there that doesn't have 32-bit floats in fragment shaders at all. All computations that require 32 bits should be done in the vertex shader and the results passed to the fragment shader (the inputs to the FS are likely to be 16-bit as well).