
Why does unity recommend us to use float type for texture coordinates in the shader?

Discussion in 'Shaders' started by Middle-earth, Aug 23, 2019.

  1. Middle-earth

    Joined: Jan 23, 2018
    Posts: 37
    float and half are two common data types when writing shaders. This is what https://docs.unity3d.com/Manual/SL-DataTypesAndPrecision.html says about the difference between float and half:



    High precision: float
    Highest precision floating point value; generally 32 bits (just like float from regular programming languages).
    Full float precision is generally used for world space positions, texture coordinates, or scalar computations involving complex functions such as trigonometry or power/exponentiation.

    Medium precision: half
    Medium precision floating point value; generally 16 bits (range of –60000 to +60000, with about 3 decimal digits of precision).
    Half precision is useful for short vectors, directions, object space positions, high dynamic range colors.



    I've noticed that Unity recommends using float for texture coordinates. However, texture coordinates, also known as UVs, usually range from 0 to 1. The values can get a little bigger when changing the tiling and offset, but they are rarely as large as world positions. So what's the purpose of Unity telling us to use float for UVs instead of half?
     
  2. aleksandrk

    Unity Technologies

    Joined: Jul 3, 2017
    Posts: 911
    If the texture is 2K or larger on any axis, half does not have enough precision to accurately represent 1-texel offsets.
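    This is easy to check outside a shader. Here's a Python sketch (not from the thread) that uses the standard library's `struct` support for IEEE 754 half precision to show two adjacent texel centers of a 2048-wide texture collapsing to the same half value:

```python
import struct

def as_half(x):
    # Round-trip a value through IEEE 754 half precision
    # ('e' format code, Python 3.6+)
    return struct.unpack('e', struct.pack('e', x))[0]

# UV coordinates of the centers of two adjacent texels
# in a 2048-wide texture, in the upper half of the 0-1 range:
a = (1025 + 0.5) / 2048
b = (1026 + 0.5) / 2048

assert a != b                    # distinct as 32-bit/64-bit floats
assert as_half(a) == as_half(b)  # identical as halves: the 1-texel offset is lost
```

    In the upper half of the 0 to 1 UV range, half's step size equals the texel size of a 2048 texture, so a 1-texel offset can round away entirely.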
     
  3. bgolus

    Joined: Dec 7, 2012
    Posts: 7,154
    To further explain, the way floating point numbers work means the precision changes based on the value being stored. For example, a half can store a value with a range of +/- 65536, but between 32768 and 65536 it can only represent values in steps of 32. So 32768, 32800, 32832, etc. Any value in between will end up being stored as one of those values. Also, 65536 can't actually be stored, as that is considered infinity, so really a half can only store values between +/- 65504.

    At smaller values, the step is much smaller. Between 0.5 and 1.0 for example, the step size for half precision floating point values is:
    0.00048828125

    That looks like a small number, and it is, but that's also the exact texel size of a 2048 texture. This means a 2048 texture is essentially point sampled for half of its 0.0 to 1.0 UV range, but the values stored are offset by half a texel, so even then it's not correct. It also means point sampled textures of almost any resolution will be noticeably offset compared to shaders using float instead of half UVs, and 1024 or 512 textures will have visible banding between texels when using bilinear or better filtering.
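    The step sizes quoted above can be confirmed with the standard library (a Python sketch, not part of the original post):

```python
import struct

def as_half(x):
    # Round-trip a value through IEEE 754 half precision ('e' format)
    return struct.unpack('e', struct.pack('e', x))[0]

# Between 32768 and 65536, halves step in units of 32:
assert as_half(32768 + 8) == 32768.0   # snaps down to the nearest step
assert as_half(32768 + 24) == 32800.0  # snaps up to the next step

# The largest finite half is 65504, not 65536:
assert as_half(65504) == 65504.0

# Between 0.5 and 1.0 the step is 2^-11 = 0.00048828125:
assert as_half(0.5 + 2**-11) == 0.5 + 2**-11  # exactly representable
assert as_half(0.5 + 2**-12) == 0.5           # snaps back to 0.5
```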
     
  4. tmcthee

    Joined: Mar 8, 2013
    Posts: 59
    Always interesting bgolus.
    Is there an argument that, on mobile at least, where possible, textures should be kept smaller and half precision used? According to this...
    https://docs.unity3d.com/Manual/SL-ShaderPerformance.html
    "Mobile GPUs have actual half precision support. This is usually faster, and uses less power to do calculations."
    Is the efficiency worth the trade off?
     
  5. bgolus

    Joined: Dec 7, 2012
    Posts: 7,154
    If you know you're only using textures that are 512 or smaller, and won't ever be close enough to see the texel filtering clearly... maybe? It depends on your use case. Also, realistically, much of the savings are going to come from using half precision for calculations, and most of the time you're not doing much to UVs beyond applying the scale and offset (which is a single instruction) and passing them between the vertex and fragment stages. It may produce measurable savings from the data transfer, but how much depends on a lot of other factors.

    Where using half instead of float really helps is for things like lighting calculations or other more complex math. GPUs that can do 16 bit floats (which is what half precision means) can do that math roughly twice as fast as at full precision, so if your shader has 50 instructions for lighting, that can be where you find big savings.
     
  6. Middle-earth

    Joined: Jan 23, 2018
    Posts: 37
    Great, bgolus! Thank you for the further explanation! It really helps!
    From what you said, I can fully understand everything before "This means a 2048 texture is essentially point sampled for half of its 0.0 to 1.0 UV range...", but there are still several points I can't get. What does "the values stored are offset by half a texel" mean? And what about "point sampled textures of almost any resolution will be noticeably offset"? Almost any resolution? Why?
     
  7. bgolus

    Joined: Dec 7, 2012
    Posts: 7,154
    Okay, let’s pretend we have a 1 dimensional texture that’s 5 pixels wide. The texel size in that case is 0.2. But the first pixel’s center is a half pixel width from the edge, so 0.1. That means the texel center positions are:
    0.1, 0.3, 0.5, 0.7, 0.9

    Now let’s pretend we have a mythical floating point format that can only store values in steps of 0.2. The problem is it can only represent values at:
    0.0, 0.2, 0.4, 0.6, 0.8, 1.0

    So now we have two problems. First, the value of 0.0 stays 0.0 until the UV reaches 0.2, then jumps straight to 0.2. There's no additional precision to represent the in between values with which to interpolate between the texel colors, hence appearing point sampled. The other problem is that the positions the texture is being sampled at fall between the texels, so that 5 pixel wide texture only ever shows the average color between each pair of pixels rather than the individual texel colors.
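    The "mythical" 0.2-step format can be simulated in a few lines of Python (an illustration, not from the thread; `quantize` is a hypothetical helper, and its behavior at exact halfway points is an assumption):

```python
def quantize(uv, step=0.2):
    # Simulate a float format that can only store multiples of `step`
    return round(uv / step) * step

# Sweep 101 evenly spaced UVs across 0..1; only 6 distinct values survive,
# all sitting on texel *edges* rather than texel centers:
sweep = [i / 100 for i in range(101)]
distinct = sorted(set(round(quantize(uv), 1) for uv in sweep))
print(distinct)  # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

    Note that every texel center (0.1, 0.3, ...) sits exactly halfway between two representable values, so which way it snaps depends on the rounding rule.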

    As a more descriptive example, let's say the 5 pixels are White, Red, Green, Blue, and Black. Normally you would expect to see each of those colors shown in that order, each visible for a 5th of the 0.0 to 1.0 range. With this mythical floating point format, the first 10th of the UV range would show middle grey, then each next 5th would be pink, then yellow, then teal, then dark blue, and finally the last 10th middle grey again. This is that 2048 texture with half precision UVs, for the last half of the UV 0.0 to 1.0 range.
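    Those blends can be reproduced numerically (a Python sketch with repeat wrap assumed; `sample_bilinear` is a hypothetical helper for illustration, not a real API):

```python
import math

WHITE, RED, GREEN, BLUE, BLACK = (1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)
texture = [WHITE, RED, GREEN, BLUE, BLACK]

def sample_bilinear(tex, uv):
    # 1D bilinear filtering with repeat wrap
    n = len(tex)
    x = uv * n - 0.5           # texel-space position (centers at integers)
    i = math.floor(x)
    f = x - i                  # blend weight between texel i and texel i+1
    a, b = tex[i % n], tex[(i + 1) % n]
    return tuple((1 - f) * ca + f * cb for ca, cb in zip(a, b))

# The quantized UVs land exactly between texel centers, so every sample
# is a 50/50 blend of two neighbours:
print(sample_bilinear(texture, 0.0))  # (0.5, 0.5, 0.5) grey: Black/White blend
print(sample_bilinear(texture, 0.2))  # (1.0, 0.5, 0.5) pink: White/Red blend
```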

    If the texture is point sampled, the colors won't be wrong, but the color used may be offset by a texel.

    If the floating point format could store values in steps of 0.1, with bilinear filtering you'd alternate between the perfect texel color and that average, but again with an offset, so the first half of a pixel would be the blend and the second half would be the pixel color. This is basically what's going to happen when using a 1024 texture with half precision UVs. It's also what you'll see in the second quarter of the UV range with the 2048.


    Point sampled ... I might be wrong there, and it might depend on the GPU, but it's hard to know what color you'll get when choosing a position perfectly between two texels. A UV of "0.0" isn't really the first texel, it's halfway between the first and last, so which color do you get? I think most GPUs would give you the first texel, but what about other pixel center positions? I'm not sure how consistent it is, so you might get one texel's color across two texels' worth of space, and skip others. Floating point math gets funky, especially when you add in GPU makers trying to do whatever optimizations they can get away with.

     
    Last edited: Aug 26, 2019
  8. bgolus

    Joined: Dec 7, 2012
    Posts: 7,154
    Okay, I double checked my explanation above because it was feeling a little bit off. It's partially right, and partially wrong.

    I made some visual representations of the "5 pixel texture" examples to show the issues better.
    upload_2019-8-26_14-24-22.png

    And here's a slightly more real world example.
    upload_2019-8-26_14-47-19.png

    I'm approximating the 16 bit "half" floating point accuracy here by adding 4096 to the 32 bit UVs, at which point the precision for the entire image is roughly equivalent to a true half precision UV for that top right quarter of the 0.0 to 1.0 range. The "real world" example actually looks even worse than the simulated 5 pixel version due to precision errors in the vertex data interpolation itself. And this is a "best case" where the camera is perfectly aligned with the plane we're looking at. With the camera and surface not being perfectly aligned things get even weirder.
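    The approximation works because a 32-bit float near 4096 has the same 2^-11 step size as a half between 0.5 and 1.0. A Python sketch (not from the post) of the same quantization:

```python
import struct

def as_float32(x):
    # Round-trip a value through IEEE 754 single precision ('f' format)
    return struct.unpack('f', struct.pack('f', x))[0]

uv = 0.123456789
# Emulates the `uv + 4096` trick: near 4096, float32 steps are 2^-11,
# matching half precision in the 0.5 to 1.0 range.
shifted = as_float32(uv + 4096.0) - 4096.0

assert shifted != uv
assert shifted == 253 * 2**-11  # snapped to a multiple of 2^-11 = 0.00048828125
```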

    upload_2019-8-26_14-52-13.png
    But those crazy artifacts are more a product of how I'm approximating the half precision than what you'd actually see.

    And in case you think I must be doing something funny in the fragment shader, here it is straight from the shader:
    Code (csharp):
    half4 frag (v2f i) : SV_Target
    {
        return tex2D(_MainTex, i.uv);
    }
     
    Last edited: Aug 26, 2019
  9. Middle-earth

    Joined: Jan 23, 2018
    Posts: 37
    I get it now, thanks again! I'm learning a lot from you.