
float to int cast unexpected/inconsistent behaviour

Discussion in 'Scripting' started by SanderGoal043, Sep 15, 2021.

  1. SanderGoal043

    SanderGoal043

    Joined:
    Feb 13, 2017
    Posts:
    9
Hi everyone!

For a project I had to remap a float from 0-1 to 0-100. I simply multiplied it by 100 and cast it to an int to remove everything after the decimal point. However, I have been getting some unexpected results from this:


Code (CSharp):
float number = 0.7f;
Debug.Log((number * 100)); //70
float test = number * 100;
Debug.Log((int)test); //70
Debug.Log((int)(number * 100.0f)); //69
Debug.Log((int)(float)(number * 100.0f)); //70
I understand that floats can have this tendency to turn the number into 0.69999...
I don't understand why it gives different results in the samples given above.

I tried the same thing in a C# .NET console application, but there everything works as expected:
Code (CSharp):
float number = 0.7f;
Console.WriteLine((number * 100)); //70
float test = number * 100;
Console.WriteLine((int)test); //70
Console.WriteLine((int)(number * 100.0f)); //70
Console.WriteLine((int)(float)(number * 100.0f)); //70
*For the project I'm using Mathf.RoundToInt now, just in case.
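(A minimal sketch of that workaround - the variable names here are just illustrative:)

Code (CSharp):
float number = 0.7f;
// Mathf.RoundToInt rounds to the nearest integer instead of truncating,
// so a product of 69.999... still comes out as 70.
int percent = Mathf.RoundToInt(number * 100f); // 70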
I'm just curious why
Code (CSharp):
Debug.Log((int)(number * 100.0f));
has a different result from the rest.
     
  2. PraetorBlue

    PraetorBlue

    Joined:
    Dec 13, 2012
    Posts:
    7,722
    Trying to store .7 as a binary number is kind of like trying to store 2/3 as a decimal number.

    2/3 in decimal looks like 0.666666666666 infinitely repeating.

Similarly, 7/10 in binary looks like 0.1011 0011 0011 0011 0011 0011... and so on, infinitely repeating. Obviously it just gets rounded off at some point, and that's how you end up with 0.69999-something.

If you pick a number that is exactly representable in binary, say 7/8, or 0.875, it will work perfectly.

    The difference you are seeing is likely just some kind of rounding difference between Console.WriteLine and Debug.Log.
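A quick way to see this directly is to print the stored value with round-trip precision ("G9" is the round-trip format for a .NET float; this snippet is my illustration, not from the posts above):

Code (CSharp):
float number = 0.7f;
Debug.Log(number.ToString("G9")); // 0.699999988 - the nearest float to 0.7
Debug.Log(0.875f.ToString("G9")); // 0.875 - exactly representable (7/8)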
     
  3. Brathnann

    Brathnann

    Joined:
    Aug 12, 2014
    Posts:
    7,144
Interestingly enough, if you take this line

Code (CSharp):
Debug.Log((int)(number * 100.0f));
and remove the (int) cast and Debug it, it prints out 70. I just ran it to see if I was getting the same results.
     
  4. SanderGoal043

    SanderGoal043

    Joined:
    Feb 13, 2017
    Posts:
    9
I understand that there are rounding errors in floats, and using 0.875 doesn't give this behaviour,
but I would still expect these to return the same number:

Code (CSharp):
Debug.Log((number * 100)); //70
Debug.Log((int)(number * 100.0f)); //69
Debug.Log((int)(float)(number * 100.0f)); //70
The fact that casting to float before casting to int works makes me think that a float multiplication returns a double (or gets turned into one), causing a rounding error: the intermediate 69.999... is kept as-is instead of being rounded back up to 70.

I did an extra test by casting to double instead, and it shows the same rounding error:

Code (CSharp):
Debug.Log((int)(double)(number * 100.0f)); //69
Note that Visual Studio also flags the double cast as redundant.
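One way to test that hypothesis directly is to do the multiplication in double on purpose (my own variation on the snippet above):

Code (CSharp):
float number = 0.7f;
double wide = (double)number * 100.0; // ~69.99999880790710 - the double keeps the error
Debug.Log((int)wide);                 // 69 - truncation exposes it
Debug.Log((int)(float)wide);          // 70 - rounding back to float lands exactly on 70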
     
  5. PraetorBlue

    PraetorBlue

    Joined:
    Dec 13, 2012
    Posts:
    7,722
    Brathnann likes this.
  6. FernandoHC

    FernandoHC

    Joined:
    Feb 6, 2018
    Posts:
    333
    You should get familiar with decimal rounding methods like ceiling and floor.
    https://docs.microsoft.com/en-us/dotnet/api/system.math.ceiling?view=net-5.0

Also, if all you want to do is omit the values after the decimal point, you can do it more simply by formatting the string, like floatVariableName.ToString("N0") (note that "N0" rounds to the nearest integer rather than truncating). There are several formatting parameters; that's something to get familiar with as well.
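For example (a quick sketch; Mathf.FloorToInt/Mathf.CeilToInt are Unity's convenience wrappers around the same idea):

Code (CSharp):
float value = 69.7f;
Debug.Log(Mathf.FloorToInt(value)); // 69 - rounds down
Debug.Log(Mathf.CeilToInt(value));  // 70 - rounds up
Debug.Log(value.ToString("N0"));    // "70" - rounds to nearest when formatting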
     
  7. Brathnann

    Brathnann

    Joined:
    Aug 12, 2014
    Posts:
    7,144
    To note, the Debug.Log is not to blame for the 69 value.
[attached screenshot: debugger watch values]

Adding a breakpoint so I can check the values, we can see that multiplying the float by 100.0f produces 70. Casting that value to an int still produces 70. However, when combining the multiplication and the cast in one line, we get 69.
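Roughly, the test in the screenshot looks like this (my reconstruction; the variable names are mine):

Code (CSharp):
float number = 0.7f;
float mult = number * 100.0f;          // the watch window shows 70
int castAfter = (int)mult;             // 70
int combined = (int)(number * 100.0f); // 69 in the Unity editor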

Actually, another interesting set of numbers:

[attached screenshot: second set of debugger values]

    The multiplication still gives 70, casting to an int still gives 70. But then num is 69.
     
  8. Brathnann

    Brathnann

    Joined:
    Aug 12, 2014
    Posts:
    7,144
Also, a C# console program produces these numbers:
[attached screenshot: console program output]

Notice that in this case, multiplying and then casting to int produces 69, but num is 70.
     
  9. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,380
Note - in this post I use the word 'float' to refer to both single and double precision floats. I'm not using 'float' to mean the shortcut C# type name 'float', but the IEEE floating-point standard.

So I just ran this as well and got the OP's results. But in Visual Studio (outside of Unity, VS2019 to be exact, targeting 4.7.2) I don't get the 69.

    So I rewrote the code like so:
Code (csharp):
float number = 0.7f;
float test = number * 100;

float fa = test;
float fb = (number * 100.0f);
float fc = (float)(number * 100.0f);

int a = (int)test;
int b = (int)(number * 100.0f);
int c = (int)(float)(number * 100.0f);

Debug.Log(fa.ToString("0.00000000")); //70.00000000
Debug.Log(fb.ToString("0.00000000")); //70.00000000
Debug.Log(fc.ToString("0.00000000")); //70.00000000

Debug.Log(a); //70
Debug.Log(b); //69
Debug.Log(c); //70
And like this in Visual Studio:
Code (csharp):
float number = 0.7f;
float test = number * 100;

int a = (int)test;
int b = (int)(number * 100.0f);
int c = (int)(float)(number * 100.0f);

Console.WriteLine(a); //70
Console.WriteLine(b); //70
Console.WriteLine(c); //70
    And so I checked the IL and we'll notice that both have identical IL:

    (unity - only the setting of a,b,c lines)
Code (csharp):
// [22 9 - 22 27]
IL_0023: ldloc.1      // test
IL_0024: conv.i4
IL_0025: stloc.s      a

// [23 9 - 23 40]
IL_0027: ldloc.0      // number
IL_0028: ldc.r4       100
IL_002d: mul
IL_002e: conv.i4
IL_002f: stloc.s      b

// [24 9 - 24 47]
IL_0031: ldloc.0      // number
IL_0032: ldc.r4       100
IL_0037: mul
IL_0038: conv.r4
IL_0039: conv.i4
IL_003a: stloc.s      c
(VS in a distinct project from Unity - only the setting of the a, b, c lines)
Code (csharp):
// [129 13 - 129 31]
IL_000f: ldloc.1      // test
IL_0010: conv.i4
IL_0011: stloc.2      // a

// [130 13 - 130 44]
IL_0012: ldloc.0      // number
IL_0013: ldc.r4       100
IL_0018: mul
IL_0019: conv.i4
IL_001a: stloc.3      // b

// [131 13 - 131 51]
IL_001b: ldloc.0      // number
IL_001c: ldc.r4       100
IL_0021: mul
IL_0022: conv.r4
IL_0023: conv.i4
IL_0024: stloc.s      c
    Aside from the line # comments, they're identical (the line # comments of course will be different, they're different code files).

What we can tell, though, is what's actually happening IL-wise...

Code (csharp):
IL_0012: ldloc.0      // number
IL_0013: ldc.r4       100
IL_0018: mul
IL_0019: conv.i4
IL_001a: stloc.3      // b
load the variable (0.7) onto the eval stack
load 100.0f onto the eval stack
mul those two values (this places the result on top of the eval stack)
conv.i4 on that result (*** this is our primary reason for the problem *** - it acts on the value on top of the eval stack, converting whatever is there into an int and placing it back on top of the eval stack)
move the result on top of the eval stack into variable 3 (b)

We can compare this to the version that casts to float first before int. It just has one interim step of conv.r4 before the conv.i4. This converts the result to a float, rather than straight to an int, and places it on top of the eval stack.

The issue going on is: what is on top of the eval stack when 'conv.i4' is called?

    In the case of 'b', it's the result of the multiplication. In the case of 'c', it's definitely a float (since conv.r4 was just called).
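To make that concrete, here's a plain C# simulation of the two cases, under the assumption that the intermediate result is kept at double (or wider) precision - an illustration of the mechanism, not what the IL literally does:

Code (csharp):
double intermediate = (double)0.7f * 100.0; // ~69.99999880790710 - wider than a float
Debug.Log((int)intermediate);        // 69 - like conv.i4, truncates toward zero
Debug.Log((int)(float)intermediate); // 70 - like conv.r4 then conv.i4; the float rounds to exactly 70.0f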

Thing is... what's at the top of the eval stack depends on how 'mul' was jitted into machine code by the runtime. And Unity uses a different runtime than Visual Studio 2019 on its own.

    The thing is... 'mul' is just telling the runtime we need to do a float multiplication (since the inputs are floats). How that actually gets performed is up to what the runtime decides.

    This is the biggest part about "float error" in general... floating point standards really only define how a float is stored. It's loose on how operations actually occur. Different CPUs will behave differently. Some CPUs don't even have a floating point operator and instead rely on software implementations offered often by the OS or some other source.

The operation isn't even required to happen at the same word size as the float (so yes, you could operate on a single-precision float with a double-precision hardware operator)!

Usually hardware implementations perform all FPU operations in a word size larger than the actual data. For example (and don't hold me to this... I'm working on limited knowledge of the CPU architecture), AMD Ryzen seems to have a 256-bit data path for its FPU operations (the architecture I'm on). It's usually bigger so that the results contain all of the overflow (since float arithmetic can span significand ranges).

And also keep in mind that, generally speaking, a CPU will use the same FPU path for all floats regardless of size! Why have 2 FPUs when a large one can cover the same operations on both singles and doubles?

Float error doesn't just have to do with rounding and significand ranges. It also has to do with the target platform's implementation of operations. And 'platform' can refer not just to hardware, but to software (including the version of that software).

    So the "result" sitting at the top of the eval stack isn't necessarily a single precision float. It could very well be a double, or even larger. Depends on what the runtime decided to do for 'mul'.

    So this leaves the only question...

    "Why does Unity's runtime appear to do something different than VS 2019 targeting 4.7.2?"

    And... :shrug:

    We don't know what Unity does in their version of the runtime.

It could very well be that they're using a modified version of the runtime from a long while ago (this gets into the muddy history of Unity with mono/.net/xamarin/etc). The runtime may have behaved this way back then, and Unity still does because they still use some modified version of it.

Or their version is more heavily mono-based (likely) than the one used by Windows/VS2019 (note that Unity only uses VS20XX for editor/debug purposes... the compiler and runtime are distinct from this).

Or... who knows. Maybe the Windows/Microsoft/VS2019 runtime implicitly casts the result of the operation to a single float in software, as part of its implementation of 'mul', to create a more consistent result at the expense of speed. Meanwhile Unity just takes the raw result and doesn't force a cast until told to (or forced to by moving it off the eval stack and into a typed field/variable), to increase performance (or just because the version of the runtime it inherited historically did so for performance).

    ...

    The why? It's hard to tell why.

All we know for sure, though, is that it's all to do with how the runtime decides to perform 'mul'. And those choices are different in the Unity runtime than in the Windows/MS/VS2019 runtime.

Heck, I'm willing to bet that if you used IL2CPP, or targeted Switch/PS/Xbox, you might very well get varying results there as well.

    ...

    In the end... don't trust floats (single or double)!

    It's part of the definition of how floats work. They're fundamentally prone to error.
     
    Last edited: Sep 16, 2021
    SanderGoal043 and Brathnann like this.