
.Net 4.x Math problems?

Discussion in 'Scripting' started by joedurb, Jul 14, 2018.

  1. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    38
    I have been hoping to transition a Unity project to newer .Net.

    I experimented a while back, and when going from 2017.4 -> Experimental .Net 4.6 most of the project ran great, and faster, but some key parts were "messed up" (described below).

    Today I tried 2018.2 -> .NET 4.x. Same deal: it runs great, other than the same key parts being funky as in the previous tries.

    The issues probably relate to math functions. Is there a list of *ANY* known differences in how the two .NET implementations handle math, or in the basic behavior of doubles/floats? This project does lots of positioning, mesh building, floating origins, etc., all in C#.

    Issues:
    1a) My run-time built terrain meshes get offset by roughly 50 meters on one axis, cause unknown.
    1b) Or my coordinate system is getting offset somehow.
    2) Statically seeded randoms appear to be producing different values; not sure if this is related to 1 or not. Maybe .NET Random seeds are not compatible across versions? (See the quick check sketched below.)
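
    Just a rough check, not from anything official (the seed, loop count, and class name SeedCheck are arbitrary placeholders): log the first few values from a fixed seed under each scripting runtime and compare the Console output between the two copies of the project.

    using UnityEngine;

    public class SeedCheck : MonoBehaviour
    {
        void Start()
        {
            // Same fixed seed in both the 3.5 and 4.x copies of the project.
            var rng = new System.Random(12345);
            for (int i = 0; i < 5; i++)
                Debug.Log("System.Random[" + i + "] = " + rng.NextDouble().ToString("R"));

            // Unity's own RNG, seeded the same way, for comparison.
            UnityEngine.Random.InitState(12345);
            for (int i = 0; i < 5; i++)
                Debug.Log("UnityEngine.Random[" + i + "] = " + UnityEngine.Random.value.ToString("R"));
        }
    }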

    I realize this question is very nebulous, so I'm hoping there is just some list of behavior changes from 3.5 -> 4.x. I have not had any luck finding one, and haven't found others complaining about similar issues...

    For my problem to appear I am:
    1) Run project, works great.
    2) Edit -> Project Settings -> Player -> CHANGE Scripting Runtime Version to .NET 4.x
    3) Run project, mostly works, but not quite. No errors, just "Math" off...

    Thanks for any tips/links/etc!
     
  2. MaskedMouse

    MaskedMouse

    Joined:
    Jul 8, 2014
    Posts:
    1,092
    I don't know how many mathematical calculations you have, but why not keep a copy of your project and open one with 3.5 and the other with 4.x?
    Then check the values you're getting: are they still the same?
    Is the randomization in your calculations still within the same boundaries as before?
    Are you using a lot of randomization in your calculations?
    If for some reason the randomization was updated to be more optimized but gives slightly different values, that could be one such change.
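
    Something along these lines (just a sketch; the MathCompare class and the label names are made up) would log values with enough precision to spot small differences when comparing the two copies:

    using UnityEngine;

    public static class MathCompare
    {
        // Log with the round-trip ("R") format so tiny float/double differences
        // aren't hidden by the default rounding in the Console.
        public static void Log(string label, double value)
        {
            Debug.Log(label + " = " + value.ToString("R"));
        }

        public static void Log(string label, float value)
        {
            Debug.Log(label + " = " + value.ToString("R"));
        }
    }

    // usage in either copy of the project:
    // MathCompare.Log("terrainOffsetX", offsetX);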
     
  3. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    38
    Yeah, I'll keep digging to try to track down exactly which functions are delivering different results; I was just hoping there was a known list of "differing behaviors".
    Thanks,
    -Joe
     
  4. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    38
    Partial Solution:
    Well, this math behavior, for one, has changed (to be more correct, its old behavior was buggy, I guess):

    double v = 1000.111111111111111111;
    int iv = (int)v;          // integer part of v (cast truncates)
    float dv = (float)v - iv; // cast to float FIRST, then subtract
    Debug.Log("Should be .11111111: " + dv);
    iv = (int)v;              // integer part of v again
    dv = (float)(v - iv);     // subtract in double FIRST, then cast to float
    Debug.Log("Should be .11111111: " + dv);

    Results On Unity .NET 3.5: First dv=0.1111111 Second=0.1111111

    Results On Unity .NET 4.x: First dv=0.111084 Second=0.1111111

    So the order of operations with casting has apparently been *fixed*; I think the 4.x behavior is the proper one. Casting v to float first rounds it to the nearest representable float (roughly 1000.111084), so subtracting the integer part leaves 0.111084; doing the subtraction in double and casting to float last keeps the full fraction.

    I'm not sure if casting precedence will be enough to explain all of my math glitches, but I will review my whole project.
     
    Last edited: Jul 14, 2018
  5. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    38
    Full Solution:
    Okay, it turned out that changes in cast precedence / cast behavior between Unity's .NET 3.5 and 4.x runtimes were the cause of all the glitches.

    Previously in 3.5 code like this worked:
    double D;
    float F;
    int I;
    D = I + F; // this worked in 3.5; the compiler/runtime must have seen a double was coming and effectively did the add at double precision
    But in 4.x its accuracy was waaaaay off, since it adds the int to the float at float precision and only THEN widens the result to double.
    So, my fix for 4.x:
    D = (double)I + (double)F; // forces the addition to happen at double precision, fixing the 4.x accuracy

    I'm guessing 3.5 just tolerated some sloppy casting code, and 4.x requires more accurate/explicit casts now.
    Just a heads-up for anyone else who does big math with shoddy type casting :)
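
    For anyone skimming, here is a contrived illustration (not my actual terrain code; the numbers and class name are made up) of how the same cast pattern shows up in floating-origin style positioning:

    using UnityEngine;

    public class OriginOffsetExample : MonoBehaviour
    {
        void Start()
        {
            double worldX = 1000000.1234;   // large world-space coordinate kept in double
            int originX = 1000000;          // integer floating-origin cell

            // Narrowing to float BEFORE subtracting throws away the fraction:
            float localBad = (float)worldX - originX;    // ~0.125

            // Subtracting in double and narrowing LAST keeps it:
            float localGood = (float)(worldX - originX); // ~0.1234

            Debug.Log("bad = " + localBad.ToString("R") + "  good = " + localGood.ToString("R"));
        }
    }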

    Note:
    I would still appreciate it if anyone knows of a link to the changes Unity users are subjected to in the 3.5 -> 4.x .NET conversion, unless this is literally the only little gotcha...

    Thanks!
     
    Last edited: Jul 15, 2018