
Determinism of math equations (reduced precision). Any thoughts?

Discussion in 'Game Design' started by Antypodish, Dec 6, 2018.

  1. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,777
    One of my goals with this project is to ensure determinism.
    Especially if I want, in the far future, to implement multiplayer or execute replays.

    My project aims to utilize a modding system, with custom functions possibly written by players.
    I also want to build up the game mechanics on this modding tool. This way I can test it as I progress.
    Just to mention, I will be using ECS.

    We are probably all aware of the issue of floating-point imprecision, and hence the possibility of it breaking determinism.

    For that matter, my current approach is to use ints instead of floats. It works fine so far.
    Until ECS has solved the problem of deterministic floating point, I am staying with ints for now.
    But to execute, for example, trigonometric functions like sin, cos etc., I convert to float and then back to int.

    The aim is to represent any decimal value as an int, multiplied by 1000 (at least for now).
    So float values of
    • 0.5 will be 500 as int
    • 0.002344 will be 2
    • 354.1 will be 354100

    So a simple example of a sine function may look like
    Code (CSharp):
        int a = 500;                                   // Input, representing 0.5f * 1000
        int x = (int)(Mathf.Sin(a * 0.001f) * 1000f);  // 0.47942553... * 1000 = 479
    At this stage I decided to use 3 decimal points of precision, so I can multiply or divide the int by 1000 and still leave some resolution in both directions.
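
    To make the arithmetic side concrete, here is a minimal sketch of multiply and divide at this scale (the FpMul / FpDiv names are just mine for illustration; the long intermediates are there to avoid overflowing Int32 before rescaling):
    Code (CSharp):
        const int Scale = 1000;

        // Multiply two scale-1000 fixed-point values; widen to long so the
        // raw product cannot overflow Int32 before dividing the scale back out.
        static int FpMul(int a, int b) {
            return (int)((long)a * b / Scale);
        }

        // Divide two scale-1000 fixed-point values; scale the numerator up first.
        static int FpDiv(int a, int b) {
            return (int)((long)a * Scale / b);
        }

        // E.g. 1.5 * 2.25 = 3.375  ->  FpMul(1500, 2250) == 3375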

    If I get some more complex functions that I can build in, I may increase the precision internally to 6 decimal points, while still taking and returning ints outside the function. That should reduce the amount of required mult / div by 1000. Roughly what I mean is sketched below.
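
    Just a sketch, assuming an internal scale of 1,000,000 and a made-up SinTimesCos example:
    Code (CSharp):
        // Takes and returns ints at scale 1000, but keeps the intermediate
        // math at scale 1,000,000 and only rounds back down at the end.
        static int SinTimesCos(int x) {
            float angle = x * 0.001f;
            long s = (long)(Mathf.Sin(angle) * 1000000f);  // 6 internal decimal points
            long c = (long)(Mathf.Cos(angle) * 1000000f);
            long product = s * c / 1000000;                // still at scale 1,000,000
            return (int)(product / 1000);                  // back to scale 1000
        }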

    There is really plenty of testing ahead of me, to check whether the current methodology will indeed ensure determinism.
    Atm it is hard for me to predict the outcome, but I hope not to run into major quirks. I expect at least near-identical values across desktops.

    Of course there is also physics determinism, or the lack of it, but for now I don't think I can do much about that.


    So the questions would be:
    Any thoughts on the current approach?

    Can you think of any additional issues I may face further ahead?
    Do you think a precision of 3 decimal points would be enough?
    Or should I provide higher resolution?
     
  2. newjerseyrunner

    newjerseyrunner

    Joined:
    Jul 20, 2017
    Posts:
    966
    In the old days, when we didn’t have the processing power to do real trig in our raycasters, we precalculated lookup tables.

    Code (csharp):
        static float[] precalc;

        float precalcSin(float angle) {
            if (precalc == null) {
                // Either assign a hard-coded array or just calculate it once.
                precalc = new float[360];
                for (int i = 0; i < 360; i++)
                    precalc[i] = Mathf.Sin(i * Mathf.Deg2Rad);
            }
            // Assumes angle is in degrees and non-negative.
            return precalc[Mathf.FloorToInt(angle) % 360];
        }
    A hard-coded table would be preferable for true determinism.
     
    Antypodish likes this.
  3. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,777
    I was so focused on one part of the project that you actually reminded me there are options for trigonometry approximation.
    Yep, something I will look at. Good stuff.

    That will be a little extra to the bigger picture ;)
     
  4. newjerseyrunner

    newjerseyrunner

    Joined:
    Jul 20, 2017
    Posts:
    966
    Correct me if I’m wrong, but all trig in computers is an approximation. People think of trig as maths dealing with triangles, but that’s an emergent property. Trig is calculus of the unit circle, which computers can’t really do (at least not quickly). I’d be very surprised if Mathf.Sin reduces to anything other than a Taylor series approximation, which is very easy to calculate.
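
    For illustration, a minimal sketch of that idea (just the first four Taylor terms around zero; I’m not claiming this is how Mathf.Sin is actually implemented, and it’s only reasonable for angles roughly within ±PI):
    Code (csharp):
        // First four Taylor terms: sin(x) ≈ x - x^3/3! + x^5/5! - x^7/7!
        static float taylorSin(float x) {
            float x2 = x * x;
            float x3 = x2 * x;
            float x5 = x3 * x2;
            float x7 = x5 * x2;
            return x - x3 / 6f + x5 / 120f - x7 / 5040f;
        }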
     
    Antypodish likes this.
  5. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,777
    If you used doubles, you could calculate pretty precise trig. Approximation at lower resolution may lose precision.
    However, I am not sure if CPU ALUs these days have a dedicated area for trig, or if it is simply calculated to a certain extent / via lookup.

    But I need to refresh my memory on trig approximation and what the CPU actually does.
    For my application a lookup may be sufficient. Yet to see. I just must be careful not to lose too much precision.
    Probably approximate trig with 5 to 6 (or maybe even 4) decimal points could be sufficient, which allows for some additional math while hopefully avoiding overflow.
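    As a quick back-of-the-envelope check (my own numbers): at 6 decimal points a value of 1.0 is stored as 1,000,000, so multiplying two such values as plain ints is on the order of 10^12, well past Int32's ~2.1 × 10^9 limit; the intermediate would have to go through a long (or the scale drop to 4 decimal points, where the product stays around 10^8) before rescaling.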

    Edit:
    Either I will find approximation equations or lookups somewhere, or I will derive them. Excel may help with that as well ;)

    Edit2:
    Just checked. In fact, you’re right, that would make sense. I also looked at Fourier series.
     
    Last edited: Dec 7, 2018