
Square Root runs 1000 times in ~0.01ms?

Discussion in 'Scripting' started by Matt-Downey, Aug 16, 2012.

  1. Matt-Downey

    Matt-Downey

    Joined:
    Apr 22, 2012
    Posts:
    11
    I've always been told square root was expensive, so I presumed it was on the order of 100x as slow as a multiplication or division, potentially even slower. As it turns out, a quick debug test says I can run 1000 square roots in about 10 microseconds (0.01ms).

    Could the profiler be lying to me when it says I can run 1000 square roots in ~10 microseconds?

    I know the theory behind square roots: either use exp(0.5 * log(num)), or start from a rough initial estimate and iterate with successively more accurate guesses until you get the degree of accuracy you want. For floats that should take 3-4 iterations (covering the 6-7 significant digits of a float), since the accuracy roughly doubles each time. But being this fast is unheard of to me.
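    The iteration described above can be sketched in a few lines. This is a minimal Python illustration (function names and the tolerance are mine, not from the thread) of both the exp/log identity and Newton's (Heron's) method, whose accuracy roughly doubles per iteration:

```python
import math

def newton_sqrt(num, tol=1e-12):
    """Approximate sqrt(num) by Newton's (Heron's) method: average the
    current guess with num/guess until guess**2 is close enough to num.
    Convergence is quadratic, so the number of correct digits roughly
    doubles with each iteration."""
    if num < 0:
        raise ValueError("negative input")
    if num == 0:
        return 0.0
    guess = num if num >= 1 else 1.0          # crude starting guess
    while abs(guess * guess - num) > tol * num:
        guess = 0.5 * (guess + num / guess)
    return guess

def explog_sqrt(num):
    """The closed-form alternative mentioned above: sqrt(x) = exp(0.5 * log(x))."""
    return math.exp(0.5 * math.log(num))
```

    With a better starting guess (e.g. one derived from the float's exponent bits), a fixed 3-4 iterations is indeed enough for single precision.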
     
  2. gfoot

    gfoot

    Joined:
    Jan 5, 2011
    Posts:
    550
    These kinds of benchmarks can give you a rough idea of things, but you mustn't extrapolate too much from the result. If you're worried about the performance of an algorithm that uses square roots, then time the whole algorithm, not just loads of square roots. If possible, time it again without the square roots, even if that gives the wrong answers, so long as those wrong answers don't affect its complexity (e.g. recursion depth). Then you can see what the square roots are costing you.

    However, that doesn't help you much in practice unless you have an alternative algorithm in mind. If you do, then you might as well just start by timing them both and seeing which is faster. If you don't have an alternative algorithm, then there's no choice to make, so don't worry about its cost.
     
  3. Dreamora

    Dreamora

    Joined:
    Apr 5, 2008
    Posts:
    26,601
    Best to run it a million times.
    At ms and sub-ms ranges, the OS's own time measurement is already no longer precise, even less so if you measure it with Mono, which lacks the high-precision timer from MS .NET; the low-precision one is not remotely strong enough for this type of 'conclusion'.
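    To illustrate the point, here is a rough Python sketch (not Unity/Mono code; the function and iteration count are mine) of averaging a math operation over a large batch and subtracting bare-loop overhead, rather than trusting one tiny sub-millisecond measurement:

```python
import math
import time

def avg_sqrt_time_ns(iterations=1_000_000):
    """Average the cost of math.sqrt over many calls.  A single run of
    1000 calls sits near the noise floor of OS timers, so we amortise
    over a large batch and subtract the cost of the loop itself."""
    start = time.perf_counter()
    acc = 0.0
    for i in range(1, iterations + 1):
        acc += math.sqrt(i)
    with_sqrt = time.perf_counter() - start

    start = time.perf_counter()
    acc2 = 0.0
    for i in range(1, iterations + 1):
        acc2 += i                      # same loop shape, no sqrt
    overhead = time.perf_counter() - start

    return max(with_sqrt - overhead, 0.0) * 1e9 / iterations
```

    Even this is only indicative (JIT warm-up, cache effects and interpreter overhead all intrude), but it is far less noisy than timing a single 0.01 ms burst.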
     
    trombonaut and Infinite-3D like this.
  4. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    Things change. The square root operation in hardware is not that much slower than basic math operations any more (varies depending on the CPU of course).

    --Eric
     
  5. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,447
    Code (csharp):
    using System.Diagnostics;

    Debug.Log(string.Format("Is high resolution: {0}, accuracy: {1} ns",
        Stopwatch.IsHighResolution, 1e9 / Stopwatch.Frequency));
    Prints "Is high resolution: True, accuracy: 100 ns"
     
  6. shaderbytes

    shaderbytes

    Joined:
    Nov 11, 2010
    Posts:
    900
  7. Cameron_SM

    Cameron_SM

    Joined:
    Jun 1, 2009
    Posts:
    915
    With modern CPUs you're probably going to bottleneck yourself elsewhere before you hit speed issues with math functions. In the days of fixed-point numbers (before floating-point hardware was standard), when most real-time graphics math involved pipelining hand-tuned assembly, square roots were very expensive, but not so much any more. Even using lookup tables will probably be slower than just letting the CPU handle it, because a cache miss on a table lookup is likely to cost more CPU cycles than a floating-point sqrt.

    Also, optimisation before profiling is really damn evil. Never do it. Build your whole program first; if you have problems after you've implemented all the features you need, then profile, figure out where the performance problem is, and micro-optimise only if you absolutely have to.

    If you're working on mobiles, things are different because performance is linked with battery life. But we're working with managed code, so this kind of micro-optimisation is the wrong way of thinking IMO - you should be thinking about ways to simply avoid having to calculate things in the first place, or to defer/spread out your maths. My current mobile game is an RTS with over 1000 units - the underlying game runs at about 5fps and I simply interpolate everything so it looks smooth and fluid at 60fps. Although it doesn't seem like you're working on a mobile game, if you're doing something extreme the same "execute less" philosophy can be applied.
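    The "simulate slowly, render smoothly" scheme described above can be sketched like this - a hypothetical Python fragment (the names are mine) that interpolates a displayed position between two fixed-rate simulation snapshots:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def render_position(prev_pos, next_pos, sim_dt, time_since_tick):
    """Given the last two simulation snapshots (taken sim_dt seconds
    apart) and the time elapsed since the last tick, return the smooth
    on-screen position.  The simulation can tick at ~5 Hz while the
    rendered motion stays fluid at 60 Hz."""
    t = min(max(time_since_tick / sim_dt, 0.0), 1.0)   # clamp to [0, 1]
    return tuple(lerp(p, n, t) for p, n in zip(prev_pos, next_pos))
```

    The renderer only ever reads the two most recent snapshots, so the expensive game logic (including any square roots) runs at a fraction of the display rate.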

    Micro-optimising based on tight-loop benchmarks in managed code like C# is also a bit silly. It's managed code, after all.
     
    Last edited: Aug 17, 2012
    VitruvianStickFigure likes this.
  8. Matt-Downey

    Matt-Downey

    Joined:
    Apr 22, 2012
    Posts:
    11
    Thank you for all the responses, everyone.

    @gfoot: the square root is probably necessary if I want realism, so you're probably right about there being no alternatives.

    @dreamora: good to know when testing future functions.

    @eric: I'm young, and I still believed it was slow; I've learned a lot from this over the last three days because it made me optimize everything.

    @alexzzzz: that function could prove very useful. I've been annoyed by the lack of significant figures while debugging, so it should come in handy at some point.

    @shaderbytes: cool method, thanks for the info!

    [edit:] @cameron: I'm working on a non-mobile project, and I am optimizing before profiling [edit2: but the optimizations happen to prevent future calculations (which coincides with your philosophy); I'm not capable of micro-optimizations, since I'm an engineer who can program, not a programmer who can engineer]. The optimization functions don't actually make the program any harder to read; if anything you can skip the "//OPTIMIZATION" parts. I tend to add lots of notes to my code and keep it very organized.

    1) Luckily the calculation I'm using is only used for one frame before jumping, so I don't think the algorithm will be an issue. I was initially planning to cancel out magnitude squared (vectors) with time squared (physics) when calculating the distance between a parabola and a point, which seemed convenient at the time. The problem happened to be that this condition only happened when the initial y velocity of the parabola was zero (I eventually derived the equation I was using in physics to prove this). It didn't work out the way I was hoping, so I thought I'd look into using exactly one square root, which (for a parabola) is probably necessary if I want to cover all cases.

    2) I was actually going to iterate over a bunch of line segments that represent a rail system (think roller coaster) and see if the player could "jump off the roller coaster and reach another line segment on it", given a rail velocity (which could point anywhere in x/y/z) and a max combined velocity in the x-z plane. Without the square root it would have worked fine in most cases, but it effectively turned the horizontal x-z plane into an x/z-y plane (another problem, though not as big a deal as no y-velocity being conserved from the roller coaster). The x/z-y plane case only became unrealistic at near-vertical rails (since you'd expect to be able to jump in any direction off a vertical rail, not just left/right/up/down or a combination thereof). Thus I figured the single square root could really pay off - which gives me two very good reasons to use a square root once per loop step, after I've already finished a bunch of optimizations anyway.

    3) I was optimizing like crazy with early assumptions about points the player will definitely not reach:

    a) if(min(point1.y,point2.y) > maxYInArc) then do not use for calculations (and altered versions for x/z),
    b) dots to make sure things are heading in the same direction otherwise ignore,
    c) crosses to find if a line segment has a point to both the left and right of the direction of motion

    all of which I can still use.
     
    Last edited: Aug 17, 2012
  9. Ferb

    Ferb

    Joined:
    Jan 4, 2014
    Posts:
    25
    I wondered this too, and this page was the Google search result, so I'll put my own results here for anyone else who's interested.

    You can do the Fast Inverse Square Root in Unity if you like.
    http://en.wikipedia.org/wiki/Fast_inverse_square_root

    The implementation in C# looks like this:
    Code (csharp):
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Explicit)]
    struct intfloatunion {
        [FieldOffset(0)]
        public float f;
        [FieldOffset(0)]
        public int i;
    }

    float FastInvSqrt(float num) {
        intfloatunion ifu = new intfloatunion();
        ifu.f = num;
        ifu.i = 1597463174 - (ifu.i >> 1);                    // bit-level initial guess
        return ifu.f * (1.5f - (0.5f * num * ifu.f * ifu.f)); // one Newton-Raphson step
    }
    I didn't find a noticeable difference between that and Mathf.Sqrt, though. There was a bit of an improvement if I inlined the code (which also avoided creating a new struct every time the square root was computed), but still less than a 10% improvement over Mathf. FastInvSqrt also runs the risk of going wrong on a CPU that represents numbers differently, such as one with a different endianness - and of course it's less precise. As has been said, CPUs have improved so much that square roots are now pretty quick - as shown by the fact that such a simple function as FastInvSqrt (just a few multiplications and subtractions) is about the same speed. So you may as well use Mathf.
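    For anyone who wants to poke at the bit trick outside Unity, here is the same idea in Python (illustrative only: struct packing stands in for the C# explicit-layout union, and the classic 0x5f3759df constant is used):

```python
import struct

def fast_inv_sqrt(num):
    """Quake III-style fast inverse square root on a 32-bit float,
    emulated with struct packing instead of a union."""
    i = struct.unpack('<i', struct.pack('<f', num))[0]  # reinterpret float bits as int
    i = 0x5f3759df - (i >> 1)                           # magic-constant initial guess
    x = struct.unpack('<f', struct.pack('<i', i))[0]    # reinterpret back to float
    return x * (1.5 - 0.5 * num * x * x)                # one Newton-Raphson refinement
```

    With one refinement step the relative error stays under roughly 0.2%, which is why it was once attractive; on current hardware the native sqrt instruction makes the trick mostly a curiosity.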
     
  10. KyleStaves

    KyleStaves

    Joined:
    Nov 4, 2009
    Posts:
    821
    One thing to realize is that a lot of the time, when you come across "square roots are slow", what people are really trying to communicate is that doing a square root operation is slower than not doing one. In game development, people often perform square root operations that are not strictly necessary. For example, a simple algorithm that tries to find the closest object to a point could look like...

    Code (csharp):
    object nearest = null;
    float nearestDist = float.MaxValue;

    Vector3 point;

    foreach (var obj in objects) {
        float dist = Vector3.Distance(obj.position, point);

        if (nearest == null || dist < nearestDist) {
            nearest = obj;
            nearestDist = dist;
        }
    }
    But it could also look like...

    Code (csharp):
    object nearest = null;
    float nearestDist = float.MaxValue;

    Vector3 point;

    foreach (var obj in objects) {
        float dist = (obj.position - point).sqrMagnitude;

        if (nearest == null || dist < nearestDist) {
            nearest = obj;
            nearestDist = dist;
        }
    }
    Both of those pseudo-code samples are equally accurate, but the second example doesn't take a square root to get the true distance - it only compares squared distances. You can do the same thing with tons of different algorithms that ask distance questions, to save yourself unnecessary square root operations.

    So yeah, square roots used to be on the slow side. These days they're very close to being as well optimized as the other math operators on modern CPUs - but a lot of the time, square root operations are altogether unnecessary.
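    The same pattern in a self-contained Python sketch (hypothetical names; points are plain (x, y, z) tuples rather than Unity objects). Because sqrt is monotonic on non-negative numbers, comparing squared distances picks the same winner:

```python
def nearest(point, objects):
    """Return the object whose position is closest to `point`,
    comparing squared distances so no square root is ever taken."""
    best, best_sq = None, float('inf')
    px, py, pz = point
    for obj in objects:
        dx, dy, dz = obj[0] - px, obj[1] - py, obj[2] - pz
        d_sq = dx * dx + dy * dy + dz * dz   # squared Euclidean distance
        if d_sq < best_sq:
            best, best_sq = obj, d_sq
    return best
```

    For example, `nearest((0, 0, 0), [(5, 0, 0), (1, 1, 1), (10, 10, 10)])` selects `(1, 1, 1)`, exactly as the sqrt-based version would.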
     
    trombonaut and PitaBreadGames like this.
  11. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    @Ferb -

    I'd be interested to see what the performance difference is on a mobile device (iPhone or Android). Probably the same result as on PC, but I'm not sure what the differences are with optimizations when targeting an ARM processor.
     
  12. boddole

    boddole

    Joined:
    Sep 18, 2013
    Posts:
    35
    @ Dustin Horne:
    I just happened by this thread while doing some testing on an iPhone 4, on a scene that had real performance issues, replacing the moving kinematic objects' movement scripts' use of Vector3.Distance with a custom distance method (basically just (a - b).sqrMagnitude).

    These results are just what I'm seeing in front of me right now (if I knew how to run real profiling tools in a meaningful way I would, but here goes). From the Unity free profiler for iOS, run on an iPhone 4:
    objects using Vector3.Distance: CPU time 45-70ms // PhysX time 40-60ms
    objects using the custom method: CPU time 50-60ms // PhysX time 35-40ms
     
  13. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Nope, they are not.

    Anyone want to hazard a guess at the problem/s?
     
  14. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    I'm looking forward to you explaining that. I get the same result from both and the second (without sqrt) is faster.

    Are you talking about the slight imprecision coming from a float when using large numbers?
     
  15. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Hint, it makes a mistake, even when the gap between two numbers is ~50% :)
     
  16. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    nearestDist should be squared.

    Code (csharp):
    float nearestDist = float.MaxValue;
    Vector3 point;

    foreach (var obj in objects) {
        float dist = (obj.position - point).sqrMagnitude;

        if (nearest == null || dist < (nearestDist * nearestDist)) {
            nearest = obj;
            nearestDist = dist;
        }
    }
     
  17. KelsoMRK

    KelsoMRK

    Joined:
    Jul 18, 2010
    Posts:
    5,539
    Aren't you already assigning it a squared value when you assign dist to it?
     
  18. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    Hermpa, hrmpa, I'ma make people guess stuff.

    If you want to be all smart, don't rub it in people's faces.

    Also, the only accuracy difference that can occur between the two is float error.
     
  19. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Some people enjoy thinking about a problem first; others can wait a tiny bit before I post an answer.

    Furthermore, this is an excellent way of learning why premature optimisation is *bad*. Here we are thinking that two pieces of code are functionally identical - but they are not. If one does not understand the subtleties of the code one is writing, then how can one be sure that the optimisation is valid? Answer: one can't, which is why you should write correct code and be very careful when trying to be clever - even with simple things!

    It's good that you've identified this potential error (which I saw, but haven't bothered testing for). However, that one would only happen when the difference between the two numbers is very small. What problem could occur given the hint I provided - i.e. where the numbers are ~50% apart?
     
    Last edited: Mar 22, 2014
  20. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    Actually, float error doesn't only affect the very small.

    Float error isn't that floats can't represent small values accurately; it's that they can only hold a specific range of significant figures. If you perform arithmetic on floats with large differences between them, large errors will occur.

    There is also the fact that floats store fractions in binary, which results in base-conversion error (0.1 can't be represented accurately in binary, as it's a non-terminating sequence - like how 1/3 can't be represented accurately in decimal).

    Try this:

    Code (csharp):
    float a = 34523450000000000000000000000000f;
    float b = 120000000f;
    float c = a - b;
    Debug.Log(a == c);

    you'll notice that a is the same as c. That's a large error (though technically you could say it's a small error relative to a).
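    Python's floats are 64-bit, but the same experiment can be reproduced by rounding through a 32-bit representation with struct (an illustrative stand-in for the C# snippet above; helper name is mine):

```python
import struct

def to_f32(x):
    """Round a Python float to the nearest 32-bit float value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

a = to_f32(34523450000000000000000000000000.0)  # ~3.45e31
b = to_f32(120000000.0)                         # 1.2e8
c = to_f32(a - b)
# b lies entirely below a's least significant mantissa bit,
# so subtracting it changes nothing: c equals a
```

    Subtracting 120 million from a number of magnitude 3.45e31 leaves it bit-for-bit unchanged, which is the significant-figure limit in action.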



    Next, these aren't scalars, they're vectors - what do you mean by "50% apart"? When one has a magnitude 150% that of the other (one magnitude longer than the other by 50%)?

    Truthfully, I don't even know what you'd mean if it were a scalar. What is "50% apart"? How can you have half of an apart?

    And you must know some magic thing about math (other than float error) that many physics engine developers don't, considering that most of the physics engines I've looked at, and the ones I've written myself, all use this trick for its added efficiency.

    I honestly can't wait to hear what you have to say.
     
    Last edited: Mar 22, 2014
    PitaBreadGames likes this.
  21. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    I still get the same result from both snippets, and one is faster than the other. When finding the closest object, I've always used the faster one.

    As Lord said, float is only an issue when performing arithmetic between a very large value and a much smaller one, because float values are stored with an exponent. In the case of comparing values, the issue arises when the floats become very large. At that point - in the case of trying to find the nearest object - two or more objects that are not at the same distance would appear to be.

    However, this issue only happens when values are huge, well beyond what you should have in a video game. You will lose animation/physics precision way before encountering squared-distance comparison issues.
     
  22. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    I didn't say the very small - I said when the difference is very small. Sure, in absolute terms that difference might be a large number, but it'd be small in relative terms (as the errors tend to appear near the 9th significant figure in float32).

    On the contrary, I suspect that most/all physics engine developers would pick up on my concern, as it's a very basic property of *all* standard numerical types in C#.

    I'm going shopping now, but I will post the answer when I get back for your convenience :)
     
  23. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    So it's got nothing to do with math, and more to do with C#?

    Now this could have potential for something, as I'm not versed on all the nuances to the .Net/mono framework and if numerical issues fall out of them that are specific to it.

    Though this is sounding like something to do with float error.




    Oh, and you said:

    Sorry if I didn't catch all that subtext.



    I also enjoy that you still haven't explained what "50% apart" even is.
     
    Last edited: Mar 22, 2014
  24. mhtraylor

    mhtraylor

    Joined:
    Jan 13, 2013
    Posts:
    33
    Wild guess, probably: the conversion from a single to a double and back again that occurs in the Distance method's call to Math.Sqrt.
     
  25. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Okay, time for the answer.

    What is this amazing magical property that all basic numeric types in C# have, which changes the results of the two algorithms?

    Simple answer: they all have a MaxValue!

    What happens when you exceed a max value? Integer types overflow... which is probably going to lead to some weird results (note: check out the 'checked' keyword). Floating types simply go to an infinite value, and the decimal type helpfully throws an exception.

    Now with Math.Sqrt this doesn't mean much - because we're taking a large number and ending up with a small number. However, in the optimised version we're taking a small number and making it much, much larger.

    Code (csharp):
    using System;
    using System.Linq;
    using System.Runtime.InteropServices;

    class Program
    {
        private static void Main(string[] args)
        {
            // Outputs:
            // 340,282,300,000,000,000,000,000,000,000,000,000,000
            // 18,446,740,000,000,000,000
            // Difference 1,844,674,000,000,000,000,000 %

            var data = new[] { float.MaxValue, IncrementFloat((float)Math.Sqrt(float.MaxValue)) };
            Console.WriteLine(data.OrderBy(f => f * f).First().ToString("N0"));
            Console.WriteLine(data.OrderBy(f => (float)Math.Sqrt(f)).First().ToString("N0"));
            Console.WriteLine("Difference {0:P0}", data[0] / data[1]);
            Console.ReadLine();
        }

        // Feel free to ignore this; it's just a hacky way to get the next valid float
        static float IncrementFloat(float f)
        {
            var intFloat = new IntFloat { F = f };
            intFloat.I++;
            return intFloat.F;
        }

        [StructLayout(LayoutKind.Explicit)]
        struct IntFloat
        {
            [FieldOffset(0)]
            public float F;
            [FieldOffset(0)]
            public int I;
        }
    }
    Notice that once a number is bigger than sqrt(float.MaxValue), we can no longer determine whether its square is bigger or smaller than that of any other number in the same situation.

    For most basic game coordinate systems this isn't an issue - even a range of 1,000,000 is far more than the vast majority of use-cases need, and performance is generally a priority over accuracy. However, you should ask yourself the following:

    1) Am I really going to need this extra speed? What is my use-case/proof of need?
    1.1) Is this the best way to get that speed?
    2) Should I be writing code whose nuances (based on this thread) only 20% of developers are going to understand?
    3) Is this performance worth limiting myself to 0.00000000000000000542...% of the float range?

    It should be noted that UT's current implementation of magnitude appears to be simplistic and not to take this into account (and is therefore no more accurate than sqrMagnitude), which makes life annoying. You either rely on a fast, inaccurate implementation; a slow, inaccurate, subject-to-change implementation; or your own implementation (which can be reasonably fast and accurate, but has its own maintenance issues etc.).
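    The overflow half of this argument is easy to reproduce in any IEEE-754 type. Python's doubles overflow near 1.8e308 instead of float32's 3.4e38, but the property is identical (the distance values below are illustrative, not from the thread):

```python
import math

# Two candidate distances that differ by a factor of 1e50...
d_small = 1e200
d_large = 1e250

# ...yet both squares overflow to infinity, so a squared-distance
# comparison can no longer tell them apart:
sq_small = d_small * d_small   # overflows to inf
sq_large = d_large * d_large   # overflows to inf

same_by_square = not (sq_small < sq_large) and not (sq_large < sq_small)
still_ordered = d_small < d_large   # the un-squared distances are still ordered
```

    Past the square root of the type's MaxValue, the "compare squares" trick silently loses the ability to order values that the plain-distance version would still rank correctly.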
     
  26. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    Sorry, but it still doesn't make any sense in the context of a video game. MaxValue is 3.402823e38. Exponent 38!

    Getting close to even 1% of MaxValue would mean your animations start behaving horribly and your physics engine starts missing collisions. As you may know, position in space is also subject to float precision.

    For very huge games, devs usually split the world into smaller chunks and make sure everything is calculated relative to its cell. I've made huge games, and even the biggest didn't come close to a fraction of float.MaxValue.

    You'll have a lot of other issues in your game before you have issues with not applying sqrt to a distance.
     
  27. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    @LightStriker Thanks for that post, I will respond with three points.

    1) Knowing subtleties of your systems:

    How can you write this off as unimportant when, until I explicitly showed you the flaw, you were unable to see it, let alone quantify it? How can you possibly be sure that it'll never cause any issues if you're completely blind to it?

    Even now, you grossly overestimate the potential range available to you - 1% of 3.402823e38 is ~18,446,744,519,627,858,364% more range than is actually available to you.

    To give an idea of how much range one loses, imagine you had access to all the storage of humankind - in 2013, about 4 zettabytes. Now imagine you used an 'optimisation' to increase performance, but were unaware that it gave erroneous data for all values above 2^16 bytes. No problem if you're writing a small note...

    2) Blindsided by small-mindedness.

    Your example relies on assumptions that simply aren't universal truths. Yes, when positioning gameobjects in U3D one would most likely use a local coordinate system - but that's by no means a universal truth. And even in that example, one may still use floats to represent the 'real' position of items, e.g. for mapping, navigation etc. Heck, even simple things - what happens if you are asked to list stars/galaxies in order of distance from a point?

    A better approach, particularly as we're evaluating pseudocode in a public thread about a concept in general, is to assume we don't know what language, environment, datatype, or context the algorithm will be used in. We can make guesses and give suggestions, but limitations should be noted, not ignored.

    3) Thinking like a compiler.

    The more one understands how different approaches work, the better one can decide what to do. Now that we know the differences between the two algorithms, we can pick whichever suits our needs - something a compiler cannot do, due to its strict nature.
     
    Last edited: Mar 23, 2014
  28. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    I'm using similar code in a number of places and your "this code is buggy" comment actually gave me concern...

    Ignoring context and the realistic range of input one expects is a surefire way to miss deadlines and create an extremely overburdened code base. A nice example of 'ignoring context' is assuming that, in a forum devoted to Unity development, we need to be concerned with float overflows from Vector3.sqrMagnitude.

    That nobody guessed float overflow as the potential bug has much less to do with "keen insight" than with an utter lack of concern for context. In the realm of Unity development, a float overflow is much more likely to come from a bug in input (carelessly passing a float initialized to float.MaxValue) than from sorting galaxy distances (in miles!) using Vector3s.
     
    Last edited: Mar 23, 2014
    PitaBreadGames and ghostitos like this.
  29. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    SOOOOOOOO

    float error

    Error produced by overflowing the float numeric type.


    Which would be a value greater than 3.402823e38, or 340,282,300,000,000,000,000,000,000,000,000,000,000.

    This would require testing the distance between two vectors that are 18,446,742,259,813,790,763.073341875816 units apart (that's the sqrt of 3.402823e38), or that many units from the origin.

    Yeah... I think the float error that results from the significant-figure range will occur well before errors from that. And actually it'd be more like 18,446,700,000,000,000,000 units apart, because of that sig-value error.

    And no, I don't see that as a reason why physics engine developers would avoid square magnitudes. You just accept that your physics starts to fail when you reach 18,446,742,259,813,790,763 units from the origin. Actually, they accept that things begin failing well before then... because of sig-value issues.

    If they wanted accuracy in those higher ranges, they would have changed to double-precision floating point (which just increases the range), seeing as 64-bit double calculations are becoming very common in processors' floating-point units. But they stay with float because it gets that little extra oomph in speed over double, and has broader support across the computers out there.

    As someone who has written two physics engines in my life, I made one with doubles: a simple little prototype of a physics engine in AS3/Flash, which only has a double type, no single type. You can see it here:
    http://www.lordofduct.com/blog/?p=440

    I should also point out that it has nothing to do with C#; it's the IEEE-754 standard for floats that the .NET/Mono framework uses. And it's the framework that uses it, not C# - C# is just a language, and could be compiled for a completely different framework that doesn't have the IEEE-754 single-precision float.




    And lastly: there's still NO DIFFERENCE. When calculating distance, you still have to compute that large value before taking its square root. So whether you use Distance or magnitude, you still hit that 18,446,742,259,813,790,763 max-value limit. The max value is the same either way.

    This could be avoided by converting the parts of the vector to double before performing the square root, and taking the square root of the double. But that's not how it's done - and it still comes with a max-value limit, namely sqrt(1.7976931348623157E+308).
     
    Last edited: Mar 23, 2014
    PitaBreadGames and ghostitos like this.
  30. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    1) I know the subtleties of the system. I didn't see your "flaw" because, within the context of a video game, it does NOT occur. If you want to do math research using Unity, good for you. If you wanted a number with more precision than 7 digits, you would use double; and if you needed more than 15 digits of precision, you would implement your own type.

    2) Unity is a 3D video game engine. So yes, I assume snippets posted here, unless explicitly stated otherwise, are about video games. As for someone mapping star or galaxy coordinates, they wouldn't use Unity's GameObject or even Vector2/3/4, because of their imprecision. They could make a VectorDouble2/3/4 or implement their own type. Don't bullshit me, will you?

    3) Yes. And right now, NOT applying a sqrt is still the best solution. If you have math that needs over 7 digits of precision, you'll have to start looking into replacing your floats with something else.
     
  31. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    Maybe they didn't see it because they found the 7-digit precision limit to trump that flaw, as it DOES trump it long before MaxValue comes into play. If I were handed a broken-down car and asked what flaws it had, I would probably say "it doesn't run"; and if you then said "oh, you overlooked the fact that its tires aren't rated for the roads in this area" - I didn't overlook it, the fact that it doesn't RUN was more relevant.



    @LightStriker - I'd 'like' your post if I could.
     
    Invertex likes this.
  32. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    Oh, and lets actually look at what would happen in the scenario of testing distance using sqrMagnitude.

    If the distance between both objects is larger than the range that is supported. Well, floats don't always overflow to infinity.

    float.MaxValue + 100f = float.MaxValue
    float.MaxValue + float.MaxValue = float.PositiveInfinity

    So you actually have 2 possibilities in this top area depending on their ranges. But either way... you're going to end up with very large numbers which should remain at the top of the list. And comparison wise will. Infinity comparisons don't throw errors, they evaluate as PositiveInfinity always largest, and NegativeInfinity always smallest. And adding something to either will result in infinity as well.

    This is expected behaviour when testing which is closest, the farthest will evaluate out. It's also expected behaviour when testing what is farthest. The super distant ones will evaluate in.
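    To make the behaviour described above concrete, here's a minimal sketch (my code, not from the original post; assumes standard IEEE 754 single-precision floats, which is what Unity's C# uses):

    ```csharp
    // Near MaxValue, 100f is far smaller than the rounding step, so it's absorbed:
    Debug.Log(float.MaxValue + 100f == float.MaxValue);   // true

    // Doubling MaxValue overflows to infinity:
    Debug.Log(float.MaxValue + float.MaxValue);           // Infinity

    // Infinity still compares sanely, so closest/farthest tests keep their ordering:
    Debug.Log(float.PositiveInfinity > 12345f);           // true
    Debug.Log(float.PositiveInfinity + 1f);               // Infinity
    ```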



    Next we have the sig value issue. What happens here is that comparisons of things that are far apart will contain error. But a value of 100 will always evaluate less than 100,000,000,000,000. The sig value range DOESN'T MATTER. Who cares if there's an error of 5 units? It's insignificant... it's still smaller.

    Where errors will arise is if you test a group of objects that are all very far away, but all close together. Their close proximity to one another, relative to their large distance from the object you're testing from, will make them look as if they are all in the same position. The test then returns either the first or last of the group actually tested (depending on the algorithm's preference), and not the actual closest or farthest.

    In game... this can be accepted. Let's say there were 3 groups of 2, all very far away, and we ran this test. We would get an object from the closest group, but not necessarily the closest one in that group. Good enough for me! Good enough for a game! As LightStriker said, we aren't doing science with Unity... if you want to, go get a big-number library so you can have that accuracy... at the cost of speed.
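    The "far-away group" effect is easy to reproduce; a quick sketch (the values are mine, chosen purely for illustration):

    ```csharp
    // Two distances one unit apart, both roughly a billion units.
    // A float carries ~7 significant digits, so the 1-unit difference
    // is lost and both values round to the same float.
    float a = 1000000000f;
    float b = 1000000001f;
    Debug.Log(a == b);   // true
    ```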



    And this is why the sig value issue comes into play WELL before the MaxValue issue. The resulting behaviour of the MaxValue issue acts as expected. The only time an actual result comes back faulty is in the sig value case. And even then... it's not that bad in the grand scheme of things.

    Both should be understood, totally agree.

    And if you want to understand them, well, understand that the MaxValue issue is identical for magnitude and sqrMagnitude, because the actual evaluation in both touches float's MaxValue the same way.

    But you're just a pompous prick about it, and I thought it hilarious that you waited for this grand reveal that was totally meh. So yeah, I'm a bit pompous in return... rubbing it in. Enjoy your crow.
     
    Last edited: Mar 23, 2014
    ghostitos likes this.
  33. KyleStaves

    KyleStaves

    Joined:
    Nov 4, 2009
    Posts:
    821
    I'm glad you realized this. Honestly I had forgotten about this thread, but when I saw it on the front page again I was pretty curious what your explanation was. You are right; with a different implementation of Vector3.Distance that could absolutely matter - with the implementation Unity uses it's irrelevant. You are incorrect when you say "Nope, they are not." That is not to say you couldn't write a more accurate distance algorithm, but I didn't suggest or compare that.

    For anyone interested, the Unity implementation of magnitude is:
    Code (csharp):
    public static float Magnitude(Vector3 a)
    {
        return Mathf.Sqrt((float) ((double) a.x * (double) a.x + (double) a.y * (double) a.y + (double) a.z * (double) a.z));
    }
    Their implementation of sqrMagnitude is:

    Code (csharp):
    public static float SqrMagnitude(Vector3 a)
    {
        return (float) ((double) a.x * (double) a.x + (double) a.y * (double) a.y + (double) a.z * (double) a.z);
    }
    It's literally the exact same logic but with a Mathf.Sqrt thrown in. You gain absolutely no accuracy by using Vector3.Distance. Perhaps you should heed your own advice when it comes to "Knowing subtleties of your systems." If you want to suggest people implement a more accurate distance algorithm themselves, please feel free to do so, but I would absolutely put that in the category of very, very premature development. There is a reason you don't see the scripting forum flooded with complaints about how Vector3.Distance doesn't work correctly.

    tl;dr - Avoiding square roots when they are not necessary (i.e. - when they give you no additional accuracy or valuable information) is still a worthy optimization. Mostly because it takes no extra programming time whatsoever.
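    For anyone wanting the pattern spelled out, here's a minimal sketch (my own code, not from the thread) of a closest-object search that skips the square root entirely:

    ```csharp
    // Hypothetical helper: find the Transform closest to 'origin'.
    // Squared distance preserves ordering, so no Sqrt is needed to compare.
    Transform FindClosest(Vector3 origin, Transform[] candidates)
    {
        Transform best = null;
        float bestSqr = float.PositiveInfinity;

        foreach (Transform t in candidates)
        {
            float sqr = (t.position - origin).sqrMagnitude;
            if (sqr < bestSqr)
            {
                bestSqr = sqr;
                best = t;
            }
        }
        return best;
    }
    ```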
     
    PitaBreadGames and ghostitos like this.
  34. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Okay, let's get down to business.

    Number 1 Complaint - It doesn't matter in this context/floating point errors would occur first.

    I love how everybody's giving me reasons why this problem doesn't come up in game development - giving me arguments based on significant figures, other errors that would occur first, context etc.

    This is great!

    I love that *now* you are aware of this issue, you can make judgement calls on its relevance for your use-case. Because until I explicitly made clear what the problem was, for all you knew it could have occurred at lengths greater than 42 and been breaking every single one of your games. Unless of course you *tested* the optimization first for all possible ranges you'd encounter, just to make sure?

    That said, some of your arguments need work. Saying you'd use doubles instead of floats in these situations is poor - firstly, other than this issue, floats are more than adequate for said use cases - so why bother changing? Secondly, doubles have this exact same issue (except they have even *less* of their range to use!)

    Number 2 Note - Unity is a Game Engine

    Correct, Unity is a game engine. It is also used for business, visualisation, museums, education etc. One of the first things I did in Unity was push floating point to the limit with a Mandelbrot renderer. Furthermore, within this space game developers love changing data types and creating new data types - something that works perfectly fine with float32 might not with float16 or fixed-point integers etc.

    Number 3 Complaint - Unity's Distance is also flawed, so how would this change anything?

    1) This is pseudocode, and as such one shouldn't rely on implementation specifics.
    2) This flaw appears to be a bug that may be fixed; as KyleStaves kindly shows, they already use doubles in their math, they just round at the wrong time:

    Code (csharp):
    public static float Magnitude(Vector3 a)
    {
        // Flawed
        return Mathf.Sqrt((float) ((double) a.x * (double) a.x + (double) a.y * (double) a.y + (double) a.z * (double) a.z));
        // Fixed
        return (float) Mathf.Sqrt((double) a.x * (double) a.x + (double) a.y * (double) a.y + (double) a.z * (double) a.z);
    }
     
  35. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    You're holding onto the assumption that we didn't know 'MaxValue' exists.

    We knew; sorry, you'll have to take our word for it. It fell under what I consider the umbrella of float error, and was the less relevant because other issues superseded it. Issues that you passed over when we brought it up with statements like:

    I could say you hadn't thought of it because you only brought it up after we said it. And you overlooked the glaring fact that the sig value issues supersede the MaxValue issues in relevance, in favor of your "big reveal". You weren't looking for a correct answer, you were looking for YOUR answer. You passed off our answers as not the important answer, implying your answer was the more important one.

    Which it wasn't.

    As I said from the get go:

    I was right, you're here trying to stroke your own ego. Get off your high horse, it's sad.
     
    Last edited: Mar 24, 2014
    Infinite-3D and poncho_a like this.
  36. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    What makes you think that we are only *NOW* aware of issues with float precision? As if I wasn't aware of that a decade ago.

    Please, provide the math for that, because a float has 7 digits of precision and 42 is only a 2-digit number. You could also say a float will lose precision at ANY length, because of that 7-digit precision. The more I read you, the more I believe you have absolutely no idea how a float is stored or handled, or what the animation/physics issues I'm talking about are.

    Example;

    Code (csharp):
    [MenuItem("Test/Test")]
    static void Execute()
    {
        float a = 1.00000002f;
        float b = 1.00000001f;

        // a == b evaluates to true
        Debug.LogError(a == b);
    }
    If you need more precision, use a double.

    Code (csharp):
    [MenuItem("Test/Test")]
    static void Execute()
    {
        double a = 1.00000000000000002d;
        double b = 1.00000000000000001d;

        // a == b evaluates to true
        Debug.LogError(a == b);
    }
    And if you need even more precision, which is FAR outside the scope of a video game, make your own type.

    As for video game concerns, the current distance and sqrDistance methods from Unity are working perfectly to what they are asked to do.
     
  37. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    Okay, except you're getting two very different concepts confused. A float error simply means that two numbers that are very close together may be confused; the max value error means that two numbers, one potentially more than a thousand million billion times larger than the other, may be confused. Furthermore, this error is not a matter of 'floating point error' - it's clear and correct math - the problem is one of implementation.

    Let n be a number within the floating point range.
    Let x be an unknown error version of n.
    Let y be the condition where n results in x.

    Given that you are unaware of x, and have no idea of what y is, how can one determine that n y != x? For example n y != x where n>0 and n<=42?

    For example, someone might say that y is n being larger than '1% of MaxValue', know that their number range does not exceed that, and assume that their implementation is correct. Of course we'd now know that their safety margin would be off by 18446744519627858364% and they'd be writing potentially buggy code.
     
  38. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    npsf,
    I understand that you've come down from on high bringing wisdom to the unwashed masses... but really dude, if you look over this thread, there is only one person who made strong factually incorrect assertions. That person is you. Seriously, go back through this thread and read your responses.

    loose phrasing -- > "using sqrMagnitude can cause problems that don't occur when using Magnitude"

    Really, go back and read the thread again... you made repeated posts that were flat out wrong. Worse, you "quizzed" everyone using these flawed assumptions. You should really be kinda embarrassed. Secretly, deep down, you probably are.

    Developer prima donnas are the worst. Don't be that.

    pri·ma don·na
    ...
    a very temperamental person with an inflated view of their own talent or importance.
    synonyms: ego, self-important person, his nibs, temperamental person, princess, diva, pooh-bah;


    Don't be a diva...or a pooh-bah. We all make mistakes, we all write S***ty code sometimes, and we all operate under incorrect assumptions. It happens. To everyone. Just try not to be an ass about it!
     
    Last edited: Mar 24, 2014
  39. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,646
    My understanding is that it's still super slow as far as calculations go, but that most things are bottlenecked by memory access speeds to the point where it's irrelevant. I'm not sure about PC CPUs as I've never had to find out, but a statistic given in a PS3 optimisation lecture I was once in was something like a single memory fetch taking so long that you could "do 80 square roots".

    Edit: I should have read more before chipping in:
     
    Last edited: Mar 24, 2014
  40. Eric5h5

    Eric5h5

    Volunteer Moderator Moderator

    Joined:
    Jul 19, 2006
    Posts:
    32,401
    According to this, the square root operation on a modern PC CPU has the same latency as floating point division, and is significantly faster than some other things such as sin, cos, etc.

    --Eric
     
    ILonion and Zaelot like this.
  41. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,717
    Ok, now I know you're pulling numbers out of your ass.
     
  42. KyleStaves

    KyleStaves

    Joined:
    Nov 4, 2009
    Posts:
    821
    I honestly don't see why you are arguing this way in this thread on this particular forum. You went on a holier-than-thou goose chase to prove your ultimate wisdom and, in the context of the situation, added absolutely nothing of value to the discussion. That's the whole story. If you want to have a conversation about custom implementations of more accurate distance algorithms, then that's great - have that discussion, but don't pretend that you were holding on to some nugget of divine wisdom that only you had because you wanted to interject that discussion into a thread comparing a specific implementation of magnitude versus a specific implementation of sqrMagnitude. Are you seven years old? This is, frankly, a bit ridiculous.

    My original point still stands. There is really nothing inherently wrong about running a square root on a modern CPU - don't avoid them like the plague, they really aren't that slow. However, if you find yourself in a situation where you don't need to run one at all by asking a slightly different question - that's a good choice too. Either way, if you don't know how to profile the difference between two approaches chances are you are looking at optimizing something way, way too soon and should focus on just developing your product.
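    If you do want to measure the two approaches, here's a rough sketch using System.Diagnostics.Stopwatch (my code; the loop count is arbitrary, so treat the results as relative, not absolute):

    ```csharp
    // Crude micro-benchmark: sqrMagnitude vs magnitude.
    Vector3 v = new Vector3(1.2f, 3.4f, 5.6f);
    float sink = 0f;   // accumulate results so the loops aren't optimised away

    var sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < 1000000; i++)
        sink += v.sqrMagnitude;
    sw.Stop();
    Debug.Log("sqrMagnitude: " + sw.ElapsedMilliseconds + " ms");

    sw = System.Diagnostics.Stopwatch.StartNew();
    for (int i = 0; i < 1000000; i++)
        sink += v.magnitude;
    sw.Stop();
    Debug.Log("magnitude: " + sw.ElapsedMilliseconds + " ms (" + sink + ")");
    ```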

    EDIT:
    For what it's worth, your 'fixed' algorithm still isn't fixed. Mathf.Sqrt does not perform logic on doubles - ever. If Unity didn't cast the argument to a float before the method call, the compiler would cast it for them anyway (which, for all we know, is exactly what is happening, since we can only see the compiled code). You would also need to implement a custom square root function. Can you see why this is a ridiculously silly discussion in context yet? Or do you want to chase the rabbit hole further? This isn't some abstract language conversation on Stack Overflow - we're specifically discussing the question in the context of Unity. Unless you want to write your own Mathf class, you are going to have a bad time trying to improve the accuracy of their distance function. It's not a bug in the engine; it's just a reality of their math library being based on floating point values.
     
    Last edited: Mar 24, 2014
    BobberooniTooni, ghostitos and Zaelot like this.
  43. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    My ass, your mouth - apparently they have more in common than I'd have assumed. :O

    I agree, this is a mistake on my part. I saw a thread on sqrt performance, saw some pseudo-code comparing a sqr vs a sqrt implementation, and made my argument based on that. I was unaware of how strongly people would assume a specific concrete implementation of said 'pseudo-code' and ignore the larger algorithmic concern.

    I apologise.

    Please understand I come from a culture where game developers experiment with scales bigger than the known universe, where we try to optimise data-types (e.g. float16), where we innovate with integer-based data types (e.g. deterministic math, pixel perfect coordinates) and where we occasionally define our own data-types and use non-unity data-types (e.g. integrating third party libraries, making cross platform libraries, working with server code).

    Which is my point too, with the addition that I consider knowing whether or not your code is a valid replacement an important factor (and in this case it is - I'm not arguing that the optimization is wrong in the current Unity implementation).

    So use Math.Sqrt as I did in my test-code. Simple fix.

    Because you are unaware of the Math library built into the .NET framework?

    Code (csharp):
    public static float Magnitude(Vector3 a)
    {
        return (float) Math.Sqrt((double) a.x * (double) a.x + (double) a.y * (double) a.y + (double) a.z * (double) a.z);
    }
     
    Last edited: Mar 24, 2014
  44. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    ::shakes head slowly::

    It's ok npsf, we will still remember you fondly.
     
  45. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    Here's the thing, you said:

    In response to:

    The mentioned pseudo-code was the use of Vector3.Distance and Vector3.sqrMagnitude.

    You said there WAS a difference, but there isn't. You didn't say anything about a custom implementation. You said there was a difference. We showed otherwise: sqrt(float.MaxValue) is still your maximum when using Vector3.magnitude or Vector3.Distance, because you still have to compute that very large number before it can be square rooted.

    You don't get to change the scope of the question after the fact.


    Also 'float error' isn't just small differences between numbers. That's one of many float errors.

    Float errors, as I used the word (it's not actually a technical term... it's a truncation of the various float error names), meant any of the errors that a float can produce: significant value round-off error (which is what you were talking about - and it isn't just for "small" or "close together" values, but large and far-apart ones as well; see, I can twist the semantics of your words too), binary conversion round-off error, and max value overflow (yes, this is included, because floats overflow in a very specific way unique to them).
     
    Last edited: Mar 24, 2014
  46. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    That's the thing, in this thread, the ONLY person spouting off factually incorrect assertions is nps... it's sort of funny and ironic all things considered.

    That said, he edited his post to bold/ital the "I apologize". Which is probably the best anyone's going to get. Deep down, he's embarrassed at his errors here.

    We all learned two things:
    - a - sqrt is faster than most of us thought.
    - b - it's awesome that we don't have to work with npsf (although, everyone has worked with someone like him at some point).
     
  47. npsf3000

    npsf3000

    Joined:
    Sep 19, 2010
    Posts:
    3,830
    I agree, I did not say anything about a 'custom implementation'. Why? Because that's the nature of 'pseudo-code' - it does *not* assume implementation specifics. Straight from Wikipedia:

    ...it is an efficient and environment-independent description of the key principles of an algorithm.

    The specific implementation of Vector3 in the current version of Unity is environment-dependent. Furthermore, please show me where in the pseudo-code it was specified that we are using the Unity implementation of Vector3?

    And if you truly believed that overflow was a possible float error, and one of the possible outcomes of this optimization - then why didn't you pick up on it when I clarified that it was not a significant figure issue like you originally theorised?
     
    Last edited: Mar 24, 2014
  48. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    npsf,
    The code that was posted here does not meet your cited criteria for pseudo-code... the code was sloppy but platform-dependent. Just let it go, dude. You made some flawed assumptions, made a big ass out of yourself, and really annoyed some people with your tone and hubris.

    Time for all of us to move on.
     
    Zaelot likes this.
  49. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,568
    Personally I think the best thing to happen would be for this thread to be locked. As an outside observer, I don't feel the discussion is at all constructive.
     
    PitaBreadGames likes this.
  50. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,598
    +1'd

    toodles, y'all