
Question: Do transform position values sometimes cease to be a float?

Discussion in 'Scripting' started by Spacewizard-, Oct 11, 2022.

  1. Spacewizard-

    Spacewizard-

    Joined:
    Jun 7, 2019
    Posts:
    74
Hello everyone. While working on something, I noticed that if you write down a transform's position value as a float and then subtract that float from the same position component, the operation does not always give zero. It seems to happen especially when the transform's rotation values are non-zero, but I don't know if that has anything to do with it. (Sorry if this is due to something simple that I missed.)

    I can explain it more clearly like this:
    I entered the same value, as a float, for the value that needs to be subtracted, but it gives a different result.
    section 1.png

but when I convert the values to decimal and then back to float, it gives the correct result. Rather than seeing it as a problem, I'm just wondering about the reason for this.
    section 2.png
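For anyone who can't see the screenshots, the effect is easy to reproduce in plain C#. The values below are made up for illustration and are not taken from the screenshots; the point is only that a float pushed through intermediate math does not always survive exactly:

```csharp
using System;

class FloatRoundoffDemo
{
    static void Main()
    {
        // 2.5 is added to and then subtracted from a large number.
        // At a magnitude of 1e8, neighboring floats are 8 apart,
        // so the 2.5 is simply rounded away.
        float x = 2.5f;
        float roundTripped = x + 1e8f - 1e8f;

        Console.WriteLine(roundTripped == 0f);    // True: the 2.5 was lost
        Console.WriteLine(x - roundTripped == x); // True: subtracting does not give 0
    }
}
```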
     
  2. Brathnann

    Brathnann

    Joined:
    Aug 12, 2014
    Posts:
    7,187
  3. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    The term is: floating point inaccuracy.
    A small bit of electricity has confused the output, which does not distinguish between zero and this microscopic yet detectable flow of electricity.
    So you are getting the exponent float, and converting to decimal and back to float is rounding out that worm of voltage for you, perhaps via entropy.
    So the act of converting to decimal and back decays a value of that unwanted voltage each time it occurs. As the unwanted voltage was unwanted to begin with, and nigh undetectable, each successive reuse of that voltage value means that your CPU clock is exponentially unable to accurately measure the amount of voltage you are recirculating, and thus the value depletes.
    Or at least, that’s my theory.
     
    Spacewizard- likes this.
  4. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    Sorry, it has nothing to do with electricity.

Floating point imprecision is there by design. There is a trade-off between precision and the range of states one can represent with 32 bits, and so it employs a design that is guaranteed to provide both the necessary scales (aka orders of magnitude) and a good enough approximation of the underlying number system it tries to represent. You can only fit so much in 32 bits, so we must be mindful of that, especially if we cross-compare two values with varying orders of magnitude, because some information loss must occur.

    To put it shortly. Think of it as JPEG of numbers.

It's a complex topic, but for anyone wanting to make games with 32-bit IEEE 754 floating point values, it's essential to understand there is no such thing as "this equals that". With all other exact data types that's ok, just not with floating point.

Regarding the E notation, also known as scientific notation: when you get an E in a C# value, it's just a human-readable string in the format "nEp" that tells you that the number n is actually multiplied by 10^p.

    So if you see -3.05E-5, that means -3.05x10^-5, which is equal to -0.0000305.
    In other words, almost zero.
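The two spellings are just different ways of writing the same number, which is easy to check in C#:

```csharp
using System;

class ENotationDemo
{
    static void Main()
    {
        // "nEp" means n x 10^p; both literals parse to the exact same float
        float a = -3.05E-5f;
        float b = -0.0000305f;

        Console.WriteLine(a == b); // True
    }
}
```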

    To test whether such numbers are zero or not, we all employ a very simple trick of, again, checking if it's good enough.
    Code (csharp):
    bool isAlmostZero = myValue < someGoodEnoughThreshold; // wrong
    But what if the value was near enough but was negative? That's why we take the absolute value
    Code (csharp):
    bool isAlmostZero = Mathf.Abs(myValue) < someGoodEnoughThreshold; // correct
    In this case someGoodEnoughThreshold is something that you can set up for yourself. It might be 1 or 0.1. In practice, you want to use small values that are useful to you, and for this you don't spam 0.00001 but you write, again, an E number, like so
    Code (csharp):
    var epsilon = 1E-6f; // epsilon is a Greek letter that is typically used for such small numbers
    bool isAlmostZero = Mathf.Abs(myValue) < epsilon;
    Technically and historically, the name epsilon is because of mathematical analysis in approximation errors.
     
    Last edited: Oct 11, 2022
    Yoreki, Ryiah, Spacewizard- and 2 others like this.
  5. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    To extend this further, if you want to check whether two floating point numbers are approximate to each other (on some arbitrary order of magnitude), you can extrapolate from the previous test and come up with the following one
    Code (csharp):
    bool isAlmostZero = Mathf.Abs(myValue - 0f) < someGoodEnoughThreshold;
    Doesn't make much sense, until you notice that you can swap the zero with anything else
    Code (csharp):
    bool isAlmostSame = Mathf.Abs(myValue - myOtherValue) < someGoodEnoughThreshold;
    You can now pack this into an extension function
    Code (csharp):
    static public class MyExtensions {

      static public bool IsAlmost(this float n, float value, float epsilon = 1E-6f) => Mathf.Abs(n - value) < epsilon;
      static public bool IsZero(this float n, float epsilon = 1E-6f) => Mathf.Abs(n) < epsilon;

    }
    Which you can now use like this
    Code (csharp):
    Debug.Log(myValue.IsZero());
    Debug.Log(myValue.IsAlmost(12.5f));
    You can also use Mathf.Approximately which is implemented slightly differently, and tries to be more accurate with what epsilon truly means in this number system.

    Here's the description of what it does.
    Code (csharp):
    // Compares two floating point values if they are similar.
    public static bool Approximately(float a, float b) {
      // If a or b is zero, compare that the other is less or equal to epsilon.
      // If neither a or b are 0, then find an epsilon that is good for
      // comparing numbers at the maximum magnitude of a and b.
      // Floating points have about 7 significant digits, so
      // 1.000001f can be represented while 1.0000001f is rounded to zero,
      // thus we could use an epsilon of 0.000001f for comparing values close to 1.
      // We multiply this epsilon by the biggest magnitude of a and b.
      return Abs(b - a) < Max(0.000001f * Max(Abs(a), Abs(b)), Epsilon * 8);
    }
    this Epsilon constant is defined as
    Code (csharp):
    public static readonly float Epsilon =
        UnityEngineInternal.MathfInternal.IsFlushToZeroEnabled ? UnityEngineInternal.MathfInternal.FloatMinNormal
        : UnityEngineInternal.MathfInternal.FloatMinDenormal;
Which is in turn one of these two
    Code (csharp):
    public static volatile float FloatMinNormal = 1.17549435E-38f;
    public static volatile float FloatMinDenormal = Single.Epsilon;
    Single.Epsilon
     
    Yoreki, Ryiah and Spacewizard- like this.
  6. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    The confusion comes from the fact there are two components to the problem

Not all floating point arithmetic is inaccurate. And sometimes the inaccuracies are not due to the length of the number and the way it could be stored in binary. For example, 0 + 0.1 resulting in 0.099999999. This has nothing to do with the length of the number stored in binary. We had a value with a length of two digits, added to a value of zero with a length of one digit, and output a value of X digits, inaccurate from the value we wanted.

    How else can this be explained?

There is a difference between asking for a value that is below the range we can process, and asking for a value that is within the range we can process. And the two resulting inaccuracies are independent of one another.
     
    Last edited: Oct 11, 2022
    Spacewizard- likes this.
  7. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,697
    That's probably one of the best floating point analogies I've ever read.

    I think I shall add that quote (attributed to you) to my floating point blurb!
     
    Spacewizard- and orionsyndrome like this.
  8. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    https://learn.microsoft.com/en-us/office/troubleshoot/access/floating-calculations-info

    inaccuracy caused by rounding
    Inaccuracy caused by precision. E.g. 1: are we talking about 1, or are we talking about some infinite value immeasurably close to 1? Because the CPU considers these things the same, due to rounding. But if I feed it a 1, what 1 does it use? Does it use the integer 1, which it knows it can represent, or does it decide that it is not 1 but slightly less, yet still considered 1? Well, it's referring to an uncertainty in how to describe the quantity of energy.

Maybe I am crazy; it just seems to make complete sense to me.

Nevertheless, the conclusion is the same whatever the cause. It is currently beyond human ability to engineer something that is as perfect as we expect it should be. As perfect as the brain.
     
    Last edited: Oct 11, 2022
    Spacewizard- likes this.
  9. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    Last edited: Oct 11, 2022
  10. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,993
No, this is already the completely wrong assumption :) You did not have a value with two digits. The value 0.1 in decimal cannot be represented as a binary floating point number at all, as 0.1 has an infinite binary expansion and would require an infinite number of digits to represent that decimal value correctly.

    In binary that value would be

    1.1001100110011001100110011..... x 2^-4

As you can see, the number has an infinite recurring binary expansion and the "0011" repeats infinitely. If you enter the value here, you will notice that the least significant digit of the mantissa is rounded up, since the following digits would have been "11". Therefore the actual represented decimal value is
    0.100000001490116119384765625
    The next smaller number is
    0.0999999940395355224609375
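You can see both of these from C# directly; a small sketch that inspects the raw bits and then widens to double to reveal the value 0.1f actually stores:

```csharp
using System;
using System.Globalization;

class PointOneDemo
{
    static void Main()
    {
        float f = 0.1f;

        // Raw bits: sign 0, biased exponent 123 (i.e. 2^-4),
        // and a mantissa whose last bit was rounded up, hence the final D.
        Console.WriteLine(BitConverter.SingleToInt32Bits(f).ToString("X8")); // 3DCCCCCD

        // Widening to double (exact) shows the decimal value 0.1f really holds
        Console.WriteLine(((double)f).ToString("G17", CultureInfo.InvariantCulture)); // 0.10000000149011612
    }
}
```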


Your example is also wrong, as adding a perfect 0 should always round to the same value. The IEEE 754 format defines pretty clear rounding rules. However, the specification is not clear on all details, so different implementations can yield slightly different results, but only in specific cases. On one and the same machine, though, the same machine code with the same input will yield the same result. Different builds (especially when code has been changed) may result in different optimisations, which could produce slightly different values, but that is still consistent.

    You may want to read this article.
     
    Nad_B, Spacewizard- and orionsyndrome like this.
  11. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    DBBCE6F4-0C39-4FBB-9231-B9E9D464D7FF.png

    https://en.m.wikipedia.org/wiki/CMOS


    FLOATING INPUTS
    A CMOS digital input has a very high impedance. Consequently, when it’s not driven it will float, creating an undetermined input logic level. More importantly, the input may not stay that way and, in fact, it likely will not. For instance, reading the input in software will show a logical high or low, but waving a hand above the circuit board can be enough to cause the input levels to change.

Over time, however, the floating input tends to accumulate a charge and float toward the logic-level change-over point. When it reaches that point, it causes both the high and low MOSFETs to be partially on, resulting in shoot-through current.

When the input buffer output switches state, the floating input can lose charge, causing the circuit to switch back. This keeps the charge hovering around the change-over point and makes the floating input very susceptible to noise, especially from signals switching on adjacent pins. Engineers need to be especially careful of a floating programming-control or reset pin, where a nearby toggling line may generate enough noise to make the microcontroller repeatedly drop in and out of programming or reset mode.

    A floating input hovering around the change-over point, and thus causing shoot-through current, will cause the CMOS device to exhibit higher than expected power draw. This may not be especially noticeable when the device is running. However, it can be significant for devices such as microcontrollers in their low-power state. In addition, the input’s logic level may change at any time and trigger unexpected responses from the device.

    AVOID FLOATING INPUTS
There are several ways to avoid floating inputs. Many microcontrollers power up their configurable I/O as inputs because the desired output level isn't initially known. For such devices, simply configure unused pins as outputs and drive them high or low. Make certain that this is done under all paths through the code, though, or there may be one mode where the pin may still be floating.
     
    Spacewizard- likes this.
  12. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,993
Uhm, I'm a trained electro-mechanic, so what you said is not wrong, but it has absolutely no relevance here or in computer systems. PCs do not have any open / floating inputs. Digital circuits are 100% reliable; if they are not, they are broken and need to be replaced. Within a PC you would never get a wrong / flipped bit unless extreme electrical noise is introduced or, as I just said, the hardware is broken. Of course the story is different for transmission lines, which can much more easily pick up noise that may introduce errors in the transmission. However, that's why we have invented things like the Hamming code, checksums and re-transmission. Your essay about CMOS is completely off-topic here :)
     
  13. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    @AnimalMan
    you are hard-core mixing up broad electrical engineering with digital electronics. yes the two are connected, BUT

transistors are fashioned in such a way as to behave deterministically. the voltage thresholds are made in a way to simply be unambiguously binary: ZERO or ONE. there is no random flickering, no marginal voltage, and certainly no inaccuracy because of it.

    this is why we treat such machines as discrete and DIGITAL, this is why we use this word, as opposed to continuous and analog, which are fine systems on their own, but DIGITAL computers simply aren't analog and do not work with continuous signals, even though in reality the underlying signals are continuous.

    if something happens to a transistor so that it loses this 'unambiguity', its state will invalidate in such a manner that this will surely lead to an error backpropagation, and subsequently to a system halt, mostly to prevent further damage. this largely depends on how exactly the software is made, i.e. space program computers have a lot of redundancies in place, as well as radiation shielding to prevent this, but for us mere mortals, literally a solar flare or even a cosmic ray can blue screen your CPU in the midst of some important work. it's just these things occur very rarely.

    thanks to Windows we even have jargon for unpredictable system failures: BSOD = blue screen of death, meaning unrecoverable system-wide error ending in a total halt of the system, requiring restart.

    sometimes this is due to a consistent hardware failure, or invalid BIOS configuration (which leads to a similar outcome), but not always.

    https://en.wikipedia.org/wiki/Electronic_voting_in_Belgium#Reported_problems
    just one example among many.

    again, read how IEEE 754 specification is designed, why it is designed in such a way, and how we actually store continuous floating point values on a digital machine, using discrete logic gates.
     
    Bunny83 and tomfulghum like this.
  14. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    Bunny83 and Kurt-Dekker like this.
  15. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,697
    "There are 10 types of people in this world: those who understand binary, and those who don't."
     
    Spacewizard- and orionsyndrome like this.
  16. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
  17. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    I like that you took the time to read that article. Good job for a chimp!
     
  18. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    No

    I’m shy
    Don’t make me blush

    I’m blushing

    i am just a mere servant of the great CMOS
     
  19. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    aren't we all
     
    AnimalMan likes this.
  20. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    My main skepticism was that we tell the computer what the value is, and then it doesn't know what value we told it it was o_O after it goes through a little circuit. It effectively loses the context of the value we asked it to create and then use for us. That's why, deep down, I suspect there are forces at play that still couldn't be resolved by a 64-bit system, or, idk, a 128-bit or 256-bit system. I still suspect that however large we account for it, until we use the correct material to make the components and gates we will always face an anomalous inaccuracy. It would just take longer and longer to round that number the more bits we give it. Theoretically.

    But the problem is not the problem anyhow.

    The solutions exist to repair it, and that's what's important for the user.
     
    Last edited: Oct 12, 2022
  21. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
It reminds me of the problem of creating a game with infinite space, where we fly a spaceship around the infinite void. We don't have to get very far from vector zero before every calculation we apply is noticeably inaccurate.
     
  22. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,697
    Those forces are that a base-2 numbering system cannot represent many base-10 fractional numbers, as Bunny points out above. That's just ... numbers.

It's the same as how a decimal number cannot represent 1/3rd... the closest you get is 0.33333333... forever. Is that equal to 1/3? NOPE! That is 33333333 / 100000000, which is different from 1/3 and will never be equal to it.

    That's EXACTLY the same process going on with binary not representing 0.1. Doesn't matter if you have 64bit, 128bit, 256bit or ten million bits... you cannot represent 0.1 in a finite size binary representation.
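A quick way to watch this happen in single precision (the same float type Unity uses); ten copies of 0.1f do not sum to 1:

```csharp
using System;
using System.Globalization;

class TenTenthsDemo
{
    static void Main()
    {
        // 0.1f really stores 0.100000001490...; the errors accumulate
        float sum = 0f;
        for (int i = 0; i < 10; i++)
            sum += 0.1f;

        Console.WriteLine(sum == 1f);                                   // False
        Console.WriteLine(sum.ToString("G9", CultureInfo.InvariantCulture)); // 1.00000012
        Console.WriteLine(MathF.Abs(sum - 1f) < 1e-6f);                 // True: "almost" 1
    }
}
```

The last line is exactly the epsilon comparison from earlier in the thread: the sum fails `==` but passes the good-enough test.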
     
    Bunny83 and orionsyndrome like this.
  23. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
Well, you got that all wrong. In the beginning, we just had a computer that was this giant board of electrical switches. How on Earth do you even tell it what a number is? What values? Define a value.

    Because of Leibniz's work (et al) we figured out that the switches can be used to encode a particularly interesting signal, one that can be modulated in such a discrete manner that it leads to generally acceptable and intuitive results.

    If this important ground work wasn't done so meticulously on a mathematical basis, we'd likely do nothing with this technology, other than to switch something, but then this incredible miniaturization to micro and then nanoscales would probably never happen. Of course someone had to discover a switch without moving parts, but that's a different story altogether.

Anyway, once we got there, and the thing knew how to tell you what 2 + 2 is, somebody was like: ok, this is great, but how can we divide 3 by 2? And literally everybody was scratching their heads. This is not a simple question, and it has so many answers.

But then you got it all backward, coming into it from the perspective of a mildly-amused user of the 21st century. It took several generations and numerous breakthroughs until things consolidated on how floating point works... in a finite binary notation! Underneath everything, your computer doesn't know anything about any values; it's just switching, switching, switching, to death...

Someone made a standard, declaring how we are supposed to read, write, and logically treat such mathematical (and highly abstract) constructs on a digital machine. There are other standards, but this one somehow won, because idk economy, inertia, habit, reasons, and got integrated so low in the hardware with the advent of the so-called FPU that we now take it for granted. I still remember a time when I had a math co-processor installed separately.

    Sure you can implement your own writing system, and represent values differently. And people do! For various reasons. Most of it still has to rely on hardware for speed, but it doesn't necessarily have to be according to IEEE 754 specs.

    So to return to your quote, we don't "give" the computer a value. We define a value through our own systems we've embedded inside of it. It has no concept of value, it gives back whatever we wrote in it in the first place. I don't know how you got it all backward, computers aren't magic boxes.
     
    Bunny83 likes this.
  24. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    So the more bits we have, the more inaccurate it can be, and the longer it takes to resolve the inaccuracy.

    When we write a value of 0.2, I have given it a value in that context. I have specified the parameter of the binary value. And then maybe I have gone on to use that value. So if I say, okay, float A = 0.2, then A is always 0.2 until it is used. So we write X position 0.2 in the Inspector, and so shall it remain: 0.2. But if I ask for X position to equal a + b = 0.2, we cannot be sure it will be 0.2.
    But in terms of vector zero and distance from zero, there we are certainly exceeding the bit limit. Surely. Yet even in an instance that has nothing to do with the length of the value, an inaccuracy is still measurable.

    So on the breadboard lightbulb computer, it doesn't matter how much electricity made it into the bulb, as long as it is enough to illuminate the bulb. But on the computer, and to the mathematician, it does matter, because switch on / switch off isn't always an accurate measure, or the result we want.
     
  25. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
You need to understand that these are all writing systems, whether they're computer-based or not. Even our own writing system follows some standard. We use both the Cyrillic and Latin alphabets in my native language. I can read and write both. I can also think of an integer in the decimal, binary, octal and hexadecimal systems. But if I write, for example, 754, what does that mean?

Don't be too quick about it; we're just used to decimal notation. What if that value I wrote isn't decimal, but octal? Then it means something else, yet the symbols are the same.

    Separate symbols from words, and words from meaning. Now reconnect all three with arrows. These arrows are all just a convention. There is no god given meaning behind symbols, to and fro, and multiple people can have a multitude of words to describe the same concept.

    If you insert a concept of 'beauty' into Chinese does it ever come back the same?
    In a written system, this is what you get back 美丽
    How on Earth do I even verify if it's correct? If it's the same concept?

If I wrote ajcvhdjfkhew to you, how would you know that this wasn't some language you simply don't understand? Why is it hard, then, to accept that computers, deep down, simply use a language you actually know very little about, grounded in mathematics? This can change only if you learn more about it. But until then, stop having these magical epiphanies, because computers do not produce meaning on their own.
     
  26. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    I hear what you are saying, Kurt, and I like it: I get it. The binary can never represent the value exactly, yet the fact is that sometimes it does correctly represent the value.

    So some behind-the-scenes rounding goes on, which it does in float, and this rounding can itself be terribly inaccurate; that's why our values sometimes round perfectly, and it is just instances like the original post where it has failed to round, or doesn't represent the value. But truth be told, if you operate within the bit range, then 95% of floats will be accurate to that last decimal place, to that 9.00001. Now picture a system where the float had one bit less: on the surface it seems a more accurate system.

    Could it be that the more accurate one tries to be, the more inaccurate they become? That's a philosophical question for you.
     
  27. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    So if we want a computer that can take us to Mars and back, and we give this computer eyes and ears, so to speak, and basic self-preservation features, we generally want that computer to be as accurate as possible.

    The floating point system is built for speed, not accuracy. It is designed to approximate, at speed, the general result we want, and if that result is inaccurate then we round it ourselves.

    But now, a computer built to travel to Mars, or Jupiter, and back: it is not designed for speed. It's a slow ride.
    The CPU that fires a Gatling gun to intercept incoming missiles, and increments its machinery, rotors and gears, in order to respond quickly to an incoming missile: is it using floats and decimals, to a finite accuracy? Or is it using ints and fewer bits for the parameters of its gears?

    It is a funny one to think about.
     
  28. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    Think of it like a Gatling gun firing threads of hair, beard shavings, trying to hit a flea in mid air.
    What is the best way to obtain that accuracy in tracking the target?

    I think the whole reason these things catch us out is that the majority of the time they return the correct values. And so we grow comfortable, and that's when it strikes with a surprise inaccuracy. For a specific measure or volume, or after a specific calculation, such as trying to obtain zero.
     
    Last edited: Oct 12, 2022
  29. orionsyndrome

    orionsyndrome

    Joined:
    May 4, 2014
    Posts:
    3,108
    Exactly. Ever heard of transcendental numbers?
    Something like that, sure.
It's definitely using a combination of everything, but mostly the most reliable systems known to man. These are very expensive systems; you don't want the whole mission to fail because some bit was misplaced. Definitely no floating point there. It's all fixed point with a lot of bits available, and then another computer running the same thing in parallel for redundancy, and then an analog system to back it all up. I am not joking.
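To make "fixed point" concrete, here is a toy 16.16 sketch (not actual flight software, just the idea: scale everything by a constant and do integer math, so addition never rounds):

```csharp
using System;

class FixedPointDemo
{
    // 16.16 fixed point: the real value times 65536, kept in a long
    static long ToFixed(double v) => (long)Math.Round(v * 65536.0);
    static double FromFixed(long f) => f / 65536.0;

    static void Main()
    {
        long a = ToFixed(1.5);  // 98304
        long b = ToFixed(2.25); // 147456

        // integer addition is exact; no drifting epsilons
        Console.WriteLine(FromFixed(a + b) == 3.75); // True
    }
}
```

The price is a fixed range and a fixed resolution (here 1/65536), instead of floating point's dynamic trade-off.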

Let's return to transcendental numbers. There is no human writing, or convention, or means, digital or by hand, to express these values exactly, in any way. To illustrate the point Bunny made earlier about the binary system being unable to hold 0.1 in its entirety: here's one such value, but this one is inexpressible in any language, in any number system, binary or decimal; it can't be rational, it can't be finite, it. does. not. behave.

    It's π
That's what we decided to call it. It's just this one symbol encapsulating an entire endless train of value sausage. And no matter how many digits, or how much ink and paper you can muster, decimal or binary, you cannot hold it all together; there is always yet another universe of digits pouring in. If our lives depended on this, we would be dead.

    But you know what, when you're making games, I've discovered that 6.283185 is perfectly good enough. That's all you need. It's even better than what Unity offers in Shader Graph. In there it's just 1.0/6.28 in one of the nodes. Of course I had to facepalm.

NO, it doesn't work when it's that crazy short. You can't just write 6 and leave it at that. But can you see what we're doing here? How we deliberately sacrifice precision for just a taste of the meaning? (In a way we do that all the time; just observe this smiley :eek: or idk lol ikr?) That's what floating point does, but in a way that is dynamic.

    Imagine a simple scenario, if we were children, computers were decimal cardboard machines, and memory was like little pieces of paper. Imagine you had only 10 pieces of paper to encapsulate some information, and someone told you can write only Arabic digits, but you had to contain the value of 44 / 7. That's not transcendental, but rational, see, it's a ratio. What would you do?

    Well, you can try writing 6285714285 but that's not it. It's nine orders of magnitude bigger than 44 / 7. You can write just 6, but that's not it either; it's much closer, but you've sacrificed almost a whole third. If you pay close attention, we can live without that last 5 at the end of it, and use that slot to record a floating point position. In other words, if you imagine a decimal point "floating" between any two digits, you can place it arbitrarily by specifying where it should be. For example, if the decimal point sits to the right of the last digit, we write 0, and for every place it moves to the left we increment that last digit by 1.

    We get the following progression
    628571428. 0
    62857142.8 1
    6285714.28 2
    628571.428 3
    62857.1428 4
    6285.71428 5
    628.571428 6
    62.8571428 7
    6.28571428 8

    Voila! We have designed a floating point system that can represent arbitrary decimal values. The value of 44 / 7 can be encoded as 6285714288, as written on 10 pieces of paper, and we can say that the error is sufficiently small for our intended use of this value. If everyone agreed on this new standard, we could even do advanced mathematics. It's all a lie, and intentionally so.
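For fun, the paper scheme can be written out as a toy decoder in C#: nine mantissa digits followed by one digit giving the number of decimal places. The example digits below are the ten paper digits for 2π discussed earlier in the thread, chosen for illustration:

```csharp
using System;
using System.Globalization;

class PaperFloatDemo
{
    // "6283185308" -> mantissa 628318530, point shifted 8 places -> 6.2831853
    static double DecodePaperFloat(string tenDigits)
    {
        long mantissa = long.Parse(tenDigits.Substring(0, 9));
        int shift = tenDigits[9] - '0'; // last digit: how many decimal places
        return mantissa / Math.Pow(10.0, shift);
    }

    static void Main()
    {
        double v = DecodePaperFloat("6283185308");
        Console.WriteLine(v.ToString(CultureInfo.InvariantCulture)); // 6.2831853
        Console.WriteLine(Math.Abs(v - 2.0 * Math.PI)); // tiny, but not zero
    }
}
```

Just like real floating point, the error is bounded by the number of mantissa digits we agreed to keep.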
     
    Last edited: Oct 12, 2022
    AnimalMan likes this.
  30. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,993
Now you're mixing in completely different topics which have a completely different goal. The real world often requires even less precision, because when you interact with the real world you have to rely on input readings which are already imprecise, and the actuators such a system uses to manipulate things in the world have even more inaccuracies. Such tasks are done using a control feedback loop that adjusts its actions based on the current readings and the current error term.

Just as an example, the Apollo Guidance Computer only had integer math, and everything was done in either integers or fixed point math. It was a 15-bit system (16 if you count the parity bit for error checking), so the representable values are quite limited. In real-world engineering you don't care about being as precise as possible, but just as precise as necessary (with a reasonable margin) to get the job done. If you have some time, I can recommend this Apollo Guidance Computer talk to get a rough idea of what we're talking about.

Sorry, but the main issue in such a system is again the physical limitation of the steering of the Gatling, the precision of the firing of the Gatling, as well as the imprecision of the reading / sensing. Those are the hard limits of your error. You have to choose a number system that is just precise enough to represent the numbers with better accuracy than the expected accuracy of the systems you try to control. Just knowing the best direction to fire, down to the 1000th decimal place, doesn't help at all if the physical gun can't shoot that accurately.
     
    orionsyndrome and AnimalMan like this.
  31. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    Topic must end for there is real work to be done!


    Pi can be explained via 3.14

    but Pi may also be explained by its width and height, if Pi is to be drawn on a grid. I guess the floating point is a quick way to approximate something and then apply it to the grid. But if you go ahead and make a circle pixel brush, you'll see that the transcendental float value of pi is buggy at locating and colouring the correct pixel offsets.
    The discussion is deep; know that I am not saying you guys are not correct. I am just saying that floating point can be measured and a result can be produced to explain the result, but at the end of the day, although a transcendental number such as pi may draw perfect points of a circle that can then be connected by a line, those points cannot locate the squares of a grid corresponding to their own value. That would be easier to obtain using integer dimensions and a basic rule of scale.
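    For what it's worth, the integer-grid circle hinted at here is a solved problem: the classic midpoint circle algorithm plots a pixel circle using only integer arithmetic, with no pi and no floats involved. A minimal sketch (the function name is mine):

    ```python
    def midpoint_circle(r):
        """All pixels of a circle of radius r around (0, 0), computed with
        integer arithmetic only -- no pi, no floats anywhere."""
        points = set()
        x, y = r, 0
        d = 1 - r                      # integer decision variable
        while y <= x:
            # mirror the computed octant point into all eight octants
            for px, py in ((x, y), (y, x)):
                points |= {(px, py), (-px, py), (px, -py), (-px, -py)}
            y += 1
            if d <= 0:
                d += 2 * y + 1         # midpoint inside: stay on this column
            else:
                x -= 1
                d += 2 * (y - x) + 1   # midpoint outside: step one column in
        return points

    # one quadrant of a radius-3 circle
    print(sorted(p for p in midpoint_circle(3) if p[0] >= 0 and p[1] >= 0))
    ```

    Every plotted pixel lies within half a pixel of the true circle, which is the best any grid can do regardless of how many digits of pi you carry.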

    This is where I get off. Bits, bobs and bytes. Infinite numbers. 256-bit systems. I couldn't waste time theorising a new system again. And I may indeed get in trouble if I continue, as I think some people find my stubbornness offensive.

    I was not saying you guys are wrong; I was saying maybe there is more to it than meets the eye. Something that a larger volume of bits would still never repair.

    such as the loss of context in a value like 0.2 after its use.

    but I will add, before I will forever stop speaking

    A circle in real life, drawn with a compass and pencil, certainly does not tend towards infinity. The degrees do indeed round off at 360. Pi is just a cheap way to approximate it and to explain it using decimal. If it did tend towards infinity, I am certain each circle we draw would cause a rip in the space-time continuum, and a black hole would emerge from the paper where the infinite decimal imploded.
     
    Last edited: Oct 12, 2022
  32. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    If somebody makes a Discord where we infinitely debate these topics until the cows come home, I'd love to join it.

    Have a good day guys
     
  33. MelvMay

    MelvMay

    Unity Technologies

    Joined:
    May 24, 2013
    Posts:
    11,459
    Sorry to be a party pooper but please, be aware that this is a massive thread derailment/hijack.

    I would politely suggest using the General Discussion forum and not other people's threads. ;)
     
  34. Spacewizard-

    Spacewizard-

    Joined:
    Jun 7, 2019
    Posts:
    74
    No problem, I read the discussion with amazement and tried to understand as much as possible, even though I didn't fully understand it :D. My comprehension of writing full of technical terms may be a little slow, as my native language is not English; I only realized that now. (By the way, I wasn't very good at math in high school or college anyway. I think I understand better when something is expressed with symbols or diagrams... I don't know? I've been questioning myself about this just now, which was a pretty unnecessary detail. The important thing is to understand something from writing; everyone else already understands from schemas... I'm interrogating the nature of reality at the moment.)
    Lastly, many thanks to all who replied. It's quite confusing, but I will read the given references carefully, though of course it is difficult to understand all of this quickly.
     
  35. halley

    halley

    Joined:
    Aug 26, 2013
    Posts:
    2,433
    I wish some people here would cut out the weird pseudo-science mysticism of electrons, crystals and bad juju. Oh, there must be forces at play that can't be explained... horse pucky.

    It has to do with a very crisply defined standard for how to encode numbers into a limited number of binary digits. The Institute of Electrical and Electronics Engineers, or IEEE for short, published this standard in 1985 (though other standards preceded it).

    https://www.h-schmidt.net/FloatConverter/IEEE754.html

    Play with this online calculator.
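    As a quick illustration of what that converter shows, here is a small sketch that extracts the IEEE 754 single-precision bit pattern of 0.2 and demonstrates that what gets stored is only the nearest representable value (the helper name is mine):

    ```python
    import struct

    def float32_bits(x):
        """IEEE 754 single-precision bit pattern of x: sign, exponent, mantissa."""
        (n,) = struct.unpack('>I', struct.pack('>f', x))
        b = f'{n:032b}'
        return b[0], b[1:9], b[9:]

    # 0.2 has an infinitely repeating binary fraction (0.00110011...),
    # so the 23-bit mantissa has to cut it off and round
    print(float32_bits(0.2))

    # round-tripping through float32 shows the value actually stored
    stored = struct.unpack('>f', struct.pack('>f', 0.2))[0]
    print(stored == 0.2)    # False: the stored value only approximates 0.2
    ```

    No mysticism involved: the rounding happens at a precisely specified bit position, which is exactly what the linked converter lets you poke at interactively.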
     
    Kurt-Dekker and Nad_B like this.
  36. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    No, floating point inaccuracy is rounding errors. That's why it exists: because it's using too much space. I don't know why we rez this old thread otherwise, to make a point that doesn't need to be made. But since 99% of the information I get back here is useless, I go ahead and ignore it.
     
  37. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    Not only is it rounding errors, but it is the inability to equate what you guys claim to be binary digits.
    So I have 8x6 cups of water; some are full. I have a matching 8x6 produced via a math function. They cannot equate or ever match directly.

    So if it all works on a crystal system as you guys claim, then you'd never have the issue. It's quite simple. But the reality is the issue exists in the equivalence of these values. Remember, a binary digit is how it is stored. It is not how it passes through the CPU chip. It doesn't pass through in a recognisable form. An equivalence is created, used for the mathematics, and rolled back into storage.
     
  38. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    7,861
    (attached image: upload_2022-12-17_22-35-45.jpeg)
     
  39. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164

    Well hey spino, if you're right then go ahead: never encounter a floating point error. Show little or no care about the discussion if it does not affect your life. Could you?
    You couldn't, is the fact. No. You couldn't. Because you know it matters. And I don't know why this dude asking for Unreal Engine's automatic origin sorter turns into Homo erectus Sparta vs Troy.

    Go ahead, never encounter a floating point error. Buy the next level of computer the manufacturer is selling you. And justify its problems.
     
  40. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    7,861
    Dunno how you interpreted my dumb meme as downplaying the existence of floating point inaccuracy. I never have, never will, and I fully understand why it exists and why it happens.

    I was more so poking at you for still trying to argue your totally wrong understanding of why it happens with industry experts who have decades more experience than you.
     
  41. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
    Well, half a decade more experience depends on what your rate of learning is. If I have 2x the ability to biologically compute and reform logic, then 5 years to me is a decade to you.
     
  42. AnimalMan

    AnimalMan

    Joined:
    Apr 1, 2018
    Posts:
    1,164
  43. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    7,861
    That has to be the most autistic thing I've read.

    Ever.

    By a long shot.
     
    FerdowsurAsif and Spy-Master like this.