How to handle truly open world maps with Unity, due to floating point limitations?

Discussion in 'World Building' started by Marcos-Elias, May 9, 2018.

  1. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
@snacktime Thank you for indicating that the "camera relative" method is based on origin shifting. I could not find any documentation or examples that were clear on that question.
     
  2. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Gravesend likes this.
  3. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
@Peter77, ah, yes, thanks for reminding me. It does give quite a reasonable bit of information. To be fair, bgolus did point me to that earlier this year. I do not know when the more detailed explanation was included, but it seems to have been there for at least a couple of years.

Some things not clear:
1. Does it use a threshold-based shifting system like the old Unity wiki? This seems to be implied by: "translates GameObjects and Lights by the negated world space Camera position before any other geometric transformations affect them. It then sets the world space Camera position to 0 and modifies all relevant matrices accordingly." Or,
2. Does it continuously move the World in reverse, like continuous floating origin? This does not appear to be the case, because then there would be no need for a World camera position.
3. How is the physics problem dealt with or controlled (physics objects being affected by the sudden acceleration from the World shift, if 1. applies)?
4. Why is it tied to shader code in a specific pipeline and not available as a general solution for all pipelines? The old wiki code, at least, could be used with any version of Unity, as my code can.
With regard to 3., "before any other geometric transformations affect them" appears to be the type of control that we all need: that is, preventing a shift of the world transform hierarchy from messing up physics. But I expect that is not the case?
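For reference, the old wiki-style threshold shifting boils down to something like the following (a minimal Python sketch of the idea only, not the actual wiki script; the threshold value is made up):

```python
# A minimal sketch of threshold-based origin shifting (the old wiki idea,
# not the actual wiki script; THRESHOLD is a made-up value).
THRESHOLD = 1000.0  # meters from the origin before a shift is triggered

def maybe_shift_origin(player_pos, objects):
    """If the player is beyond THRESHOLD, translate everything by
    -player_pos so the player snaps back to (0, 0, 0)."""
    dist_sq = sum(c * c for c in player_pos)
    if dist_sq <= THRESHOLD * THRESHOLD:
        return player_pos, objects  # still near the origin, no shift
    new_objects = [tuple(o - p for o, p in zip(obj, player_pos))
                   for obj in objects]
    return (0.0, 0.0, 0.0), new_objects
```

The sudden world-space teleport this performs is exactly what my question 3. is about: anything reading absolute positions at that moment sees every object jump at once.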
     
  4. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    It seems this is done to the matrices passed to the GPU, the GameObject transforms aren't modified.
     
    cosmochristo likes this.
  5. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
That makes sense: if subtracting the camera world position is done on the fly on the GPU, then it could be shifting all vertices of objects, or rather meshes, in the vertex shader as they are rendered.
    A very low-level approach.

    Also, your explanation would fit perfectly with the work of Sebastien Lagarde:
    https://blog.unity.com/technology/the-high-definition-render-pipeline-focused-on-visual-quality
     
    Last edited: Dec 14, 2022
  6. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
This gives a somewhat clearer technical picture:
    https://docs.unity3d.com/Packages/com.unity.shadergraph@8.1/manual/Position-Node.html

    It doesn't move anything at all. It's literally just using an offset position in the shaders.

But precision loss still impacts everything that moves. Precision comes in set amounts at specific ranges. And if I remember correctly how the math works, it's basically rounding operations to the local precision step. So as precision decreases, objects can appear to move faster (small displacements round up to a full step), until they stop altogether once the amount moved is not enough to round up to the next representable value.

32k is around where you lose a full 3 decimal places of precision. I would expect a lot of stuff to start breaking around there.
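The step sizes can be checked directly (a quick Python/NumPy sketch, assuming IEEE 754 single precision):

```python
import numpy as np

# Gap between adjacent float32 values (the local "precision step")
# at a few distances from the origin; it doubles with each power of two.
for d in [1.0, 1000.0, 32_768.0, 8_000_000.0]:
    print(d, float(np.spacing(np.float32(d))))

# A displacement below half the local step rounds away entirely,
# so a small per-frame movement stops the object dead:
pos = np.float32(8_000_000.0)
assert pos + np.float32(0.1) == pos  # a 10 cm step is lost at 8,000 km
```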
     
  7. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
    @snacktime yes that link confirms what you said about shaders.

    Can you clarify what you mean here:
I recall that in HDRP there was something about dividing maps/terrains into a grid of 256 sectors; is that what you mean?

32km sounds right for some noticeable problems.
In my experience with the standard pipeline, with single precision, accuracy problems are visible at about 70km, and the accuracy at 8 million is down to about 1cm to half a meter (it varies depending on the type of interaction). Lots of things go wrong at that stage and, as you say, movement may stop working (if the movement displacement drops below the resolution).

Categories of effects I found:
1. Sliding along one axis: when trying to move to a point/target, you slide off in one direction or another.
2. The "travelling tower" problem: when you move towards a position, some objects move away!
3. Distant relative jitter in general, i.e. normal forms of jitter effects, such as flashing, for some objects (the travelling tower is another example of DRJ).
4. Not moving at all: as you mentioned, when the spatial resolution is larger than the movement vector's magnitude.
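Assuming my "8 million" figure is in meters, category 3 can be reproduced numerically (a Python/NumPy sketch with illustrative values):

```python
import numpy as np

# At 8 million meters, float32 quantises positions to a 0.5 m grid, so
# two points whose true separation is 0.1 m can come out 0.5 m apart,
# one source of distant relative jitter.
a = np.float32(8_000_000.2)
b = np.float32(8_000_000.3)
measured_gap = float(b) - float(a)  # 0.5 m instead of the true 0.1 m
```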
     
  8. HIBIKI_entertainment

    HIBIKI_entertainment

    Joined:
    Dec 4, 2018
    Posts:
    595
    @cosmochristo,
Yes, the documentation has a small sub-section on it.

The dev blitz also mentions relative rendering; as should be obvious, rendering, animations and physics have varying degrees of accuracy.

There's still a 32-bit numeric limitation, so you may find that physics still shows inaccuracies if it, or anything else, is based on absolute world positions.

However, this is a significant improvement in usable units over the built-in counterpart.

For projects on a universal scale, sure, you'd have to scale down your units at the project level to facilitate that.

There are a few Shader Graph situations where this is also the case.

In terms of respectful treatment: treat others how you wish to be treated; it goes a very long way indeed.
     
    Last edited: Dec 15, 2022
    frarf likes this.
  9. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
No, I'm talking about how floating-point math works. This page has a pretty good breakdown:

    https://blog.demofox.org/2017/11/21/floating-point-precision/
     
    cosmochristo likes this.
  10. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
Yes, if both the objects and the camera are very far from the origin, their positions are subject to precision loss; subtracting them wouldn't change that, and they would still snap while moving. But the vertices of each model are protected from precision loss, since their transformed coordinates won't be multiplied by massive world-space values, and at least this won't happen:
    float_precision_error.gif
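A quick Python/NumPy sketch of that protection, with illustrative numbers:

```python
import numpy as np

# Two vertices of a detailed mesh, 1 mm apart, on an object sitting at
# x = 5e6 with the camera at the same spot (illustrative numbers).
v0, v1 = 0.100, 0.101
obj_x = cam_x = 5_000_000.0

# Naive: add the huge world position in float32; both vertices collapse
# onto the same representable value.
world0 = np.float32(obj_x) + np.float32(v0)
world1 = np.float32(obj_x) + np.float32(v1)

# Camera-relative: subtract the camera first; the vertices stay distinct.
rel0 = np.float32((obj_x - cam_x) + v0)
rel1 = np.float32((obj_x - cam_x) + v1)
```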
     
    cosmochristo likes this.
  11. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
That's a nice article. I added the following in the comments:
1. It may help to separate precision from accuracy, where accuracy refers to the resolution, or distance between one representable number and the next. For single-precision floats, the precision is always 32 bits.
2. The accuracy, on the other hand, varies with the size of the number represented, long before you reach 6 digits. The resolution at 1.0 (between 0.5 and 1.0) is double that at 2.0, because at 2.0 there is a 1 in the exponent, so the gap between numbers is double that at 1.0.
3. For 3-dimensional space, the largest (worst-case) gap error between representable points is given by a geometric formula (involving a square root) and is larger (about 3.4 times) than the values calculated for 1 dimension. To this one must add the temporal error as well, so one could well expect an even larger multiple than 3.4.
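Point 2. can be checked directly (Python/NumPy, single precision):

```python
import numpy as np

# The gap between adjacent float32 values doubles at each power of two:
# just above 2.0 it is twice the gap just above 1.0.
assert np.spacing(np.float32(2.0)) == 2 * np.spacing(np.float32(1.0))
assert float(np.spacing(np.float32(1.0))) == 2.0 ** -23
```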
     
  12. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    250
Hah, I did Porsches too :)
    PorcheDIffs.jpg

@Neto_Kokku, you are half right: if the camera world coordinate is very large, then the entire object mesh will suffer jitter,
and the relative jitter between objects will vary depending on their actual reference positions from the origin.
     
  13. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
Correct. The mesh itself won't get mangled, but the object position will still jitter, so camera-relative rendering doesn't help that much in the end.

I think it's only good for cases where distances aren't large enough to break physics or cause noticeable movement jitter, but are already large enough to cause visual artifacts on densely detailed meshes, like characters seen up close. Since the vertices' final screen positions go through several multiplications and additions, they are more sensitive to precision loss.
     
    cosmochristo likes this.