
Floating Point Errors and Large Scale Worlds

Discussion in 'World Building' started by Phelan-Simpson, Apr 15, 2018.

  1. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
I have measured the moving collider performance issue raised by @Marcos-Elias, @ristophonics, and @razzraziel. With 10k collider objects (boxes with a collider component), continuously transforming the group by transforming the scene root produces a significant physics load in the profiler. As in my latest paper, I ran the test on my Core i5 laptop, with very little geometry visible, and used the mouse keys to continually move the scene root.

In the first image attached, the simple visible geometry can be seen on the left of the screen, with the profiler window on the right.

    CollidersActiveHierarchy-30pc.jpg

You need to tell Unity not to perform unnecessary processing when the scene root is moved. This can be done by turning the active hierarchy above the colliders off before each transform and back on after it:
    Code (CSharp):
nucleonsGroup.SetActive(false);
// Move the scene to the new position by changing scene parent object transform.
transform.Translate(rotated_transform);
nucleonsGroup.SetActive(true);
    The next image shows what happens when I do this.


The profiler measurements show that the processing devoted to collider movement is now much smaller than the proportion of CPU devoted to rendering a few simple objects, even while continuously transforming 10,000 colliders. Therefore, moving colliders does not need to cause a significant performance load.
     

    Last edited: Sep 8, 2019
  2. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    5,517
Did you profile it in a build or in the editor?

    I'm asking because:
     
  3. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
@Peter77 Good question. I am running from the editor play button because, even though I have Development Build ticked and Autoconnect Profiler ticked, no profiler is displayed when I run it as an app.

I run it full screen, so the rendering of editor windows that superpig mentioned is not an issue.
I also turn vsync off, so there are no CPU spikes from that.
There are also no memory allocations during the run, after the initial creation of the 10k colliders.
There may be other interference from the editor but, on average, the relative load comparisons are still valid.

The documentation on profiling with a development build talks about running the profiler on another machine, which would be ideal, but I do not have a network setup.

If you can tell me how to get the profiler up on the same machine when running from a built app, then I will try it.
     
  4. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    5,517
    That should be no different:
    • Start development build
    • Open Profiler window
    • Select Player in Profiler drop-down
    • Press record in Profiler window
     
  5. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
It is on my system: after starting it, there is no menu option to open the profiler. Is there a keyboard shortcut?
There are no menu options for anything. macOS Mojave 10.14.6, Unity 2018.4.7f1.
     
  6. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    5,517
You're saying that when you start the build, the "Window > Analysis > Profiler" menu item disappears from the Unity Editor main menu? :eek:
     
  7. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
I mean it is not available in the build that I run. If I need to run it from the editor (like I have been doing) then ... do you mean I should close the editor after starting the profiler, and it will connect with the build when I run that?
     
  8. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
@Peter77: I worked it out. Here are the profiler measurements from two dev builds: the first with an active hierarchy during moves and the second with it inactive.
Profile with 10k active colliders under the root node while moving (the hierarchy):
    HierarchyActive2-moving40pc.jpg

    Profile for inactive hierarchy:

    HierarchyInactiveUnity2018.7-40pc.jpg

So the difference is 60+ fps (active hierarchy) versus 200+ fps (inactive hierarchy).

For these tests, I used 1280x800 with rendering quality set to very low, and the same version of Unity as before. The only apps running were the Unity editor with the profiler and the game from the dev build.
     

    buFFalo94 and Peter77 like this.
  9. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
    I put an example video online here:


The relevant scripts shown attached in the video are:
Code (CSharp):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

/*
 * Performs simple floating origin player movement by reverse transforming the scene.
 * Floating origin movement only happens if there is
 * a change in navigation input and it is above a minimum threshold.
 *
 * Here's an online example testing 10k colliders: https://youtu.be/DsWLQFnRdLo
 * which shows how the PlayerMove and PlayerView are attached.
 *
 * Assumptions:
 * Player object has camera attached, it is positioned at the origin, and
 * tagged "playerCapsule".
 * This script is attached to the scene parent object.
 */
public class PlayerMove : MonoBehaviour
{
    // Minimum player collision detection distance
    private const float COLLISION_DISTANCE = 2f;
    // Multiplier for when bouncing back from collisions
    private const float COLLISION_ADJUST = 1.2f;
    // Threshold for detecting navigation changes
    private const float NAV_SHAKE_THRESHOLD = 0.0001f;
    // Max distance for detecting player collisions
    private const float RAYCAST_DISTANCE = 10f;

    // Multiplier to each movement input
    private readonly float speed = 7.0f;
    private GameObject player;
    private GameObject SceneRoot;
    // Layer mask for ray casting
    private int layerMask;
    // Horizontal movement deltas
    private float deltaX;
    private float deltaZ;
    private float speedAdj;
    // Current reverse transform
    private Vector3 reverseMovement;
    // Rotated reverse transform
    private Vector3 rotated_transform = new Vector3(0f, 0f, 0f);
    private readonly Vector3 player_position = new Vector3(0f, 0f, 0f);
    private RaycastHit rayCollision;

    void Start()
    {
        // Allocate memory once only
        reverseMovement = new Vector3(0, 0, 0);

        // Use Physics.Raycast to cast a ray forward into the scene to check for collisions.
        // Create a bit mask with 0 for the player layer (layer 8) to use in Raycast.
        layerMask = 1 << 8;
        layerMask = ~layerMask;

        // Turn off cursor display in the game window
        Cursor.visible = false;

        // Get access to the player and scene root objects
        player = GameObject.FindGameObjectWithTag("playerCapsule");
        SceneRoot = GameObject.FindGameObjectWithTag("root");

        if (player == null)
        {
            print("player not found");
        }
    }

    /// <summary>
    /// Do not use FixedUpdate here because performance drops dramatically (on a dual core i5 MacBook Pro).
    /// <seealso cref="PlayerView.cs"/>
    /// </summary>
    void Update()
    {
        // Get the horizontal movement changes from the keyboard and
        // negate them so we can move the scene in reverse
        deltaX = -Input.GetAxis("Horizontal");
        deltaZ = -Input.GetAxis("Vertical");

        // Only process floating origin movement if there is a navigation input
        // change and it is above the noise/shake threshold.
        // Performance: don't really want a square root here -
        //   or even a squares comparison.
        if ((Mathf.Abs(deltaX) + Mathf.Abs(deltaZ)) > NAV_SHAKE_THRESHOLD)
        {
            speedAdj = Time.deltaTime * speed;

            // Scene reverse transform for floating origin navigation.
            // Make movement delta proportional to time since last move and speed factor.
            // Performance: changed this to assignment so no mem alloc and GC needed, and
            // 2 multiplies are a bit faster than multiplying by a 3D vector.
            reverseMovement.x = deltaX * speedAdj;
            reverseMovement.z = deltaZ * speedAdj;

            /*// Uncomment to do player collision detection.
            // If the player collided with a close object then ...
            if (Physics.Raycast(player_position, player.transform.TransformDirection(Vector3.forward), out rayCollision, COLLISION_DISTANCE, layerMask)
                && (rayCollision.distance < COLLISION_DISTANCE))
            {
                // ... bounce back a little from the collision
                transform.Translate(-rotated_transform * COLLISION_ADJUST);
            }
            else // no collision, so move the scene in reverse
            {*/
                // Use the player camera rotation to modify the reverse movement vector
                // so that player forward corresponds to forward movement input
                rotated_transform = Quaternion.Euler(player.transform.localEulerAngles) * reverseMovement;

                SceneRoot.SetActive(false);

                // Move the scene to the new position by changing the scene parent object transform.
                transform.Translate(rotated_transform);

                SceneRoot.SetActive(true);
            /*}*/
        }
    }
}
    and
Code (CSharp):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

/*
 * Rotates the player, and hence the attached player view, based on mouse input.
 *
 * Assumptions:
 * This script is attached to the player object.
 * Camera is attached to the player object.
 */
public class PlayerView : MonoBehaviour
{
    // Control how fast the player view responds to mouse movement.
    // More sophisticated games would allow the sensitivity to be changed to suit a player's preferences.
    private const float sensitivity = 2f;

    // Limit (clamp) the vertical rotation to +/- this angle
    private const float vertClamp = 60.0f;

    // Rotation about the horizontal axis (up and down)
    private float currentRotationX = 0;
    // Left and right rotation
    private float currentRotationY = 0;

    void Update()
    {
        // Horizontal and vertical rotation at the same time
        currentRotationX -= Input.GetAxis("Mouse Y") * sensitivity;
        currentRotationX = Mathf.Clamp(currentRotationX, -vertClamp, vertClamp);

        currentRotationY = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivity;

        transform.localEulerAngles = new Vector3(currentRotationX, currentRotationY, 0);
    }
}
     
    Last edited: Sep 13, 2019
  10. ncho

    ncho

    Joined:
    Feb 1, 2014
    Posts:
    91
At a map size of 4096x4096, am I at risk of any physics inaccuracies? I haven't noticed anything yet, but my game relies significantly on raycasting, so I'm somewhat paranoid about this.
     
  11. Stardog

    Stardog

    Joined:
    Jun 28, 2010
    Posts:
    1,612
Probably not. If you put the middle of the terrain at the origin, you will only be 2048 from the edges.
     
  12. ristophonics

    ristophonics

    Joined:
    May 23, 2014
    Posts:
    31
Depends... Is 4096x4096 the resolution of the heightmap or the size in meters of the terrain?
     
  13. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
If you are using a threshold of a similar size, I would have to say yes. The tile size might be OK for keeping the map vertex coordinates small, but the threshold can independently lead to physics errors at much smaller distances.
    I did some very constrained physics tests because physics can be highly sensitive to error. I reported the experiment here:
    https://www.researchgate.net/public...ghting_cubes_battle_for_positional_invariance

and you can see the videos here:
Part 1 (1DOF): https://youtu.be/vbIb9dh1f7o
Part 2 (2DOF): https://youtu.be/80W113eL6wQ

Note that the videos compare continuous floating origin (CFO) with the threshold method that I call POS.

The main point I would like to make is that there will always be some increased error from the lower 3-space resolution induced by greater distance from the origin, and complex calculations are sensitive to it and magnify it.
    As for map tiles, I used to use a LOD system that effectively adjusts size when you get closer.
     
    Last edited: Oct 24, 2019
    PrimalCoder and buFFalo94 like this.
  14. buFFalo94

    buFFalo94

    Joined:
    Sep 14, 2015
    Posts:
    244
    Really cool tests
     
    cosmochristo likes this.
  15. signalsaudio

    signalsaudio

    Joined:
    Jul 13, 2019
    Posts:
    4
Sorry to necro an old thread, but there is a ton of information in here that is awesome - and, from my experience, really hard to find.

    After reading through this thread I tried the following experiment, which may be useful to some people trying to make open world games in Unity:



    Other things I thought of that are not in the video:

1. You could actually resolve some physics issues at those distances by using the "Continuous Floating Origin (CFO)" "display layer" to calculate physics near the origin on the "floating error corrected" representations of enemies (you'll have to watch the video to understand what I mean by that). I didn't really consider this because the game I'm creating is multiplayer, and the server would be verifying hit calculations on the "collider world" anyway.

2. I found that even without colliders, the CFO method struggles with moving immense numbers of game objects under a root parent node in Unity. However, some tests I tried indicated that this may be easily resolved by setting game objects that are not near the player to inactive. I'll have to do more research to verify this, but the early tests I did seemed promising.

3. A big benefit of this approach is that it also allows you to use regular control methods on the "regular collision layer". Nothing is changed about how the player interacts with that world - just that its mesh renderers are stripped so it isn't actually being rendered.

4. A good piece of advice to make your life easier is to simply use the reverse transform.position on the CFO "display layer" - no need to map the controls backwards or anything over-complicated (see the sketch after the next paragraph). Since player rotation is generally local anyway, I believe you can just use a direct copy of the player's rotation from the "regular collision layer" for most situations.

The slightly more involved part of implementing this hybrid approach is creating the script to instantiate both worlds on top of each other, strip the mesh colliders on one and the mesh renderers on the other - but that should not pose much difficulty for most Unity programmers.
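A minimal sketch of point 4, assuming the two-world setup described above (all class and field names here are placeholders of mine, not from the video):

Code (CSharp):
using UnityEngine;

// Mirrors the "regular collision layer" player onto the CFO "display layer":
// the display root is moved by the reverse of the player's position so the
// rendered copy of the world stays centered on the origin.
public class DisplayLayerMirror : MonoBehaviour
{
    public Transform collisionPlayer; // player in the collision world
    public Transform displayRoot;     // root of the render-only world copy
    public Transform displayView;     // camera/view object kept at the origin

    void LateUpdate()
    {
        // Reverse transform.position: move the display world, not the player.
        displayRoot.position = -collisionPlayer.position;
        // Player rotation is generally local, so a direct copy suffices.
        displayView.rotation = collisionPlayer.rotation;
    }
}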

    Big thanks to CosmoChristo for his posts and research papers. Awesome stuff.
     
    cosmochristo likes this.
  16. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
Note that the continuous methods allow greater performance (e.g. a doubling) for some physics operations relevant to this discussion, such as avatar/view proximity, collision, and distance tests, because relative-to-zero design eliminates some variables and operations automatically.
I provide an explanation in this short article:
    https://www.researchgate.net/publication/342510617_Position_Independent_Principle
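As a minimal sketch of what that elimination looks like (illustrative names, assuming the player is pinned at the origin):

Code (CSharp):
using UnityEngine;

public static class ProximityTests
{
    // Conventional: the player moves, so proximity needs a subtraction first.
    public static bool NearConventional(Vector3 playerPos, Vector3 objPos, float range)
    {
        return (objPos - playerPos).sqrMagnitude < range * range;
    }

    // Relative-to-zero: the player is always at (0,0,0), so the relative
    // position is just the object's own position and the subtraction vanishes.
    public static bool NearFloatingOrigin(Vector3 objPos, float range)
    {
        return objPos.sqrMagnitude < range * range;
    }
}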
     
    Marcos-Elias likes this.
  17. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
This is a good way to visually compare the two floating origin methods side by side. Another variation that I think would be worth making is to have just one copy of the world and two avatars, one for each navigation method.
My comparisons showed each method one at a time, like this video of the object shift method, which shows the rendering problems starting from the first shift and the glitches caused by accelerating to faster speeds.

and this one, which demonstrates scaling out into planetary space with simple physics in a HUD:

    I think your side-by-side comparison would be more effective.
     
  18. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
Hi @Phelan-Simpson, I'm curious about how you arrived at 1cm accuracy for that distance. The reason is that I asked myself similar questions some time ago, and I ended up developing some theory and rules to predict base worst-case accuracy. My aim was to provide people with a proven way to estimate accuracy limits.
So my rule (3.4 * distance * machine epsilon, for single or double precision) gives a value of 4cm worst-case accuracy for single precision between two points/vertices at a distance of 99,999.99. In other words, most of the time you will get 1cm or so accuracy, but occasionally it will be 4cm.
On top of that, there is the magnification of the base spatial resolution error, due mainly to multiplications (if you multiply a number by 10, the error is multiplied by 10). Although I use this rule, magnification tends to overwhelm it and is hard to predict when you only write some of the code: Unity may contribute more error than your code does.
For example, I find the Unity reparenting operation generates noticeable error in the transform value. Observing this error at a distance of > 10^11 (near an average Mars orbit), the reparenting magnification plus whatever is done in my scripts is around 45 times what my rule predicts.
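The base rule itself is simple to evaluate; a minimal sketch (the constant 3.4 and the worst-case reading are as stated above, the helper name is mine):

Code (CSharp):
public static class PrecisionRule
{
    // Machine epsilon for IEEE 754 single precision (2^-23).
    private const double SingleEpsilon = 1.1920928955078125e-7;

    // Worst-case error between two points at a given distance from the
    // origin, per the 3.4 * distance * machine epsilon rule above.
    public static double WorstCaseError(double distance)
    {
        return 3.4 * distance * SingleEpsilon;
    }
}

// PrecisionRule.WorstCaseError(99999.99) ≈ 0.0405, i.e. the 4cm figure above.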
     
  19. Omti1990

    Omti1990

    Joined:
    Jun 22, 2018
    Posts:
    5
I'm sorry for necroing this; I saw this thread and found it very helpful for programming my space game. I've just been facing a few practical problems that have only barely been addressed here: how do I get serious collision physics to work with a continuous floating origin (CFO) setup?

There have been code suggestions for raycast collision detection, and that works well enough for frontal collisions. But what about hits that aren't directly frontal, where my spaceship collides with something at the sides?

Do you know if there's some elegant solution to detect such hits? I've been trying to use the collision detection from rigidbodies and it sort of works, but there's a major problem. As far as I can tell, if you want to make use of rigidbodies in a CFO setup, you need to restrain their ability to move on the x, y and z axes. That also means you can't feed them a velocity (I tried, but the vector remained {0,0,0}), which means their collision detection ability is crippled and can easily lead to glitching through objects.

Is there some way to get at the underlying functions and just feed data into them? I've been looking into the scripting API, but I don't think they're exposed. (Please correct me if I'm wrong.)

Now I come to the next part of the problem: how do you actually calculate the collision? This might not be much of a problem for first person shooters and the like, where it's mostly about being bumped back when crashing into a wall. In my case I need rather more complicated calculations (to say the least). I've described the easy cases below. In the case of a frontal elastic collision with a movable object, you can just use standard one dimensional impulse calculations: the spaceship gets reflected but loses part of its velocity to the object it has collided with. The other option is that the object is static and the spaceship just gets reflected; in that case the collision doesn't even have to be frontal.

But what about elastic collisions that aren't frontal or aren't with a static partner? What about crashing into two objects at the same time (or at least two colliders of the same model)? I believe under normal circumstances the rigidbody component would calculate all of this. I've been looking into the calculations myself, and even university course level material on the internet tends to be simplified.

To be honest, I would rather not program a custom rigidbody-style collision system myself if I can avoid it. Do you know if Unity's physics calculations are exposed somewhere? Or whether there are modules (or even assets) that I could use for 3D collisions?
     
  20. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
    hi @Omti1990,

If you are talking about collisions involving the zero-centered avatar (and attached ship), then I think it is probably worth considering making a custom collision detection CFO system, if you can exploit faster optimisations like those I mention in the Position Independent Principle article linked to earlier, at least for first-approximation collision detection.

I have only played with overly simplified collision responses myself so far, so I don't have a ready-made solution for you. However, I'd be happy to discuss further, because I think I will be looking at doing the same thing later on.
     
  21. Omti1990

    Omti1990

    Joined:
    Jun 22, 2018
    Posts:
    5
My current solution is to use boxcasts from the colliders and add up the normals of all the RaycastHits, then use the reflection function from Vector3/Vector3d to calculate where my spaceship should be reflected. This works decently well for demonstration purposes, but actual collision physics is far more complicated than that. I'd really love something where I could just plug in rotations, masses, collision points and velocities and it would do the physics for me. No clue if someone has made an asset that does that sort of thing.
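A minimal sketch of that idea, assuming a kinematic, origin-centered ship (names are illustrative):

Code (CSharp):
using UnityEngine;

public class ShipBounce : MonoBehaviour
{
    public Vector3 velocity;          // ship velocity, world units per second
    public Vector3 halfExtents;       // box roughly matching the hull
    public float probeDistance = 2f;  // how far ahead to check

    void FixedUpdate()
    {
        if (velocity == Vector3.zero) return;

        // Sum the contact normals of everything the hull would touch.
        Vector3 summedNormal = Vector3.zero;
        RaycastHit[] hits = Physics.BoxCastAll(
            transform.position, halfExtents, velocity.normalized,
            transform.rotation, probeDistance);
        foreach (RaycastHit hit in hits)
            summedNormal += hit.normal;

        // Reflect the velocity about the combined surface normal.
        if (summedNormal != Vector3.zero)
            velocity = Vector3.Reflect(velocity, summedNormal.normalized);
    }
}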

Anyway, I've come across another problem. Moving a parent transform works well enough if you're only moving essentially static objects. The problem I've encountered happens when you've moved the parent transform for your world far from the origin and try to add moving objects to it (in my case, projectiles). They move really oddly. I suspect this is because their transform.localPosition has high float values on at least one axis. The further you move the parent transform away, the worse these effects get.

Even with static objects I suspect there are limits, which I might easily surpass at the solar system scale I've been using. So I don't think just moving the parent transform is really a solution for that kind of problem. I've been thinking I could move the objects individually and use my own coordinate system in double vectors to determine their positions, but that opens the question of how to efficiently handle mobile objects, or objects I might or might not want to spawn later on.

My current best bet is to get all children during Awake, put them into some array, and have thousands of deactivated game objects sitting around from the start, activating them when needed. I'm just worried that won't be particularly performant, although I could optimise by only updating those that are actually visible. I'm still worried about what might happen to performance in some bigger space battle if thousands of projectiles have to be updated every frame.
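A minimal sketch of that pooling idea (the Spawn/Despawn names are mine):

Code (CSharp):
using System.Collections.Generic;
using UnityEngine;

// Pre-instantiates projectiles deactivated and reuses them, so there is no
// Instantiate/Destroy (and no GC pressure) during a battle.
public class ProjectilePool : MonoBehaviour
{
    public GameObject prefab;
    public int capacity = 1000;
    private readonly Stack<GameObject> inactive = new Stack<GameObject>();

    void Awake()
    {
        for (int i = 0; i < capacity; i++)
        {
            GameObject go = Instantiate(prefab, transform);
            go.SetActive(false);
            inactive.Push(go);
        }
    }

    public GameObject Spawn(Vector3 localPosition)
    {
        if (inactive.Count == 0) return null; // pool exhausted
        GameObject go = inactive.Pop();
        go.transform.localPosition = localPosition;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        inactive.Push(go);
    }
}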
     
  22. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
You are probably right. There is a way to manage the precision and distance from origin of every active thing, not just the CFO avatar (and stuff near it).

OK, well, I would like to investigate this type of dynamic performance issue with you, if that's OK. I can test it on my current framework; maybe you can make a small test project that I can run. My main time waster is trying to get acceptable flight dynamics in the different navigation modes - e.g. when transitioning from space, to near a planet, to a detailed region (where you might need some pitch and roll), to walk/terrain follow, and then back out again.

Yes, that's right, and the hierarchy should not remain static, as you say - static hierarchies are useless. I use a dynamically modifiable hierarchy. I am using single precision only and go out to Mars with this approach; without it I would not get away with single precision going from space to ground.

Haha, yes, I use activate/deactivate all the time on sub-hierarchies.
     
  23. KeinZantezuken

    KeinZantezuken

    Joined:
    Mar 28, 2017
    Posts:
    45
    @cosmochristo
    @Omti1990
If you get somewhere, please update this thread when you can; I'm interested in this as well.
     
  24. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
OK, a minor update. As to your question about SetActive false/true: it improves performance in my app, and I do this every time there is a camera "move" (in quotes because the camera actually stays at zero). So I don't think you will see a performance hit; instead, fps should improve. You will have to design it differently for what you want (as you are already thinking), and one of the changes is to not deactivate anything containing sound, or you will get audio glitches. So sound elements may need a separate hierarchy if you have a complex audio landscape.

As for vector algebra, I am still only using it for determining whether the player is facing certain objects. No collision detection yet.
     
  25. AndreiMarian

    AndreiMarian

    Joined:
    Jun 9, 2015
    Posts:
    58
For the floating point precision problem there are mitigations and solutions. All the suggestions presented so far seem like mitigations. But why not take the bull by the horns and make it as it really should ideally be?

Using floating point, single or double, in 3D worlds is the same blunder, like hammering nails with pliers: the whole idea behind "floating" point is to use it either at small xor at large scale. Not both at a time!
Accuracy is inconsistent: 7 decimals near the origin, yet skipping integers past the insignificant value of ±16,777,216. I'm saying "insignificant" because the float positive range is 0 - 340,282,366,920,938,463,463,374,607,431,768,211,456.
The problem is doubled due to negatives.
So really the usable fraction of a float is 1 / 81,129,638,414,606,681,695,789,005,144,064. You see now?

Now what I'm proposing is making a Unity package with integer based coordinates, and possibly math. Use int or, even better, long.
I know it's a huge undertaking but it would be the solution. We would have a steady ±2,147,483,647 for the same 32 bit price, and possibly even faster on the CPU, and ±9,223,372,036,854,775,807 with long.
Using double only sweeps the issue under the carpet.
Need 3 decimals of precision? Use the unit as mm. Need precision of merely thousands? Use the unit as km. Either way, you get more space for the buck than with float. Plus consistency. Using long units as mm you'd get ~2 light-years across at mm precision, as 1 ly ≈ 9.4605284e15 m.

Note: you would have to manually prevent/handle overflow cases, or have the engine reserve some values corresponding to the floating point NaN and ±∞.
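As a minimal sketch of what such an integer coordinate type might look like (purely illustrative, not an existing package):

Code (CSharp):
using UnityEngine;

// World position in whole millimetres: uniform precision everywhere, and
// roughly ±1 light-year of range with one 64-bit long per axis.
public struct WorldPosMm
{
    public long x, y, z;

    // Small float position relative to a reference point (e.g. the camera),
    // suitable for handing to the renderer. The subtraction happens in exact
    // long arithmetic; only the small difference is converted to float.
    public Vector3 RelativeTo(WorldPosMm origin)
    {
        return new Vector3(
            (x - origin.x) * 0.001f,
            (y - origin.y) * 0.001f,
            (z - origin.z) * 0.001f);
    }
}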
     
  26. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
@AndreiMarian, integers could be utilised for greater accuracy, as you say. Floating point has the advantage of greater scale; personally, I prefer floats. I think if multi-precision integer math could be made to work well then it could be a good option, but I have not seen any proof of that. I think it is worth pursuing though. :)

As for a solution: when one combines the position independence of calculations with the observation that small numeric values lead to less error propagation, you have a solution where calculations can be performed at, or close to, the origin to minimise error in the result. The results can then be applied to whatever location they are relevant to.
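A tiny single-precision example of the second observation (plain IEEE 754 behaviour, nothing Unity specific):

Code (CSharp):
public static class OriginPrecisionDemo
{
    public static void Run()
    {
        // Near the origin, a small step is exactly representable:
        float nearOrigin = 1.0f + 0.25f;          // exactly 1.25

        // At 10,000,000 the spacing between adjacent floats is 1.0,
        // so the same small step is lost entirely:
        float farFromOrigin = 10000000f + 0.25f;  // still 10000000

        bool stepLost = (farFromOrigin == 10000000f); // true
    }
}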

This is a general solution, whether you use integer or float. It is not some sort of hack that only works in certain cases. When coded as algorithms, my two main ones are Continuous Floating Origin and Relative Spaces, which work together. And the first is not origin rebasing or object/origin shifting based on some threshold distance from the origin - those are what may be considered hacks, because they have glitches and other limitations. For example, what threshold applies generally to every application?

I don't claim to have the only answers, but I have spent nearly two decades on this very subject and built a great deal of evidence that my approach is a general and effective solution with good performance. It does require a change from absolute to relative thinking, and that is not always so easy: even Newton, whose laws proved there is no absolute space or position, could not accept that everything is relative (Hawking, A Briefer History of Time, bottom of page 23).

    I hope this helps :)
     
    AndreiMarian likes this.
  27. AndreiMarian

    AndreiMarian

    Joined:
    Jun 9, 2015
    Posts:
    58
    It's my favorite approach of the existing ones. If only it could fit all cases, including mine... but unfortunately I can't use it in my case (more about it below).

TL;DR
I want a robust, consistent, simultaneous, light-year spanning world. long delivers these.

I've read through the paper on which you've based your approach, and I've seen your git code and the related videos. I for one like how it turned out, and kudos for the activation/deactivation gimmick that skips unnecessary recalculation.
I like it as one of the most straightforward approaches, not straying (too much) from the ideal, and it's relatively performant.

Everything is relative, I subscribe, but it's about space-time, not just space. For our case the ideas that come up are:
1. Compute the events in order of range from the camera, at a speed greater than the fastest locomotion speed, such that "updating" is not observable while traveling. But this means high spatial accuracy + low processing.
2. Only compute events in the observable range. This means your approach, but no simultaneity.

In my specific case I just want the regular game routine, but scaled to a large world. That is, I want simultaneity, i.e. things still happening even when I move away past the reliable (4000 units?) range. This implies the same precision all over the world.
Just one example: if I send a ship to mine some asteroid in the far distance, I don't want them to overlap due to floating point error and explode. Nor do I want to compute the mining event at a later time, when I'm close enough.
A workaround would be to just not do collision checking, but that's ghetto. Plus it's not just physical collisions; there are essential matters like "do something when in range", e.g. shoot the enemy when in visual range but not in their weapons range. If the difference between these ranges is, say, a mile, and the precision is 2 miles, you can imagine the mess. It may even shoot friendlies, etc. Anyway, anything you can think of can happen.

Using float is just wrong, like using a rubber hammer for nails. Replacing it with double is like using a ten times bigger rubber hammer.

As for the double coordinate set (x, y, z, x', y', z'): it's both cumbersome for us devs and a burden on CPU and memory. Plus, how do you deal with rendering? You'd have to convert to some regular coordinates for the GPU.
     
  28. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
Thanks for the positive comments. :)
Note that the activation/deactivation is just getting around an inefficiency in Unity - it would be better if you could tell Unity that a node and its subtree are being moved by floating origin, so it doesn't send unnecessary events. Also, you can't use it on anything containing streamed audio, and maybe some other things.
     
  29. AndreiMarian

    AndreiMarian

    Joined:
    Jun 9, 2015
    Posts:
    58
The floating point idea and its inner implementation are worth something close to a Nobel prize: a world in a seed, so ingeniously distributed. And I mean it. But as I said in other posts, it doesn't fit our use case. Why? Because it doesn't provide for the needs, and even if it did, there is something else more suitable.

We have to realize that all that you, me and everyone else are trying here is just to come up with ways to patch the system so that space, which is the foundation of a game, simply... works ;)
For so many devs the 3000-5000 unit limit came as a surprise; otherwise the internet wouldn't be packed full with the subject. Let's be honest, guys: this is a sign of some poor thinking, not a hardware limitation.

To better see what I mean, take time instead of space. What if, all of a sudden, you discovered that 25 minutes into the game a frame would start to take arbitrary values, even using deltaTime, due to some type limitation? After 1 hour a frame would take randomly between 5ms and 200ms?
Would you be happy just capping play time to 25 mins? (The current situation.)
Would you call restarting the game every 25 mins a solution?
Would you be happier if the fluctuation were reduced to 10ms?
     
    Last edited: Jan 7, 2021
  30. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
I fully agree: temporal jitter is as important as spatial jitter, and it even causes some spatial jitter. CFO should be positional centering, not just spatial: you remain at the center of space and time [and other positional fields]. I have not yet had the time to implement this. :)

As for your mining example, my plan is to put distant calculations in the back-end/server-side processing: calculated in a centered model and recorded for the place they relate to, but not rendered (in detail) unless a player is viewing them up close.
     
    AndreiMarian likes this.
  31. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
    Collisions and physics.
This is a somewhat slow moving thread but, to paraphrase the wise Treebeard, the acquisition of deep knowledge is not worthwhile unless you spend a lot of time acquiring it.
From time to time, questions are raised about collisions and physics under continuous floating origin, so here is a demo of a ship colliding with terrain and other objects, with some active physics.
     
  32. cosmochristo

    cosmochristo

    Joined:
    Sep 24, 2018
    Posts:
    108
@TerraUnity correctly pointed out to me that the above video does not demonstrate the use of Unity rigidbody and collision physics very well. So here is a better demo:
     