
Is it possible to build nested coordinate systems?

Discussion in 'Game Design' started by Nubnubbud, Feb 6, 2017.

  1. Nubnubbud

    Nubnubbud

    Joined:
    Nov 19, 2014
    Posts:
    49
    Let's say I theoretically need to maintain decent precision throughout a huge map, I mean real planet to solar system sized, with enough precision to model bullet drop for hitting a moving target a long way away.

    I had an idea, and I want some input, but before we get to that, I'll introduce some maths that are important to that idea.
    • Unity uses floats for its coordinate system.
    • Floating point variables can only resolve steps of 0.125 near 2097151.xx (hence the .xx: a 24-bit significand leaves you about two reliable decimal digits there), so 2097151.6 will actually be stored as 2097151.625.
    • if you move the decimal (i.e. scale the world down by 1000), you can get up to 2097.151 with better-than-millimeter accuracy, assuming you keep three decimals or more at all times
      • this means the limit for precision in the Unity engine, depending on the scaling you choose, is:
        • 2km at millimeter precision (~1/5th skyrim's size)(minimal distortion)
        • 20km at centimeter precision (~10x skyrim's size)(noticeable jumping)
        • 200km at decimeter precision (~1000x skyrim's size)(can tell you're on a grid)
        • 2000km at meter precision (1,000,000x skyrim's size)(grid is love grid is life)
        • (assume the assets are getting smaller, instead of the level getting bigger; float rounding steps double with each power of two of distance from the origin, so in actuality most positions will be more precise than the minimums here. A quick numeric check follows this list.)
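
    To make the rounding claims above concrete, here's a quick check in plain C# (the specific values are just the ones from the list, nothing Unity-specific):

    Code (CSharp):
        using System;

        class FloatPrecisionCheck
        {
            static void Main()
            {
                // Just below 2^21 = 2097152, adjacent floats sit 0.125 apart,
                // so 2097151.6 snaps to the nearest representable step:
                Console.WriteLine(2097151.6f == 2097151.625f);   // True

                // Move the decimal three places and the same 24 significand
                // bits resolve steps of ~0.00024 near 2097, i.e. sub-millimeter:
                Console.WriteLine(2097.1515f.ToString("F4"));    // 2097.1516
            }
        }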
    so, what if... you created a system of nesting two of these coordinate systems?:

    [attached image: diagram of the nested coordinate systems]
    • the main coordinate system is only made of whole numbers with their coordinates stored as longs or ints
    • the small coordinate systems work like Unity's, and when an object reaches the edge of one, it adds or subtracts one from the main coordinate and moves the object to the opposite side of the neighboring small coordinate system (see the sketch after this list)
    • if stored as longs, with 2km cube small coordinate systems, you could have a usable space of about 18 quintillion cells along each axis, with about millimeter precision, at the cost of three more longs per object to keep track of. This is insane and unheard of, which brings us to the last part of this...
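
    Here's a minimal sketch of that idea in C# (the struct, field names, and 2km cell size are illustrative, not an established API):

    Code (CSharp):
        using UnityEngine;

        public struct BigPosition
        {
            public const float CHUNK_SIZE = 2000f;  // 2km cells, per the list above

            public long cx, cy, cz;    // whole-number "main" coordinates (three longs)
            public Vector3 local;      // float position inside the current cell

            // When the local part leaves the cell, carry whole cells into the
            // longs and wrap the local part back to the opposite side.
            public void Normalize()
            {
                long dx = (long)Mathf.Floor(local.x / CHUNK_SIZE);
                long dy = (long)Mathf.Floor(local.y / CHUNK_SIZE);
                long dz = (long)Mathf.Floor(local.z / CHUNK_SIZE);
                cx += dx; cy += dy; cz += dz;
                local -= new Vector3(dx, dy, dz) * CHUNK_SIZE;
            }
        }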
    Please tell me why this will or won't work, because I'm sure someone has thought of trying this before!
     
  2. JoeStrout

    JoeStrout

    Joined:
    Jan 14, 2011
    Posts:
    9,859
    That'll work fine.

    Of course the physics engine won't know about your nested coordinates. But if you don't need that, then I don't see a problem. I think the only reason you haven't heard of this before is that the vast majority of games don't need it.
     
    Ryiah and Kiwasi like this.
  3. Xepherys

    Xepherys

    Joined:
    Sep 9, 2012
    Posts:
    204
    This is basically just taking the game in chunks. Minecraft is probably the best example of this being done in a current game. Chunk data can be stored and retrieved as needed when a player moves near a boundary with another chunk.

    You're clearly talking about a much grander scale, but reading about Minecraft chunks might help ideas flow: http://minecraft.gamepedia.com/Chunk
     
  4. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    An alternative is to use double precision. This is how games like Kerbal Space Program achieved the distances required for space travel.

    They also implemented deterministic physics at the orbital scale.

    Just another option to consider.
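
    A common way to apply that in Unity (purely illustrative, not KSP's actual code) is to keep simulation positions in doubles and only convert to camera-relative floats for rendering:

    Code (CSharp):
        using UnityEngine;

        public struct DoubleVector3
        {
            public double x, y, z;

            // Squeeze a camera-relative offset into floats; the numbers Unity
            // sees stay small even when the absolute coordinates are huge.
            public Vector3 RelativeTo(DoubleVector3 origin)
            {
                return new Vector3((float)(x - origin.x),
                                   (float)(y - origin.y),
                                   (float)(z - origin.z));
            }
        }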
     
    Ryiah likes this.
  5. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    I think you'll have to do some jiggery pokery with the camera, because ultimately Unity has to render the objects, and it needs to position the game objects with regular float coordinate precision. So you'll need to keep moving the camera when you get near the end of a chunk, i.e. taking the 'real' big-number position modulo the chunk size and positioning objects at that coordinate. I also think you'll have issues if it becomes possible to visually see objects that are in more than one chunk at a time, like adjacent chunks, because then you're almost going to have to use multiple cameras to properly position the objects. (A rough sketch of this origin shuffling is below.)
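
    Something like this "floating origin" pattern (a rough sketch; the threshold and names are illustrative):

    Code (CSharp):
        using UnityEngine;

        // Attach to the camera: when it drifts too far from the origin,
        // shift every root object back so render-space numbers stay small.
        public class FloatingOrigin : MonoBehaviour
        {
            public float threshold = 2000f;  // one chunk, per the thread above

            void LateUpdate()
            {
                Vector3 camPos = transform.position;
                if (camPos.magnitude > threshold)
                {
                    foreach (GameObject go in FindObjectsOfType<GameObject>())
                        if (go.transform.parent == null)
                            go.transform.position -= camPos;

                    // Bookkeeping: fold camPos into your big-number world
                    // offset here so nothing is lost.
                }
            }
        }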
     
    Ryiah and Kiwasi like this.
  6. Nubnubbud

    Nubnubbud

    Joined:
    Nov 19, 2014
    Posts:
    49
    I imagine it could be done in two ways:

    An object's position on one grid could be seen from something on another grid; that's easy enough. All it takes is a clever use of the Pythagorean theorem (also nested) to find the direction and distance of the object... though these are big numbers, so using a large-number format, or at least doubles, is gonna need to happen there (rough sketch below).
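
    Something like this, maybe (assuming the illustrative BigPosition struct sketched earlier in the thread; all names hypothetical):

    Code (CSharp):
        public static class BigDistance
        {
            const double CHUNK_SIZE = 2000.0;  // must match the cell size

            // Subtract the long cell coordinates first, add the float
            // remainders, and do the Pythagorean step in doubles so the
            // small part isn't rounded away.
            public static double Between(BigPosition a, BigPosition b)
            {
                double dx = (b.cx - a.cx) * CHUNK_SIZE + (b.local.x - a.local.x);
                double dy = (b.cy - a.cy) * CHUNK_SIZE + (b.local.y - a.local.y);
                double dz = (b.cz - a.cz) * CHUNK_SIZE + (b.local.z - a.local.z);
                return System.Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }
        }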

    As well, when it comes to displaying faraway things, even things light years away, float errors are negligible on screen: a one-meter error isn't too noticeable at 4km away, and a 10km error is nigh imperceptible at 1 AU away. There's nothing actually stopping you from having (and rendering) an object in the far reaches of your home coordinate system; it's just that once you're sharing a coordinate system with an object, you can be guaranteed millimeter precision or better with respect to it.

    I think weirdness might happen if you used some kind of telescope in the system, though: you'd see small things in the distance jumping about. In such a case, it might be best to begin rendering things based solely on what angle they are from your heading, and how big they should appear at that distance (sketch below).
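
    Something like this, perhaps (purely illustrative names; the trick is to park a proxy on a fixed-radius shell around the camera and shrink it by the same factor it was pulled closer, so angle and apparent size stay right):

    Code (CSharp):
        using UnityEngine;

        public class DistantProxy : MonoBehaviour
        {
            public Transform cam;               // the viewer
            public float proxyRadius = 5000f;   // fixed, comfortably inside float range

            // Place the stand-in along the true direction at proxyRadius,
            // scaled so its angular size matches the real object's.
            public void Place(Vector3 trueDirection, double trueDistance, double trueDiameter)
            {
                float scale = (float)(trueDiameter * proxyRadius / trueDistance);
                transform.position = cam.position + trueDirection.normalized * proxyRadius;
                transform.localScale = Vector3.one * scale;
            }
        }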

    If you have the object cache its physics info and velocity as variables, then whenever its velocity gets zeroed (say, during a teleport across a boundary), it could restore its velocity from those variables, unless they're negligibly low to begin with (sketch below). Could this work?
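
    Maybe something along these lines (a rough sketch with hypothetical names; rb.velocity is the standard Rigidbody property):

    Code (CSharp):
        using UnityEngine;

        public class VelocityCache : MonoBehaviour
        {
            Vector3 savedVelocity;

            // Stash the physics state before the chunk-crossing teleport...
            public void BeforeChunkTeleport(Rigidbody rb)
            {
                savedVelocity = rb.velocity;
                rb.velocity = Vector3.zero;
            }

            // ...and restore it afterwards, unless it was negligibly small.
            public void AfterChunkTeleport(Rigidbody rb)
            {
                if (savedVelocity.sqrMagnitude > 1e-6f)
                    rb.velocity = savedVelocity;
            }
        }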
     
    Last edited: Feb 7, 2017
    Haneferd likes this.
  7. eKyNoX

    eKyNoX

    Joined:
    Feb 4, 2013
    Posts:
    14