
Has Unity Considered a 64 bit floating point upgrade?

Discussion in 'General Discussion' started by Arowx, Feb 2, 2016.


Would you like Unity to develop a 64 bit large world API?

  1. No way, 10 km at 32 bit is big enough for me; think of the artwork, dude!

    24 vote(s)
    13.4%
  2. Yes please, I want to use procedural techniques to build worlds, planets and entire solar systems!

    130 vote(s)
    72.6%
  3. What, just 64 bit? Hell no, think exponential: we need 128 bit or 256 bit, you luddite!

    25 vote(s)
    14.0%
Thread Status:
Not open for further replies.
  1. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Neither. If you need to move that fast then you need to either design around the limitation or use a (probably fundamentally) different solution.

    What has Unity in particular got to do with #2?
     
  2. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    The answer is smart game design...
     
    AlanGameDev and zombiegorilla like this.
  3. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK. As some people have mentioned, there are different ways to get around the 32 bit precision limitations.

    1. Origin Shifting - World-based games can use an origin shift. This adds overhead: when entities cross a boundary you need to adjust their coordinates to the new local offset, which means computational cost plus extra issues and complexity for multiplayer games. It harkens back to the screen tiling systems of the 8 bit and 16 bit era.
    2. 64 bit Transform/Physics - Space games like Kerbal Space Program and StarCitizen have adopted variations on this: they still use 32 bit on the rendering side but adopt 64 bit to allow a solar-system-spanning simulation of their game 'world'. I believe StarCitizen enabled 64 bit physics in its game engine. Kerbal added a 64 bit double-precision transform (and possibly a gravitational physics system) over Unity's 32 bit one.

    As has been mentioned (@hippocoder), if you look at the scene streaming features being worked on by UT, they are probably working towards an origin shifting solution. It should provide larger worlds/terrains that stream in around the player, but it might not be ideal for faster-paced large world/planet/space simulations.

    I think a 64 bit transform/physics overlay would be the ideal solution: you would still need terrain streaming, but the mathematical precision problem would be resolved without origin shifting becoming a potential issue.

    e.g. imagine the worst-case scenario: a massive battle where the front line moves forward 10 km, so units, shots, missiles and shells are constantly crossing the boundary between two zones.

    That's why my original question to UT is: are you researching the potential of a 64 bit upgrade to the engine? We are seeing a transition to 64 bit chips across all platforms, and all of these chips have some kind of vector maths acceleration.

    Would it be feasible to gain the advantages of 64 bit (double) precision in the engine on hardware that can support it?

    Now that might go as deep as the ability to turn on and build with a 64 bit physics engine, or the lighter option of a 64 bit transform for very large and/or fast-moving worlds.
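    To make the idea concrete, here is a rough sketch of what such a "transform overlay" could look like from script today. This is not an existing Unity API; the component name and fields are made up, and a real solution would also need to handle physics, parenting, networking and so on.

    Code (CSharp):
    using UnityEngine;

    // Hypothetical sketch of a 64 bit "transform overlay": the true position
    // lives in doubles, and Unity's 32 bit transform only ever sees the small
    // offset from a shared double-precision origin (e.g. the camera or player).
    public class DoublePosition : MonoBehaviour
    {
        // Shared origin in double precision; move this to keep the floats small.
        public static double originX, originY, originZ;

        // The object's "real" world position in double precision.
        public double worldX, worldY, worldZ;

        void LateUpdate()
        {
            // Only the local offset is handed to the 32 bit transform, so
            // precision near the viewer stays high no matter how far the
            // object is from the double-precision world origin.
            transform.position = new Vector3(
                (float)(worldX - originX),
                (float)(worldY - originY),
                (float)(worldZ - originZ));
        }
    }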
     
    Quatum1000 likes this.
  4. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I have just tried a quick benchmark with Unity and found that, apparently, float and double multiplication and division speeds on my PC are very similar (code below).

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UI;
    using System.Collections;

    public class Test : MonoBehaviour {

        public Text text;

        delegate double TestFn();

        public int loopSize;

        IEnumerator Start()
        {
            TestFn test = TestF1;

            text.text += "Beginning Test Looping " + loopSize.ToString("N");
            yield return null;

            text.text += "\nTest F1[*] ... ";
            TestHarness(test);

            yield return null;

            test = TestF2;

            text.text += "\nTest F2[/] ... ";
            TestHarness(test);

            yield return null;

            test = TestD1;

            text.text += "\nTest D1[*] ... ";
            TestHarness(test);

            yield return null;

            test = TestD2;

            text.text += "\nTest D2[/] ... ";
            TestHarness(test);

            yield return null;
        }

        void TestHarness(TestFn test)
        {
            float t = Time.realtimeSinceStartup;

            test();

            t = Time.realtimeSinceStartup - t;

            text.text += t.ToString() + "\n";
        }

        double TestF1()
        {
            float a = Random.Range(float.MinValue, float.MaxValue);
            float b = Random.Range(float.MinValue, float.MaxValue);
            float s = 0f;

            for (int i = 0; i < loopSize; i++)
            {
                s += a * b;
            }
            return s;
        }

        double TestF2()
        {
            float a = Random.Range(float.MinValue, float.MaxValue);
            float b = Random.Range(float.MinValue, float.MaxValue);
            float s = 0f;

            for (int i = 0; i < loopSize; i++)
            {
                s += a / b;
            }
            return s;
        }

        double TestD1()
        {
            double a = Random.Range(float.MinValue, float.MaxValue);
            double b = Random.Range(float.MinValue, float.MaxValue);
            double s = 0;

            for (int i = 0; i < loopSize; i++)
            {
                s += a * b;
            }
            return s;
        }

        double TestD2()
        {
            double a = Random.Range(float.MinValue, float.MaxValue);
            double b = Random.Range(float.MinValue, float.MaxValue);
            double s = 0;

            for (int i = 0; i < loopSize; i++)
            {
                s += a / b;
            }
            return s;
        }
    }
    Is this a valid test? My results:

     
  5. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    That's why I want physics scenes. Place that battle in a separate physics scene! Or do you really think that many battles can mix together? I doubt it, and besides, PhysX allows moving objects between scenes without any destruction/addition of objects (so it's fast).

    Even if hardware supports it, it will still depend on the application. As I said, sending a mesh across to the GPU is already a hog, and if you send double-precision position data (not counting all data being double - that would double the size of even textures, as you'd want to send 64 bit textures instead of 32 bit, which is useless) the transfer will hog things for a bit longer. So it still depends on the use case.

    Oh, and you're forgetting that Unity targets mobile a lot, and I can't imagine those devices getting 64 bit GPUs anytime soon. So, as BoredMormon said, it's too early for that.

    That might work, but it's still a problem of precision vs. performance.
    If your object is moving through empty space at high speed: why not increase the unit size? From a kilometre to 100 kilometres (in space), or even higher. For precise calculation it should be possible to create a new physics scene at a different scale and use that other physics scene for the precise work.
    If your object is moving through filled space, then calculating all collisions at high speed will hog so much CPU that you're shooting yourself in the leg anyway. (All physics engines use tree approximation for culling of collisions - now tell me, what will happen when you leave a tree leaf and travel through 10 leaves in one step?)

    That's why I'm talking about physics scenes being more realistic than rebuilding everything in 64 bit... It's already implemented in PhysX (I'm not sure about mobile platforms, though, as it could have been removed when Unity adapted PhysX, so it still depends on Unity's compiled PhysX). It can emulate 64 bit well; it's just that Unity didn't expose it.

    Yes they are. On the CPU. Try the GPU first, then tell us. Also don't forget to send all data as doubles instead of floats (or floats instead of compressed floats) and tell us how loading time and GPU usage have changed.
     
    Last edited: Feb 3, 2016
  6. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    I asked because that's a thing when you mod voxel / infinite world games right now. You either stutter / stop when the new area needs to load, or, like Minecraft, you wind up in a big block of nothing and start falling through the floor until everything loads in - but you can still move around lag-free while it loads :D

    If we implement our own chunk system, we won't have access to threads to have things load in a non blocking way so... the rest of the interface and character will freeze up, right? I know you could load a little at a time, but isn't that all happening on the same thread still?

    I was asking because of how existing games react to this. And games that allow mods so you have new things you might not have accounted for.

    Speaking of modding, I love that Unity 5 means all Unity games are already potentially moddable, since anyone can download Unity 5 and produce an asset bundle. THIS FACT NEEDS MORE ATTENTION
     
    McMayhem likes this.
  7. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    GPU drivers can't be accessed from more than one thread at a time. Only DX11+ (not sure which OpenGL version added it, maybe 4.0?) allows access from a thread that did not create the device context, and even then no more than one thread can hold that context for loading onto the GPU.

    The other problem is that Unity's operations on meshes and textures, before it even starts loading them to the GPU, take a lot of that same (main) thread's time.

    But even if it's solved we will still have to cope with what I wrote above leading to
     
    angrypenguin and Tomnnn like this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Unity is implementing a mesh instancing technique that will make it much cheaper to render multiple instances of the same mesh on the GPU. See the roadmap info on 5.4, or the beta thread, for more details. Note it has some limitations, e.g. shadows and lighting can increase draw calls.
     
  9. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    I know about instancing, but that's not what I'm talking about. I'm talking about optimizing and parsing the mesh, internally repacking textures, etc. All of those can (and generally should) be threaded instead of done on the main thread. Unity does that internally, but we can't use that threading when making procedural meshes.

    P.S. I've seen instancing examples in raw OpenGL, and cubes (for the so-loved Minecraft style) perform about 1000 times worse with mesh instancing than when building a procedural mesh, so it's not a solution to everything.
     
    angrypenguin likes this.
  10. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's a pity that Unity has not added a thread-safe API, especially for areas like mesh generation that aren't used in the main game loop. But I'd have thought you should be able to write a threaded mesh generation system yourself, as the underlying data is generic enough: arrays of structs like Vector3, floats, and indices.
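    Roughly, the split looks like this: do the pure C# math on a worker thread, then hand the arrays to the main thread, which alone is allowed to touch the Mesh API. A minimal sketch (a single hard-coded triangle stands in for real generation, and the class name is made up):

    Code (CSharp):
    using System.Threading;
    using UnityEngine;

    [RequireComponent(typeof(MeshFilter))]
    public class ThreadedMeshBuilder : MonoBehaviour {

        Vector3[] vertices;
        int[] triangles;
        volatile bool ready;

        void Start() {
            // Pure C# / struct math is safe off the main thread; UnityEngine.Mesh is not.
            new Thread(() => {
                vertices = new Vector3[] {
                    new Vector3(0, 0, 0), new Vector3(1, 0, 0), new Vector3(0, 1, 0)
                };
                triangles = new int[] { 0, 2, 1 };
                ready = true;
            }).Start();
        }

        void Update() {
            if (!ready) return;
            ready = false;

            // Assigning to the Mesh (and the upload it triggers) must happen here.
            Mesh mesh = new Mesh();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }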
     
  11. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    I did it once and it bottlenecked at the Mesh.SetTriangles(...) call (the same happened with the SetIndices(...) call when I rewrote the code), resulting in something like a 30% performance gain total (which is not nearly enough for the work involved, so I generally advise against it). With Apply on textures I can understand why I have to do it on the main thread, but why do I have to call SetTriangles and Optimize from the main thread? Why is there no Apply method to finish the calculations I was doing on a Mesh and finally upload it to the GPU (the only operation that must happen on the main thread)?
     
    Last edited: Feb 3, 2016
  12. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    The actual transfer of the data to the GPU will most likely happen on the main thread. So even if you generate stuff in multiple threads, that probably won't mean much in the long term.

    Also, I suspect that Unity's skinned mesh calculation is done on the CPU, at least on some platforms, but I'm not 100% sure about that.
     
  13. AlanGameDev

    AlanGameDev

    Joined:
    Jun 30, 2012
    Posts:
    437
    Just made an exception to my 'no polemic discussion rule' to say that you got it perfectly.

    I don't even know why KSP was mentioned in the discussion, since it is practical proof that you don't need 64-bit floats throughout the whole shebang to make a space game in Unity. Just use whatever data type you want for your universe simulation; for the actual game mechanics, 32-bit is perfectly fine imho.

    The point is, you most likely don't actually need high-precision physics simulation or transform positioning thousands of km away from the player. In most games the player has a 'radius of influence' that you can't call 'big' or 'small', because the scale doesn't matter much ('much' because it affects the built-in physics engines); you can only refer to it in terms of precision. Whether your game is about water bears or elephants doesn't matter, you'd just scale it accordingly. The problem is having both at the same time... a battle of elephants with a simultaneous battle of tardigrades on the elephants' backs. And in a big world. You'd need some creative solution for that, for example simulating the tardigrades at a bigger scale and using some trick so it looks like they're on the elephants' backs.

    The same goes for space games or huge planar worlds. If you need to simulate two cities many km apart at a human-scale level, you need to use some trick to keep both cities near the origin. Multi-camera tricks are easy to do and the result is generally excellent. You can think of it like a wraparound 3D world, but instead of repeating the same world you would use another one. Unfortunately, afaik Unity doesn't provide a good way to simulate multiple worlds, because you have to resort to layers.
    In BGE you can have separate scenes (so 'gameobjects' from scene1 don't even exist in scene2, instead of having them all in the same scene and ignoring some), and you can compose the images from cameras in two scenes, so in theory you could have many scenes running simultaneously and display them in a seamless manner. Of course, when an object reaches the boundary of a scene it has to be 'transported' to the other scene. There are some clever tricks to minimize that problem, like making a 'border' out of the physics/interactive stuff from the other scenes and transporting only when that border ends. In practice, though, you'll hit a performance wall very soon.
    So, yeah, on very rare occasions you may actually need 64-bit, but as I said in my first post, not on all levels. And you'd have to stick with CPU physics sim, but Unity is limited to the CPU so far anyway.
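    For what it's worth, the multi-camera version of that trick can be sketched in a few lines. Assume the distant scenery lives in a scaled-down 'far world' near the origin, on its own layer, rendered by a second camera that draws before the main one (main camera set to clear depth only); the scale factor and names below are made up:

    Code (CSharp):
    using UnityEngine;

    public class FarWorldCamera : MonoBehaviour {

        public Camera mainCamera;          // renders the near, 1:1 world
        public float worldScale = 1000f;   // 1 far-world unit = 1000 real-world units

        void LateUpdate() {
            // Match rotation exactly, but divide the position by the scale so
            // kilometres of travel in the real world become metres in the far world.
            transform.rotation = mainCamera.transform.rotation;
            transform.position = mainCamera.transform.position / worldScale;
        }
    }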

    Also, obviously classifying the size of a project is subjective, but I'm pretty sure KSP is bigger than at least 90% of all commercially available indie games, so I think it isn't unreasonable to say that, relative to other indie games, it's a big/huge project. Of course nothing keeps you from having your own personal metrics... I could just as well say that all games ever released are tiny projects. I'm more practical than that, though.
    If you have 5 people working on your game for a year, it's already a multi-million project; do the maths or go ask HR if you don't believe me. It doesn't matter if people were on revenue share and worked for free, or if it wasn't an ambitious project to start with, or if it was playable in 'early access'; if you multiply the man-hours by the average wages, it's a multi-million project -- that's what matters. (KSP was officially released in 2015.)

    Exactly.

    EDIT:
    Also, I'm an extremely practical person (perhaps that's my problem with this forum). And sticking to practicality, even recently big studios have still chosen 32-bit over 64-bit when the latter was a perfectly possible choice, and I assure you it's not because the devs are dumb.
    Bepu Physics from my good friend Ross Nordby, for example, has been used successfully in truly 64 bit engines (there's the BepuDoubleEnhanced fork), and yet a major game recently decided to use it with multiple independent 32 bit simulations, one for each part of the world - think of a chunk-based game that has a separate physical world per chunk. With an approach like that you could keep your objects in a 64 bit 'overworld' and transform the coordinates into each physical world, or you could just shift the origin too.
    If Unity had some kind of 'physical world' component, instead of one global physical world, it would be possible to do that too. It could use transform parenting to determine which object belongs to each world, so it would be easy to move them. Of course that doesn't solve the 32 bit transform coordinate problem.
    I repeat, though, that it's too much effort for the benefit of two or three guys :p.
    Something I really miss in Unity is the possibility of ticking the physical worlds manually.
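    A sketch of that overworld/chunk coordinate split, just to illustrate the idea (all names here are made up): positions live in doubles in the 'overworld', and only the small difference from a chunk origin is handed to whichever 32 bit physical world the object currently belongs to.

    Code (CSharp):
    using UnityEngine;

    // Absolute 64 bit position in the "overworld"; each chunk would run its own
    // 32 bit simulation close to its local origin.
    public struct OverworldPosition {
        public double x, y, z;

        // Offset from a chunk origin, small enough to be safe as floats.
        public Vector3 ToChunkLocal(OverworldPosition chunkOrigin) {
            return new Vector3(
                (float)(x - chunkOrigin.x),
                (float)(y - chunkOrigin.y),
                (float)(z - chunkOrigin.z));
        }
    }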
     
    Last edited: Feb 3, 2016
  14. dogmachris

    dogmachris

    Joined:
    Sep 15, 2014
    Posts:
    1,375
    Correct me if I'm wrong, but isn't that what doubles and decimals and such things are for?
    Besides, someone's got to explain to me what such high precision is required for in game development (excluding CFD simulation and similar stuff, which only a few people do for games).
     
  15. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    At least those that defend 64-bitness say so...
     
  16. dogmachris

    dogmachris

    Joined:
    Sep 15, 2014
    Posts:
    1,375
    Barely any devs use the part left of the decimal point of a floating-point number to its full extent. Increase the dimensions of your scene by a factor of 10000 on all relevant components (transform, physics, etc.) and your little space game, flight simulator, or whatever it is you're trying to make, will most likely be sufficiently precise with 32 bit floats - and if not: use doubles, that's what they're for. :)
     
  17. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    You can't use them. At least not unless you want to build your own physics system. That's the whole point of this thread.
     
  18. dogmachris

    dogmachris

    Joined:
    Sep 15, 2014
    Posts:
    1,375
    Yeah, okay, but that's due to PhysX. System.Math can handle doubles, however, and I don't think there's much Unity could do about PhysX.
     
  19. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    The problem is Unity doesn't support that.
     
  20. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @Teravisor Oh. I thought we'd still be able to move while scenery loaded if one thread was running player stuff and another thread was instantiating more level geometry. Then you posted some GPU stuff I didn't understand. But in short, would threads not help either way, because instantiating the geometry relies on the GPU, so the player thread would stop visually for a moment anyway?

    What is Minecraft doing, then? You can wander around for a moment before geometry pops in.
     
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Either way, precision will be important if you're looking for a BIG and preferably PROCEDURAL world.

    If you start making handcrafted content, you'll run out of budget before you make one life-sized city.

    Game worlds these days are still quite tiny.

    Skyrim, for example, is said to be "4.32 miles across by 3.42 miles" (roughly 7 x 5.5 km), which is small. That size also results in an odd experience when you visit a village and realize that there are maybe 5 houses and 12 people total in there.
    Nope. Won't work. You can scale up/down all you want; you'll still be restricted to a +-10 km cube around the zero coordinates if you want to maintain 1 cm precision relative to character size. The cube scales up/down along with your scene, so if you make your game scale 10 times smaller, the cube becomes 10 times smaller as well. Floating-point precision is relative: if you want 1 millimetre precision, you'll only have +-1 km; 0.1 mm? +-100 m.
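    A quick way to see that for yourself in a script (the distances are arbitrary; a 32 bit float carries roughly 7 significant decimal digits, so the absolute step between representable values grows with the size of the coordinate):

    Code (CSharp):
    using UnityEngine;

    public class FloatPrecisionDemo : MonoBehaviour {
        void Start() {
            float near = 10f;      // 10 m from the origin
            float far = 100000f;   // 100 km from the origin

            // Adding 1 mm survives near the origin...
            Debug.Log(near + 0.001f - near);   // ~0.001
            // ...but is rounded away completely 100 km out.
            Debug.Log(far + 0.001f - far);     // 0
        }
    }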
     
    hippocoder likes this.
  22. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    These aren't mutually exclusive ways to develop your world, though. Procedural applications like World Machine have become popular alongside hand-crafting, with developers using them at various stages of the world's generation.

    Bethesda's games are known for their compact worlds. They're good examples of the amount of content you can put into a small world, but they're terrible examples of large worlds. Practically every other game out there is bigger.

    http://www.eteknix.com/witcher-3-gta-v-skyrim-far-cry-4-map-size-comparison/
     
    Last edited: Feb 3, 2016
  23. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Depends. They might help in the Unity Mesh case. One case where they won't help at all is Texture2D.Apply() (an extremely slow operation).

    What I was saying is that the upload from RAM to GPU RAM cannot be threaded no matter what. And it's quite expensive (more expensive than Unity's internal transformations, in my tests).

    Mesh data is first calculated on the CPU in a secondary thread (can be done in Unity right now). Then it's passed from the secondary thread to the main one (a nearly free operation; can be done in Unity at almost no performance cost). Then it's loaded into the Mesh class and, thus, to the GPU through the drivers (no way threading will help with that).

    The only one of those parts that cannot be threaded by us is loading into the Mesh class and the GPU drivers.

    Let's go with an example of that last part, loading onto the GPU (numbers here are arbitrary):
    Say loading one Minecraft-style mesh (mostly under 5k vertices with 3 buffers) from our RAM to the GPU takes one frame's worth of time; we see no problem (skipping one frame once will most likely not be noticed by eye).
    Now if we load 3+ meshes in the same frame, it skips 3 frames. Half of people notice a slight stutter. The only way to fix it is to limit loading to one chunk per frame, rendering a frame in between. FPS might fall, but 30 FPS is still playable, right?
    Now imagine a mesh with 40k vertices. Uploading it to the GPU costs at least 8 times more than the Minecraft mesh, so in the best case it skips 8 frames, and that's a lot! If we use 6 buffers instead of 3 on top of that, it uses 16+ frames! And there's just no way to prevent the stutter other than subdividing the mesh. No threading will help when this is the source of the stutter, because the upload itself cannot be threaded.
    Now, as for why a threaded API might help in Unity's case: Unity does internal transformations before sending data to the GPU. Say they take 0.3 of a frame; that makes the mesh load take 1.3 frames instead of 1. If we could prepare all those values on another thread while calculating, it would still have cost us only 1 frame.

    Realistically speaking, I don't know how much a threaded API would give Unity, and I doubt it would be much. The main problem it addresses is development comfort: we need to pass data from the secondary thread to the main one and only then apply it. But no matter how much performance it gives us, loading huge meshes with a lot of different buffers will still stutter, and that's what I was saying there.

    Note: this is all about loading performance; in-game performance is not affected by any of it, as internally it's the same.

    P.S. All of this is child's play compared to creating PhysX colliders. You can easily eat a 20-frame wait there for just a bunch of colliders.
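    The 'one chunk per frame' throttling above can be sketched as a simple queue: the mesh data is prepared elsewhere (possibly on worker threads), and this component pushes at most one chunk into a Mesh per rendered frame. The component name and the ChunkData layout are made up:

    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class ChunkUploadQueue : MonoBehaviour {

        public struct ChunkData {
            public Vector3[] vertices;
            public int[] triangles;
            public MeshFilter target;
        }

        readonly Queue<ChunkData> pending = new Queue<ChunkData>();

        public void Enqueue(ChunkData data) { pending.Enqueue(data); }

        IEnumerator Start() {
            while (true) {
                if (pending.Count > 0) {
                    // Touch the Mesh API (and thus the GPU upload) for one chunk only.
                    ChunkData c = pending.Dequeue();
                    Mesh mesh = new Mesh();
                    mesh.vertices = c.vertices;
                    mesh.triangles = c.triangles;
                    mesh.RecalculateNormals();
                    c.target.mesh = mesh;
                }
                // Render a frame before the next chunk to spread the cost out.
                yield return null;
            }
        }
    }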
     
    Last edited: Feb 3, 2016
    Tomnnn likes this.
  24. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    Oh, cool, thanks. Would that be a good approach to making an infinite world in Unity?

    Well, the issue I was thinking about was that infinite world games built like Minecraft would be loading pretty frequently. Would the best approach in Unity be to have really large chunks, to reduce how often you need to load new ones, or chunks small enough that they take almost no time to load?
     
  25. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    It has been successfully used several times already, so I guess so.

    Ideally, make the biggest mesh that won't stutter on load on the target platform (that reduces overhead like draw calls), but if you make it too big it will stutter when loading to the GPU (so don't go too far). From my tests, chunks of 8x8x8 work nicely; 16x16x16 work nicely too, especially with a greedy cubes algorithm to reduce faces; 32x32x32 is sometimes too much (only in worst cases in my tests - when the vertex count goes above 15,000-20,000 it starts to noticeably stutter on weaker gaming PCs when a chunk appears or is updated frequently); 64x64x64 is too much (it won't always fit into a single mesh anyway, so you're just hurting your head over nothing). Reducing sizes (4x4x4 and below) will often create very small meshes that often can't be batched because of the vertex limit on dynamic batching, so it will increase draw calls and thus drag down runtime performance. I didn't personally try any other sizes.
     
    Last edited: Feb 3, 2016
  26. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @Teravisor thanks for clarifying. I wanted to do this a while back, but not anymore :p I'm not sure why I asked. Maybe having an idea will inspire me in the distant future.

    I think I'd be too lazy to do it properly, so I'd probably not try. There's a sample project around somewhere where the terrain is made of cubes in memory that are used to build large single meshes that make up a whole chunk. My first and last attempt at this stuff had individual cubes that had their own meshes. That's 16 extra unnecessary vertices per cube.

    --edit

    And that 16 is just per column in the chunk, so an 8x8x8 would be much worse :eek:
     
  27. Steve-Tack

    Steve-Tack

    Joined:
    Mar 12, 2013
    Posts:
    1,240
    For what it's worth, I have a space game that uses a floating origin (origin shifting). It was easy to implement and works pretty well. My play areas aren't even that big, but beyond 7,000-10,000 units from the origin I was getting jittery motion and weird flickering shadows. Setting a threshold to move everything back when going beyond 3,000 units fixed that.

    The only issue I ran into is with moving particles close to the camera that used velocity-based stretching. I'd get a one frame "flash" of particle weirdness when the shift happened. I ended up stopping the nearby particle system before moving everything, then start it back up. It's slightly noticeable, but much better than the flash. Totally worth it for a stable experience. And if I ever decide I need larger play areas, I can make them essentially as big as I want.
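    For anyone curious, a bare-bones version of that kind of floating origin looks something like this. The threshold and the 'shift every root object' policy are just one way to do it, and, as noted above, particles, trails and similar effects need extra care:

    Code (CSharp):
    using UnityEngine;

    public class FloatingOrigin : MonoBehaviour {

        public Transform player;
        public float threshold = 3000f;   // shift once the player drifts this far

        void LateUpdate() {
            Vector3 offset = player.position;
            if (offset.magnitude < threshold) return;

            // Pull every root object (the player included) back towards zero by
            // the same offset, so all relative positions stay unchanged.
            foreach (GameObject go in FindObjectsOfType<GameObject>()) {
                if (go.transform.parent == null)
                    go.transform.position -= offset;
            }
        }
    }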
     
  28. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @Steve Tack I did an infinite ocean once where you're drifting on a little platform. Whenever I shifted everything back to the center there was a super noticeable jitter. Probably just an error on my part, but pretty funny. I had a static reference to an empty on the player so everything could easily parent itself to that, and then I'd just move the player back to 0,0 and unparent.

    Is parenting and unparenting faster than doing the math to move everything? Is it more performant for the engine to move several objects by moving a parent, or to simultaneously translate all of the objects?
     
  29. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Erm, Daggerfall and Arena.
     
  30. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    Yes. Two examples out of at least seven open-world titles, and both are very old compared to the rest. While those are probably the biggest worlds you will encounter in a commercial video game, they are very much the exception now.

    On a side note, there is a fan project to rebuild the engine. Can't wait till it's playable to the end. :D

    https://www.reddit.com/r/dftfu
     
    Last edited: Feb 4, 2016
  31. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Bethesda has more than 7 titles.
    https://en.wikipedia.org/wiki/List_of_Bethesda_Softworks_video_games
    From those titles, I highly recommend playing Terminator: Future Shock.

    I don't think the age is very important. Also, I wouldn't call those an "exception", since the developers themselves said "our games are BIG".

    Well, Xenoblade Chronicles is said to have 70,000 square miles of area. Then we have older space sims like Elite: Frontier, which were galaxy-sized (and had planetary landings too).
     
  32. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    I was only familiar with seven, thus why I said "at least". I wasn't positive which others were "open world". ;)

    They definitely squeeze a tremendous amount of content into a very small area.

    It's on my list of games to play once I have upgraded my computer to better run Dolphin. Or bought a Wii.
     
  33. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Just noticed the poll: with a total of 19 votes so far, researching a higher-than-32-bit precision option comes out ahead - 10 votes for higher precision, 9 votes for staying with the current limitations.

    There has been no official word on whether this is going to be researched. Personally, I thought a terrain system update would need something like this, and higher precision would be a win/win, with origin shifting as a fallback for legacy 32 bit platforms.

    But note that Homeworld: Deserts of Kharak is using its own deterministic physics system based on '64 bit' fixed point math.

     
    Last edited: Feb 5, 2016
  34. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    yoyobbi and Tomnnn like this.
  35. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @Zuntatos I wonder if it's more efficient to move the origin instead of moving your entire world back to 0,0 :rolleyes:
     
    darkhog likes this.
  36. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Um... was that trolling? That link states
    so it's the same as modifying all world object positions by an offset (i.e. moving the whole world back to 0,0)...
     
  37. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    19 votes is a tiny number. Make it 19,000, then the Unity team will have a good reason to care.

    Nope. That's not what they said. They didn't say they "have 64 bit fixed point physics". They said they use 64 bit fixed point vehicle simulation AND apply Unity physics on top of that. Most likely fixed point for movement, and Unity physics for whatever is currently visible. That is a big difference compared to your statement.

    "Vehicle simulation" is simpler than a general-purpose physics engine with several rigidbody and constraint types.
     
    yoyobbi likes this.
  38. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, they are using a 1 m to 1 Unity unit scale, and their maps are 25 km², so they are using a 64 bit fixed point precision system (32.32) as the main simulation system. It started life as a 2D system, so it might only be 2D, with the 3D Unity 32 bit system handling the camera view. So it sounds like Unity's physics is only for the 3D view and not for the core simulation running the game.
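    For reference, a 32.32 fixed point value of the kind described is just a 64 bit integer whose low 32 bits are the fraction; addition and subtraction are plain integer ops, which is what makes the simulation deterministic across machines. A minimal sketch (multiplication needs a wider intermediate and is left out here):

    Code (CSharp):
    // 32 integer bits give roughly +-2 billion metres of range; 32 fractional
    // bits give steps of about 0.23 nanometres.
    public struct Fixed32 {
        public long raw;                  // value * 2^32
        const long One = 1L << 32;

        public static Fixed32 FromDouble(double d) {
            Fixed32 f; f.raw = (long)(d * One); return f;
        }

        public double ToDouble() {
            return (double)raw / One;
        }

        public static Fixed32 operator +(Fixed32 a, Fixed32 b) {
            Fixed32 f; f.raw = a.raw + b.raw; return f;
        }

        public static Fixed32 operator -(Fixed32 a, Fixed32 b) {
            Fixed32 f; f.raw = a.raw - b.raw; return f;
        }
    }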

    As they mention, they cannot rely on Unity's non-deterministic physics for a multiplayer RTS game.

    Please read their Made with Unity blog; it is very informative.

    In a way this sounds very similar to the approach Kerbal Space Program used: they wrote a 64 bit physics simulation (probably more for gravity and orbital dynamics than collisions) and used Unity mostly as a rendering engine.

    In both cases the developers needed a 64 bit large-world simulation and so ended up building their own - but how much easier would it be if Unity had a 64 bit option built in?
     
    yoyobbi likes this.
  39. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Sigh.

    Read this:
    http://chrishecker.com/Rigid_body_dynamics

    You don't understand the difference between "move a vehicle" and "general purpose physics engine". General purpose is about 10 times more complex.

    That's not a "physics engine" and won't be anywhere near PhysX/Havok complexity.

    Not to mention that RTS doesn't really need a physics engine.

    That's a nonsense argument here. It doesn't matter how "easy that would be for the end user". The development team would need to justify spending the time required to implement the new feature, and if only 1 game in 10,000 is going to use it, there's no point in implementing it... unless all existing bugs are already fixed and the devs have nothing else to do.

    Solving some issue for one game is significantly easier than solving the same problem for all games. Inflexible solutions are simpler.

    As I said, bring 19,000 people to vote, then the Unity team will have a reason to give a damn about higher precision.
     
    Last edited: Feb 5, 2016
  40. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK, so you want Unity to stay small, and not take on the AAA engines that are all going 64 bit so they can have massive worlds.

    And your reasoning is that, because the majority of small-world developers using Unity are here, they don't want the option to make larger worlds. You do realise the flaw in that logic could be that the others may not be here because they are off using a 64 bit game engine?

    Take the fact that the makers of Deserts of Kharak did not post on the forum, and only responded after I mentioned their game here; that suggests this community may not be representative of the professional Unity developers who might want or need this feature.

    Maybe, as part of my initial question asking UT to research 64 bit precision, they could also contact professional developers and ask their opinions.

    Most indie developers would struggle to generate good or great content over a 10 km+ region, but if Unity would like more AAA studios using its tech, maybe this is something they should add, or at least research.

    What if an indie or small studio using Unity were to get Kickstarter or other funding to make their game? They imagine a large world, only to find that Unity can't do it. Do they stick with Unity and work out how to build a large-world system on top of it, using up valuable budget and possibly hiring more technical help, or do they drop Unity and adopt an engine that provides large worlds out of the box?

    Or what is better for Unity and its future: a Unity without 64 bit, a great little game engine, or a Unity with 64 bit, able to go toe to toe with the best game engines out there?

    I want Unity to be the best it can be; you seem to want to hold it back. Why, do you work for another game engine? Are you a troll for hire? ;):p:D
     
    darkhog likes this.
  41. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    I could say the same about you, because you ignore the problems Unity currently has while proposing changes that would require rewriting Unity from scratch. Disregarding bugs, one small example: the Unity NavMesh cannot be baked at runtime, which will only be solved in 5.5 (at least they're trying). There is a non-elegant workaround to generate it procedurally, but it's generally the same as, or even worse than, your case (btw UE can bake navmeshes at runtime, at least in theory). Do you want more examples?
     
  42. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    Just to be precise here: it does not say anything about the 'simple API' being runtime; it may be editor only.
     
  43. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    No, I want Unity to fix all the existing issues before working on a feature that potentially involves half of the engine.

    The engine has a truckload of issues. It is not the right time to implement this kind of thing, and whoever needs it should have no problem making a workaround for Unity's limitations.
     
    Ryiah, Teravisor and Zuntatos like this.
  44. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    usually

    I asked if that operation (iterating over everything in the game and adding a vector to it) would be more or less performant than parenting everything to a transform and then moving that transform. Is there any engine function / optimization that makes parented transforms moving together more performant than moving them individually? Or is it exactly the same, which would mean that a 'base' parent transform is actually slower, because the movement cost is the same but there is extra overhead for setting and unsetting the parent?

    I'm hoping UT has some sort of magic in place for moving things under a parent so I can justify doing things that way. It's just an easier solution for me.
     
  45. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    None, other than that in UE it's compiled code while in Unity it's script (thus a small performance difference in favour of UE). Otherwise it's the same. There's no magic in programming, unfortunately, as PCs aren't magical entities...
     
  46. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    Which is largely being eliminated by IL2CPP as Unity's C# scripts get converted to C++ prior to being compiled.
     
  47. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Tests show that setting the transform.position property is still 10 times slower from script than accessing a variable/property in pure C# code... Besides, I did say "a little performance". Well, okay, a nearly-not-noticeable-in-all-cases performance difference.
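    Which is why the usual advice is to do the per-frame math on a plain Vector3 and touch the property once, rather than reading/writing transform.position inside a loop. A trivial sketch:

    Code (CSharp):
    using UnityEngine;

    public class CachedPositionExample : MonoBehaviour {
        void Update() {
            Vector3 p = transform.position;   // one property read

            // ...all intermediate math on the plain struct, no engine calls...
            for (int i = 0; i < 10; i++)
                p += Vector3.forward * (0.01f * Time.deltaTime);

            transform.position = p;           // one property write
        }
    }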
     
    Last edited: Feb 5, 2016
  48. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    @Arowx, usually you are very keen on having the best possible performance. Using 64 bit almost everywhere would lower performance, e.g. for SIMD operations. AAA games need good performance; are you sure you want to have bad performance?
     
  49. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,147
    Having the ability to choose the precision you need would be nice though that's more work on Unity's part.
     
  50. darkhog

    darkhog

    Joined:
    Dec 4, 2012
    Posts:
    2,218
    It makes little sense for GPU vendors to move to 64bit before major engines have made a move to support it.
     