
Massive World or Space Challenge

Discussion in 'General Discussion' started by Arowx, Feb 7, 2016.

  1. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,137
    Well, you put in memory what you can, but it needs to be read and written often if you're going to be traveling a lot in a game world. Why keep chunk 0 in memory after traveling 10,000 chunks away?
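
    Something like this is what I'm picturing - just a rough sketch, with the chunk type, keep radius, and save call all made up for illustration: chunks farther than some radius from the player get written out and dropped from memory.

    Code (CSharp):
    using System;
    using System.Collections.Generic;

    // Stand-in for whatever block/terrain data a chunk actually holds.
    class Chunk
    {
        public byte[] Blocks = new byte[4096];
    }

    class ChunkCache
    {
        // Keep chunks within this many chunks of the player (value made up).
        const int KeepRadius = 8;

        readonly Dictionary<(int x, int z), Chunk> loaded = new Dictionary<(int x, int z), Chunk>();

        // Call whenever the player crosses a chunk boundary.
        public void OnPlayerMoved(int playerChunkX, int playerChunkZ)
        {
            var toUnload = new List<(int x, int z)>();
            foreach (var key in loaded.Keys)
            {
                // Chebyshev distance: how many chunks away along the worst axis.
                int dist = Math.Max(Math.Abs(key.x - playerChunkX), Math.Abs(key.z - playerChunkZ));
                if (dist > KeepRadius)
                    toUnload.Add(key);
            }

            foreach (var key in toUnload)
            {
                SaveChunkToDisk(key, loaded[key]); // persist before dropping from memory
                loaded.Remove(key);
            }
        }

        void SaveChunkToDisk((int x, int z) key, Chunk chunk)
        {
            // Placeholder - how to store this efficiently is the real question.
        }
    }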
     
    darkhog likes this.
  2. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Opening a folder requires resources too... and it often triggers disk activity, so no, it's not fast. neginfinity is right about files - opening one is an expensive operation, since it requires disk activity, creating handles, checking whether the handle is valid, etc.
    Why not keep a buffer then? Just a plain list of arrays in one file. It'll be faster simply because the minimum size an HDD can write is the cluster size, which is often 4 KB (so a 4 KB write takes the same amount of time as writing 1 byte).
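
    Roughly like this - just a sketch, where the record size and API choice are my assumptions and every chunk is assumed to fit in one 4 KB slot:

    Code (CSharp):
    using System;
    using System.IO;

    // "A plain list of arrays in one file": every chunk gets a fixed 4 KB slot,
    // so slot N lives at byte offset N * 4096 and a read or write is one seek.
    static class FlatChunkFile
    {
        const int RecordSize = 4096; // one cluster per chunk

        public static void WriteChunk(string path, int index, byte[] data)
        {
            // Assumes data.Length <= RecordSize; pad the record to a full cluster.
            var record = new byte[RecordSize];
            Array.Copy(data, record, data.Length);

            using (var fs = new FileStream(path, FileMode.OpenOrCreate, FileAccess.Write))
            {
                fs.Seek((long)index * RecordSize, SeekOrigin.Begin);
                fs.Write(record, 0, RecordSize);
            }
        }

        public static byte[] ReadChunk(string path, int index)
        {
            var record = new byte[RecordSize];
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                fs.Seek((long)index * RecordSize, SeekOrigin.Begin);
                int read = 0;
                while (read < RecordSize)
                {
                    int n = fs.Read(record, read, RecordSize - read);
                    if (n == 0) break; // hit end of file
                    read += n;
                }
            }
            return record;
        }
    }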
     
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    7,162
    My point was that you'll want to keep the number of files as low as possible. It has nothing to do with keeping chunks in memory. You can dump everything into the same file, you know.

    Also, antivirus software may interfere with the process, making it even slower.
     
  4. Lee7

    Lee7

    Joined:
    Feb 11, 2014
    Posts:
    136
    This challenge would be easy for a single-player game.

    The real challenge would be multiplayer.

    The largest example of an open-world, non-instanced, non-sharded game world without any funky tricks that I know of is Lineage 2.

    The map is HUGE and you can literally walk from one end to the other with no loading screens or other tomfoolery.
     
    Last edited: Feb 20, 2016
  5. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,137
    @neginfinity
    @Teravisor

    Well that's... disappointing. Instead of one massive file where you scan line by line looking for a resource near the bottom, I thought you could save time if the files were broken up by content. So a few million lines in one file is better than several organized files?

    If file access is so slow, how do games like Minecraft save the world reliably in case of a crash?
     
  6. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Files aren't lines, they are bytes. If you're using XML/YAML for saving data, you're losing a lot in both space and write speed. But yes, one 1 MB file is better than 1000 files of 1 KB each, because querying the folder structure takes time too, and 1 KB is less than the cluster size (usually 4 KB, though you can change that when formatting the drive), so you actually end up with about 3 MB of overhead. Keeping 1000 file handles also takes memory and time to create, unlike one.
    At some point, Minecraft started packing whole regions (not single chunks) into files. While each individual chunk is less than 4 KB, which causes inefficiency, you can always pack several chunks together with a small index file to make up for it (or write the index at the beginning of the file). Don't forget, you can save in the background, and only when the disk isn't busy reading a new chunk.
    Why not first create a temp file, write to it, and then overwrite the original file once the temp write has succeeded? That way you only lose either the temp file (nothing gets written) or the original (you can still restore from the temp). This can be done in the background. It's a bad approach if you want to write files that are gigabytes in size, but if your regions are 4 KB-1 MB it's relatively fast (the numbers depend on the target platform).
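
    Something along these lines (only a sketch - the names are made up, and the final replace is only as safe as the platform makes it):

    Code (CSharp):
    using System.IO;

    static class SafeSave
    {
        // Write the new region data to a temp file first, then swap it in only
        // after the write has fully succeeded.
        public static void SaveRegion(string regionPath, byte[] regionData)
        {
            string tempPath = regionPath + ".tmp";

            // 1. Write everything to the temp file and flush it through the OS cache.
            using (var fs = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
            {
                fs.Write(regionData, 0, regionData.Length);
                fs.Flush(true); // flush to disk, not just to the stream buffer
            }

            // 2. Swap the temp file in. Crash before this point: the original is intact.
            //    Crash after: the new data is already in place.
            if (File.Exists(regionPath))
                File.Replace(tempPath, regionPath, null);
            else
                File.Move(tempPath, regionPath);
        }
    }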

    If you need extreme reliability, you can always use an SQL server, which will keep your data safe in case of a crash.
     
  7. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,137
    SQL faster than a flat file? Oh my. It seems there are a lot of things I picked up in high school that I need to go back and fix.
     
  8. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Um... Nope. It's not. It just gives:
    1. Epic data reliability.
    2. Parallel access to data out of the box (depends on the SQL server, though).
    3. Random access to data out of the box.

    All three are good for an MMORPG server. For anything less than that, it's overkill.

    But it's still slower for clustered reads/writes (like Minecraft chunks), as it needs to manage tables, which has huge overhead. It's still viable (see the open-source project Minetest - I think they used SQL for chunk storage the last time I looked, but that was long ago and they could have changed something since).
     
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    7,162
    Don't design the file in such a way that you'll need to scan through the whole thing. At the very least, there's memory mapping, so you could let the OS handle that.

    Opening one file will be faster than opening 100 files.

    I'm not sure if Minecraft even autosaves the game.

    You can reduce the overhead via async file access. But still, the operation won't be fast.
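
    For example, something like this sketch (just the general shape, not a recommendation of any particular API):

    Code (CSharp):
    using System.IO;
    using System.Threading.Tasks;

    static class AsyncChunkIO
    {
        // The write goes through the OS's asynchronous I/O path instead of
        // blocking the caller. Path and buffer are placeholders.
        public static async Task SaveChunkAsync(string path, byte[] data)
        {
            using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write,
                                           FileShare.None, bufferSize: 4096, useAsync: true))
            {
                await fs.WriteAsync(data, 0, data.Length);
            }
        }
    }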

    Either way, "profile before optimizing".
     
  10. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,137
    Would memory mapping let you skip to the right part of the file?

    It does; the mystery is when. There are a lot of mods that give you faster ways to move. Some of these cause crashes on slower computers because you're traveling a few chunks per second. But when you get back into the game, you are within a block of where you crashed. And if you were traveling through terrain and leaving a trail of modified chunks behind you, you'd see the changes there too.

    Maybe Minecraft frequently freezes because it is constantly dumping data into a file somewhere?

    Oh. Then why would SQL be better if I have a well-organized flat file - like a JSON file?
     
  11. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    Didn't you read the three points after what you quoted? All of them are better in SQL than in a flat JSON file.
     
  12. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    7,162
    Memory mapping lets you access the file through a memory pointer. I.e. you open the file, memory-map it, and treat it as a raw memory block. The OS does the magic of loading/unloading the relevant segments. Different access flags may result in different performance.
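
    Roughly like this in C# (only a sketch; the offset and file layout are made up):

    Code (CSharp):
    using System.IO;
    using System.IO.MemoryMappedFiles;

    static class MappedSave
    {
        // Map the save file and read at an arbitrary offset as if it were a
        // memory block; the OS pages the relevant parts in and out on its own.
        public static int ReadChunkHeader(string path, long chunkOffset)
        {
            using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
            using (var accessor = mmf.CreateViewAccessor())
            {
                // Jump straight to the chunk's position - no scanning, no explicit Seek.
                return accessor.ReadInt32(chunkOffset);
            }
        }
    }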
     
  13. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,137
    Would your random-access point mean it's faster to access? I don't really care about reliability.

    Oh.
     
  14. Teravisor

    Teravisor

    Joined:
    Dec 29, 2014
    Posts:
    654
    It'll be faster if you're searching for data. Depending on the implementation, modifying one variable in a huge object will be fast as well. For just reading a random variable - if you're opening a file handle each time you read the file, maybe. Otherwise no, it's the same or slower (unless SQL keeps a cache and you get a cache hit).
     