ECS for procedural voxel world gen and mesh creation

Discussion in 'Entity Component System' started by AnthonyPaulO, Dec 18, 2019.

  1. AnthonyPaulO

    AnthonyPaulO

    Joined:
    Nov 5, 2010
    Posts:
    110
    Hello World!

    I've been researching ECS and fooling around with some basic things here and there to learn it better, and decided it was time to port my toy voxel project, a basic Minecraft clone, to ECS, mostly for learning purposes. The data-oriented approach is not a problem for me; I understand cache locality, cache misses, etc., so it's just a matter of learning how to put it all into practice with Unity.

    In a Minecraft-type world, the world is made up of different types of blocks, generated via some noise function with biomes layered on top. To optimize for performance, the engine pools blocks together into chunks, say 16x16x16, and generates enough chunks at a time that you're always immersed in a part of the world.

    Let's say we split this generation into two phases: the first is the world generation portion, which generates the type of world around you based on some noise function, and the second is generating the 3D representation of that world, which then gets rendered on the screen. These phases seem like great candidates for the job system, particularly the 3D chunk generation, since it's the more visible and real-time-dependent of the two.

    I've been going through a lot of the ECS examples and have yet to see one that covers most of the concerns involved in implementing ECS for these phases. Take the second phase as an example. The samples are all about spawning prefabs, where all you pass in is positional or rotational data that moves them around on the screen. I have yet to see an example that creates a mesh from scratch based on a (relatively) large set of data. One small mesh example I saw created the mesh on the main thread and only used the job system to rotate it. What I'd like to know is the following:

    1) Given an array of chunks, where each chunk is, say, a 16x16x16 (4096) array of block data, the idea is to "pass in" only the data that needs to be consumed, in this case the subset of the block data necessary to generate the chunk mesh. This means iterating through each chunk and copying the needed subset into another (preallocated) array; this data may also include additional data from lookups (such as UVs for textures). I take it this would have to be done on the main thread? It could be done as a job specifically meant for this data translation, but it would need access to those lookup tables. What is the best approach for this type of scenario?
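    One way this translation could avoid the main thread (a sketch, assuming Unity.Collections/Unity.Jobs; the `BlockData`/`MeshInput` types, the UV table, and the "type 0 = air" visibility rule are made up for illustration) is to pass the lookup tables into the job as read-only NativeArrays:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Hypothetical per-block source data; a real block struct will differ.
public struct BlockData
{
    public byte Type; // 0 = air
}

// Hypothetical output record: just what the mesher needs.
public struct MeshInput
{
    public int Index;  // flat index inside the 16x16x16 chunk
    public byte Type;
    public float2 Uv;  // looked up from the table below
}

[BurstCompile]
public struct TranslateChunkJob : IJob
{
    [ReadOnly] public NativeArray<BlockData> Blocks; // 4096 entries
    [ReadOnly] public NativeArray<float2> UvLookup;  // indexed by block type
    public NativeList<MeshInput> Output;             // preallocated with capacity 4096

    public void Execute()
    {
        for (int i = 0; i < Blocks.Length; i++)
        {
            if (Blocks[i].Type == 0) continue; // skip air; real culling is more involved
            Output.Add(new MeshInput
            {
                Index = i,
                Type = Blocks[i].Type,
                Uv = UvLookup[Blocks[i].Type]
            });
        }
    }
}
```

    The lookup tables only need to be built once on the main thread; after that, the translation can run as one job per chunk, or be chained directly into a mesh-generation job.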

    2) Let's say we now have chunks of block data containing only what we need for mesh generation. Is each block the IComponentData we pass in, or is each chunk the IComponentData, in which case we're passing in an array?

    3) The data generated during mesh generation (vertices, UVs, faces, etc.) could be cached for performance; for example, if I remove a block from the world or need to cull block faces, I may only need to tweak the data rather than regenerate it from scratch. What mechanism exists for caching this data and reusing it later for modification?

    Thanks in advance.

    Anthony
     
    Last edited: Dec 18, 2019
  2. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,775
    There have been multiple threads here discussing mesh generation and voxels with DOTS, including in the context of Minecraft-like worlds.

    Let's for a moment consider generating ~300k blocks with entities.
    For simplicity, take a 16x16x2 surface slice of a chunk with low complexity, essentially a flat chunk, and skip generating/rendering hidden blocks. That is 512 blocks in a single chunk.
    Assuming we generate every visible chunk with the same number of visible blocks (for simplification), whether near or far, a 24x24 area is 576 chunks, i.e. 576 x 512 = 294,912 potentially visible blocks. That is with zero optimization, and you can achieve it now.

    And we know we can generate lots of entities fairly efficiently.

    A first reduction in block generation can come from simply combining adjacent identical blocks into one larger block.
    That alone could easily halve the number of visible blocks, with no need for any mesh generation at this point.

    I would consider surface mesh generation a real need only for far-away chunks, if you want to allow long view distances.
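    The "combine adjacent identical blocks" idea can be illustrated along a single axis (a sketch; `BlockRun` and the air-is-zero convention are assumptions, and real greedy meshing extends this merging to 2D faces or 3D boxes):

```csharp
using System.Collections.Generic;

public struct BlockRun
{
    public byte Type;
    public int Start;  // local x coordinate where the run begins
    public int Length; // how many identical blocks were merged
}

public static class RunMerger
{
    // Merge identical adjacent blocks along one 16-block row into runs.
    // Air (type 0) produces no run at all.
    public static List<BlockRun> MergeRow(byte[] row)
    {
        var runs = new List<BlockRun>();
        int i = 0;
        while (i < row.Length)
        {
            if (row[i] == 0) { i++; continue; }
            int start = i;
            while (i < row.Length && row[i] == row[start]) i++;
            runs.Add(new BlockRun { Type = row[start], Start = start, Length = i - start });
        }
        return runs;
    }
}
```

    Each run becomes one stretched cube instead of `Length` separate blocks, which is where the halving of visible block counts comes from.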
     
  3. Sarkahn

    Sarkahn

    Joined:
    Jan 9, 2013
    Posts:
    440
    For the purposes of mesh generation you don't need to do it on the main thread. You can store all your mesh data, probably one set per block chunk, in NativeArrays or dynamic buffers and pass those around in jobs. You only need to be on the main thread when you call Mesh.SetWhatever.
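    A sketch of that split, assuming a job has already collected the visible block positions (the quad-per-block output here is a stand-in for real face generation, and the `Mesh.SetVertices`/`SetIndices` NativeArray overloads require a reasonably recent Unity version):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;
using UnityEngine;

[BurstCompile]
public struct BuildMeshDataJob : IJob
{
    [ReadOnly] public NativeArray<int3> VisibleBlockPositions;
    public NativeList<float3> Vertices;
    public NativeList<int> Triangles;

    public void Execute()
    {
        for (int i = 0; i < VisibleBlockPositions.Length; i++)
        {
            // Emit one top-face quad per block as a stand-in for real face culling.
            float3 p = VisibleBlockPositions[i];
            int v = Vertices.Length;
            Vertices.Add(p);
            Vertices.Add(p + new float3(1, 0, 0));
            Vertices.Add(p + new float3(1, 0, 1));
            Vertices.Add(p + new float3(0, 0, 1));
            Triangles.Add(v); Triangles.Add(v + 2); Triangles.Add(v + 1);
            Triangles.Add(v); Triangles.Add(v + 3); Triangles.Add(v + 2);
        }
    }
}

// Back on the main thread, after handle.Complete():
// mesh.SetVertices(job.Vertices.AsArray().Reinterpret<Vector3>());
// mesh.SetIndices(job.Triangles.AsArray(), MeshTopology.Triangles, 0);
```

    Everything above the two commented lines can run inside the job system; only the final upload into the `Mesh` object touches the main thread.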

    Regarding 2: I've been going back and forth on how best to represent the block data. In other threads a Unity person seemed to indicate that arrays of block data per chunk were the way to go, as opposed to one entity per block. The latter makes it easier to add per-block data, but makes block access by position more complicated (and slower), since you need to manage the arrays of blocks yourself and can't rely on them being in contiguous order in memory.
     
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,775
    True, but you can combine the two: store the block data per chunk, i.e. in a buffer, and use entities for rendering.

    In fact, you could skip the entities part and use a Graphics.DrawMesh approach to render, for example DrawMeshInstancedIndirect. There have been a few discussions about this.
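    A minimal sketch of the indirect-draw path (the cube mesh, material, buffer name, and hard-coded positions are assumptions; the material's shader must itself read `positionBuffer` per instance):

```csharp
using UnityEngine;

public class ChunkIndirectRenderer : MonoBehaviour
{
    public Mesh blockMesh;    // e.g. a unit cube
    public Material material; // instanced shader that reads positionBuffer

    ComputeBuffer argsBuffer;
    ComputeBuffer positionBuffer;

    void Start()
    {
        // Hypothetical positions; in practice these come from your chunk data.
        Vector3[] positions = { new Vector3(0, 0, 0), new Vector3(1, 0, 0) };
        positionBuffer = new ComputeBuffer(positions.Length, sizeof(float) * 3);
        positionBuffer.SetData(positions);
        material.SetBuffer("positionBuffer", positionBuffer);

        // Indirect args: index count, instance count, start index, base vertex, start instance.
        uint[] args = { blockMesh.GetIndexCount(0), (uint)positions.Length, 0, 0, 0 };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
    }

    void Update()
    {
        // One draw call for all instances, no per-block entities or GameObjects.
        Graphics.DrawMeshInstancedIndirect(
            blockMesh, 0, material,
            new Bounds(Vector3.zero, Vector3.one * 100f), argsBuffer);
    }

    void OnDestroy()
    {
        argsBuffer?.Release();
        positionBuffer?.Release();
    }
}
```

    The instance count in the args buffer can be updated whenever a chunk's visible block set changes, without rebuilding anything else.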
     
  5. AnthonyPaulO

    AnthonyPaulO

    Joined:
    Nov 5, 2010
    Posts:
    110
    Are these discussions documented somewhere?
     
  6. GilCat

    GilCat

    Joined:
    Sep 21, 2013
    Posts:
    676
    Antypodish likes this.
  7. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,775
    See @GilCat's response.

    Also, search for anything regarding rendering in the DOTS forum.
    Some keywords:
    • DrawMesh
    • DrawMeshInstanced
    • DrawMeshInstancedIndirect
     
  8. Sarkahn

    Sarkahn

    Joined:
    Jan 9, 2013
    Posts:
    440
    This is what I've started doing: one entity per block, with each chunk holding a dynamic buffer of entities referencing all the blocks in that chunk. This lets me use straightforward indexing per chunk when I need to access blocks by position, and makes it easy to add per-block data.
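    That layout could be sketched roughly like this (assuming Unity.Entities; the `BlockRef` name and x-then-y-then-z index order are made up for illustration):

```csharp
using Unity.Entities;
using Unity.Mathematics;

// One element per block; the chunk entity carries 4096 of these in order.
public struct BlockRef : IBufferElementData
{
    public Entity Value;
}

public static class ChunkIndexing
{
    public const int Size = 16;

    // Flat index for a local position, matching the buffer's layout.
    public static int ToIndex(int3 local) =>
        local.x + local.y * Size + local.z * Size * Size;
}

// Looking up a block entity by local position:
// DynamicBuffer<BlockRef> blocks = EntityManager.GetBuffer<BlockRef>(chunkEntity);
// Entity block = blocks[ChunkIndexing.ToIndex(new int3(3, 7, 12))].Value;
```

    The buffer keeps a stable positional ordering even though ECS is free to move the block entities themselves between archetype chunks.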

    It feels like the most sensible way to me, but I suspect it might not scale well once I get into hundreds-of-thousands-of-blocks territory. I guess I need to start learning about compression at some point.
     
  9. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    792
    In my first experiments I used one entity per block and a SharedComponent for the chunk. Dynamic buffers didn't work (they have some limits), and with a large number of chunks I also hit the limits of the entity system (entity store allocation exceptions) or massive memory usage (ca. 2 GB of RAM just for the entities, without any data). So I switched to flat arrays in chunk objects and plan to use block entities only where needed.

    If you use a power-of-two chunk size, you can use bit operations to get the local block position within the chunk. With a 16x16x16 chunk you can cast the world position to an int3 and keep only the last 4 bits (int x = pos.x & (16 - 1)).
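    Spelled out (a sketch; assumes the 16x16x16 chunk size from the post, and the `BlockMath` name is made up):

```csharp
using Unity.Mathematics;

public static class BlockMath
{
    const int ChunkSize = 16;       // must be a power of two
    const int Mask = ChunkSize - 1; // 15 -> keeps the low 4 bits
    const int Shift = 4;            // log2(ChunkSize)

    // Local position inside the chunk: world & (16 - 1), per component.
    public static int3 LocalPos(int3 world) =>
        new int3(world.x & Mask, world.y & Mask, world.z & Mask);

    // Which chunk the block lives in; arithmetic shift also handles negatives.
    public static int3 ChunkCoord(int3 world) =>
        new int3(world.x >> Shift, world.y >> Shift, world.z >> Shift);
}

// Example: world position (35, 5, -1)
// LocalPos   -> (3, 5, 15)   because -1 & 15 == 15
// ChunkCoord -> (2, 0, -1)   because -1 >> 4 == -1
```

    The shift/mask pair is why power-of-two chunk sizes are convenient: division and modulo by the chunk size become single bit operations that also behave correctly for negative world coordinates.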
     
    MNNoxMortem and Sarkahn like this.