
Question Why can't I change the size limit of a Chunk?

Discussion in 'Entity Component System' started by somebodySB, Aug 2, 2023.

  1. somebodySB

    somebodySB

    Joined:
    Nov 30, 2018
    Posts:
    13
    My project uses DOTS (the Data-Oriented Technology Stack), but it has many large components, which pushes several entities close to the 16 KB chunk limit. I tried to improve the application by modifying the source code of the "entities" package to raise the chunk size limit to 64 KB. After the modification the project ran very well and the FPS did not drop, although memory usage increased by roughly 2 GB.

    Now, I have two questions:
    1. If the chunk size doesn't affect performance, why not let Unity users customize it as a configuration option?
    2. Under normal circumstances, one chunk holds one 16 KB entity. After my modification, one chunk should hold four 16 KB entities, reducing the number of chunks. So where does the large increase in memory usage come from?
     
  2. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    760
    When I last looked, Unity always allocated chunk memory in blocks of 64 chunks (using a 64-bit mask to track which chunks are free or used). With 16 KiB chunks that is 1 MiB per block; if you increase the chunk size to 64 KiB, you increase the block size to 4 MiB.
    Other data is also stored in a chunk (in a header of 64 bytes), so not all 16 KiB (64 KiB in your case) is available. That means in your case there is only space for 3 entities, and a lot of the space in the chunk stays empty.
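
    A quick back-of-envelope sketch of both effects, using the 64-chunks-per-block and 64-byte-header figures from this post (they may differ between Entities versions):

        const int OldChunkSize = 16 * 1024;   // 16 KiB
        const int NewChunkSize = 64 * 1024;   // 64 KiB
        const int ChunksPerBlock = 64;        // chunks are allocated in blocks of 64
        const int HeaderSize = 64;            // per-chunk header

        // Block size scales with chunk size:
        int oldBlock = OldChunkSize * ChunksPerBlock; // 1,048,576 bytes = 1 MiB per block
        int newBlock = NewChunkSize * ChunksPerBlock; // 4,194,304 bytes = 4 MiB per block

        // Entities per chunk when one entity's components total a full 16 KiB:
        int perChunk = (NewChunkSize - HeaderSize) / (16 * 1024); // 3, not 4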

    However, if you have an entity whose components together exceed 16 KiB, you should reconsider your architecture and possibly split it into several entities.
     
    somebodySB and xVergilx like this.
  3. Arnold_2013

    Arnold_2013

    Joined:
    Nov 24, 2013
    Posts:
    262
    From what I remember, the chunk size was chosen mostly arbitrarily, but there did not seem to be a use case for very large entities. Large data structures would be placed in native containers or blobs, since an application usually has only a few of them, not many.
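
    For example, a minimal blob sketch of that idea (BigTable and BigTableRef are hypothetical names; BlobBuilder, BlobArray and BlobAssetReference are the Unity.Entities types):

        using Unity.Collections;
        using Unity.Entities;

        // Large read-only data lives in a blob on the heap;
        // the chunk only stores a small reference component.
        public struct BigTable
        {
            public BlobArray<float> Values;
        }

        public struct BigTableRef : IComponentData
        {
            public BlobAssetReference<BigTable> Blob;
        }

        public static class BigTableBuilder
        {
            // Build the blob once, e.g. during baking or world setup.
            public static BlobAssetReference<BigTable> Build(int length)
            {
                var builder = new BlobBuilder(Allocator.Temp);
                ref BigTable root = ref builder.ConstructRoot<BigTable>();
                BlobBuilderArray<float> values = builder.Allocate(ref root.Values, length);
                for (int i = 0; i < values.Length; i++)
                    values[i] = 0f;
                var blob = builder.CreateBlobAssetReference<BigTable>(Allocator.Persistent);
                builder.Dispose();
                return blob;
            }
        }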

    With the per-component layout in memory, having smaller components that hold only the data a system needs should be more performant than having big components where not all the data is used at the same point in the code.
     
  4. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,292
    16 KB also fits caches nicely [most of the time]. Different platforms -> different requirements.
    A custom chunk size has been on Unity's TODO list for a while, so it's probably going to be implemented at some point.

    Depending on what you do, you could try:
    1. Splitting into multiple entities by logical relations;
    2. Moving DynamicBuffers out of the chunk (via [InternalBufferCapacity(0)]) - in most cases you don't actually need them stored in the chunk directly (see the sketch after this list);
    3. Compressing data types.
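
    A minimal sketch of option 2, with a hypothetical DamageEvent element ([InternalBufferCapacity] itself is the real Entities attribute):

        using Unity.Entities;

        // Capacity 0 means no elements are stored inline in the chunk;
        // the buffer's storage lives on the heap and only a small
        // header stays in chunk memory.
        [InternalBufferCapacity(0)]
        public struct DamageEvent : IBufferElementData
        {
            public float Amount;
        }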
     
  5. somebodySB

    somebodySB

    Joined:
    Nov 30, 2018
    Posts:
    13
    Thank you very much. The answers help me a lot. Before trying to increase the chunk size, I did indeed use several entities, but that caused some problems. The biggest one is that I need to query different components for one function. When they are split across different entities, I have to use a NativeHashMap to collect some of the components together in one job, and then use it in another job. (E.g. the daily record is one entity, the cumulative record for a month is one, and the monthly record is another.) If I can increase the chunk size, I can do it all in one job, which is easier to program and, I think, performs better.
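
    Roughly that workaround as a sketch (RecordKey and DailyRecord are hypothetical components; NativeParallelHashMap is the Unity.Collections type, called NativeHashMap in older versions):

        using Unity.Burst;
        using Unity.Collections;
        using Unity.Entities;

        // The records live on separate entities, linked by a shared key.
        public struct RecordKey : IComponentData { public int Id; }
        public struct DailyRecord : IComponentData { public float Value; }

        // Job 1: collect one component type into a map keyed by id...
        [BurstCompile]
        public partial struct CollectDailyJob : IJobEntity
        {
            public NativeParallelHashMap<int, float>.ParallelWriter Map;

            void Execute(in RecordKey key, in DailyRecord daily)
            {
                Map.TryAdd(key.Id, daily.Value);
            }
        }

        // ...so that a second job, iterating the monthly-record entities,
        // can look the daily values up by key instead of reading them
        // from the same chunk.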
     
  6. somebodySB

    somebodySB

    Joined:
    Nov 30, 2018
    Posts:
    13
    As far as I know, data beyond a DynamicBuffer's internal capacity is stored in heap memory, which affects performance. So this may not be a good choice.
     
  7. Laicasaane

    Laicasaane

    Joined:
    Apr 15, 2015
    Posts:
    288
    somebodySB likes this.
  8. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,292
    [InternalBufferCapacity(0)] ensures the buffer is pre-allocated outside the chunk when the entity is created.
    Only a pointer is stored inside the chunk. It doesn't move the elements later; it just stores them in a different place from the start.

    If the buffer outgrows its capacity (in cases where the attribute is not set, or is set to a nonzero value), it will still be moved out of the chunk, but at the higher cost of reallocating and moving the data. So if the buffer's inline capacity is full and you push more data, memory is reallocated and the contents are copied over.
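
    If a buffer does have to grow, one way to pay that reallocation only once is to reserve capacity up front; a sketch reusing the hypothetical DamageEvent element from above (DynamicBuffer.EnsureCapacity is part of the Entities API):

        using Unity.Entities;

        static class BufferSetup
        {
            // Grow the heap storage once, before the hot loop, instead of
            // reallocating and copying as the buffer fills up.
            public static void Reserve(EntityManager entityManager, Entity entity)
            {
                DynamicBuffer<DamageEvent> buffer = entityManager.GetBuffer<DamageEvent>(entity);
                buffer.EnsureCapacity(1024);
            }
        }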


    From experience, keeping DynamicBuffers out of the chunk doesn't hurt much when iteration is already fast enough.
    It should be similar to using a native container with a persistent allocator.

    I've moved all DynamicBuffers away from large entities and it saved half or more of the memory in the chunk.
    Allocating extra chunks costs way more than a cache miss.

    Decide on a case-by-case basis.
    If you have fewer than ~10k entities, you'd probably want to move the buffers out.
    Otherwise, profile and see if it makes a big difference.

    Usually jobs are too fast & near free anyway.
    It's worth trading some job performance on a separate thread rather than moving data around on the main thread.
     
    Last edited: Aug 3, 2023
  9. somebodySB

    somebodySB

    Joined:
    Nov 30, 2018
    Posts:
    13
    It's a beautiful solution, and thank you for sharing the PowerPoint. If I don't misunderstand it, in the current frame you write this frame's data and read last frame's, and then sync them. But what if you must read the data right after writing it?
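
    For reference, a minimal double-buffer sketch of that pattern (my reading of it, not the slides' actual implementation):

        // Systems read last frame's copy and write this frame's copy;
        // the two are swapped once per frame at a sync point. Data written
        // this frame only becomes readable after the swap.
        public sealed class DoubleBuffer<T>
        {
            private T[] _read;
            private T[] _write;

            public DoubleBuffer(int length)
            {
                _read = new T[length];
                _write = new T[length];
            }

            public T Read(int index) => _read[index];
            public void Write(int index, T value) => _write[index] = value;

            public void Swap() => (_read, _write) = (_write, _read);
        }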
     
  10. somebodySB

    somebodySB

    Joined:
    Nov 30, 2018
    Posts:
    13
    I'll try it in my project and test the final performance. Thank you for the advice.
     
    xVergilx likes this.