DOTS 0.5 and Serialization

Discussion in 'Entity Component System' started by ogimusprime, Mar 25, 2022.

  1. ogimusprime

    ogimusprime

    Joined:
    Jul 20, 2021
    Posts:
    10
Hello! I recently upgraded my project to the new DOTS 0.5 release, which I was really excited for. Thank you, Unity Team! Unfortunately, the upgrade seems to have broken the method I use to serialize my world and save it to disk.
Code (CSharp):
public void SaveMethod()
{
    var querySaveTag = new EntityQueryDesc
    {
        All = new ComponentType[]
        {
            typeof(SaveTag)
        },

        None = new ComponentType[]
        {
            typeof(RequestSceneLoaded)
        }
    };
    var queryPrefabTag = new EntityQueryDesc
    {
        All = new ComponentType[]
        {
            typeof(Prefab)
        },

        None = new ComponentType[]
        {
            typeof(RequestSceneLoaded)
        }
    };
    using var saveQuery = EntityManager.CreateEntityQuery(querySaveTag, queryPrefabTag);

    using var saveWorld = new World("saveWorld", WorldFlags.Staging);

    using var entityArray = saveQuery.ToEntityArray(Allocator.Temp);

    using var entityRemap = new NativeArray<EntityRemapUtility.EntityRemapInfo>(EntityManager.EntityCapacity, Allocator.Temp);

    saveWorld.EntityManager.CopyEntitiesFrom(EntityManager, entityArray);

    var filePathStr = $"{Application.dataPath}/saveData.test";

    var streamWriter = new StreamBinaryWriter(filePathStr);

    SerializeUtility.SerializeWorld(saveWorld.EntityManager, streamWriter, out objectOuputArr, entityRemap);

    streamWriter.Dispose();
}
The issue with the code above is that the StreamBinaryWriter class is now "inaccessible due to its protection level". I'm not sure how to get this working again and would greatly appreciate some guidance. I'm also getting the same error with the StreamBinaryReader class, which is used in my load method.
     
    Tony_Max likes this.
  2. scottjdaley

    scottjdaley

    Joined:
    Aug 1, 2013
    Posts:
    152
    StreamBinaryReader and StreamBinaryWriter were deprecated in 0.18 and then removed in 0.50. Since 0.18 was never released, it was essentially just removed in a single update without any warning. I also didn't see any mention of it in the upgrade guide. The source for those classes is still available in Entities/Unity.Entities/Serialization/BinarySerialization.cs so you could just copy it into your project. Not sure if they still work correctly, but they appear to still be used in some tests.

    From https://docs.unity3d.com/Packages/com.unity.entities@0.50/changelog/CHANGELOG.html
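For anyone who would rather not copy the whole file: a minimal file-backed writer can also be written against the interface directly. This is an untested sketch that assumes the Unity.Entities.Serialization.BinaryWriter interface is still just an unsafe WriteBytes plus Dispose (the reader side would be symmetric with ReadBytes):

Code (CSharp):
using System.IO;
using Unity.Collections.LowLevel.Unsafe;

public unsafe class FileBinaryWriter : Unity.Entities.Serialization.BinaryWriter
{
    private readonly FileStream stream;
    private readonly byte[] buffer = new byte[64 * 1024];

    public FileBinaryWriter(string path) => this.stream = File.Create(path);

    public void WriteBytes(void* data, int count)
    {
        // Copy the unmanaged source into a managed buffer in chunks, then write to the stream.
        int written = 0;
        while (written < count)
        {
            int n = System.Math.Min(this.buffer.Length, count - written);
            fixed (byte* dst = this.buffer)
            {
                UnsafeUtility.MemCpy(dst, (byte*)data + written, n);
            }
            this.stream.Write(this.buffer, 0, n);
            written += n;
        }
    }

    public void Dispose() => this.stream.Dispose();
}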
     
    cooooop likes this.
  3. ogimusprime

    ogimusprime

    Joined:
    Jul 20, 2021
    Posts:
    10
    I had not thought of that. I will try to make my own Binary Reader/Writer using their code. Thank you for the tip!
     
  4. Krooq

    Krooq

    Joined:
    Jan 30, 2013
    Posts:
    180
I really badly want a serialization solution for DOTS.
It's not the sort of thing I want to implement myself: it's so easy to mess up, and the bugs when you do mess up are nasty.
     
    MNNoxMortem and Fribur like this.
  5. Krooq

    Krooq

    Joined:
    Jan 30, 2013
    Posts:
    180
I tried copying over the stream reader/writer, and I think it works...
Be careful though: you will get errors if you try to read or write a world without any chunks.
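For example, a guard along these lines (a sketch, assuming you still have the query used to populate the save world) avoids hitting the empty-world case:

Code (CSharp):
// Skip serialization entirely when the query matched nothing;
// SerializeWorld can throw on a world with no chunks.
if (saveQuery.CalculateEntityCount() == 0)
{
    return;
}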
     
  6. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
It really is tragic. I have a factorio-like game that's approaching 100k lines of code, with insanely complicated state to save and load. SerializeUtility.SerializeWorld never worked for me despite hours spent trying to tame it. I ended up writing my own solution, but I know it's much slower than it could be, and it's very delicate code, subject to an incredible amount of entropy as my game continues to grow.
     
    lclemens likes this.
  7. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
You do not want to use SerializeWorld for saving.

If you ever change a single component in the future, you will break all of your users' saves. It basically prevents you from ever updating your game once it's released. It's not designed for saving.
     
  8. HyperionSniper

    HyperionSniper

    Joined:
    Jun 18, 2017
    Posts:
    30
The simplest solution is to use a serialization library that can automatically serialize classes with reflection, and serialize/deserialize manually by turning entities into a big list of JSON/XML/whatever objects, with most of the hard work handled by reflection.
This is slow, but it's easy, and it can even be optimized in many libraries if you want to get into that.
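As a minimal sketch of that reflection idea (using UnityEngine.JsonUtility just for illustration; a real serialization library would replace the ToJson call):

Code (CSharp):
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Unity.Entities;
using UnityEngine;

public static class ReflectionSaveUtil
{
    // Open generic MethodInfo for EntityManager.GetComponentData<T>(Entity).
    private static readonly MethodInfo GetComponentDataOpen = typeof(EntityManager)
        .GetMethods()
        .First(m => m.Name == "GetComponentData" && m.IsGenericMethod
                    && m.GetParameters().Length == 1
                    && m.GetParameters()[0].ParameterType == typeof(Entity));

    // Returns one JSON fragment per non-tag IComponentData on the entity.
    public static List<string> SerializeEntity(EntityManager em, Entity entity)
    {
        var result = new List<string>();
        using var types = em.GetComponentTypes(entity);
        foreach (var componentType in types)
        {
            var managed = componentType.GetManagedType();
            if (!typeof(IComponentData).IsAssignableFrom(managed))
                continue;
            if (TypeManager.GetTypeInfo(componentType.TypeIndex).IsZeroSized)
                continue; // tag components carry no data

            var boxed = GetComponentDataOpen.MakeGenericMethod(managed).Invoke(em, new object[] { entity });
            result.Add($"\"{managed.FullName}\": {JsonUtility.ToJson(boxed)}");
        }
        return result;
    }
}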
     
    davenirline likes this.
  9. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
    So what is it for?
     
  10. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    948
    This was our solution and it's pretty solid. I have written about this.
     
    soundeosdev likes this.
  11. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
I probably shouldn't mention this, as I've only been working on it for a little under a week, but I'm kind of excited how it's turned out. I've taken my 3rd shot at writing a serialization library for Entities. I've worked on 2 completely opposite solutions (including a shipped product that requires it not to break between versions) over the past couple of years, and using what I've learnt, merged them together to create a very versatile solution avoiding the annoyances, tedious maintenance, and migration problems I've seen.

    I've got it to the point where you just add the attribute [Save] to any component you want saved and that's all you really need to do. No code-gen or reflection required.

Code (CSharp):
[Save]
[GenerateAuthoringComponent]
public struct TestRemapping : IComponentData
{
    public Entity Entity;
}
Code (CSharp):
[Save]
public struct TestBufferRemapping : IBufferElementData
{
    public Entity Entity;
}
Supports remapping entities and has a reasonably simple migration process.
It's extremely fast, especially to serialize. I intend to look at releasing this after I flesh out some features (subscene entity saving, release validation, etc.)
     
    Last edited: Apr 4, 2022
    bb8_1, lclemens, LuisEGV and 14 others like this.
  12. scottjdaley

    scottjdaley

    Joined:
    Aug 1, 2013
    Posts:
    152
    This looks great, excited to see what you share! I'm currently working on my own entity serialization so I'm curious about your approach.

    Do you have a solution for specifying the set of entities to save? For example, let's say I want to save all of the MyTestComponents, but only on entities that don't have a DontSaveTag. Would that be possible?

    If you wanted to save a built-in component, such as Parent or Translation, how would the user declare this? Is there some other way to register types?

I think you mentioned on Discord that you are instantiating prefabs during deserialization. If so, will this mapping to prefabs be customizable by the user? For example, currently all my entities have a FixedString32 in a component that can be used to fetch the corresponding prefab, but I might change it to use a UUID at some point. Is there a way to control what prefab key gets serialized and how it maps to a prefab entity?

    I have another very specific problem that I've run into recently. I want to serialize the LinkedEntityGroup buffer on entities which contains Entity refs. In some cases, this buffer contains child entities that are purely visual. They don't have any gameplay components on them, so they don't need to be serialized. In fact, I don't want to serialize them because they are part of the parent's corresponding prefab, so they will get instantiated during deserialization anyways. Is there some way to specify which elements of the LinkedEntityGroup buffer get serialized? And is there a way to declare the buffer deserialization as additive instead of replacement?

    My serialization library is also still a work-in-progress, but it currently looks something like this:

Code (CSharp):
// Describes the set of entities I want to save.
var desc = new EntityQueryDesc
{
    All = new[] { ComponentType.ReadOnly<EntityType>() },
    None = new[] { ComponentType.ReadOnly<HologramTag>(), ComponentType.ReadOnly<PreviewTag>() },
};
var world = World.DefaultGameObjectInjectionWorld;
EntityManager entityManager = world.EntityManager;

// Creates the serializer. The PrefabEntityMapper tells the serializer how to remap entity references into
// prefab keys, strings in this case.
var serializer = new EntitySerializer(entityManager.CreateEntityQuery(desc), entityManager, new PrefabEntityMapper(world));

// Specify the component and buffer types to serialize on the entities.
serializer.AddComponent<HexPosition>("Position");
serializer.AddComponent<Facing>("Facing");
serializer.AddComponent<Printer>("Printer");
serializer.AddBuffer<LinkedEntityGroup>("Group");

string path = "path/to/my.save";

// Save
using var sw = new StreamWriter(path);
using JsonWriter writer = new JsonTextWriter(sw);
serializer.Serialize(writer);

// Load
using var sr = new StreamReader(path);
JsonReader reader = new JsonTextReader(sr);
serializer.Deserialize(reader);

// {
//     "Entities": [
//     {
//         "Entity": {
//             "EntityIndex": 1,
//             "Key": "belt-straight"
//         },
//         "Components": {
//             "Position": {
//                 "Value": {
//                     "x": -5,
//                     "y": 6
//                 }
//             },
//             "Facing": {
//                 "Value": 0
//             },
//             "Group": [
//             {
//                 "Value": {
//                     "EntityIndex": 1,
//                     "Key": "belt-straight"
//                 }
//             }
//             ]
//         }
//     },
//     ...
//     ]
// }
     
    Krooq likes this.
  13. skiplist

    skiplist

    Joined:
    Nov 9, 2014
    Posts:
    45
Our approach has been to use this for saving, and I think it has worked pretty well. We make it work by copying everything over to new "serialized" components that we just never change. And if we stop using one, we need to at least keep it in the project to avoid breaking saves, just as you say.

So while it might not be for everyone, it works pretty well for us. Often, having different components for runtime and serialization makes sense anyway, like health: the runtime value is absolute, but we save a percentage, so that when we increase the max health it isn't deserialized with less health.
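As a sketch of that pattern (the component names here are invented), the runtime/serialized pair might look like:

Code (CSharp):
using Unity.Entities;

// Runtime component; free to change between releases.
public struct Health : IComponentData
{
    public float Current;
    public float Max;
}

// Frozen once shipped; stores a fraction so raising Max later
// doesn't load players back in with proportionally less health.
public struct SerializedHealth : IComponentData
{
    public float Fraction; // Current / Max at save time
}

On save you would write health.Current / health.Max into Fraction; on load, multiply Fraction by the (possibly new) Max.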
     
    Arnold_2013 likes this.
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
Is this such a big problem? It's just loading blocks of memory/data to/from a file system.
     
  15. GroundCombo

    GroundCombo

    Joined:
    Jan 23, 2015
    Posts:
    29
I would call saving and loading a world reliably, while avoiding breakage whenever a single bit changes in any component, a big problem, yes.

My solution for our game was to copy all entities except ISystemStateComponents, managed components, and prefabs into a temporary world, then use MakeGenericMethod() with GetBuffer<T>() and GetComponentData<T>() to convert them into a serializable SavedEntity class containing a List<IComponentData>, a List<SavedBuffer> (with type information and a list of IBufferElementData for the dynamic buffer), and the entity index and version, and serialize that.
On load we deserialize, recreate the entities with their components and buffers in a temporary world by using MakeGenericMethod() with AddComponent/Buffer, remap entity references, do some postprocessing, and move everything into the game world.

This has been working so far and allows us to tinker with components without breaking saves (most of the time) while we're developing.
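A rough reconstruction of the container shapes described above (the type names follow the post, but the exact fields are guesses):

Code (CSharp):
using System;
using System.Collections.Generic;
using Unity.Entities;

[Serializable]
public class SavedEntity
{
    public int Index;   // original entity index, used for remapping on load
    public int Version;
    public List<IComponentData> Components = new List<IComponentData>();
    public List<SavedBuffer> Buffers = new List<SavedBuffer>();
}

[Serializable]
public class SavedBuffer
{
    public string ElementTypeName; // used to rebuild the DynamicBuffer<T> via MakeGenericMethod
    public List<IBufferElementData> Elements = new List<IBufferElementData>();
}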
     
    lclemens and scottjdaley like this.
  16. scottjdaley

    scottjdaley

    Joined:
    Aug 1, 2013
    Posts:
    152
That sounds exactly like what I had working last week. However, I didn't really like how the type information had to be serialized, and all the reflection stuff was getting messy. I decided to switch to a system where the IComponentData and IBufferElementData types are explicitly registered with the serializer, which made things a bit simpler. Still trying to figure out the best way to handle different kinds of migrations without breaking saves. Currently thinking of just adding a layer that can rewrite the raw JSON before it is deserialized into entities.
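A sketch of what such a rewrite layer could look like with Newtonsoft.Json (which the earlier snippet already uses); the rename below is an invented example, and the "Entities"/"Components" keys follow the save format shown above:

Code (CSharp):
using Newtonsoft.Json.Linq;

public static class SaveMigrations
{
    // Rewrites raw save JSON before entity deserialization.
    // Example migration: a component key renamed "Facing" -> "Orientation" (hypothetical).
    public static string Migrate(string json)
    {
        var root = JObject.Parse(json);
        foreach (var entity in root["Entities"])
        {
            var components = (JObject)entity["Components"];
            var old = components.Property("Facing");
            if (old != null)
            {
                components.Add("Orientation", old.Value);
                old.Remove();
            }
        }
        return root.ToString();
    }
}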
     
  17. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
This is basically exactly what I'm doing as well. But reflection is slow, and the state in my factorio-like is massive and subject to a lot of entropy. I find myself having to do no small amount of ad hoc "post processing" of the data to really get it all working. I wish there was a better, faster way.

    I suppose I'll be eager to see what tertle is cooking up.
     
  18. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
I use no reflection (except GetAttribute); I just use DynamicComponentTypeHandle. I can write up my current approach, and why I'm doing it this way, tonight when I'm off work if I find some time.
     
    Last edited: Apr 7, 2022
  19. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
OK, here it is. Big ol' chunk of text. Skip to 3.1 if you don't care about a quick bit of background. I've used 2 very different save systems at work, and I've just started writing my own for my personal projects based on the experience gained from the systems I've used.

    1. Serialize the entire world
This was our first iteration, which I did not write personally. It was the simplest approach, just using SerializeWorld, and the obvious benefit is that it's extremely easy to use. It's mostly done for you! Or is it... We stuck with this for nearly a year and had massive tooling built to manage the issues. However...

The first obvious downside is that any change to a component's StableTypeHash breaks it. Change a namespace, the name of the component, any field on the component, add/remove any field: your save file is dead. We replaced old components with memory-mapped stubs with the exact same layout, then migrated to the new versions. We had a huge range of tools in place to help developers migrate this, including detecting and auto-generating the replacements. Problems really start creeping in, though, when you do a huge refactor that just can't be easily migrated.

The next big issue is inflexibility. You save the entire world, you load the entire world. But what if, in between, designers have come in and changed how things should behave? Maybe the giant lobster no longer has a sing ability, or you've decreased the max. It's a huge pain and a limitation on your designers to have to manually migrate the world to the new base prefabs. (Note: not all games want to apply changes to existing saves, and in that case this would not apply.)

You're also saving a lot of data you don't need; 90%+ of components/data you do not need to save. The worst thing is, because any component change requires a migration, even if you don't care what data is on a component you still need to migrate it.

But the final nail in the coffin, at least for me: most Unity updates break your saves, and there is little you can do. There is no guarantee that at some point in the future Unity won't make a change to the entities package that simply makes this unavoidable. This is not an acceptable risk on a launched title.

    2. Serialize each 'archetype' onto their own container
So yeah, a few months out from launch we just kept randomly breaking saves. We decided it was unacceptable and we needed a new approach, so I 'volunteered' and got to work writing something else. I kind of based it on what I had done, and seen done, in more traditional GO-type games.

    Built containers for each archetype, e.g.
struct BuildingSave { int Type; float3 Position; quaternion Rotation; }


To serialize, we just have a large IJobEntityBatch that reads all the data we want to save for this archetype and writes each entity to its own container. Pretty quick.
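A minimal sketch of what that per-archetype job could look like (BuildingSave is the example above with fields made public; BuildingType and the handle setup are my assumptions, not the actual shipped code):

Code (CSharp):
using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Mathematics;
using Unity.Transforms;

public struct BuildingType : IComponentData { public int Value; } // hypothetical

public struct BuildingSave
{
    public int Type;
    public float3 Position;
    public quaternion Rotation;
}

[BurstCompile]
public struct SaveBuildingsJob : IJobEntityBatch
{
    [ReadOnly] public ComponentTypeHandle<BuildingType> TypeHandle;
    [ReadOnly] public ComponentTypeHandle<Translation> TranslationHandle;
    [ReadOnly] public ComponentTypeHandle<Rotation> RotationHandle;

    // Pre-sized to the query's entity count so AddNoResize is safe.
    public NativeList<BuildingSave>.ParallelWriter Output;

    public void Execute(ArchetypeChunk batchInChunk, int batchIndex)
    {
        var types = batchInChunk.GetNativeArray(TypeHandle);
        var positions = batchInChunk.GetNativeArray(TranslationHandle);
        var rotations = batchInChunk.GetNativeArray(RotationHandle);

        for (int i = 0; i < batchInChunk.Count; i++)
        {
            Output.AddNoResize(new BuildingSave
            {
                Type = types[i].Value,
                Position = positions[i].Value,
                Rotation = rotations[i].Value,
            });
        }
    }
}

You would schedule this with ScheduleParallel over the building query, then write the resulting list to disk.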

To deserialize, we first create a default instance of each saved archetype, like you would spawn if you were creating it fresh in-game. Then we just apply the saved settings to it, if it still has the component. This has the huge benefit that any changes design has made to the components on the prefab are picked up. Added new components? They'll be there. Updated a creature's max health? It's there (you only need to save current health).

Migration isn't too bad, as we control the containers. However, any component change might require migrating multiple containers, which is a bit annoying. Also, we have to migrate the entire container instead of just a single component.

However, we do save the bare minimum and only need to migrate rarely; probably one dev updates one minor thing once a month. We shipped with this 6 months ago and have not had a major issue since. It's not perfect, but it's been good enough that we haven't considered changing it since it was up and running.



This brings us to what I'm looking at doing now.
The first question is: why? It seems like we have a proven working solution, and that's true. However, this is for my own project, and I can't exactly copy code I wrote at work; doing it a completely different way does help me avoid any issues (not that I think I'd actually have an issue with my employer).

But the main reason is that it's still not completely without fault. While it is reasonably easy to maintain, it still takes a lot of code to set up initially, and there are some ugly things about it (keeping old systems around for each migration). I always wanted to codegen this, but doing it this way makes codegen quite a bit of work, and I ran out of time.

    3.1 Serialize each Component
Disclaimer: this is a one-weekend test and has not been proven production-worthy yet.
The approach I'm taking now is to serialize each component separately. The process is simple.

Give each entity you want saved a 'type' component (I call it Savable). This has a reference to its prefab (either an int, a weak asset reference, whatever you want).
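For illustration, the 'type' component could be as small as this (the int-key variant; the field name is a guess):

Code (CSharp):
using Unity.Entities;

// Marks an entity as saveable and records which prefab it came from.
public struct Savable : IComponentData
{
    public int PrefabId; // key used to look up and re-instantiate the prefab on load
}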

Then you can save each component by simply giving it a [Save] attribute (it's also very easy to manually register types for those in 3rd-party libraries).

Code (CSharp):
foreach (var type in TypeManager.AllTypes)
{
    if (type.Category == TypeManager.TypeCategory.ComponentData)
    {
        if (type.Type.GetCustomAttribute(typeof(SaveAttribute)) != null)
        {
            var saver = new ComponentSave(this, type.TypeIndex);
        }
    }
}
From the TypeIndex you can get the ComponentType and a DynamicComponentTypeHandle:

this.System.GetDynamicComponentTypeHandle(this.componentTypeRead)


Using that, you can serialize the component from a chunk:

Code (CSharp):
foreach (var chunk in this.Chunks)
{
    var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(this.ComponentType, this.ElementSize);
    this.Serializer.AddBufferNoResize(components);
}
    Very simple serialization process. No magic required except grabbing an attribute in OnCreate.

Benefits:
- Just attach [Save] to any IComponentData or IBufferElementData and it will start working.
- Changes to prefabs are reflected in saved data.
- Fast serialization.
- Very easy migration, done in one giant block per component.

Downsides:
- I have to store an int for each component I save to match it to the saved entity, so the file size is larger.
- Not-so-fast deserialization (though it's actually doing much better than I expected; simply creating the entities is still the highest cost, but I haven't stress-tested really high component counts yet, so I'm expecting performance to drop there).

Deserialize steps are basically:
1. Check each serialized component for a current matching type; if it doesn't exist, look for a migration. If none is found, discard the data [work in progress for this weekend].
2. Create all entities.
3. Apply components back one at a time. I thought this would be slow, but surprisingly it's not nearly as bad as I expected; the biggest cost is just instantiating entities. Overall it's not a huge deal, though, as my goal is fast serialization, since that often happens while you are playing, while deserialization usually happens in a load screen.

One more note: Entity references are really easy to handle. Thanks to Unity and the entity offset info saved in the TypeManager, you can remap entities in components with something like:

Code (CSharp):
public static unsafe void RemapEntityFields([ReadOnly] byte* ptr, TypeManager.EntityOffsetInfo* offsets, int offsetCount, NativeHashMap<Entity, Entity> remap)
{
    for (var i = 0; i < offsetCount; i++)
    {
        var entity = (Entity*)(ptr + offsets[i].Offset);
        *entity = remap.TryGetValue(*entity, out var newEntity) ? newEntity : Entity.Null;
    }
}
Final thoughts on 3.1.
I probably won't get around to it this weekend, as the weekend is going to be focused on ensuring the migration workflow feels good, but the next thing I'm looking at implementing, which I think will be reasonably easy, is applying save data to entities in subscenes.

Also, currently I only support full component serialization. I've considered this a lot; partial component serialization isn't that big a deal to implement, but it does make serialization a bit slower (instead of a memcpy on the entire array, each field needs to be individually MemCpyStride'd) and migration a bit more of a pain. I haven't decided if I'm going to support it yet, as I don't think separating components with save data from those without is that bad a plan. That said, I will probably add this option but minimize its use; it will be on the tail end of my feature implementation.

3.2 Serialize manually per chunk
Wait, what, 3.2? Yep. After I wrote 3.1, I decided to see if I could do a version that was faster and didn't need to store reference ints per component, to decrease file size. This is basically the same as 3.1, but instead each possible component that can be saved is stored per chunk.

This is definitely even faster to serialize, a bit faster to load, and it makes a measurably smaller file.
However, when I was planning out migration, I realized it was going to be a lot more rigid and painful to manage, so I decided to go back to my original plan. It's still fast, and file size really isn't that big of an issue; we are only talking 3 MB compressed for 8 components on 100k entities (which is a lot more entities than I'll probably ever need to actually save, though I will need more components).

I'm not ruling out switching back to 3.2 at some point if I can figure out migration, but for now I'm sticking with 3.1 and fleshing out the migration for that.

    Final Thoughts
I've loosely heard of a few more alternatives for saving: storing components in blobs, etc. I have no experience with these. Others might have completely different solutions, or have found ways around the downsides of SerializeWorld, in which case: great, please share!
     
    Last edited: Apr 7, 2022
    lclemens, TWolfram, Jarumba and 13 others like this.
  20. Luxxuor

    Luxxuor

    Joined:
    Jul 18, 2019
    Posts:
    89
Thank you for the great writeup @tertle, definitely appreciate the thought process.
For a previous project we also used something akin to method 2, storing certain components in a separate data structure. We made sure to avoid entity references at all costs, though, so some extra components were needed to, for example, identify the player or other serialized entities. It was very custom in the end, so I also tried to implement something more general like you described in method 3, but using code generation; I gave up though, as integrating generators, making sure people don't forget to trigger them, and writing it all took more time than it was worth, tbh.
Using the DynamicComponentTypeHandle is pretty clever though, and the fast-serialize, slower-deserialize tradeoff is also imho very well chosen. Speaking from experience, 3 MB for a save game is also not uncommon, especially on console, where most of the time you end up with a minimum size of a few MB anyway.

Regarding full component serialization: imho I would not support serializing components partially at all. ECS is already well suited to breaking up big components into multiple smaller ones, so that is what you should do. In the past we did partial serializing of MBs and it was very error-prone and confusing for engineers coming into the project. Slowing down serialization for that is not a good tradeoff imho.
     
    Shinyclef likes this.
  21. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
Great to hear about others' experiences. There are so many discussions out there for various parts of game development, but I've always found there isn't much about saving, yet nearly every project needs to do it!

I did think of another option while reading this, and that is an attribute that makes components ignore fields during deserialization. It would still serialize the entire component for speed, but wouldn't write those fields back on deserialization, to allow value updates.
I'm still leaning towards just not supporting this though; as you say, entities already lends itself to easily breaking up components.
     
    lclemens and Shinyclef like this.
  22. Luxxuor

    Luxxuor

    Joined:
    Jul 18, 2019
    Posts:
    89
Yeah, I think it's just not as "sexy" as other systems. In every project I have worked on it was an afterthought; if we don't implement it, we can't break it. You often read that serialization is much easier with ECS, but there is no discussion around actual implementation.

I think that this is a much better tradeoff, and I could see it having a use for computed/cached values that are guaranteed to be regenerated before the next update. To be honest I would still not use it, but I can see this as a good compromise.

Quick question, as I have never found out how entity remapping works: where do you get the NativeHashMap<Entity, Entity> from, the EntityManager?
     
    lclemens and Shinyclef like this.
  23. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
For this case, you create the NativeHashMap!
Serialization stores the old world Entity + prefab type.
Deserialization then creates the new entities and associates the old Entity as the key and the new Entity as the value in a hashmap.

A snippet of my code looks like this:

Code (CSharp):
NativeHashMap<Entity, Entity> entityMapping; // Grouped by previous world entity and new world entity
NativeHashMap<int, UnsafeList<Entity>> savedEntities; // Grouped by prefab type and old entity IDs

// <Deserialize into hashmap>
// ...

var oldEntities = new NativeList<Entity>(128, Allocator.TempJob);
var newEntities = new NativeList<Entity>(128, Allocator.TempJob);

using var saved = savedEntities.GetEnumerator();
while (saved.MoveNext())
{
    var current = saved.Current;
    var entities = current.Value;
    var start = newEntities.Length;

    newEntities.ResizeUninitialized(start + entities.Length);
    oldEntities.AddRange(entities.Ptr, entities.Length);

    EntityManager.Instantiate(prefabLookup[current.Key], newEntities.AsArray().GetSubArray(start, entities.Length));

    current.Value.Dispose();
}

// This is a custom add range function I have for hashmaps that is 2 orders of magnitude faster than adding individually
entityMapping.ClearAndAddBatchUnsafe(oldEntities, newEntities);
     
    Last edited: Apr 7, 2022
    Luxxuor and Shinyclef like this.
  24. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
My first test has left me with 2 migration options you could implement: an easy, clean version and an extremely fast version.

The objective is to take 50,000 of this saved component:

Code (CSharp):
[Save]
[GenerateAuthoringComponent]
public struct TestComponentForMigration : IComponentData
{
    public int Value;
}
and migrate it to:
Code (CSharp):
[Save]
[GenerateAuthoringComponent]
public struct TestComponentForMigration : IComponentData
{
    public int Value;
    public int NewValue;
}
    And make NewValue = 5

    The easy version

Code (CSharp):
public class TestComponentForMigrationMigrator : ComponentMigrator<TestComponentForMigrationMigrator.Before, TestComponentForMigrationMigrator.After>
{
    public override ulong From => 1392796844747678277;
    public override ulong To => 17841778420484547873;

    protected override void Migrate(ref After newComponent, in Before oldComponent)
    {
        newComponent.Value = oldComponent.value;
        newComponent.NewValue = 5;
    }

    public struct Before { public int value; }
    public struct After { public int Value; public int NewValue; }
}
    Performance isn't that bad
[Screenshot: profiler timings for the easy migrator]

However, if you're willing to write a little unsafe code, you can make it a lot faster by throwing it all in a parallel Burst job.

Code (CSharp):
public class FastTestComponentForMigrationMigrator : ComponentMigratorBase
{
    public override ulong From => 1392796844747678277;
    public override ulong To => 17841778420484547873;

    protected override JobHandle Migrate(
        NativeList<MigrateData> migrateData, NativeArray<byte> oldElements, int oldElementSize, NativeList<byte> newElements, int newElementSize, JobHandle dependency)
    {
        return new MigrateJob
            {
                MigrateData = migrateData.AsDeferredJobArray(),
                OldElements = oldElements,
                OldElementSize = oldElementSize,
                NewElements = newElements,
                NewElementSize = newElementSize,
            }
            .Schedule(migrateData, 16, dependency);
    }

    [BurstCompile]
    private unsafe struct MigrateJob : IJobParallelForDefer
    {
        [ReadOnly]
        public NativeArray<MigrateData> MigrateData;

        [ReadOnly]
        public NativeArray<byte> OldElements;

        [NativeDisableParallelForRestriction]
        public NativeList<byte> NewElements;

        public int OldElementSize;
        public int NewElementSize;

        public void Execute(int index)
        {
            var data = this.MigrateData[index];

            for (var i = 0; i < data.Length; i++)
            {
                byte* oldComp = (byte*)this.OldElements.GetUnsafeReadOnlyPtr() + data.OldIndex + (i * this.OldElementSize);
                byte* newComp = (byte*)this.NewElements.GetUnsafePtr() + data.NewIndex + (i * this.NewElementSize);
                ref int value = ref UnsafeUtility.AsRef<int>(newComp);
                value = UnsafeUtility.AsRef<int>(oldComp);
                ref int newValue = ref UnsafeUtility.AsRef<int>(newComp + UnsafeUtility.SizeOf<int>());
                newValue = 5;
            }
        }
    }
}
[Screenshot: profiler timings for the fast migrator]

And actually, I could rewrite this job to use MemCpyStride and MemCpyReplicate to make it even faster, but when I profiled 0.02-0.01 ms for 50k components, I decided the headache I always get using strides was not worth it.

A few notes:
Obviously the fast version is a lot more challenging to understand, and while it's significantly faster, the slow version is going to be fast enough, as this is nearly always going to execute during a load screen (it takes ~20 ms to instantiate the 100k entity prefabs anyway).

Code (CSharp):
public override ulong From => 1392796844747678277;
public override ulong To => 17841778420484547873;
These magic numbers are obviously not great. Unfortunately, I do need some way to uniquely reference all current, but also all historic, components. I have ideas to make this a lot more manageable in the future; I'm just working on the proof of concept atm.
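For reference, those ulongs are TypeManager stable type hashes; a quick sketch of printing one for a component (run in play mode, after the TypeManager is initialized):

Code (CSharp):
using Unity.Entities;
using UnityEngine;

public static class StableHashDump
{
    // Logs the StableTypeHash for a registered component type.
    public static void Print<T>() where T : struct, IComponentData
    {
        var typeIndex = TypeManager.GetTypeIndex(typeof(T));
        Debug.Log($"{typeof(T).Name}: {TypeManager.GetTypeInfo(typeIndex).StableTypeHash}");
    }
}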

Finally, I have a 3rd option, which is kind of a middle ground, where I provide 2 native arrays of old and new components and you can optionally convert them in your own Burst job. The problem is, this has to be scheduled as an IJob, and while these can all execute in parallel, it still took 0.86 ms. So maxing out my 24 workers and only saving 50% over purely main-thread wasn't so ideal. That said, savings are savings.

Anyway, I probably won't post much more about this until I'm at a more polished stage.
     
    Last edited: Apr 8, 2022
    lclemens, NotaNaN, bb8_1 and 5 others like this.
  25. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
    @tertle

    Trying to follow along here and incorporate 3.1 into my own code, but I'm missing some pieces.

Code (CSharp):
foreach (var chunk in this.Chunks)
{
    var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(this.ComponentType, this.ElementSize);
    this.Serializer.AddBufferNoResize(components);
}
    What is "this"? Is
    this.Chunks
    EntityManager.GetAllChunks()
    ? And where is this.ElementSize coming from? Looking at the code for that method, it appears to be related to the size of the component type itself, but not totally clear.
     
    Last edited: Apr 9, 2022
  26. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
I'm serializing each component on a separate thread, so I'm doing manual chunk iteration. Therefore Chunks is just a native array assigned from a query:
Code (CSharp):
public NativeArray<ArchetypeChunk> Chunks;
var chunks = this.queryRead.CreateArchetypeChunkArrayAsync(Allocator.TempJob, out var dependency1);
As for ElementSize, that's just read from the TypeInfo:
Code (CSharp):
var typeInfo = TypeManager.GetTypeInfo(this.typeIndex);
var elementSize = typeInfo.ElementSize;
     
    slims likes this.
  27. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
@tertle Thanks for this, that all makes sense and this part seems to work fine for me, but my next point of confusion is how entities are mapped to the components that have been converted into byte arrays during chunk iteration. Basically, when I deserialize, how do I know which entities are associated with which components?
     
  28. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
It's not shown above, as I was keeping the code simple for the forum's sake, but I also store an index (which is actually just Entity.Index) along with each component, and I generate a remap hashmap on deserialization to convert the index to the new Entity. It's not ideal, and it's the major drawback of this implementation; I mentioned it as a downside in my big writeup.
That said, it compresses well, and I don't think file size will ever be an issue.
Note I said I store only the entity's index, not the full Entity, just to reduce file size a little. The full Entity is stored in the entity type save, which is what instantiates it on deserialization, as the full Entity is what needs to be used for remapping; the index alone wouldn't be safe.
     
  29. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
@tertle I think I may just be overthinking this; my real confusion is how to know which bytes are associated with which entity while you're iterating over the chunk. So I have something like this:

Code (CSharp):
foreach (var chunk in chunks)
{
  var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(dynamicType, elementSize);
  var entities = chunk.GetNativeArray(GetEntityTypeHandle());
  // ...
}

But components is just a big old array of bytes, whereas the native array of entities is obviously going to give me every entity in the chunk.

I could divide the byte count by the number of entities in the chunk and map each entity to an array start and end. Like, if my component is 64 bytes large, I'd expect that, in a chunk with 2 entities, there'd be 128 bytes. Is entities[0] => components[0]-[63] and entities[1] => components[64]-[127]?

    I feel I may just be missing some small key to understanding this.
     
  30. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
For each chunk I write 3 things:

int - length - number of entities/components
int[length] - entities - just the indices (but start with the full thing, Entity[length], it's easier)
T[length] - components - the actual data

so for each component I write:

Header (component info, total entities that have this component, etc.)
=====
int
int[]
T[]
=====
int
int[]
T[]
=====
// repeating for each chunk

This is repeated for every saved component.
So every component has its own array, which is then merged at the end into a single array for compression + writing to disk.
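As a sketch, writing that per-chunk layout with a plain System.IO.BinaryWriter would look roughly like this (tertle's actual Serializer type isn't shown in the thread, so the writer here is a stand-in):

Code (CSharp):
using System.IO;
using Unity.Collections;
using Unity.Entities;

public static class ChunkSaveSketch
{
    // Writes: int length, int[length] entity indices, then the raw component bytes.
    public static void WriteChunk(ArchetypeChunk chunk, DynamicComponentTypeHandle handle,
        EntityTypeHandle entityHandle, int elementSize, BinaryWriter writer)
    {
        var entities = chunk.GetNativeArray(entityHandle);
        var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(handle, elementSize);

        writer.Write(entities.Length);       // int - number of entities in this chunk
        for (int i = 0; i < entities.Length; i++)
        {
            writer.Write(entities[i].Index); // int[] - entity indices for remapping
        }
        writer.Write(components.ToArray());  // T[] - the raw component data
    }
}

On load you read the indices back, build the old-index to new-entity map, and copy the component bytes onto the freshly instantiated entities.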
     
    Last edited: Apr 10, 2022
  31. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
I've been struggling with this for a couple of hours now. I have something approximating what you're saying, but I'm clearly not getting it. Here's what I have right now. It puts together a dictionary of entities => byte arrays associated with all their components.

Code (CSharp):
// This will grab any entity in the game that needs to be serialized
var entityQuery = EntityManager.CreateEntityQuery(new EntityQueryDesc()
{
  Any = new[]
  {
    ComponentType.ReadOnly<ShipMarker>(),
    ComponentType.ReadOnly<Placeable>(),
    ComponentType.ReadOnly<LogisticsUnit>(),
  }
});

var entitiesMap = new Dictionary<Entity, List<byte>>();
using var chunks = entityQuery.CreateArchetypeChunkArray(Allocator.TempJob);
var entityHandle = GetEntityTypeHandle();
foreach (var type in types)
{
  var dynamicType = GetDynamicComponentTypeHandle(type);
  foreach (var chunk in chunks)
  {
    var entities = chunk.GetNativeArray(entityHandle);
    var typeInfo = TypeManager.GetTypeInfo(type.TypeIndex);
    var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(dynamicType, typeInfo.ElementSize);
    if (components.Length == 0) continue;
    var length = components.Length / chunk.ChunkEntityCount;

    for (var i = 0; i < entities.Length; i++)
    {
      var entity = entities[i];
      if (!entitiesMap.ContainsKey(entity))
      {
        entitiesMap.Add(entity, new List<byte>());
      }

      entitiesMap[entity].AddRange(components.Slice(i * length, length));
    }
  }
}
    Couple issues:

1. I now have a mapping of entity to all the bytes associated with that entity's components, but I won't know which bytes are for which components when deserializing. I could store some other metadata about which indexes are which types, or make it a dictionary of Type -> byte[], but I'm not sure if this is the correct idiom.

2. How do I actually convert the bytes back into components when deserializing? Am I just meant to use BinaryWriter/Reader?

    I'm probably a bit annoying at this point. It seems like I lack some fundamental understanding about the low level workings of ECS that you and some others on this forum have. Any help offered is much appreciated.
     
    Last edited: Apr 10, 2022
  32. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
    Update:

The solution in my last post seems to work. This is the updated code (it runs in my SaveSystem):

Code (CSharp):
// This dictionary basically stores the game state and will get written to file
var entitiesMap = new Dictionary<Entity, SerializedEntity>();
// The query just gets all the entities in my game state that need to be serialized;
// this doesn't include entities that I can recreate by inferring game state when
// loading the game, to save space.
using var chunks = entityQuery.CreateArchetypeChunkArray(Allocator.TempJob);
var entityHandle = GetEntityTypeHandle();
// types is just a collection of types with the [Serializable] attribute
foreach (var type in types)
{
  var dynamicType = GetDynamicComponentTypeHandle(type);
  foreach (var chunk in chunks)
  {
    var entities = chunk.GetNativeArray(entityHandle);
    var typeInfo = TypeManager.GetTypeInfo(type.TypeIndex);
    var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(dynamicType, typeInfo.ElementSize);
    if (components.Length == 0) continue;
    var length = components.Length / chunk.ChunkEntityCount;

    for (var i = 0; i < entities.Length; i++)
    {
      var entity = entities[i];
      if (!entitiesMap.ContainsKey(entity))
      {
        entitiesMap.Add(entity, new SerializedEntity());
      }

      var componentDataBytes = components.Slice(i * length, length).ToArray();
      entitiesMap[entity].Components.Add(type.GetManagedType(), componentDataBytes);
    }
  }
}
SerializedEntity is just a little class that lets me map types to the component data so I can recreate them with BinaryFormatter when loading the game:


Code (CSharp):
[Serializable]
class SerializedEntity
{
  public Dictionary<Type, byte[]> Components = new();
}
The old way of marshalling my game state using reflection is 100x+ slower than this method, depending on how many components need to be marshalled, but I haven't yet actually written the bytes to a file (I don't expect much performance difference there, since it's, in theory, the same data).

    I still suspect I'm not actually following @tertle 's implementation but it's somewhere in the ballpark.

    Still not clear on how to convert the byte arrays back to components in deserialization though.
     
    Last edited: Apr 10, 2022
    lclemens likes this.
  33. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
OK, I think I'm getting a little further in my understanding.

Storing all the data in a marshalled structure like I'm doing isn't quite right. You have to keep all the bytes together in one big array per component, so that you can use Reinterpret<T> after loading the data from a file.

    So you'd stick this in the structure you're writing to file:


Code (CSharp):
var components = chunk.GetDynamicComponentDataArrayReinterpret<byte>(dynamicType, typeInfo.ElementSize).ToArray();

saveState.addComponentArray(components);
Then when you deserialize, you can create a NativeArray from it again and reinterpret it back to the original component type, and it will be in the same order as the entities you added to the save state:

var deserialized = new NativeArray<byte>(saveState.getNextComponentArray(), Allocator.Temp).Reinterpret<T>(1);


This is probably blindingly obvious to people in this thread, but maybe it will help someone in the future. I'm still not sure of the best way to get T for each component when deserializing for the reinterpretation, but I can think of a few somewhat ugly ways to do it. Will keep trying...

EDIT: I suppose I could also use my solution and use UnsafeUtility.As<T> one by one.
     
    Last edited: Apr 11, 2022
    lclemens likes this.
  34. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
So yeah, definitely a bit different from how I'm doing it, as you're mapping your data entity:component 1:many using
entitiesMap[entity].Components.Add(type.GetManagedType(), componentDataBytes);
while I'm literally mapping entity:component 1:1.

This is fine; whatever works for you. It's basically what we do in part 2 that I discussed.

Random update, since I'm replying anyway.
Got [SaveIgnore] working: as soon as I went to implement this in my project I wanted it straight away. It turned out to be easy anyway.
Also added support for saving components on child entities in a hierarchy. Not so easy, and not so performant to deserialize, so it should be avoided, but the support is there for when it's required.

So pretty much everything on my todo list is done except saving subscene entities, but I ran into a technical issue (unrelated to saving, more to do with combining entities/netcode/physics/rendering). I want to write it as a more generic implementation that'll allow you to do mini save/close and open/load of subscenes while playing.
     
    Last edited: Apr 11, 2022
    lclemens and Timboc like this.
  35. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
    Did this affect performance at all?
     
  36. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
Not much. I still save the whole component; I just MemCpy the pieces out in slices. Ideally you group all your [SaveIgnore] fields together at either the start or end of the struct, then it's still only 1 copy, but it supports any configuration.
     
  37. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
@tertle I rewrote pretty much everything, but I still don't get how to convert the byte[]s back to components when loading the game.

My plan was to use UnsafeUtility.As<byte[], {MyComponent}> for each component, but since I don't know the component type at runtime, I have to use reflection; however, the As method returns a ref, and therefore cannot be invoked via reflection.

I know your solution is a little different, but not by much. What's your secret to converting these byte arrays back to components?

EDIT: I found a solution to convert byte[]s to structs generally in C#:

Code (CSharp):
public static T BytesToStruct<T>(ref byte[] rawData) where T : struct
{
  T result;
  var handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
  try
  {
    var rawDataPtr = handle.AddrOfPinnedObject();
    result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T));
  }
  finally
  {
    handle.Free();
  }
  return result;
}
So I just call this code with reflection instead of UnsafeUtility.As, and it works fine.

There's got to be a better way...

EDIT: Found a slightly better way. Instead of using this BytesToStruct method, you can wrap UnsafeUtility.As like this:

Code (CSharp):
public class LowLevelHelper
{
  public static T As<T>(ref byte[] bytes) where T : struct
  {
    return UnsafeUtility.As<byte, T>(ref bytes[0]);
  }
}
    Then call it like this as you're iterating over all the data you need to deserialize when loading a game:

Code (CSharp):
private static object ConvertBytesToStruct(Type type, byte[] bytes)
{
  var genType = typeof(LowLevelHelper).GetMethod("As", BindingFlags.Static | BindingFlags.Public);
  var unsafeAs = genType!.MakeGenericMethod(type);
  var component = unsafeAs!.Invoke(null, new object[]
  {
    bytes,
  });
  return component;
}
     
    Last edited: Apr 16, 2022
  38. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
I store the component's stable hash when saving. When deserializing, I just convert this hash back to a component. If it doesn't exist, it means the component has changed, and the system looks for a migration to an existing component.

Code (CSharp):
var deserializer = new Deserializer(decompressed, 0);

while (!deserializer.IsAtEnd)
{
    var header = deserializer.Peek<HeaderSaver>();
    var saveDeserializer = new Deserializer(decompressed, deserializer.CurrentIndex);

    if (this.TryGetSaver(ref saveDeserializer, out var saver))
    {
        this.deserializers.Add(saver, saveDeserializer);
    }

    deserializer.Offset(header.LengthInBytes);
}

Code (CSharp):
private bool TryGetSaver(ref Deserializer deserializer, out ISaver saver)
{
    while (true)
    {
        var header = deserializer.Peek<HeaderSaver>();

        if (this.savers.TryGetValue(header.Key, out saver))
        {
            return true;
        }

        if (!this.TryMigrate(ref deserializer))
        {
            Debug.LogWarning($"No saver or migration was found for {header.Key}. This data will not be deserialized. If intentional, ignore.");
            return false;
        }

        // Successfully migrated; the deserializer is updated to point at the new data, so try to find a saver again.
        // It's possible to migrate multiple times on old data.
    }
}
     
  39. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
This actually isn't true for IBufferElementData, at least for the code you posted. GetDynamicComponentDataArrayReinterpret does not support dynamic buffers, so I had to handle them differently. Here's the code I used for that:
Code (CSharp):
var bufferAccessor = chunk.GetUntypedBufferAccessor(ref dynamicType);

for (int i = 0; i < bufferAccessor.Length; i++)
{
    unsafe
    {
        var buffer = bufferAccessor.GetUnsafeReadOnlyPtrAndLength(i, out var length);
        if (length == 0) continue;
        var entity = entities[i];

        var size = length * elementSize;
        var bufferBytes = new byte[size];
        Marshal.Copy(new IntPtr(buffer), bufferBytes, 0, size);
        // Now stick bufferBytes into whatever structure you are using for the final serialization.
    }
}
Note you can still get the dynamic type in the same way for IBufferElementData types:
var dynamicType = GetDynamicComponentTypeHandle(type);
     
  40. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
Finished the final runtime feature on my todo list for my library: subscene saving (including save & close, open & load).

Basically, my 'Save Processor' has 2 modes, Prefab or SubScene. The difference being:
- in prefab mode, it will create all saved entities from prefabs, then apply the saved data to them
- in subscene mode, it'll match a unique key to entities in subscenes and apply the saved data to them

It's fast and sync-point free (including no command buffers), so it can be used for on-the-fly subscene saving/loading when you open/close subscenes while playing, with minimal hiccups. In theory this library could easily be extended for rollback.
     
    Last edited: Apr 18, 2022
  41. desper0s

    desper0s

    Joined:
    Aug 4, 2021
    Posts:
    14
This thread is gold! If someone is interested in how to copy bytes from a save file to an existing entity without knowing its type at compile time, here is a hint (note that this may not be optimal, but it allows copying data when the entity order differs between the save file and the created world):

Code (CSharp):
// entity should already be created in the current world
var storage = systemBase.EntityManager.GetStorageInfo(entity);

var rawData = storage.Chunk.GetDynamicComponentDataArrayReinterpret<byte>(dynamicComponentHandle, TypeInfo.ElementSize);

var componentPointer = (byte*)rawData.GetUnsafePtr() + storage.IndexInChunk * TypeInfo.ElementSize;

// chunk.ComponentsViewList is a list pointing to raw component data in the save file,
// and it has type UnsafeList<byte>
var loadedDataPointer = chunk.ComponentsViewList.Ptr + (TypeInfo.ElementSize * i);
// Do the actual memory copy
UnsafeUtility.MemCpy(componentPointer, loadedDataPointer, TypeInfo.ElementSize);
    13.  
     
  42. soundeosdev

    soundeosdev

    Joined:
    Dec 24, 2020
    Posts:
    14
How do you manage Entity mappings? I have a DynamicBuffer of Entities, a reference list of other entities. How can I serialize and deserialize this? Great blog btw.
     
  43. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
    The way I did this was as follows (I use BinaryFormatter for serialization/deserialization):

    When saving:

    1. Loop over all entities and serialize their respective components in something like this structure:

Code (CSharp):
[Serializable]
public class MarshalledEntity
{
  public Dictionary<Type, byte[]> Components = new();
}

[Serializable]
public class SaveState
{
  public int Size { get; set; }
  public Dictionary<SerializableEntity, MarshallingSystem.MarshalledEntity> AllEntities;
}

[Serializable]
public struct SerializableEntity : IComponentData
{
  public int Index;
  public int Version;

  public static implicit operator Entity(SerializableEntity se) => new() { Index = se.Index, Version = se.Version };
  public static implicit operator SerializableEntity(Entity e) => new() { Index = e.Index, Version = e.Version };
}
    23.  
    When loading:

    1. Deserialize the bytes back into the SaveState object
    2. Create a map of Entity -> Entity
3. Start looping over the AllEntities map.
4. For each key/value pair, instantiate a new entity using whatever method you prefer (I use scriptables pointing at game object prefabs that get converted into entities based on unique item ids).
5. Add the new entity you just created and the old entity from the key to the entity mapping, so that the old entity is the key and the new entity is the value.

Now that you've instantiated new entities for all the stuff you serialized, you have to actually set all the component data back for each entity. While you're doing this, if you encounter a field of Entity type, instead of just setting the entity to its old value, get it from the map. You can do this via reflection. Here's the method I wrote for this:

Code (CSharp):
private static void ReplaceOldEntities(IReflect type, ref object component,
  NativeParallelHashMap<Entity, Entity> oldNewEntityMap)
{
  var properties = type.GetFields(BindingFlags.Public | BindingFlags.Instance);

  foreach (var property in properties)
  {
    if (property.FieldType == typeof(Entity))
    {
      var oldEntity = (Entity)property.GetValue(component);
      if (oldNewEntityMap.ContainsKey(oldEntity))
      {
        property.SetValue(component, oldNewEntityMap[oldEntity]);
      }
      else
      {
        property.SetValue(component, Entity.Null);
      }
    }
  }
}
That should do it. There may be better/faster ways, but this has served me well so far, saving and loading games with 30+ MB of data in under a second on my dev machine.
     
    soundeosdev likes this.
  44. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    948
For entities that need to save entity references/mappings, we add a WorldId component to the referenced entities. Each WorldId has a unique int ID value. We then use the value of the WorldId on serialization, for example:
Code (CSharp):
parentEntity = "123"
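Presumably the component itself is something as simple as this (a guess at the shape):

Code (CSharp):
using Unity.Entities;

// Stable per-entity ID that survives save/load; referenced entities
// are serialized by this value instead of their raw Entity.
public struct WorldId : IComponentData
{
    public int Value;
}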
     
  45. soundeosdev

    soundeosdev

    Joined:
    Dec 24, 2020
    Posts:
    14

    Thanks for the detailed response. Do you also store your IBufferElementData here?
     
  46. soundeosdev

    soundeosdev

    Joined:
    Dec 24, 2020
    Posts:
    14
    @tertle With your method, the third one, does it work with Tag (no data) components?
     
  47. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,647
I only serialize the static archetype, and if a tag component is part of the static archetype then it'll be added back automatically from the prefab on loading, so there's no reason for me to ever save a tag component.

For states I use a bitmask mapping to tag components, and I can save the bitmask to regenerate the state on loading a save (https://gitlab.com/tertle/com.bovinelabs.core/-/tree/master/BovineLabs.Core/States). This way I can add/remove tag components that define state without worrying about saving them.

If saving a tag component is something you really wanted to do, though, it would not be that hard to just replace it with a bool (or a bit) and simply add it back on load, though I prefer my approach.
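As a tiny illustration of the bitmask idea (names invented here; tertle's linked States implementation is more involved):

Code (CSharp):
using Unity.Entities;

// Saved instead of the tag components themselves. On load, a system reads the
// mask and re-adds the matching tags (e.g. bit 0 => BurningTag, bit 1 => StunnedTag).
public struct SavedStateMask : IComponentData
{
    public byte Value;
}

public struct BurningTag : IComponentData { } // hypothetical state tags
public struct StunnedTag : IComponentData { }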
     
  48. slims

    slims

    Joined:
    Dec 31, 2013
    Posts:
    86
Yep, then I sort it all out in deserialization using reflection. If you need help, I can paste more examples of that.

Also note my method does not save tag components either. Usually, like @tertle mentioned, my tags have ended up just being part of the archetype of the prefab itself. In the couple of cases where they're not, I just add a byte field to the component that never gets touched, so that the serialization system will pick it up.
     
  49. soundeosdev

    soundeosdev

    Joined:
    Dec 24, 2020
    Posts:
    14
Actually, I'm more curious about the migration strategies. One way or another we can save the data; the real problem for me is migration. Currently I'm saving the data similar to the second method @tertle mentioned, "2. Serialize each 'archetype' onto their own container". It seems like I need to drag all the old containers with me, which I don't like...
     
  50. ogimusprime

    ogimusprime

    Joined:
    Jul 20, 2021
    Posts:
    10

Do you think you could explain in more detail how you convert from the component's stable hash back into the component? I'm having the toughest time trying to dynamically determine which component I am currently deserializing. My current generic method for converting an array of bytes into a struct looks like this:

Code (CSharp):
public static T Deserialize<T>(byte[] array)
    where T : struct
{
    var size = Marshal.SizeOf(typeof(T));
    var ptr = Marshal.AllocHGlobal(size);
    Marshal.Copy(array, 0, ptr, size);
    var outputStruct = (T)Marshal.PtrToStructure(ptr, typeof(T));
    Marshal.FreeHGlobal(ptr);
    return outputStruct;
}
The method works; however, I still have to specify the type when I call it:
Code (CSharp):
var entityData = Deserialize<dataComponent>(entityDataBytes);

How can I recreate the struct without hardcoding the type into my method call? I have access to the Type and TypeInfo.

    Thank you so much!