
Question How to transfer large buffers of data to client before join?

Discussion in 'NetCode for ECS' started by pbhogan, Jan 12, 2023.

  1. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    What's the best way to deal with transferring data to a client before they join the game (or "go in game", as it were)?

    For example: let's say the world is procedurally generated, or some sort of user-built level. When a new client joins, level data needs to be transferred to the client before it can join the game as it is not built into the game as a pre-made scene. How would this be accomplished?

    It seems like the only two methods of transfer are implicit synchronization of ghosts and RPCs. So how can any large buffer of arbitrary data be transferred to a client? Would one have to build some kind of custom transfer protocol out of RPCs? How much data can/should a single RPC hold?

    Along similar lines, it seems like having the whole level built out of entities would fit the ECS model well, but how would you transfer those (potentially very many) ghosts, or even know how far along snapshots are (for a progress indicator), if you were waiting on some amount of them (presumably prioritized) before spawning the player?
     
  2. NikiWalker

    NikiWalker

    Unity Technologies

    Joined:
    May 18, 2021
    Posts:
    316
    Hey pbhogan!

    An RPC can hold about a KB of data, although best practice is to keep them as tiny as possible.

    The current recommendation for bigger data is that anything over about 500 bytes should be sent via its own TCP socket/backend.

    UTP is just not designed for this use-case, as the reliable pipeline (which RPCs use) is designed for lots of small, one-shot RPCs.
     
  3. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    Hey Niki,

    Thanks for the response! Using an entirely separate socket/backend seems like a huge pain and would require reimplementing anything else the transport is doing (relay, NAT punch-through, etc.).

    I really hope that at some point UTP will expose an interface for sending arbitrary data packets (unreliable, reliable, reliable-ordered) out-of-band to what NetCode for ECS is doing on the same virtual connection. I presume that's already there under the hood. We can build almost anything on top of that interface.

    But for the time being, is there any reason it would be a terrible idea to decompose a large buffer into tiny chunks (say 250 bytes each) and transmit them using RPCs and then reconstruct the buffer on the other side? That's basically what we would do given a reliable packet interface.
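
    To make concrete what that would look like, here's a minimal sketch of the split/reassemble logic, in Python for brevity (the chunk size, header fields, and function names are all illustrative; the real implementation would send each chunk as a C# RPC struct):

```python
CHUNK_SIZE = 250  # illustrative; matches the ~250-byte chunks mentioned above

def split_into_chunks(data: bytes, transfer_id: int):
    """Split a buffer into (transfer_id, index, total, payload) tuples,
    each small enough to fit inside a single reliable RPC."""
    total = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    return [(transfer_id, i, total, data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE])
            for i in range(total)]

def reassemble(chunks):
    """Reconstruct the original buffer once all chunks have arrived.
    Sorting by index covers reliable-but-unordered delivery as well."""
    chunks = sorted(chunks, key=lambda c: c[1])
    assert len(chunks) == chunks[0][2], "transfer incomplete"
    return b"".join(payload for _, _, _, payload in chunks)
```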
     
  4. NikiWalker

    NikiWalker

    Unity Technologies

    Joined:
    May 18, 2021
    Posts:
    316
    Totally fair point. A request for this kind of API has come up a few times. No promises, but I'll see what I can do.

    Your workaround sounds fine. It is likely sending redundant data at a high rate, but that's not the worst thing in the world.
     
    isaac10334, Kmsxkuse, pbhogan and 2 others like this.
  5. NikiWalker

    NikiWalker

    Unity Technologies

    Joined:
    May 18, 2021
    Posts:
    316
    @isaac10334 @Kmsxkuse @pbhogan @Opeth001 @PolarTron (And any other stakeholders): Can I use this thread to gather requirements & expectations about this feature proposal/idea, please? Namely:
    • Please describe your use-case(s).
    • Describe how many "files" you expect to have, their average size, and the max size you'd like the ability to send.
    • Does this data change at runtime?
      • If so, how frequently?
      • And by whom (client vs server)?
        • I.e. Who is authoring this data?
    • Does ALL of this data need to be shared with ALL clients?
      • I.e. Can it be spatially chunked?
      • Is relevancy important to you? (e.g. procedural world, only sending data for the proc-gen dungeons you enter).
      • Is Importance a factor? Or would this data always have maximum importance?
      • What are the streaming requirements?
        • E.g. Are you expecting on-demand download of specific files?
        • What kind of latencies are acceptable for streaming this large data?
          • E.g. A few hundred ms? A few seconds? Tens of seconds? Configurable?
      • Will the client be experiencing "normal" multiplayer gameplay (i.e. snapshot replication) while this streaming occurs?
        • Or will this be hidden by loading menus?
    • Do you expect the server to implement bandwidth flow control/throttling for this data?
      • E.g. Limiting the client to a specific kbps (using similar controls as throttling snapshots)?
      • Or would you be fine with huge packet bursts?
    • Do you expect this data to be cached/saved between sessions?
      • E.g. If I (a user) play the same user-generated level 10 times, would you expect netcode to cache it once?
        • Thus, do you expect some kind of "file" versioning and/or change detection & replication?
      • What is the lifetime of this data on the server?
        • One play session? Persistent? etc.
        • Would you expect the server to interface with a database backend to persist this data?
    • What format is this data in? E.g. NativeArray<byte>? BlobArray<T>?
    • How do you expect to reference these large blobs of data?
      • Stored directly on GhostFields and/or Dynamic Buffers?
      • Via some kind of replicated reference to different "files"?
    • What data-types and filetypes do you expect to replicate?
      Examples: Terrain data, modded animations, text files?
    • Finally: Can you rate the importance of this feature?
      | Blocking | Will Need This Year | Will Need Eventually | Nice to Have | N/A |
    • And describe your own planned or existing workarounds (and performance data if you have it).
    Reminder: The above should not be taken as us committing to delivering any/all of this.
    Cheers!

    Also: Feel free to use the Conversations feature if your responses to the above would be confidential. Link.
     
    Shinyclef, filod, Kmsxkuse and 5 others like this.
  6. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,761
    Hello. I've had a couple of cases where I wish I could reliably sync larger chunks of binary data. My most recent use case was streaming a navmesh from server to clients.

    Please describe your use-case(s).
    Nav mesh, world maps, and procedurally generated data from server to client.

    Describe how many "files" you expect to have, their average size, and the max size you'd like the ability to send.
    So far, for my use cases, these have been around 1 KB to 20 KB when split into chunks.

    Does this data change at runtime?
    Yes
    • If so, how frequently? Generally not frequently
    • And by whom (client vs server)? Server
    Does ALL of this data need to be shared with ALL clients? Yes
    • Can it be spatially chunked? For the most part, yes
    • Is relevancy important to you? (e.g. procedural world, only sending data for the proc-gen dungeons you enter). Definitely
    • Is Importance a factor? Or would this data always have maximum importance? Mostly send at minimum importance to not impact other gameplay systems.
    • What are the streaming requirements?
    • E.g. Are you expecting on-demand download of specific files? Only chunks of the whole data.
    • What kind of latencies are acceptable for streaming this large data? Relatively large (seconds) but there is an upper limit before user experience is affected (hitting edge of world)
    • Will the client be experiencing "normal" multiplayer gameplay (i.e. snapshot replication) while this streaming occurs? Yes
    Do you expect the server to implement bandwidth flow control/throttling for this data?
    Optional. Definitely not required and large bursts would be acceptable.

    Do you expect this data to be cached/saved between sessions?
    Would be convenient but I wouldn't expect it
    • E.g. If I (a user) play the same user-generated level 10 times, would you expect netcode to cache it once? Caching would be useful to minimize resending of data when re-entering an area.
    • Thus, do you expect some kind of "file" versioning and/or change detection & replication? Would be extremely useful
    • What is the lifetime of this data on the server?
    • One play session? Persistent? etc. Persistent
    • Would you expect the server to interface with a database backend to persist this data? I would not expect this to be handled by netcode.
    What format is this data in? E.g. NativeArray<byte>? BlobArray<T>?
    NativeArray<byte>

    How do you expect to reference these large blobs of data?
    • Stored directly on GhostFields and/or Dynamic Buffers? This would be easiest solution
    • Via some kind of replicated reference to different "files"? This would be acceptable
    What data-types and filetypes do you expect to replicate?
    Terrain data and related map, navmesh, etc

    Finally: Can you rate the importance of this feature?
    I already wrote a workaround, so it's not blocking for me, but for the kind of work we have planned in the future I would say very high: probably up there with the most useful features I can think of off the top of my head.

    And describe your own planned or existing workarounds (and performance data if you have it).
    I have a workaround at the moment, but I just realized I need to run; I'll update this with my existing solution when I'm back in a few hours.
     
    Last edited: Jan 27, 2023
  7. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    @NikiWalker
    • Please describe your use-case(s).
      Voxel world streaming.

    • Describe how many "files" you expect to have, their average size, and the max size you'd like the ability
      to send.
      Not sure yet. The world will be lots of voxels, the terrain will be destructible, there will be lots of changes occurring, frequently.

    • Does this data change at runtime?
      Yes
      • If so, how frequently?
        Very

      • And by whom (client vs server)?
        Players, and AI agents on the server. Everyone can change it!
    • Does ALL of this data need to be shared with ALL clients?
      • I.e. Can it be spatially chunked?
        It will be spatially chunked.

      • Is relevancy important to you? (e.g. procedural world, only sending data for the proc-gen dungeons you enter).
        Yes

      • Is Importance a factor? Or would this data always have maximum importance?
        I imagine nearby things are more important than far things.

      • What are the streaming requirements?
        • E.g. Are you expecting on-demand download of specific files?
          Players joining or entering new areas probably need to download multiple voxel chunks.
          Updates from there I hope to keep as small deltas if possible.

        • What kind of latencies are acceptable for streaming this large data?
          It's OK to take seconds to load the entire in-range set of chunks, but it would be nice if nearby things the player will interact with right away load in under a second. It's the terrain, so players can't really do anything until it's loaded enough.
      • Will the client be experiencing "normal" multiplayer gameplay (i.e. snapshot replication) while this streaming occurs?
        Yes. The world will stream as they move around in it.
    • Do you expect the server to implement bandwidth flow control/throttling for this data?
      • E.g. Limiting the client to a specific kbps (using similar controls as throttling snapshots)?
      • Or would you be fine with huge packet bursts?
        I'm not really sure what the downsides of huge packet bursts would be. Unsure.
    • Do you expect this data to be cached/saved between sessions?
      • E.g. If I (a user) play the same user-generated level 10 times, would you expect netcode to cache it once?
        • Thus, do you expect some kind of "file" versioning and/or change detection & replication?
          In a previous project, I did this myself. In this project, I'm not sure. I wouldn't expect netcode to do it, but it would be a bonus if it did.
      • What is the lifetime of this data on the server?
        • One play session? Persistent? etc.
          Persistent
        • Would you expect the server to interface with a database backend to persist this data?
          This is what I did in my previous project using SQLite. This is a bonus, but not an expectation.
    • What format is this data in? E.g. NativeArray<byte>? BlobArray<T>?
      I don't know what all of my data will look like yet, but probably NativeArray<byte> for the voxels.

    • How do you expect to reference these large blobs of data?
      • Stored directly on GhostFields and/or Dynamic Buffers?
      • Via some kind of replicated reference to different "files"?
        I don't really have expectations right now, but I would imagine using Dynamic Buffers. As long as I can put layers on top of whatever foundation there is, and use it in burst jobs, it should be fine.
    • What data-types and filetypes do you expect to replicate?
      • Examples: Terrain data, modded animations, text files?
        Voxel terrain. I'm not sure what else really. Maybe nothing else.
    • Finally: Can you rate the importance of this feature?
      | Blocking | Will Need This Year | Will Need Eventually | Nice to Have | N/A |
      Will need this year. If it doesn't come, I'll have to find some way to deal with this using existing features.

    • And describe your own planned or existing workarounds (and performance data if you have it).
      I'm not sure because I'm still very new to NetCode. I imagine serialising some ghost data or something?
     
  8. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    Please describe your use-case(s)

    Transferring arbitrary binary data (potentially megabytes) from server to client, client to server, and potentially client to client, and tracking the progress of the transfer. Server-to-client is by far the most important to me.

    This data could be anything, but the most immediate use case is binary voxel data for the level, but may also contain other world state. This data needs to be transferred to a connecting client before it can go “in game” and actually spawn a character in the world.

    An example of client-to-server would be uploading a custom level, or things like building design blueprints, character customization, etc. I don’t particularly need these to be Unity types or handled by Unity for me in some kind of smart way. I just want a way to transfer arbitrary binary data.

    Describe how many "files" you expect to have, their average size, and the max size you'd like the ability to send.

    This could be a single data blob for an entire level, but it might also be spatially broken into chunks of 16 to 64 kilobytes. It might also be initial world state combined with a series of sequential commands for modifying the world state.

    Does this data change at runtime?

    Yes, although it does not need to be transferred again. The change happens on the server as a result of input and will be passed to the client by means of RPCs. (As an aside, it isn’t currently clear to me if RPCs are reliably ordered. This is a requirement for such a system. I again plead for a general-purpose packet API). The idea is the initial world voxel state is transmitted to the client at join, followed by a list of commands for altering the world going forward.

    Does ALL of this data need to be shared with ALL clients?

    Yes, but not at the same time. Only upon join. I could see a scenario where world data streams based on spatial chunking if the world is infinite, but my current use case is not an infinite world.

    Can it be spatially chunked?

    Maybe, maybe not.

    Is relevancy important to you?

    Relevancy is important, but I don’t want Unity to manage that for me. Let me arbitrarily decide what to send when.

    Is Importance a factor? Or would this data always have maximum importance?

    For my use case it has high importance for a given client because the data MUST get there before the client can join the game. Of course, I don’t want the transfer to harm the game quality for other clients, so it should be balanced against that. But as long as I can track progress and show a progress bar to the user for the connecting client, it can take a while.

    What are the streaming requirements?

    Not files. Arbitrary binary data blobs. Latency is mostly irrelevant (within reason) as long as progress can be tracked.

    Will the client be experiencing "normal" multiplayer gameplay (i.e., snapshot replication) while this streaming occurs? Or will this be hidden by loading menus?

    Hidden by a loading screen.

    Do you expect the server to implement bandwidth flow control/throttling for this data? e.g. Limiting the client to a specific kbps (using similar controls as throttling snapshots). Or would you be fine with huge packet bursts?

    I’d expect some throttling so the transfer doesn’t keep other clients from experiencing normal gameplay. But within that constraint, it would be a huge packet burst.

    Do you expect this data to be cached/saved between sessions?

    Potentially, but I don’t want Unity to manage this for me at all. I’ll handle storage myself.

    What format is this data in?

    NativeArray<byte>

    How do you expect to reference these large blobs of data?

    This is where it gets a bit fuzzy. I’m imagining something like RPCs that include progress info (total bytes, received bytes) and a transfer handle that can be used to gain access to the byte array (even before it is fully received, because the receiver may be able to process it in a streaming fashion), and/or cancel the transfer.

    Stored directly on GhostFields and/or Dynamic Buffers? Via some kind of replicated reference to different "files"?

    Nope.

    What data-types and filetypes do you expect to replicate?

    Not files. Just bytes.

    Finally: Can you rate the importance of this feature?

    Blocking / will need this year (except that I’ll implement my own workaround.)

    And describe your own planned or existing workarounds (and performance data if you have it)

    My planned workaround is to implement the previously described system built on RPCs and a singleton transfer manager system.

    Transfers are initiated through the manager by giving it a data blob (NativeArray<byte>) and who it should go to. It does the transfer by splitting the data over smaller RPCs (say 200 bytes at a time) and sending them over a period of time at some specified data rate.

    The receiving end reconstructs the data, and exposes it as an entity component with the aforementioned progress state and transfer handle (and maybe some user data field to tag what it is, or I’ll use codegen to handle that).
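
    The "specified data rate" part could be simple token-bucket pacing in the send loop. A rough Python sketch of the idea (all names and numbers are illustrative; the actual system would be a C# ECS system emitting one RPC entity per chunk):

```python
class PacedSender:
    """Token-bucket pacer: each tick, send as many pending chunks as the
    configured byte budget allows, carrying unused budget forward."""

    def __init__(self, bytes_per_second: float, chunk_size: int = 200):
        self.rate = bytes_per_second
        self.chunk_size = chunk_size
        self.budget = 0.0

    def tick(self, pending_chunks: list, dt: float) -> list:
        """Return the chunks to send this tick; removes them from pending.
        The budget is capped at one second's worth to bound bursts."""
        self.budget = min(self.budget + self.rate * dt, self.rate)
        to_send = []
        while pending_chunks and self.budget >= self.chunk_size:
            to_send.append(pending_chunks.pop(0))
            self.budget -= self.chunk_size
        return to_send
```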

    If I get this implemented, I'll reply here sharing the code as a proof-of-concept for those interested.
     
  9. pbhogan

    pbhogan

    Joined:
    Aug 17, 2012
    Posts:
    384
    @NikiWalker I've created a proof-of-concept for transfers built on RPCs.

    It's available here for anyone interested: https://github.com/pbhogan/BlobTransfers

    It should be Package Manager compatible if you want to import it into a project. The readme describes how to use it, and the code is fairly well commented. Released under the Unlicense to the public domain. Do with it what you will.

    Most of the action is here: https://github.com/pbhogan/BlobTransfers/blob/main/Runtime/BlobTransferSystem.cs

    Preliminary performance tests indicate packet drop has little impact on performance, but RTT delay and jitter have a pretty massive impact, especially as the allowed transfer rate is increased. I guess this is due to how RPCs are implemented; it creates a lot of packet noise when things go wrong. I'm sure this could be improved quite a bit if the transfers were handled with a custom protocol on top of UDP packets instead. ¯\_(ツ)_/¯
     
    PolarTron and NikiWalker like this.
  10. najaa3d

    najaa3d

    Joined:
    Jan 22, 2022
    Posts:
    29
    @pbhogan's requirements mostly match our own.

    Our game is a Civilization-style game with a large world map and MBs of serialized data per map. Before joining, all data must be sent to the joining client. It's a command-based, turn-based game built on fully deterministic logic: if all players start with game state 'X' and then execute the same sequence of commands in the same order, they'll all stay in sync. If anything goes wrong, we may need to resend the server's version of the game state to a client, so occasionally this may happen mid-game, causing some pause for all players.
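
    For anyone unfamiliar with the technique: only the commands cross the wire, and a periodic state checksum detects divergence (which is what triggers the full-state resend). A toy Python sketch of the determinism check (the state shape, commands, and hash choice are all illustrative, not our actual game code):

```python
import hashlib

def apply_commands(state: dict, commands: list) -> dict:
    """Deterministically apply an ordered command list to a game state."""
    for cmd, arg in commands:
        if cmd == "add_gold":
            state["gold"] = state.get("gold", 0) + arg
        elif cmd == "move":
            state["pos"] = arg
    return state

def state_hash(state: dict) -> str:
    """Checksum used to verify all peers stayed in sync; a mismatch
    triggers resending the server's full game state."""
    canonical = repr(sorted(state.items())).encode()
    return hashlib.sha256(canonical).hexdigest()
```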

    The other thing we've got is Collaborative Map Editing, so that more than one person can edit the map at once. The same concept applies here: the joining map editor needs the entire scene to join.

    Right now, we send this all as One Big Stream, equivalent to a 50 MB file transfer. This works fine on our LAN, but we need something that rides on top of Unity's socket connection logic so that we don't have to deal with firewalls, etc.

    FTP-style performance is the goal. Priority of this operation should be configurable. For us, it's mostly a don't-care, because we make very lightweight use of networking, due to our game being turn-based, command-sequenced, and deterministic. (Age of Empires was the first big game to use this method, AFAIK.)
     
    simon-lemay-unity and NikiWalker like this.
  11. PetriAuvinen

    PetriAuvinen

    Joined:
    Jun 14, 2021
    Posts:
    5
    Please describe your use-case(s).

    I need general reliable data transfer for state synchronization when joining the session and for texture sharing, among other unforeseen cases. For enterprise customers, opening a single port instead of multiple ones is preferred, so having a reliable channel over UDP would be nice. It would also greatly simplify the multiplayer implementation to not need to create code for multiple connections.

    Describe how many "files" you expect to have, their average size, and the max size you'd like the ability to send.

    I can't really say, and I wish there were no limitations on the netcode side: an external buffer could be created if the data amount happens to exceed the transport system's buffer size, e.g. you could monitor the buffer and refill it from the external buffer as data gets successfully sent and free space becomes available. Typical use cases would be a few megabytes at maximum.

    Does this data change at runtime?
    • If so, how frequently?
    • And by whom (client vs server)?
      • I.e. Who is authoring this data?
      • Does ALL of this data need to be shared with ALL clients?
      • I.e. Can it be spatially chunked?
        • Is relevancy important to you? (e.g. procedural world, only sending data for the proc-gen dungeons you enter).
        • Is Importance a factor? Or would this data always have maximum importance?
        • What are the streaming requirements?
          • E.g. Are you expecting on-demand download of specific files?
          • What kind of latencies are acceptable for streaming this large data?
            • E.g. A few hundred ms? A few seconds? Tens of seconds? Configurable?
        • Will the client be experiencing "normal" multiplayer gameplay (i.e. snapshot replication) while this streaming occurs?
          • Or will this be hidden by loading menus?
    I wish to handle all of this myself and not have additional implementation complexity due to these kinds of features. I just want the ability to transfer lots of reliable data when I need to.

    Do you expect the server to implement bandwidth flow control/throttling for this data?
    • E.g. Limiting the client to a specific kbps (using similar controls as throttling snapshots)?
    • Or would you be fine with huge packet bursts?
    I think there should be some kind of mechanism for throttling the server, especially in order to guarantee that all clients continuously get served with important data. There should probably be a high-importance pipeline that always gets priority when stuffing new packets with data.

    Do you expect this data to be cached/saved between sessions?

    E.g. If I (a user) play the same user-generated level 10 times, would you expect netcode to cache it once?
    • Thus, do you expect some kind of "file" versioning and/or change detection & replication?
    • What is the lifetime of this data on the server?
      • One play session? Persistent? etc.
      • Would you expect the server to interface with a database backend to persist this data?
    Once again I would like to handle this myself.

    What format is this data in? E.g. NativeArray<byte>? BlobArray<T>?

    Byte data. NativeArray<byte> would probably be the data format then?

    How do you expect to reference these large blobs of data?
    • Stored directly on GhostFields and/or Dynamic Buffers?
    • Via some kind of replicated reference to different "files"?
    Something efficient for both Jobs and MonoBehaviours.

    Finally: Can you rate the importance of this feature?
    Blocking

    And describe your own planned or existing workarounds (and performance data if you have it).

    I was trying out Netcode for ECS some years ago and was disappointed with the small reliable buffer size. I also could not find any way to monitor the buffer to see whether it was about to overflow, at which point further reliable messages would not get sent. The inability to monitor the buffer might be fixed now, but I couldn't make sense of the API documentation to confirm this, and I don't have time to make another test in practice.
     
  12. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    I can't really wait any longer for this. Does anyone have experience with creating a side-channel for this kind of data? Any recommendations for a good networking library to use alongside Netcode for the purpose of sending larger world data? For me, I need to send voxel data.

    Edit: this has turned into an attempt to integrate Steamworks with the Transport package and send large packets through that...
     
    Last edited: Nov 12, 2023
  13. bryn-Holonautic

    bryn-Holonautic

    Joined:
    Jun 20, 2023
    Posts:
    7
    • Please describe your use-case(s).
      • Distributing serialised procedural terrain data to players. The bulk of terrain data is an array of voxels.
    • Describe how many "files" you expect to have, their average size, and the max size you'd like the ability to send.
      • A given map is about 3 MB uncompressed; compressed with 7zip it's about 112 KB. We can use the C# standard library for compression and decompression. For a given game session, the data would need to be synced at most once to each player.
    • Does this data change at runtime?
      • The data can be generated when it needs to be synchronised.
      • If so, how frequently?
        • When a player joins a session in progress, and potentially at game start.
      • And by whom (client vs server)?
        • I.e. Who is authoring this data?
        • The server would send an authoritative copy of the game data. It would potentially also be useful for custom maps to enable a client to upload data to the server, but this is secondary.
    • Does ALL of this data need to be shared with ALL clients?
      • Not necessarily. Data would need to be shared in two cases: 1. a player joins a session in progress with a procedurally generated map, in which case only that player needs the data; 2. a hand-authored (rather than procedurally generated) map, in which case all players need the data at the start.
      • I.e. Can it be spatially chunked?
        • The data is already spatially chunked, but the world is small enough that we'd want to send all chunks to all players.
      • Is relevancy important to you? (e.g. procedural world, only sending data for the proc-gen dungeons you enter).
        • Only at the session level. A game session involves a group of around 2-8 players.
      • Is Importance a factor? Or would this data always have maximum importance?
        • The session could not start/a player cannot join the session until they have the data.
      • What are the streaming requirements?
        • For late joins, only one player needs the data, at one time. For hand-authored maps, when a session is established, players would select a map. At this point we would want to sync at least low-resolution previews.
        • E.g. Are you expecting on-demand download of specific files?
        • This may be useful for sharing handmade maps, but the priority is sending data generated on the server to the other players in the session.
        • What kind of latencies are acceptable for streaming this large data?
          • E.g. A few hundred ms? A few seconds? Tens of seconds? Configurable?
          • Some seconds is acceptable, since this would take place before game start.
      • Will the client be experiencing "normal" multiplayer gameplay (i.e. snapshot replication) while this streaming occurs?
        • Or will this be hidden by loading menus?
        • The loading will be hidden by menus/loading screens.
    • Do you expect the server to implement bandwidth flow control/throttling for this data?
      • E.g. Limiting the client to a specific kbps (using similar controls as throttling snapshots)?
      • Or would you be fine with huge packet bursts?
        • Having control over packet bursts would be useful, especially if sending a lot of packets affects CPU performance, but it's not essential.
    • Do you expect this data to be cached/saved between sessions?
      • E.g. If I (a user) play the same user-generated level 10 times, would you expect netcode to cache it once?
        • Thus, do you expect some kind of "file" versioning and/or change detection & replication?
        • The data is likely to be different in every session, so it is not necessary to cache it. However, we may want caching in the future if we implement a level creator.
      • What is the lifetime of this data on the server?
        • One play session? Persistent? etc.
        • One play session or less. The data can be generated quickly on the server.
        • Would you expect the server to interface with a database backend to persist this data?
        • No.
    • What format is this data in? E.g. NativeArray<byte>? BlobArray<T>?
      • Data is currently marshalled to ReadOnlySpan<byte> for serialisation.
    • How do you expect to reference these large blobs of data?
      • Stored directly on GhostFields and/or Dynamic Buffers?
      • Via some kind of replicated reference to different "files"?
      • The data needs to be deserialised before it can be used, so getting a reference to the 'file' would be ideal.
    • What data-types and filetypes do you expect to replicate?
      • Examples: Terrain data, modded animations, text files?
      • Currently, only terrain data.
      • Finally: Can you rate the importance of this feature?
        Will Need This Year (or else we'll have to find a workaround).
    • And describe your own planned or existing workarounds (and performance data if you have it).
      • Planned workaround: synchronising procedural generation settings and a buffer of modifications to apply sequentially. This may work well enough for late joins, but it will restrict our ability to create custom authored levels. Performance to be determined.
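
    To illustrate the compress-before-transfer step mentioned above (3 MB down to roughly 112 KB before any chunking happens), a minimal Python sketch with gzip standing in for 7zip / the C# standard library; the function names are made up:

```python
import gzip

def pack_for_transfer(raw: bytes) -> bytes:
    """Compress serialized map data before splitting it for the wire.
    Voxel/terrain data is typically highly repetitive, so large
    compression ratios are plausible."""
    return gzip.compress(raw, compresslevel=9)

def unpack_after_transfer(packed: bytes) -> bytes:
    """Inverse of pack_for_transfer, run on the receiving client."""
    return gzip.decompress(packed)
```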
     
    Last edited: Jan 23, 2024