system shutdown order and job completion on shutdown

Discussion in 'Entity Component System' started by snacktime, Feb 13, 2019.

  1. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    So it appears that OnDestroyManager can get called before all jobs have been completed.

    I have systems A and B where A runs before B.

    A reads from a NativeHashMap.
    B writes to it.

    When OnDestroyManager is called for B, the safety system complains that A's jobs are not completed when I try to Dispose the NativeHashMap.
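    Roughly, the shape of the setup (a sketch; the names, capacity, and job bodies are illustrative, the cross-system handle wiring during play is omitted, and it uses the Entities preview API of the time):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;

    // A runs before B and reads from the map that B owns.
    [UpdateBefore(typeof(SystemB))]
    public class SystemA : JobComponentSystem
    {
        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            var map = World.GetOrCreateManager<SystemB>().Map;
            return new ReadJob { Map = map }.Schedule(inputDeps);
        }

        struct ReadJob : IJob
        {
            [ReadOnly] public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryGetValue(0, out _); }
        }
    }

    public class SystemB : JobComponentSystem
    {
        public NativeHashMap<int, float> Map;

        protected override void OnCreateManager()
        {
            Map = new NativeHashMap<int, float>(1024, Allocator.Persistent);
        }

        protected override void OnDestroyManager()
        {
            // Throws if a ReadJob scheduled by SystemA is still in flight:
            // B's own dependency chain knows nothing about A's jobs.
            Map.Dispose();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            return new WriteJob { Map = Map }.Schedule(inputDeps);
        }

        struct WriteJob : IJob
        {
            public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryAdd(0, 1f); }
        }
    }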
     
  2. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    The NativeHashMap is owned by your system and passed to other systems. You are responsible for completing the jobs that access the data. You could, for example, hand the job handle to the owning system whenever you write against the hashtable.
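    A sketch of that approach (RegisterConsumerHandle, WriteHandle, and the job bodies are invented names for illustration): every system that schedules work against the map hands its handle back to the owner, and the owner completes everything before writing again or disposing.

    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;

    public class MapOwnerSystem : JobComponentSystem
    {
        NativeHashMap<int, float> map;
        JobHandle consumerHandles;   // handles handed back by readers
        JobHandle lastWriteHandle;

        public NativeHashMap<int, float> Map => map;
        public JobHandle WriteHandle => lastWriteHandle;

        // Readers call this right after scheduling a job against the map.
        public void RegisterConsumerHandle(JobHandle handle)
        {
            consumerHandles = JobHandle.CombineDependencies(consumerHandles, handle);
        }

        protected override void OnCreateManager()
        {
            map = new NativeHashMap<int, float>(1024, Allocator.Persistent);
        }

        protected override void OnDestroyManager()
        {
            // Complete everything that touches the map, then dispose.
            JobHandle.CombineDependencies(consumerHandles, lastWriteHandle).Complete();
            map.Dispose();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            // The write must wait for all outstanding readers.
            var deps = JobHandle.CombineDependencies(inputDeps, consumerHandles);
            lastWriteHandle = new WriteJob { Map = map }.Schedule(deps);
            consumerHandles = default(JobHandle);
            return lastWriteHandle;
        }

        struct WriteJob : IJob
        {
            public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryAdd(0, 1f); }
        }
    }

    [UpdateBefore(typeof(MapOwnerSystem))]
    public class MapReaderSystem : JobComponentSystem
    {
        MapOwnerSystem owner;

        protected override void OnCreateManager()
        {
            owner = World.GetOrCreateManager<MapOwnerSystem>();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            // Read after the owner's last write, then hand the handle back.
            var deps = JobHandle.CombineDependencies(inputDeps, owner.WriteHandle);
            var handle = new ReadJob { Map = owner.Map }.Schedule(deps);
            owner.RegisterConsumerHandle(handle);
            return handle;
        }

        struct ReadJob : IJob
        {
            [ReadOnly] public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryGetValue(0, out _); }
        }
    }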
     
  3. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,761
    Of course. You can destroy a system at any time with World.DestroyManager(). A system only knows its own dependency chain; it has no idea what systems depend on it, so there is no way for it to complete downstream systems' jobs.

    You should try to keep your systems decoupled from each other, and I recommend avoiding sharing native containers between systems; it causes a lot of issues that will keep popping up. If you must share one, you have to handle the dependencies manually yourself.

    -edit- beaten
     
  4. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Not sharing data between systems isn't really reasonable in a lot of cases. Shared data is very common in larger, complex games, and as it stands ECS doesn't really provide good abstractions for dealing with it. That said, I don't think it's difficult to fix.

    I've been thinking about converting over to using sub systems. A sub system would look mostly like a JobComponentSystem, except that you call its OnUpdate from within a real JobComponentSystem. Containers that are private to a sub system could be encapsulated in that sub system, and shared containers could be declared on the real ECS system and injected into the sub systems as needed.

    That solves a couple of challenges. One, you don't have to reason about ordering by looking at a dozen different files; it's all in a single block of code. Two, you don't break encapsulation by passing job handles around.

    That could pose challenges for some kinds of work where the job chains get too long and you lose parallelism. But I think that's still relatively easy to solve while keeping the whole thing easy to reason about.
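    A sketch of what the idea might look like (ISubSystem and all the wiring here are hypothetical, not an existing API): the parent owns the shared container, injects it into the sub systems, and threads a single handle through each of them, so completing that one chain on destroy makes disposal safe.

    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;

    // Hypothetical: a sub system is a plain class on the parent's chain.
    public interface ISubSystem
    {
        JobHandle OnUpdate(JobHandle inputDeps);
    }

    public class ReadSubSystem : ISubSystem
    {
        readonly NativeHashMap<int, float> map;
        public ReadSubSystem(NativeHashMap<int, float> map) { this.map = map; }

        public JobHandle OnUpdate(JobHandle inputDeps)
        {
            return new ReadJob { Map = map }.Schedule(inputDeps);
        }

        struct ReadJob : IJob
        {
            [ReadOnly] public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryGetValue(0, out _); }
        }
    }

    public class WriteSubSystem : ISubSystem
    {
        readonly NativeHashMap<int, float> map;
        public WriteSubSystem(NativeHashMap<int, float> map) { this.map = map; }

        public JobHandle OnUpdate(JobHandle inputDeps)
        {
            return new WriteJob { Map = map }.Schedule(inputDeps);
        }

        struct WriteJob : IJob
        {
            public NativeHashMap<int, float> Map;
            public void Execute() { Map.TryAdd(0, 1f); }
        }
    }

    public class ParentSystem : JobComponentSystem
    {
        NativeHashMap<int, float> shared;   // shared container, owned here
        ISubSystem[] subSystems;
        JobHandle lastHandle;

        protected override void OnCreateManager()
        {
            shared = new NativeHashMap<int, float>(1024, Allocator.Persistent);
            subSystems = new ISubSystem[]
            {
                new ReadSubSystem(shared),  // shared container injected once
                new WriteSubSystem(shared),
            };
        }

        protected override void OnDestroyManager()
        {
            // Every job touching the shared container is on this one chain.
            lastHandle.Complete();
            shared.Dispose();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            var handle = inputDeps;
            foreach (var sub in subSystems)
                handle = sub.OnUpdate(handle);   // one ordered chain
            lastHandle = handle;
            return handle;
        }
    }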
     
  5. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,761
    I beg to differ. A good system should not care about any other system, only the data it is given. I've been working on a pretty complex project for over 11 months now, and since a major refactor about 1.5 months in, it has had zero containers shared between its (over 80) systems.

    If you declare a container in a system, that system owns it and it should not be used by any other system. If you want to share something between systems, declare it outside of the systems and pass it to both during initialization.
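    For example (a sketch; the Initialize methods and the bootstrap shape are invented for illustration, and job scheduling is left out):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;

    public class SystemA : JobComponentSystem
    {
        NativeHashMap<int, float> map;

        // Hypothetical injection point; not part of the ECS API.
        public void Initialize(NativeHashMap<int, float> sharedMap) { map = sharedMap; }

        protected override JobHandle OnUpdate(JobHandle inputDeps) { return inputDeps; } // jobs omitted
    }

    public class SystemB : JobComponentSystem
    {
        NativeHashMap<int, float> map;

        public void Initialize(NativeHashMap<int, float> sharedMap) { map = sharedMap; }

        protected override JobHandle OnUpdate(JobHandle inputDeps) { return inputDeps; } // jobs omitted
    }

    public static class Bootstrap
    {
        static NativeHashMap<int, float> sharedMap;

        public static void Setup(World world)
        {
            // The container is owned out here, not by either system.
            sharedMap = new NativeHashMap<int, float>(1024, Allocator.Persistent);
            world.GetOrCreateManager<SystemA>().Initialize(sharedMap);
            world.GetOrCreateManager<SystemB>().Initialize(sharedMap);
        }

        public static void Teardown()
        {
            // Complete any outstanding jobs against the map first, then
            // dispose here rather than in either system's OnDestroyManager.
            sharedMap.Dispose();
        }
    }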
     
  6. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    How you pass data to different ECS systems wasn't really the point, and I didn't even say how I do it. Once data is shared, systems have to care about what other systems are doing because of the job dependencies. Hence the idea of sub systems: move the containers onto a single dependency chain.
     