Writing to a NativeMultiHashMap across multiple jobs

Discussion in 'Entity Component System' started by PublicEnumE, Aug 25, 2019.

  1. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    I'm passing the same
    NativeMultiHashMap<T, Q>.ParallelWriter
    into three jobs, across three systems.

    I would expect to be able to write to it from all three jobs, in parallel. After all, I can successfully write to it from multiple threads already, when one of my jobs (an IJobForEach) is scheduled using .Schedule().
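
    In code, roughly this (simplified; the system, job, and field names here are made up), repeated in each of the three systems:

    Code (CSharp):
    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        var job = new WriteEventsJob
        {
            // The same underlying map's ParallelWriter, shared by all three systems.
            Writer = m_EventMap.AsParallelWriter()
        };
        return job.Schedule(this, inputDeps);
    }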

    Instead, I get an InvalidOperationException:

    Am I misunderstanding how a NativeMultiHashMap<T, Q>.ParallelWriter can be used? Is there something about writing to it from multiple jobs that's off limits?

    Thank you sincerely for any advice.
     
    Last edited: Aug 25, 2019
  2. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,780
    I may be wrong, but I don't think you can safely write to a hash map from different systems. The job system gives special treatment to writes within a single job, so that they are safe.
     
  3. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Hmm, thanks for that knowledge.

    What about multiple jobs from the same system?

    Is it restricted to single systems, or single jobs?
     
  4. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,780
    If you schedule multiple jobs in one system, you should be fine.
     
    PublicEnumE likes this.
  5. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Thanks. That’s very useful to know.

    That’s a pretty unintuitive restriction, at least from my understanding of DOTS.

    Do you understand why this restriction exists? Why would Unity treat multiple jobs from one system differently from single jobs from multiple systems?
     
  6. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    That's incorrect.

    The restriction is that ParallelWriter by default only guarantees that writes from within the same IJobParallelFor job work correctly. E.g. we can't prove that you aren't reading in parallel with writing when there are multiple different jobs, hence this is not safe by default.

    You can use [NativeDisableContainerSafetyRestriction] which completely disables safety for that container. In this particular case it should work fine. But it means there will be no safety checks for this container at all.
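
    For example, a minimal sketch (the job and field names are made up):

    Code (CSharp):
    struct WriteEventsJob : IJobParallelFor
    {
        // Disables all safety checks for this container, so several different
        // jobs may write through ParallelWriters of the same map. It is then
        // up to you to make sure nothing reads it while the writes are running.
        [NativeDisableContainerSafetyRestriction]
        public NativeMultiHashMap<int, float>.ParallelWriter Writer;

        public void Execute(int index)
        {
            Writer.Add(index, 1f);
        }
    }

    The attribute lives in Unity.Collections.LowLevel.Unsafe.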
     
    PublicEnumE likes this.
  7. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    Thanks, @Joachim_Ante. In this case, would you think it’s unwise to do the following?

    1. Write to the NativeMultiHashMap from three parallel jobs.

    2. Read and use the results in a 4th job.

    Would you stay away from this pattern?

    The alternative I’m imagining would be:

    1. Write to three separate NativeMultiHashMaps in three separate parallel jobs.

    2. In a 4th job, loop over these first three maps and combine them into one new map.

    3. In a 5th job, read from and use the new combined map.
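
    Step 2 of the alternative could look something like this (a sketch; it assumes int/float maps, and all names are made up):

    Code (CSharp):
    struct CombineMapsJob : IJob
    {
        [ReadOnly] public NativeMultiHashMap<int, float> MapA;
        [ReadOnly] public NativeMultiHashMap<int, float> MapB;
        [ReadOnly] public NativeMultiHashMap<int, float> MapC;
        public NativeMultiHashMap<int, float> Combined;

        public void Execute()
        {
            CopyInto(MapA);
            CopyInto(MapB);
            CopyInto(MapC);
        }

        void CopyInto(NativeMultiHashMap<int, float> source)
        {
            // GetKeyValueArrays copies out every key/value pair, including
            // duplicate keys, so nothing is lost in the merge.
            var pairs = source.GetKeyValueArrays(Allocator.Temp);
            for (int i = 0; i < pairs.Keys.Length; i++)
            {
                Combined.Add(pairs.Keys[i], pairs.Values[i]);
            }
            pairs.Dispose();
        }
    }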
     
    Last edited: Aug 25, 2019
  8. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,780
    Thanks, Joachim.
    I have a question regarding that: if we have more than one job in a system, scheduled as dependencies of each other (perhaps I missed that critical keyword), wouldn't that ensure safety for writing into hash maps?

    Btw. Good luck with Unite ;)
     
    Last edited: Aug 26, 2019
    PublicEnumE likes this.
  9. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    And the same goes for jobs in multiple systems, using the inputDeps that are passed in, right?
     
  10. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,154
    If you create dependencies between the jobs, they are not writing in parallel to the NMHM, so this will work, but is not what you initially asked.
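
    i.e. something like this (a sketch, with made-up job names):

    Code (CSharp):
    var writer = map.AsParallelWriter();
    var handleA = new WriteJobA { Writer = writer }.Schedule(this, inputDeps);
    var handleB = new WriteJobB { Writer = writer }.Schedule(this, handleA);
    var handleC = new WriteJobC { Writer = writer }.Schedule(this, handleB);
    // C waits on B, which waits on A: the writes are serialized, not parallel.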
     
    PublicEnumE likes this.
  11. PublicEnumE

    PublicEnumE

    Joined:
    Feb 3, 2019
    Posts:
    729
    In that case, why would I get the error from the OP?
     
  12. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,761
    You wouldn't, if you pass the handle between systems yourself (or do it all from within the same system).

    But that's not how multiple systems normally work. Systems don't just pass a job handle to each other; instead, each system gets a dependency handle in BeforeOnUpdate(), built from its dependencies on other systems based on their read/read-write component queries. This way, jobs from multiple systems can run in parallel with each other when they don't have conflicting dependencies.

    Code (CSharp):
    JobHandle BeforeOnUpdate()
    {
        BeforeUpdateVersioning();

        // We need to wait on all previous frame dependencies, otherwise it is possible
        // that we create infinitely long dependency chains without anyone ever waiting on them
        m_PreviousFrameDependency.Complete();

        return GetDependency();
    }
     
    PublicEnumE likes this.