
[Jobs][Lags] JobTempAlloc has allocations that are more than 4 frames old

Discussion in 'Entity Component System and C# Job system' started by dyox, Jan 18, 2018.

  1. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    Hello,

    I'm currently using jobs to work with meshes and data.
    I'm using List/Array; both can be resized at any moment inside the job's Execute().

    This message + Spike/Lag appear on log :
    Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak

    What is the best way to work with a few allocations in jobs, and why does this message cause a massive spike? (Other than pre-caching, which is not always possible.)

    (Editor,Standalone-Mono,Standalone-IL2CPP)

    Thanks.
     
    andywood likes this.
  2. LeonhardP

    LeonhardP

    Unity Technologies

    Joined:
    Jul 4, 2016
    Posts:
    1,273
    Hi dyox,
    This might be a bug. Could you please create a small reproduction project and submit it with a bug report via the bug reporter? If you post the case # here we can process it faster.

    Detailed instructions on how to submit bug reports can be found here.
     
  3. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    It's sent (Case 989338), and here is the code.

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;
    using Unity.Jobs;

    public class JobAlloc : MonoBehaviour
    {
        static public JobAlloc Instance;

        public struct Job : IJobParallelFor
        {
            public void Execute(int i)
            {
                for (int n = 0; n < Instance.Size; ++n)
                    Instance.list[i].Add(i); // <--- Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
            }
        }

        public int Count = 4096;
        public int Size = 4096;
        public List<int>[] list;

        public JobHandle Handle;

        // Use this for initialization
        void Start()
        {
            Instance = this;
            list = new List<int>[Count];
        }

        // Update is called once per frame
        void Update()
        {
            if (Handle.IsCompleted)
            {
                for (int i = 0; i < Count; ++i)
                {
                    list[i] = new List<int>(Count / 8);
                }

                Job job = new Job();
                Handle = job.Schedule(Count, 1);
            }
        }
    }
     
    LeonhardP likes this.
  4. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    [Update] I've found a new way to create this spike/message without any alloc (Mono-JIT)

    Code (CSharp):
    public class JobData : MonoBehaviour
    {
        static public readonly byte[][] Datas = new byte[][]
        {
            new byte[] { 0, 1, 2, 3, 4, 5, 6 },
            new byte[] { 0, 1, 2, 3, 4, 5, 6 },
            new byte[] { 0, 1, 2, 3, 4, 5, 6 },
            new byte[] { 0, 1, 2, 3, 4, 5, 6 },
        };
    }

    public struct Job : IJobParallelFor
    {
        public void Execute(int i)
        {
            byte[] a = JobData.Datas[i]; // <--- Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
        }
    }
    It appears that accessing a static field on Mono (not IL2CPP) causes an allocation at least once in the first job that touches it, producing the warning plus a lag spike.
     
  5. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    Any update ?
     
  6. M_R

    M_R

    Joined:
    Apr 15, 2015
    Posts:
    378
    You should not access global (i.e. static) state from jobs, as it can introduce race conditions.
    You should put a NativeArray inside the job struct and use that.

    @UT shouldn't the job compiler prevent this kind of stuff?
     
  7. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    That makes no sense.
    Jobs are threads, and threads can access everything.
    Accessing a static variable does not by itself mean a race condition; see the example above with a readonly static field.
    Passing data only via NativeArray destroys much of the power of multithreading if we have to copy all data from the main thread every time.
    Also, many algorithms use lookup tables; how do we use them if we cannot access static fields?
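    For readers landing here: the supported pattern for a shared lookup table is to copy it once into a NativeArray and pass it into the job struct with [ReadOnly]. A minimal sketch, where the names and the Allocator.Persistent lifetime choice are illustrative assumptions:

    ```csharp
    using Unity.Collections;
    using Unity.Jobs;

    public struct LookupJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<byte> Lookup; // read-only table shared by all workers
        public NativeArray<byte> Results;

        public void Execute(int i)
        {
            // Pure reads of the table from any worker thread are safe.
            Results[i] = Lookup[i % Lookup.Length];
        }
    }

    // Main thread, once at startup (hypothetical setup):
    // var lookup = new NativeArray<byte>(sourceTable, Allocator.Persistent);
    // Keep it alive across frames and Dispose() it on shutdown.
    ```

    The copy into the NativeArray happens once, not per job, so the lookup table is not re-copied from the main thread every frame.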
     
  8. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    24,669
    I think @Joachim_Ante mentioned they would have some error checking to come...
     
  9. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,124
    Why aren't you just using NativeArray/NativeList on the main thread too? No need to copy data around...
     
  10. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    I looked at the bug and replied a few minutes ago; replying here too in case someone else finds this thread.

    Accessing statics is indeed not intended to be allowed and will be protected against in the future. Instead you should pass native data structures through the job data. In many cases NativeArray alone is not enough; you also need NativeList, NativeQueue and NativeHashMap.

    The reason for the JobTempAlloc warning is that the jobs take too long (more than 4 frames), which is currently not supported. The fix would be to call Complete to wait for jobs that take more than 4 frames.

    Calling Complete is also required to clean up the data for the safety system, so you really need to call it. Calling Complete when IsCompleted is true is almost free, if you want to make sure you do not wait.

    The main reason for this job taking so long is that the job does a managed allocation, which is extremely expensive and often makes the jobified solution slower than running the code single-threaded.
    Another reason is that jobs are scheduled in batches, and there is no explicit flushing of the batches in this case, which means the jobs are scheduled later than it seems. This can be fixed by calling JobHandle.ScheduleBatchedJobs() after scheduling.
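    A minimal sketch of the pattern described above: complete last frame's job before touching the data, reschedule, and explicitly flush the batch. Class and field names here are illustrative, not from the original poster's project:

    ```csharp
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    public class SquareJobRunner : MonoBehaviour
    {
        struct SquareJob : IJobParallelFor
        {
            public NativeArray<int> Values;
            public void Execute(int i) { Values[i] *= Values[i]; }
        }

        NativeArray<int> values;
        JobHandle handle;

        void Start()
        {
            values = new NativeArray<int>(4096, Allocator.Persistent);
        }

        void Update()
        {
            handle.Complete();               // almost free if the job already finished,
                                             // and required by the safety system
            // ... safe to read/modify values here ...
            handle = new SquareJob { Values = values }.Schedule(values.Length, 64);
            JobHandle.ScheduleBatchedJobs(); // flush the batch so workers start now
        }

        void OnDestroy()
        {
            handle.Complete();
            values.Dispose();
        }
    }
    ```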
     
    laurentlavigne likes this.
  11. LennartJohansen

    LennartJohansen

    Joined:
    Dec 1, 2014
    Posts:
    2,059
    Hi. I tried to allocate a NativeArray in the job itself; I needed a temporary array for a calculation. According to the profiler it still allocated 302 bytes of GC memory. Is there a way around this allocation when creating a NativeArray?
     
  12. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    24,669
    I thought the idea was to allocate it outside but fill it in inside...
     
    laurentlavigne likes this.
  13. LennartJohansen

    LennartJohansen

    Joined:
    Dec 1, 2014
    Posts:
    2,059
    For a single IJob that would work. But imagine an IJobParallelFor job over 100,000 items: it will be split into multiple jobs based on the inner-loop batch count.

    You cannot allocate a temporary native array from the outside that all those jobs use.

    The 302-byte issue is the same outside of jobs: creating a NativeArray allocates this on the GC heap, independent of the size of the array. If the array had a resize function we could keep a pool of arrays and get no GC allocations, but as it is now it allocates every time you create one.

    Lennart
     
    laurentlavigne and hippocoder like this.
  14. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    73
    In such a situation I think you should allocate a contiguous buffer once and access it using an index offset in each job, similar to how this is done in CUDA or equivalent solutions.
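    A sketch of that idea in job-system terms, assuming a per-item scratch size known up front (all names here are illustrative). Note that writing outside the current index in an IJobParallelFor requires the [NativeDisableParallelForRestriction] attribute, since each item deliberately owns a whole slice rather than a single element:

    ```csharp
    using Unity.Collections;
    using Unity.Jobs;

    public struct ScratchJob : IJobParallelFor
    {
        const int ScratchPerItem = 16;

        [NativeDisableParallelForRestriction]
        public NativeArray<float> Scratch; // one big buffer, sliced by index

        public void Execute(int i)
        {
            int offset = i * ScratchPerItem;   // each item owns its own slice
            for (int j = 0; j < ScratchPerItem; ++j)
                Scratch[offset + j] = 0f;      // use the slice as temp storage
        }
    }

    // Main thread (illustrative): one allocation covers every item's scratch space.
    // var scratch = new NativeArray<float>(itemCount * 16, Allocator.TempJob);
    ```

    Because the slices never overlap, no two workers touch the same elements, so this stays race-free despite the relaxed restriction.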
     
    Invertex and laurentlavigne like this.
  15. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    1,966
    It would be helpful in general to say why statics are not allowed; otherwise you simply perpetuate myths about concurrency. There is no race condition that comes merely from accessing static data from multiple threads; it's done all the time in many apps, and it's a very common pattern with concurrent collections in high-throughput applications. There might be good reasons not to do it given how jobs work internally, but that should be clarified. We already have enough ignorance about concurrency in the game development community.
     
    dadude123 and laurentlavigne like this.
  16. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    526
    Maybe also clarify why the 4-frames-old limit matters.
    We must count frames and call job.Complete() on the main thread, so this sort of multithreading ends up useless and laggy.
    At 120 fps, 4 frames is too short for jobs: 120 fps -> job.Complete() -> lag...

    The current system works and keeps the fps stable even at 95% CPU usage, whereas with System.Thread spikes and lags appear at only 50% CPU usage.
    (8-core example: 7 jobs + render thread + main thread, versus 7 System.Threads.)
    Maybe just open up jobs and let the user decide what to do with them.
     
    Last edited: Jan 25, 2018
    SiriusRU and laurentlavigne like this.
  17. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    2,012
    I wonder if filling in a job and processing in a dependent for-job is possible (today is my make-art day, I'm not touching VS).

    I think they want to control their update path, and that depends on the data that jobs can access.
    An [AllowStatics] attribute, plus a warning that can be hidden with a flag, would be a nice compromise.
     
    Last edited: Jan 25, 2018
  18. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    The allocation is caused by a managed object we create for tracking memory leaks. There is currently no way around it. We have some ideas for fixing it for temporary allocations, but I cannot give an estimate on when that will be implemented.
     
  19. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    The reason we are not allowing it (at the very least by default) is that the goal of the Unity job system is to guarantee that there are no race conditions.
    Multiple threads reading static data is not a race condition - unless someone is writing to it at the same time. With static variables we do not know who else is accessing the data so we cannot guarantee that no one else is writing it while the job is running.

    So, the limitation is not because we know it is a race condition, but because we cannot guarantee that it is not.
     
    alexzzzz, Prodigga and dadude123 like this.
  20. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    So far we have been focusing on using all cores for the simulation rather than making it asynchronous - which means you almost always complete the job within 1 frame. The 4 frame limit comes from the fact that we are using a specialized allocator for this case, not that we actively try to limit anything.
    Long running asynchronous jobs are slightly different and require some tweaking, but allowing you to choose a different allocator for such jobs which does not have to complete within 4 frames seems like a good start.
     
  21. Per

    Per

    Joined:
    Jun 25, 2009
    Posts:
    456
    Static volatile data or atomics will be fine regardless (even if the result could be some odd values). But for anything more complex you may want to offer a generic read/write-lock wrapper instead of locking the option out entirely.
     
    Enzi likes this.
  22. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    73
    It does not necessarily need to be about filling in the data; if you need a temp table in the job, just preallocate the empty buffer.

    Allocating one [10 000] buffer is orders of magnitude faster than 1000 x [10] buffer allocations, regardless of the overhead of allocating within the job itself.
     
    laurentlavigne likes this.
  23. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    3,227
    laurentlavigne likes this.
  24. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    Atomics can be OK to use depending on what you use them for, but if you can get different values from it depending on timing, that is a race condition. It might not be a race you care about, but if, for example, you are using the value of the atomic as an index, it could throw an exception when the value is old - which might only happen on a device you do not have access to - so we cannot make any assumptions in the engine about it being safe just because it is an atomic.

    We do support atomic reads/writes of data from jobs, just not through statics. The way we support it is through custom native data structures which can use atomics to do safe operations from multiple threads; examples of this are NativeQueue and NativeHashMap.
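    A sketch of the NativeQueue approach Tim mentions, collecting even numbers from multiple worker threads. Depending on the Unity.Collections version, the concurrent writer is obtained via AsParallelWriter() (older versions used ToConcurrent()), so treat the exact API name as an assumption:

    ```csharp
    using Unity.Collections;
    using Unity.Jobs;

    public struct CollectEvensJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<int> Input;
        public NativeQueue<int>.ParallelWriter Output; // thread-safe writer

        public void Execute(int i)
        {
            if (Input[i] % 2 == 0)
                Output.Enqueue(Input[i]); // safe to call from any worker thread
        }
    }

    // Main thread (illustrative):
    // var queue = new NativeQueue<int>(Allocator.TempJob);
    // new CollectEvensJob { Input = input, Output = queue.AsParallelWriter() }
    //     .Schedule(input.Length, 64)
    //     .Complete();
    // ... drain the queue, then queue.Dispose() and input.Dispose();
    ```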
     
  25. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    1,966
    Yeah, I have a bit of a knee-jerk reaction to throwing the term race condition around liberally. Technically it actually requires both intent and bad code on the part of the user in all but a few cases (in the context of CLR rules). And IMO that's important when talking to people who don't know the domain: once they understand it's operations that are thread-safe or not, not values, a light comes on and it all starts to fall into place.
     
  26. UnLogick

    UnLogick

    Joined:
    Jun 11, 2011
    Posts:
    1,562
    A similar problem occurs if you have a non-static data container with NativeArrays and then have a job that operates on them.
    Code (CSharp):
    struct MyJob : IJob
    {
        [ReadOnly]
        public NativeArray<Vector3> inputData;
        ...
    }

    var myJob = new MyJob();
    myJob.inputData = dataContainer.persistentReadOnlyData;
    myJob.Schedule();

    The problem here is that the assignment to inputData apparently copies the array into a temporary array, which is not what I want. Copying the data from my persistent read-only data source before any job can use it is madness.

    If Execute takes more than four frames I get the warning. Please don't say something like "the job should know the data container and access it from there"; that would be a hard dependency on something that should be flexible. A ref NativeArray might work, but last time I checked they were not supported yet.

    Edit: I guess having a ref NativeArray would also break the no-references rule, but it must be possible to set up a read-only data source that doesn't need constant copying.
     
    Last edited: Jan 31, 2018
  27. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    Assigning NativeArrays like that is supported and will not copy the contents of the NativeArray. Why do you think it makes a copy?
    It will prevent the main thread from _writing_ to persistentReadOnlyData until you complete the JobHandle of the job you scheduled, since there is a job reading from it. The same access restriction applies to disposing the NativeArray, so you have to store the JobHandles and call Complete on them at some point.

    The 4 frame limit has nothing to do with the NativeArrays passed to the job. When you call Schedule on the job, Unity internally allocates some space to store a copy of the struct you scheduled with. It is that allocation which cannot live more than 4 frames, and unfortunately there is no way around that allocation right now. I agree completely with everyone saying that there should be; the limit is not intentional.
     
  28. UnLogick

    UnLogick

    Joined:
    Jun 11, 2011
    Posts:
    1,562
    That sounds good; I was getting very confused about why it would do such a thing. It doesn't explain what I was seeing, but I'm unable to reproduce it, so perhaps my Unity instance got borked. I didn't get the warning if I allocated the native arrays directly on the job, only if I created them somewhere else and then copied the NativeArray; however, now I'm consistently getting the warning and am unable to reproduce the old results... very weird, but a lot more sane.

    About the four-frame limit on jobs in general: that is actively going to prevent us from producing a completely spike-free experience when something suddenly takes priority and we need to force-complete jobs that in theory could wait, not to mention the bookkeeping that involves. So please bump the priority of removing that limitation so that it ends up right behind getting us ECS to begin with. ;)

    And about ensuring jobs always finish the same frame: that's specifically not what we want. We want to hit a target frame rate, and unless we say "Complete this now", waiting a few frames is quite acceptable. And if something pops up and we keep completing something else first, that's fine too.
     
  29. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    481
    Interesting question, but for those seeing this issue: are you calling Dispose() on the NativeArray<T>s as appropriate to their lifecycle? I don't see this warning if I dispose temp stuff after job completion, or dispose arrays marked persistent and stored as a member of the owning class in OnDestroy (or another suitable location, depending on usage).
     
  30. UnLogick

    UnLogick

    Joined:
    Jun 11, 2011
    Posts:
    1,562
    The NativeArray is persistent, and for testing purposes my job runs for a full second. Each frame I write a debug log to report whether it has completed.

    After four frames I get the warning, regardless of whether I dispose or not. I believe Tim was right: it's not the NativeArray that is the problem, it's the fact that I keep the job alive for more than four frames.

    Attempting to test your suggestion, I hit this exception:
    InvalidOperationException: The previously scheduled job MyJob:JobCode reads from the NativeArray JobCode.input. You must call JobHandle.Complete() on the job MyJob:JobCode, before you can deallocate the NativeArray safely.

    It seems a bit overkill that I need to do this when I've already tested IsCompleted.
    Code (CSharp):
    if (jh.IsCompleted && !deallocated)
    {
        deallocated = true;
        jobCode.input.Dispose();
        jobCode.output.Dispose();
    }
    I know you want to ensure that it's completed... but that's what I'm doing. Reproduced with beta 5.
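    For anyone hitting the same exception: per Tim's earlier reply, Complete() must be called even when IsCompleted is already true, because it also releases the safety-system records guarding the arrays. A sketch of the fixed version of the snippet above, reusing its field names:

    ```csharp
    if (jh.IsCompleted && !deallocated)
    {
        deallocated = true;
        jh.Complete();            // returns immediately if the job is done, but
                                  // clears the safety system's read/write records
        jobCode.input.Dispose();  // now legal: no scheduled job references them
        jobCode.output.Dispose();
    }
    ```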
     
  31. LennartJohansen

    LennartJohansen

    Joined:
    Dec 1, 2014
    Posts:
    2,059
    Did they not say somewhere that jobs can not currently be used for more than 4 frames?

    Lennart
     
  32. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    1,966
    They said that was an artifact of the current setup, they don't plan on intentionally limiting the amount of time jobs can take.
     
  33. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    2,012
    The 4 frame limit - what's the consensus on preventing those warnings? Cut a for-job up into smaller bits?
     
  34. Peter77

    Peter77

    Joined:
    Jun 12, 2013
    Posts:
    3,227
    This error also occurs without using any C# jobs on the user side.

    I just tested my 2017.1 project in Unity 2018.1.0b7 and found the following error in output_log.txt. Please note, I did not implement any C# Job code; this error must occur in the engine itself.
    Code (CSharp):
    Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak
    (UnityPlayer) StackWalker::GetCurrentCallstack
    (UnityPlayer) StackWalker::ShowCallstack
    (UnityPlayer) GetStacktrace
    (UnityPlayer) DebugStringToFile
    (UnityPlayer) ThreadsafeLinearAllocator::FrameMaintenance
    (UnityPlayer) MemoryManager::FrameMaintenance
    (UnityPlayer) `InitPlayerLoopCallbacks'::`2'::PostLateUpdateMemoryFrameMaintenanceRegistrator::Forward
    (UnityPlayer) ExecutePlayerLoop
    (UnityPlayer) ExecutePlayerLoop
    (UnityPlayer) PlayerLoop
    (UnityPlayer) PerformMainLoop
    (UnityPlayer) MainMessageLoop
    (UnityPlayer) UnityMainImpl
    (UnityPlayer) UnityMain
    (Wolf) __scrt_common_main_seh
    (KERNEL32) BaseThreadInitThunk
    (ntdll) RtlUserThreadStart
     
  35. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    JobTempAlloc is a type of memory which can be used by both managed and native code. It is used frequently in the engine itself, so this does sound like a bug in an engine system unrelated to the managed job system that keeps a JobTempAlloc alive for too long.
     
  36. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    4,355
    Correct, we are planning to fix the warnings for jobs that take longer than 4 frames. We definitely want to support use cases of long-running async computation.
     
  37. ImFromTheFuture

    ImFromTheFuture

    Joined:
    May 21, 2015
    Posts:
    27
    Hey, I'm getting this bug with a SteamVR project I'm working on. After reading this forum post, I understand the bug a little more but how do I solve it?
    Is it an error with the SteamVR package? Should I try reimporting it?
     
  38. josephriedel

    josephriedel

    Joined:
    Oct 5, 2013
    Posts:
    13
    Thanks @Joachim_Ante, just curious if the warning will be fixed in 2018.2? (currently lots of spam related to long running tasks in 2018.1).
     
    Vanamerax likes this.
  39. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    33
    The 4 frame max is still occurring in 2018.2, and I see no way of hiding the warnings. Jobs are running great in an asynchronous setup; my jobs take 10-40 frames, saving tons of main-thread time. I just need Allocator.Persistent to apply under the hood as well.

    From what I understand, this *WILL* be allowed/fixed, so I will continue development around async jobs. Just hope it's fixed in time :)
     
  40. RafaelF82

    RafaelF82

    Joined:
    Oct 28, 2013
    Posts:
    108
    Same here: a few of these appear in random logs from clients, and I have no idea where they're coming from. There are no jobs in the game; I think it might be some async loads. I have no idea whether this is an issue I should investigate further or not.
     
  41. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    These warnings come from anything using JobTemp allocations. The engine also uses such allocations internally, so if you are getting warnings about it without using jobs in C#, it is most likely a bug in the Unity C++ code, and the only thing you can really do about them is file bug reports.
     
  42. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    1,387
    I also get the same error, without using any C# jobs. It looks like it happens during scene loading, probably async.
    The project is way too big to file a bug report, and making a smaller repro case is probably impossible for such a bug. :( Will this bug actually cause a memory leak, or are these just safe warnings?
     
  43. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    1,387
    After some more fiddling, I have found the source of the bug! It is very easy to reproduce!
    Take a particle system with its shape set to a skinned mesh renderer, set the timescale to 0, then toggle the particle system's GameObject on and off...
    This will cause the "Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak" warning.
    Unity 2017.4.9
     
  44. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    62
    Great that you found the source; please file a bug report so it gets assigned to the correct team and tracked properly.

    As for the other question: no, it will not cause a memory leak, but it will cause fragmentation of the temp job memory pool. That fragmentation causes higher memory usage, and can also cause redundant overflow allocations which hurt performance.
     
  45. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    1,387
    I have just reported that bug.
    Case number: 1079991
    Very easy to reproduce, with a 100% repro rate.
     
  46. Nikolai-Ostertag

    Nikolai-Ostertag

    Joined:
    Mar 17, 2015
    Posts:
    2
    When I build my game it crashes. I think the system runs out of memory; could this be the reason?
    I get the warning pretty much all the time:

    "Internal: JobTempAlloc has allocations that are more than 4 frames old - this is not allowed and likely a leak"

    We use a lot of jobs that could take longer than 4 frames.
     
  47. nilsdr

    nilsdr

    Joined:
    Oct 24, 2017
    Posts:
    157
    I had tons of these messages; I managed to 'fix' them by either disabling cast shadows on my directional light or disabling receive shadows on my meshes.
     
  48. andywood

    andywood

    Joined:
    Sep 30, 2016
    Posts:
    1
    I'm getting a log full of this error. I'm not using Unity jobs at all. I'm using my own custom thread pool which is very simple and nearly lock-free. I spawn (System.Environment.ProcessorCount - 1) threads and use a message-pump like mechanism on the main Unity thread to read back results without any expected contention. Does Unity just have a problem in general with doing managed allocs on threads other than the main thread? This is very disappointing. I'm not planning to overhaul my entire game engine over this. I can't make a small repro, sorry. My project is large. Version 2018.2.17f1
     
  49. MartijnPW

    MartijnPW

    Joined:
    Feb 11, 2018
    Posts:
    1
    We're getting the same problem too. It seems this warning is given every frame. We're not using Unity jobs, and our directional lights are all set to 'No Shadows'. Huge project also. Any thoughts?
    Unity 2018.2.15f1 and 2018.2.16f1
    Unity 2018.2.15f1 and 2018.2.16f1
     
  50. llJIMBOBll

    llJIMBOBll

    Joined:
    Aug 23, 2014
    Posts:
    531
    I have this error too; it happens when using async navmesh baking. I'm using 2018.2 and have jobs enabled.

    Code (CSharp):
    if (m_navMeshSurface[0] != null) {
        AsyncOperation asyncOperation = m_navMeshSurface[0].UpdateNavMesh(m_navMeshSurface[0].navMeshData);

        // LOADING
        while (!asyncOperation.isDone) {

            MavMeshProgress1 = asyncOperation.progress;

            if (ProgressBar != null) {
                ProgressBar.value = asyncOperation.progress;
            }

            if (ProgressBarText != null) {
                ProgressBarText.text = "1/2  UPDATING AI NAV MESH... PLEASE WAIT... " + (ProgressBar.value * 100f).ToString("F1") + "%";
            }

            // LOADED
            if (asyncOperation.progress >= 0.9f) {

                MavMeshProgress1 = 1f;

                if (ProgressBar != null) {
                    ProgressBar.value = 1f;
                }

                if (ProgressBarText != null) {
                    ProgressBarText.text = "AI NAV MESH UPDATED!";
                }
            }

            yield return null;
        }
    }