
Since it does not manage memory, is the Unity WebGL heap always bound to crash?

Discussion in 'WebGL' started by ammar_12435, Mar 14, 2019.

  1. ammar_12435

    ammar_12435

    Joined:
    Jun 28, 2018
    Posts:
    24
    Unity mentions in their manual that asset bundles are loaded into the Unity WebGL memory heap. They also mention in their blog that dynamically allocated memory in the heap is not managed (compacted), so fragmentation can occur:



    The image is from the official Unity blog and shows the green blocks of fragmented heap memory.

    So if we load an asset bundle of, say, 100 MB into the heap and then unload it, that 100 MB will end up as a fragmented block, i.e. not added back to the free memory blocks.

    Now if we have N bundles of M size, each load and unload may create a fragmented block, causing Unity to require more and more blocks from the free heap memory. Ultimately, at some point the free heap memory will be exhausted, and even though there is a lot of fragmented free space, Unity will not be able to use it and the application will crash.
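
    For concreteness, the pattern I am describing is roughly the following sketch (the URL and loop count are just placeholders):

    Code (CSharp):
    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    public class BundleCycler : MonoBehaviour
    {
        // Hypothetical bundle location; each bundle is assumed to be roughly 100 MB.
        const string BundleUrl = "https://example.com/bundles/bundle_{0}";

        IEnumerator Start()
        {
            for (int i = 0; i < 10; i++)
            {
                using (var req = UnityWebRequestAssetBundle.GetAssetBundle(string.Format(BundleUrl, i)))
                {
                    yield return req.SendWebRequest();

                    // Loading places the bundle data inside the WebGL (wasm) heap.
                    AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);

                    // ... use the bundle's assets ...

                    // Unloading frees the block, but the freed region may sit in the
                    // middle of the heap rather than at its end.
                    bundle.Unload(true);
                }
            }
        }
    }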

    Is my understanding of this correct?
     
    Stevie2Pants likes this.
  2. jukka_j

    jukka_j

    Unity Technologies

    Joined:
    May 4, 2018
    Posts:
    944
    If you load an asset bundle of size 100MB and then unload it, there will be a free block of 100MB (which may be fragmented away from the ceiling of the Unallocated Memory section), but it is still available for use. That is, if you then load another 100MB asset bundle after having unloaded the first one, the second asset bundle can take the memory space of the first, freed one.

    A fragmentation-related out-of-memory condition will occur only if fragmentation becomes so severe that all free memory is shattered into many very small blocks and a very large allocation can no longer be satisfied as a contiguous region. That is, adding up all the fragmented space there might be enough free memory, but because each allocation must be contiguous, the small fragmented blocks cannot be utilized.
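
    To illustrate with a toy model (this is only a first-fit sketch, not Unity's or Emscripten's actual allocator):

    Code (CSharp):
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Toy first-fit allocator, used only to illustrate fragmentation.
    class ToyHeap
    {
        // Free regions as (offset, size) pairs. The heap is 1000 units in total.
        readonly List<(int offset, int size)> free = new List<(int, int)> { (0, 1000) };

        public int? Allocate(int size)
        {
            for (int i = 0; i < free.Count; i++)
            {
                if (free[i].size >= size)
                {
                    int offset = free[i].offset;
                    free[i] = (offset + size, free[i].size - size);
                    if (free[i].size == 0) free.RemoveAt(i);
                    return offset;
                }
            }
            return null; // no single contiguous region is large enough
        }

        public void Free(int offset, int size) => free.Add((offset, size));

        public int TotalFree => free.Sum(b => b.size);
    }

    class Program
    {
        static void Main()
        {
            var heap = new ToyHeap();
            var held = new List<(int offset, int size)>();

            // Allocate ten 100-unit blocks, then free every second one.
            for (int i = 0; i < 10; i++) held.Add((heap.Allocate(100).Value, 100));
            for (int i = 0; i < 10; i += 2) heap.Free(held[i].offset, held[i].size);

            Console.WriteLine(heap.TotalFree);              // 500 units free in total
            Console.WriteLine(heap.Allocate(300).HasValue); // False: largest hole is only 100
        }
    }

    Half of the toy heap is free, yet a 300-unit allocation fails because no single hole is large enough.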

    Managed memory vs. native memory is orthogonal to the concept of compactable memory. Compactable memory allocations are generally extremely heavy to implement, so they are not used for general memory allocations, only in certain special cases, usually in the context of pooled allocations.
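
    As a sketch of what such pooling looks like (illustrative only, not an API Unity exposes):

    Code (CSharp):
    using System.Collections.Generic;

    // Minimal fixed-size buffer pool: by always recycling blocks of the same
    // size instead of freeing them, allocations cannot fragment the heap.
    class BufferPool
    {
        readonly Stack<byte[]> available = new Stack<byte[]>();
        readonly int blockSize;

        public BufferPool(int blockSize) { this.blockSize = blockSize; }

        public byte[] Rent() => available.Count > 0 ? available.Pop() : new byte[blockSize];

        public void Return(byte[] buffer) => available.Push(buffer);
    }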
     
  3. Cleverlie

    Cleverlie

    Joined:
    Dec 23, 2013
    Posts:
    219
    @jukka_j is there a way to trigger something like a defragmentation process? Because this means it is deterministic that the Unity WebGL player will eventually crash on us, given enough time and enough allocations/deallocations, since we have zero options for dealing with fragmentation. We need a robust way to handle this for our application, which has to run for long periods with multiple 3D texture loads (big CT scans) and mesh loads (OBJ files). These can take chunks of over 200 MB, and this fragmentation issue is making the app crash on just the third or fourth load of a new 3D texture (even when the previous one is destroyed with Destroy(texture)).

    We are running out of options here. Is there a way to trigger some kind of defragmentation procedure every now and then?
     
  4. suntabu

    suntabu

    Joined:
    Dec 21, 2013
    Posts:
    76
    Actually, it may be impossible, as Unity uses wasm for its WebGL builds; you could check the details here, which is still an open issue.
     
    jukka_j likes this.
  5. jukka_j

    jukka_j

    Unity Technologies

    Joined:
    May 4, 2018
    Posts:
    944
    Apologies, I had missed this reply. This comes quite late, so I'm not sure whether it is still useful to you, but in case someone else stumbles onto this:

    There are no defragmentation capabilities currently implemented in Unity WebGL builds. But before treating that as a limitation or a problem, we should not jump to conclusions.

    Individual memory allocations always need to be contiguous (i.e. linear). When a WebAssembly build runs "out of memory", two things have simultaneously happened:

    1. growing the total size of the wasm heap from the outside is refused by the browser (because the browser has itself run out of memory to grow into, and/or it is imposing a limit on the maximum allowed memory for whatever reason; see the "here" link in suntabu's reply above); and
    2. inside the wasm heap, we ran out of contiguous address space to allocate the needed amount of linear memory to place the allocation to.

    Both of these conditions must hold for an out-of-memory (OOM) crash to occur.

    Now, when we get an OOM, the heap will always have "some amount" of fragmentation in it. How much this "some amount" is can vary greatly depending on the game's memory usage patterns. In the cases that we have profiled first-hand so far, we have not seen enough address space wasted to fragmentation to conclude that fragmentation caused all of the memory pressure.

    In fact, in most cases that we see (the link above), it is the browser that refuses to grow the application heap further, limiting the application to no more than about 300MB-500MB of memory.

    That being said, after we update to Emscripten 2.0.19, which should land in the Unity 2021.2 Beta channel once it becomes available, a tool called "--memoryprofiler" will be available for Unity applications. It is activated by adding the "--memoryprofiler" build flag to the .emscriptenArgs field in the build settings (a small editor-script sketch follows after the list below). The tool shows a visual fragmentation map of the application and can help developers answer the questions:
    - is my OOM caused by wasm heap fragmentation? or
    - is my OOM caused by browser refusing to give me any more memory? or
    - is my application just using ridiculous amounts of memory (2GB+/4GB+) that won't work in wasm32?
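
    For reference, the flag could also be set from a small editor script; the following is only a sketch, and the flag requires a Unity version that ships Emscripten 2.0.19 or newer:

    Code (CSharp):
    #if UNITY_EDITOR
    using UnityEditor;

    public static class EnableWebGLMemoryProfiler
    {
        [MenuItem("Tools/WebGL/Enable --memoryprofiler")]
        static void Enable()
        {
            // Append the flag to the extra Emscripten arguments used for WebGL builds.
            if (!PlayerSettings.WebGL.emscriptenArgs.Contains("--memoryprofiler"))
                PlayerSettings.WebGL.emscriptenArgs += " --memoryprofiler";
        }
    }
    #endif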

    Keep an eye out for the Unity 2021.2 Beta when it becomes available; if all goes well it will be paired with Emscripten 2.0.19, and we can help developers do this kind of analysis on their builds.
     
    Samasab and De-Panther like this.
  6. Cleverlie

    Cleverlie

    Joined:
    Dec 23, 2013
    Posts:
    219
    Hi, thanks for the reply. We are sure the fragmentation problem occurs because of the big loads of CT scan data. We managed to work around it by using the newest APIs to load the texture data directly from a buffer with
    SetPixelData()
    . The problem still lies in the fact that to load, say, a CT scan you need to keep an array of bytes in memory as a buffer, plus the Texture3D itself. We pass the pointer to the byte buffer to the JavaScript application that holds the unityInstance, JavaScript then loads whatever we need, such as a new CT scan, into this "reserved allocated memory chunk", and then we redo the
    Code (CSharp):
    volumeTexture.SetPixelData<byte>(pixelDataBuffer, 0);
    volumeTexture.Apply(updateMipmaps: false);
    So the only way to work around the fragmentation issues is to always keep using the same buffer and reuse the same Texture3D. This limits us, because once you have defined your initial Texture3D dimensions you are locked into that size; you can't just create another texture with different dimensions, because it would occupy a new place in memory. If you try to destroy the previous texture so you can free that part of memory, something in the middle of the process might allocate one byte or four bytes for a temporary float or whatever, and then you are screwed: you can't fit a 500 MB texture into a 499.99 MB hole, so you end up with a 499.99 MB bubble in the memory layout.
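
    In outline, the reuse pattern looks roughly like this (the dimensions and format are placeholders; they have to be sized for the largest scan we ever expect, since the texture is never recreated):

    Code (CSharp):
    using UnityEngine;

    public class CtScanVolume : MonoBehaviour
    {
        // Placeholder dimensions; fixed once at startup.
        const int W = 512, H = 512, D = 512;

        Texture3D volumeTexture;
        byte[] pixelDataBuffer; // JavaScript writes new scans into this buffer via its address

        void Awake()
        {
            volumeTexture = new Texture3D(W, H, D, TextureFormat.R8, mipChain: false);
            pixelDataBuffer = new byte[W * H * D];
        }

        // Called after JavaScript has written a new CT scan into pixelDataBuffer.
        public void RefreshVolume()
        {
            volumeTexture.SetPixelData(pixelDataBuffer, 0);
            volumeTexture.Apply(updateMipmaps: false);
        }
    }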

    If there were a way to tell Unity to defragment memory and reassign all pointers so that all linear space becomes available again, that would be the solution for us. Remember we are limited to a maximum of 2 GB of RAM, because the array buffers used for the heap can't be larger than that in JavaScript, or something like that; we could also use an improvement on that side so our WebGL app is not limited by that outdated cap. The app runs in an Electron wrapper as a standalone exe (it runs Chromium behind the scenes), so we have full control over whether the browser wants to give us more RAM or not; the problem is the 2 GB hard cap.
     
  7. De-Panther

    De-Panther

    Joined:
    Dec 27, 2009
    Posts:
    552
    When manipulating large textures in Unity WebGL, a "native" plugin would work better.

    One option is to write a JavaScript plugin, get the Unity texture pointer, and use it to update the texture from JavaScript.

    Another option would be to use JavaScript and get the texture pointer as in the first option, but do the texture manipulation in a separate wasm module, maybe even running it in a web worker if it takes a lot of time.
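
    A rough sketch of the C# side of the first option (UpdateVolumeTexture is a hypothetical function that would live in a .jslib plugin; on the JavaScript side it would resolve the id Unity returns to the GL texture and upload new pixels, keeping the pixel data outside the wasm heap):

    Code (CSharp):
    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;

    public class ExternalTextureUpdater : MonoBehaviour
    {
        // Hypothetical function exported from a .jslib plugin.
        [DllImport("__Internal")]
        static extern void UpdateVolumeTexture(IntPtr textureId, int width, int height, int depth);

        public Texture3D volume;

        public void PushUpdate()
        {
    #if UNITY_WEBGL && !UNITY_EDITOR
            // On WebGL, GetNativeTexturePtr() returns an id the plugin can resolve to the GL texture.
            UpdateVolumeTexture(volume.GetNativeTexturePtr(), volume.width, volume.height, volume.depth);
    #endif
        }
    }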
     
  8. jukka_j

    jukka_j

    Unity Technologies

    Joined:
    May 4, 2018
    Posts:
    944
    Unfortunately, defragmentation is not something that can be fitted in lightly. Making any program code defragmentation-aware would mean that all of the code must switch from using plain pointers and references to a handle-based mechanism for referencing data (or some kind of global pointer-to-pointee registry table), which would add the extra indirection needed to allow memory relocation to happen under the hood.

    This "pointers to handles" transition/extra indirection is not something that can be changed with e.g. a build flag. Since virtual memory is available on all other platforms in existence, Unity (and practically every other modern engine as well) has not been designed to work with this kind of handle-based indirection mechanism, but all code directly access the memory locations where data reside. Changing this now would mean rewriting millions of lines of code.

    We are looking for solutions to this issue, e.g. by building an API to support offloading allocations outside the Wasm heap for large allocations.

    As a workaround, I would recommend allocating up front the largest buffer/texture you expect to need, and then always operating on that. Or try offloading the work to JavaScript outside the wasm heap, as De-Panther mentions above.
     
    Cleverlie and DerrickBarra like this.