
Official Dynamic shader variant loading

Discussion in 'Shaders' started by aleksandrk, Sep 21, 2022.

  1. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Greetings from the Shaders team!

    Unity 2023.1.0a11 brings dynamic shader variant loading. This feature allows you to manage the runtime memory usage of shaders.

    The bulk of shader memory consumption in the player runtime comes from two areas: variant parameters and variants themselves. When we do a build, we compile individual variants for each shader, pack their parameters and the code together into chunks and compress each chunk individually. When a shader gets loaded, we decompress all chunks, load all variant parameters and prepare all variants for compilation by the GPU driver when rendering requests them.

    Dynamic shader variant loading enables dynamic decompression of the chunks mentioned above and exposes control over two settings: the size of chunks during the build in megabytes and the maximum number of chunks that are kept decompressed simultaneously for each shader at runtime. Both settings can be configured globally and overridden for each platform. The default values are 16 megabytes per chunk, unlimited chunks. We treat 0 chunks as "no limit".
    Additionally, you can use Shader.maximumChunksOverride to override the chunk limit at player runtime for any shaders loaded after changing this value. The default value is -1, which means "do not override". Setting this property to a positive value sets a fixed limit on the number of loaded chunks; 0 is treated as "no limit", similar to the build-time setting.
    When all variants of a shader fall within the chunk limits, we preload all variants, as in the default case.

    Please note that these limits only affect variants themselves, and have no effect on variant parameters.
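    As a rough illustration, the build-time settings described above can be set from an editor script. This is only a sketch assuming the PlayerSettings chunk API; verify the exact names against your Unity version's scripting reference:

```csharp
// Editor-only sketch: configure dynamic shader variant loading at build time.
// Assumes the PlayerSettings shader chunk API mentioned in this thread.
using UnityEditor;

public static class ShaderChunkConfig
{
    [MenuItem("Tools/Enable Dynamic Shader Variant Loading")]
    static void Enable()
    {
        // 1 MB chunks, at most 1 decompressed chunk per shader.
        // A positive chunk count enables dynamic loading (0 = "no limit").
        PlayerSettings.SetDefaultShaderChunkSizeInMB(1);
        PlayerSettings.SetDefaultShaderChunkCount(1);
    }
}
```

    At player runtime, setting `Shader.maximumChunksOverride = 2;` would then cap shaders loaded afterwards at two decompressed chunks, and `-1` restores "do not override".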

    We measured the memory savings and performance implications on two projects, Boat Attack and an artificial scene with one shader that has 30 000 variants. Measurements were taken with two sets of settings: default (up to 16 MiB chunk size, unlimited chunks) and with dynamic variant loading enabled (up to 1 MiB chunk size, up to 1 chunk loaded per shader). All measurements were performed on a MacBook Pro M1.
    Memory usage in the artificial scene was reduced from 122.9 MiB (default) to 47 MiB (dynamic loading), a 61.8% reduction; in Boat Attack, from 315 MiB (default) to 66.8 MiB (dynamic loading), a 78.8% reduction.
    Initial loading is faster with dynamic variant loading as well: the artificial scene loaded the heavy shader in 41.58 ms (dynamic loading) instead of 64.68 ms (default), 35.7% faster; Boat Attack loaded its shaders in 46.89 ms (dynamic loading) instead of 114.4 ms (default), 59% faster.
    Of course, this is not entirely free. Loading individual variants when they are required takes roughly 10% more time: 0.25 ms per variant with dynamic loading and 0.23 ms per variant with default settings.

    We plan to backport this to 2022 and 2021.3 LTS.

    Stay tuned for more!
     
  2. DavidZobrist

    DavidZobrist

    Joined:
    Sep 3, 2017
    Posts:
    239
    @aleksandrk

    Thanks, sounds good. Waiting for the backport to 2021 LTS.
    I updated the project from 2019 LTS to 2021 1.5 weeks ago, increasing the build time by xx% because the shader variant count went into the millions.

    Any idea when we can get a fix for 2021 LTS?
     
  3. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    This looks to be a change to runtime shader loading. So it won't affect compile times at all.
     
  4. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    This is correct.
     
  5. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    By the way, Dynamic variant loading will also be available in 2022.2.0b10, 2022.1.21f1 and 2021.3.12f1.
     
  6. fendercodes

    fendercodes

    Joined:
    Feb 4, 2019
    Posts:
    204
    Do we have to do anything to enable it or it just works straight away?
     
  7. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @drupaljoe you need to change the chunk settings to enable it. By default it's 16 MiB chunks, 0 chunks (unlimited). If you change the number of chunks to a positive value, it will be enabled.
    They are available in Player settings.
     
    DavidZobrist likes this.
  8. AdionN

    AdionN

    Joined:
    Nov 6, 2019
    Posts:
    16
    Just to make sure, it will be available in 2022.2.0+ as well, i.e. all versions that come from now on? Will dynamic shaders work on WebGL as well? I am already waiting for Monday to try it out :)
     
  9. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Yes, it should work on all platforms in all 2022.2 versions starting from beta 10.
    And on 2021.3 patch 12 and later.
     
  10. AdionN

    AdionN

    Joined:
    Nov 6, 2019
    Posts:
    16
    Greetings. So I am working with Unity 2022.2.1 now (though the same goes for all previous versions) and the Addressables package. Here I have a problem; this is the workflow I expect to work:
    1) Go to Player Settings -> Graphics -> Save Shader Variant Collection.
    2) Add that SVC file to Addressables.
    3) Build Addressables with the Addressables scenes.
    4) In Player Settings -> Other Settings, set the shader default chunk size to 16 and default chunk count to 2.
    The expected behavior is that when the scene starts and loads, only the needed shaders are loaded into memory. However, what I get is shaders using over 300 MB.



    With custom shader stripping code I am able to reduce it to ~100 MB.

    So my question is: am I doing something incorrect? Is it a problem with Addressables? Or is there something I am missing to use dynamic shader loading?

    Extra info:
    Platform: WebGL
    43 shaders, 91 variants used in total.
    In the Editor, URP Lit uses 0.8 MB.
     
    Last edited: Dec 21, 2022
    avataris-io likes this.
  11. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Hi!
    You need to set the chunk parameters before building the addressables - the chunk size setting specifically affects the build.
    Try setting the chunk size to 1MB - does it take less memory after doing that?
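    For instance (a sketch only, assuming the Addressables editor API; names may differ between package versions), the chunk settings can be applied in the same editor script that kicks off the content build, so the build can never run with stale values:

```csharp
// Sketch: apply chunk settings immediately before the Addressables build,
// since the chunk size specifically affects how the build packs shader data.
using UnityEditor;
using UnityEditor.AddressableAssets.Settings;

public static class BuildWithShaderChunks
{
    [MenuItem("Tools/Build Addressables With 1 MB Shader Chunks")]
    static void Build()
    {
        PlayerSettings.SetDefaultShaderChunkSizeInMB(1); // applied at build time
        PlayerSettings.SetDefaultShaderChunkCount(1);    // enables dynamic loading
        AddressableAssetSettings.BuildPlayerContent();   // then build the content
    }
}
```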
     
  12. AdionN

    AdionN

    Joined:
    Nov 6, 2019
    Posts:
    16
    First of all, thank you for your time,

    I know that this is more problem with Addressables than dynamic shaders, but I am very thankful for your help.

    After reducing the chunk size to 1 MB, the memory size of URP/Lit got reduced:
    - Universal/Lit shader packed in a separate Addressables group together with the shader variant collection: 67.6 MB.


    - Universal/Lit shader NOT added manually to an Addressables group, but the shader variant collection added manually to an Addressables group: 273.8 MB.


    From these results I would guess that Addressables loads all possible shader variants for specific keywords. When everything is in the same Addressables group, it loads less. But regardless, the size of the loaded shaders in memory is still way too big, so I can only guess that dynamic shader loading does not work fully with it.

    For example, in the Editor URP/Lit reports about 0.4 MB of memory (as the Editor does not use Addressables directly).
     
  13. SpaceToastDotNet

    SpaceToastDotNet

    Joined:
    Sep 14, 2020
    Posts:
    12
    On iOS, this literally slashed my app's total memory usage in half. Fantastic work, crew.
     
    dnach and aleksandrk like this.
  14. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,910
    Firstly, I'd like to say that I have very limited knowledge of how shaders are processed, so my question may make no sense (understandably): could you explain why decompressed chunks stay in memory? If we're done with them once they are uploaded to the GPU, I don't understand why they remain in memory. If we need them while the shader is in use, then if we set the maximum number of chunks to 1 and multiple shader variants (chunks) are used by the scene, won't these chunks fight for the single available chunk slot, causing an infinite loop?
     
  15. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    A single chunk usually contains multiple variants of the same shader. Variants are loaded on demand, so as soon as you, for example, move or turn the camera, a new object can come into view and it may need a variant that wasn't loaded yet.

    So no, we don't need them while the shader is in use, we need them only when loading an individual variant.
     
    yasirkula likes this.
  16. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,910
    OK so it's like cache prefetching in a way, got it! Thanks for the explanation :) 0.02 ms difference per variant with default settings sounds like a very insignificant con so the pros heavily outweigh the cons IMO, thanks for the new feature!
     
    aleksandrk likes this.
  17. Wully

    Wully

    Joined:
    Mar 18, 2014
    Posts:
    15
    This is great, we are fighting with shader variants and this helped to reduce our memory usage on scene load by 50%

    Would you be able to give more detail on the maximum chunks value?
    the maximum number of chunks that are kept decompressed simultaneously for each shader at runtime.

    Kept decompressed where? In memory I assume?
    What happens to chunks that are no longer needed, or when the number of chunks needed is greater than maximumChunks?
    If unneeded chunks stay in memory, are they compressed again to save memory?

    Finally, is there any way we can gauge a good chunk size and max number of chunks?
    It sounds like once stuff is loaded into memory its not unloaded from your other comments. So if we know we have lots of variants, because we aren't stripping them yet, setting the chunk size to 1MB and 1 chunk max would theoretically load in the fewest variants we need and avoid the bloat from the unused variants?
     
  18. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    That's correct.

    We keep the compressed data in memory if we don't load everything at once. Shaders have a very high compression ratio, so it doesn't cost much.
    The least recently used chunk is simply unloaded. Since the compressed data is readily available, it will get decompressed again if needed.

    You need to decide what "good" looks like first :)
    If your goal is to reduce memory usage above all else, 1 chunk with 1MB is perfect, as it will, indeed, keep the least potentially unnecessary data around. This may come with increased load time or a bit longer frame times when a variant is needed that is in a chunk that is not currently decompressed.
    From our tests, not loading all those variants up front usually saves more loading time than what's spent later on decompression, but this is definitely HW-dependent, so you should profile on your lowest-end target devices to make informed decisions.
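    A crude way to compare configurations on a target device is to time the first warm-up of a variant collection under each setting. This is only a sketch; the serialized ShaderVariantCollection field is a placeholder for whatever collection your project uses:

```csharp
// Sketch: measure how long the first warm-up takes on the target device.
// Rebuild with different chunk settings and compare the logged times.
using System.Diagnostics;
using UnityEngine;

public class WarmupTimer : MonoBehaviour
{
    [SerializeField] ShaderVariantCollection variants; // placeholder asset

    void Start()
    {
        var sw = Stopwatch.StartNew();
        variants.WarmUp(); // forces variant load + driver-side creation
        sw.Stop();
        UnityEngine.Debug.Log($"WarmUp took {sw.Elapsed.TotalMilliseconds:F2} ms");
    }
}
```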
     
    Beauque, Wully and SpaceToastDotNet like this.
  19. Wully

    Wully

    Joined:
    Mar 18, 2014
    Posts:
    15
    Thank you for the information, that really helps!
     
  20. KarlKarl2000

    KarlKarl2000

    Joined:
    Jan 25, 2016
    Posts:
    611
    Hi @aleksandrk

    Will the "dynamic shader variant loading" help improve build times? (Not at runtime) .. sorry I'm new to all this.

    Can we trouble someone on the Unity team to make tutorials on how to reduce the shader variants compilation time? Or at least best practices?

    I've been trying to make a build the whole day .. my pc is still building.. it's probably at 9 hours now. :oops:

    I'm clearly not doing something right..

    Unity 2021.1.28f1

    Screenshot 2023-01-26 234043.jpg

    Thanks for the help
     
    Last edited: Jan 27, 2023
  21. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,560
    You're on 2021.1, not 2021.3. 2021.3 is the LTS (Long Term Support) version, which gets long-term maintenance and sometimes backported features. So you'll have to upgrade to that version if possible. Just make a backup of your project first, in case something in your project uses API that is obsolete in 2021.3, so you can easily revert if you need to.
     
    KarlKarl2000 likes this.
  22. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    KarlKarl2000 likes this.
  23. KarlKarl2000

    KarlKarl2000

    Joined:
    Jan 25, 2016
    Posts:
    611
    Immu and aleksandrk like this.
  24. qwert024

    qwert024

    Joined:
    Oct 21, 2015
    Posts:
    43
    Hi, first I have to admit I have little knowledge of shaders, so it is possible that my question might not make much sense.
    I was wondering how "dynamic shader variant loading" and "shader variant prewarming" can work together properly?

    Our game lets users upload their own 3D models. I ran the game, loaded those models, created a shader variant collection, and added it to Preload Shaders in Graphics Settings or called WarmUp() in Awake() to prewarm it.
    However, variants in that collection are taking nearly 300 MB of memory. We've tried to strip the variants, but in our case it's really hard to strip more.
    After seeing this new feature, "dynamic shader variant loading", I am testing it now. With my collection placed in Preload Shaders in Graphics Settings, I set chunk size: 16 MB & chunk count: 5. The build indeed uses less runtime memory (220 MB) than before.
    But what I can't understand is:

    From what I read in How Unity loads and uses shaders doc:
    1. When Unity loads a scene or a runtime resource, it loads all the compiled shader
      variants for the scene or resource into CPU memory.
    2. By default, Unity decompresses all the shader variants into another area of CPU memory. You can control how much memory shaders use on different platforms.
    3. The first time Unity needs to render geometry using a shader variant, Unity passes the shader variant and its data to the graphics API and the graphics driver.
    4. The graphics driver creates a GPU-specific version of the shader variant and uploads it to the GPU.
    and:
    • To avoid visible stalls at performance-intensive times, Unity can ask the graphics driver to create GPU representations of shader variants before they’re first needed. This is called prewarming.
    My understanding is that prewarming happens in step 4 (creating a GPU-specific variant), but not in step 1 (loading variants into CPU memory).

    Will "dynamic shader variant loading" make some variants not load? (My 300 MB was reduced to 220 MB.) If so, is my prewarming still working effectively? (If the variants are not in CPU memory, how do I prewarm them?)

    It would be awesome if you would help me figure it out. Thank you for reading!
     
    james_cg and Deleted User like this.
  25. latrodektus

    latrodektus

    Joined:
    Nov 1, 2016
    Posts:
    13
    Anyone care to share their suggested values for relatively simple games, like Boat Attack, targeting iOS, Android and WebGL?
    for PlayerSettings.SetDefaultShaderChunkCount & PlayerSettings.SetDefaultShaderChunkSizeInMB
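    For reference, here's roughly what those calls (plus their per-platform variants) look like; the 4 MB / 2 chunk numbers are purely illustrative placeholders to profile against, not tested recommendations for any project:

```csharp
// Sketch: per-platform chunk settings via the APIs named above.
// The values are placeholders; profile on real devices before committing.
using UnityEditor;

public static class MobileChunkDefaults
{
    [MenuItem("Tools/Set Mobile Shader Chunk Defaults")]
    static void Set()
    {
        PlayerSettings.SetShaderChunkSizeInMBForPlatform(BuildTarget.iOS, 4);
        PlayerSettings.SetShaderChunkCountForPlatform(BuildTarget.iOS, 2);
        PlayerSettings.SetShaderChunkSizeInMBForPlatform(BuildTarget.Android, 4);
        PlayerSettings.SetShaderChunkCountForPlatform(BuildTarget.Android, 2);
        PlayerSettings.SetShaderChunkSizeInMBForPlatform(BuildTarget.WebGL, 4);
        PlayerSettings.SetShaderChunkCountForPlatform(BuildTarget.WebGL, 2);
    }
}
```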
     
  26. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    739
    I was hoping to be able to discard my current (messy) solution for optimizing runtime shader memory, but after setting the two values to 1, it seems to have made a difference, but not enough of a difference. I would expect memory to be around 2.7 MB from my testing based on what actually should be loaded in, but I am getting around 7 MB per shader instead. Is it keeping variants around that aren't needed anymore? Or maybe somehow loading more than necessary? Is there anything I can do to improve memory usage further here?

    @aleksandrk
     
  27. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @joshuacwilde there are two blocks of data: one is the shader code itself (bytecode, GLSL, etc., whatever the graphics API requires) and the other is the parameters (all uniforms: their names, types, where to bind them and other such information). All parameters for all variants are loaded into memory; there's no way for you to control this part. What dynamic shader variant loading controls is the shader code - it's loaded only on demand, in chunks as specified in the settings.
    I suppose the extra memory comes from the parameters in your case.
     
  28. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    739
    Ok, still not sure how that adds up though. Are you saying the parameters as in if the shader has a uniform of 100 float4s, then it will allocate sizeof(float)*4*100*shaderVariantCount?
     
  29. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    No, it's per binding. If you have a hundred uniforms in each variant, you'll have some data about each of them in each variant. Again, it's not the data that's going to be used for this uniform, but the information required to tell the shader where to find this data and do the necessary checks/conversions when necessary. If you have 100 different floats in each variant, it won't be sizeof(float) * 100 * variantCount.
    For example, if you have a constant buffer and a uniform value inside, this binding info most likely includes at least the constant buffer name, the uniform name, the uniform type, its offset in the constant buffer, etc.
     
  30. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    739
    I see. Would love to see that get reduced in the future to only for variants loaded in, as that sounds quite possible. We have many variants that a player will never use due to never using those associated graphics settings, as I'm sure many other games do. Thanks.
     
  31. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    I tried to do that earlier and, unfortunately, it's not as simple as it sounds.
     
  32. swantonb

    swantonb

    Joined:
    Apr 10, 2018
    Posts:
    177
    Hi folks! From my understanding, dynamic shader variant loading should work out of the box with Unity 2022.3 too, right?

    If yes then, well, the size of my URP/Lit shader is 1.9 GB in memory

    If not then please let me know what is the extra step required to make it work.

    In my project I give the player the option to switch from forward to deferred rendering: I have a few URP renderer assets in the URP asset, and I set the camera to use different renderer assets. I'm saying this because I surely get both the forward and deferred variants in my build, but dynamic loading should take care of that and load only the variants that are in use (forward or deferred, based on what the player chose in options), right?

    Thank you!


    Edit: yup, the thing seems to work. I set the default chunk size to 1 MB and the size of the Lit shaders got much smaller indeed:


    https://gyazo.com/39c6ebcae7e316201d1a83dfcf1f2aa6

    However, could you give me any tips on making it even smaller than this?
     
    Last edited: Aug 21, 2023
  33. james_cg

    james_cg

    Joined:
    Nov 13, 2019
    Posts:
    17
    Hi @aleksandrk

    In my case I have about 2000 variants in an SVC, and memory usage increased by 300 MB after prewarming to avoid stalls, just like @qwert024 posted before.

    Will "dynamic loading" unload what I have prewarmed before, and cause stalls when I use some variant (like creating a model or effect)?

    In short, can I use both "SVC prewarm" and "dynamic shader variant loading" in my game?
     
  34. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @swantonb dynamic variant loading is off by default. You need to adjust the number of chunks parameter to a value greater than 0 to see the effect.

    @james_cg dynamic variant loading only affects variants that are not yet warmed up. On top of that, it only affects shaders that are loaded after the setting is changed.
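    Put together, a sketch of that ordering might look like the following (the serialized field name is made up; only the order of operations is taken from the answer above):

```csharp
// Sketch: prewarm critical variants first, then limit chunks for shaders
// loaded later. Warmed-up variants are unaffected by the override, and the
// override only applies to shaders loaded after it is set.
using UnityEngine;

public class ShaderSetup : MonoBehaviour
{
    [SerializeField] ShaderVariantCollection startupVariants; // placeholder asset

    void Awake()
    {
        startupVariants.WarmUp();         // prewarm the stall-sensitive variants
        Shader.maximumChunksOverride = 1; // cap chunks for later-loaded shaders
    }
}
```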
     
    james_cg likes this.
  35. swantonb

    swantonb

    Joined:
    Apr 10, 2018
    Posts:
    177
    Yes, I said in my post that I set the default chunk size to 1 MB and the size of the Lit shaders got much smaller indeed. However, could you give me any tips on making it even smaller than this?

    Because the size is still too big, as you can see in my screenshot
     
  36. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    It won't get smaller than that without stripping variants.
     
  37. swantonb

    swantonb

    Joined:
    Apr 10, 2018
    Posts:
    177
    What do you mean by stripping? I'm pretty sure the unused variants are stripped.

    So you're saying 0.8 GB for shaders is the best that can be done?
     
  38. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    It's unlikely (although not impossible) that your project is really using that many variants. Even if a single variant is 100 KB (and I think they are smaller in case of URP/Lit), 0.8 GB would translate into ~8000 variants.
     
  39. swantonb

    swantonb

    Joined:
    Apr 10, 2018
    Posts:
    177
    Well, that sounds about right; I have around 10,000 materials.

    Can I have your best tips to reduce variant count, please?
     
  40. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Do the materials differ much in the features they use?
    Here's a thread that links to documentation about reducing variant count: #1, check `Build-time shader variant stripping`.
     
  41. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    12,015
    I'm having trouble figuring out what are good values to enable this feature. I don't feel like the docs or this thread provide enough info so I can decide what are good chunk count and size values for my project. Any pointers?
     
  42. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @AcidArrow If your goal is to reduce the memory usage, I'd recommend setting it to 1MB and 1 chunk and checking whether there are any extra hitches while loading. If there are none, you're done :) If there are, try increasing chunk size first (I wouldn't recommend going over 16MB for mobile).
     
    sameng, wwWwwwW1 and AcidArrow like this.
  43. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    12,015
    Appreciate the info, but this prompts another question: Could I have other goals when using this feature?

    From the way it is presented and my understanding of it, it seems it could potentially greatly reduce RAM usage, with a tradeoff of maybe making hitches worse (a bad configuration would introduce extra ones; a good one would maybe make the "existing" ones last a bit longer). Are there configurations where a completely different goal (and different tradeoff) is achieved?
     
  44. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    The intended use case is, indeed, runtime memory usage reduction.

    However, smaller chunk size leads to slightly worse compression ratio for shader data (and we do compress it). It, of course, depends on the shader and on the target graphics API.
    It can also reduce the loading time as we don't pre-create all the variants in memory, but loading an individual variant for the first time takes longer (around 10% slower on a mobile device if I remember correctly). If the needed chunk is not in memory, it has to decompress the data first, which can lead to additional delays.

    So, changing these settings can influence the build size and the loading times.
     
    sameng and AcidArrow like this.
  45. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,650
    Can I use dynamic shader variant loading to bypass the entire shader stripping/building process that occurs when creating a player?

    It would be tremendously useful for me if this is supported, as it would significantly reduce the iteration time when testing a player. When the shader variants are loaded and compiled asynchronously at runtime, it would not pose much of a visual problem for development builds for me.
     
  46. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @Peter77 no, unfortunately not, as it's not talking to the Editor at all.
    We know it's a very useful feature to enable - and there was an attempt to do this earlier.
     
  47. sameng

    sameng

    Joined:
    Oct 1, 2014
    Posts:
    185
    Hi, thanks so much for this feature @aleksandrk !!
    And thank you SO MUCH for backporting it.

    I've carefully re-read the thread as well as documentation multiple times but there are a lot of assumptions that I do not have the prerequisite knowledge for, nor are there any resources linked to learn more, and I don't know what to search online for.

    Could you please provide more information regarding these settings for those of us who aren't as knowledgeable?

    The only thing I've really gleaned is:
    Try 1 MB - 1 chunk. If it stutters, try increasing the chunk size.

    CHUNK SIZE
    1. The default was 16MB, and the suggestion is to try 1MB.

    • Can you please explain the large difference from the default--what were the benefits of the default 16 MB vs 1MB?
    • What are the tradeoffs between a larger chunk size and smaller chunk size?
    • Is there any reason for a user to choose 4MB? Or 8MB? Is there any reason to set a larger size like 32MB?
    • My standard shader is taking 500 MB at runtime -- what would be a potential recommendation for this?
    • Does this affect when shaders get Unloaded as well?

    CHUNK COUNT
    2. Your recommendation is Chunk Count 1, seems like a big change from Unlimited. Could you explain?

    • Why is this recommended to try? And why is the recommendation to increase Chunk Size vs Chunk Count?
    • Could you please explain in what scenario would a user set Chunk Count 2? Or 4? Or 8?
    • Does Chunk Count change performance?
    • Is there any reason to set unlimited chunks with this feature?

    PERFORMANCE
    3. The chunks are decompressed -- I assume CPU usage? Could you please give a little more info about this?

    • Does increasing Chunk Size increase or decrease the decompression CPU usage? You mention it might be negligible?
    • Is this cost per-frame, or is it only upon loading a new shader?
    • Does increasing Chunk Count increase or decrease the CPU usage?

    I apologize for all the questions. The settings are a little opaque to me, perhaps because I don't have the prerequisite knowledge nor do I know what to search. I want to make an informed decision when setting these. I will do further testing for my project as well -- but my build takes hours, and changing these settings around takes a lot of time.

    Thank you! Please take your time to answer and thank you for the hard work!!
     
  48. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    16 MB was the size of chunks before this feature was introduced. We chose not to change the default in order to avoid potential regressions when people upgrade from previous Unity versions.

    Larger chunk size means higher memory consumption and higher likelihood of hitting a chunk that's ready when rendering needs a new shader variant (which, in turn, means getting the data to the driver faster). There is also a small benefit for the disk size when using larger chunks because chunks are compressed independently of each other. Smaller chunk size has the opposite effects.

    Yes, if you prefer faster variant loading over increased memory usage. An additional thing to consider is that a chunk being decompressed requires a contiguous region of memory to decompress into. If the device doesn't have enough free memory due to fragmentation, the app will crash.
    This decision may vary based on the target platform - you may want to spend more memory on desktops and choose memory usage over loading time on mobile.

    The recommendation doesn't change much with the memory footprint of the shader. All large shaders will benefit from using dynamic variant loading if you're OK with trading loading time for memory usage.

    No. If a shader gets unloaded, all its data gets unloaded as well.

    This feature was introduced to reduce the runtime memory usage of shaders. Most of the time only a small subset of the variants that get compiled are actually used, and any memory that holds the unused variants is not put to the best use. The lower the number of chunks, the lower the memory footprint of the shader. Unlimited is another default that was set in order to not introduce any regressions when upgrading.

    There's little difference between increasing the chunk size vs increasing the chunk count. In the end, it will use more memory for the shaders that do not fit into the set limit.

    If a chunk that contains the variant is not ready, we need to decompress it. This may involve evicting the least recently used chunk from the "ready chunks" cache. The more chunks you allow to stay in memory, the more cache misses can be served without evicting older chunks. But this operation is not very costly.

    Yes, if you want to disable it :)

    Yes, this is done on the CPU.

    The more data you have to decompress, the longer it will take. It can become non-negligible at some chunk size depending on the HW.

    This cost is paid once each time we're hitting a chunk that's not in the cache.

    Decompressing two chunks costs more than decompressing one chunk that's 2x the size. By how much is very HW dependent.
     
  49. sameng

    sameng

    Joined:
    Oct 1, 2014
    Posts:
    185
    Thank you so much for the answers. Super super helpful, cheers!

    I've written a summary based on my understanding. Please chime in if I got anything incorrect.
    I've marked the ones I'm unsure about with a (?)


    By default, Unity will load all shader variants into memory.
    This can cause big memory usage by shaders, as the memory is taken up by a lot of unused variants.

    Dynamic Shader Variant Loading gives us the ability to not load all variants into memory!
    + This can drastically reduce memory usage at runtime.
    + This can also greatly reduce scene load times.
    - This could introduce stuttering when a new shader is needed.

    Setting Chunk Count to 1 means that Unity will only load the 1 Chunk that's needed. (?)
    + This reduces load time, and reduces runtime memory usage.

    However, whenever a shader variant is needed, Unity will need to load that Chunk.
    The default is 0, which means Unity will load all Chunks for all requested shaders.

    If we set the Chunk Count to 1, that means Unity will only load 1 Chunk for each Shader variant at startup. (?)
    + Now Unity is not loading all variants into memory. This can greatly reduce memory usage!
    + This can decrease initial scene load time as well.
    - This may add stuttering/hitches when a new variant is needed -- for example, when an object first appears.
    - This hitch happens only once, when that variant is loaded.

    At Chunk Count 2, that means Unity will load at minimum 2 Chunks for each Shader variant (?)
    By increasing the Chunk Count:
    + This can increase the amount of Chunks loaded, which may reduce stuttering.
    + This can decrease the amount of decompression needed, as more "ready" chunks remain in memory.
    - This can increase memory usage and initial scene load time.

    We can further reduce the memory usage by reducing the Chunk Size to, for example, 1MB. The default was 16MB.
    By reducing the chunk size:
    + We have less wasted memory in each chunk, reducing the amount of wasted chunk space.
    + We load smaller chunks into memory, which can reduce load times.

    However, the smaller chunks come with a worse compression ratio:
    - This can increase build size.
    - This can increase the CPU usage by about 10% when loading chunks.
    - Fewer variants are in memory, which means hitches may be more common.
    - Decompressing a bigger chunk is easier on the CPU than decompressing multiple smaller chunks.


    Summary

    Try 1MB, 1Chunk, and see if you have increased hitches.

    If you have hitches, to reduce it:
    • Increase Chunk Size or increase Chunk Count:
    Which one to increase can depend on the Hardware (?)
    • Increasing Chunk Size is slightly more wasted space in each chunk.
    • Increasing Chunk Count is slightly more CPU usage when decompressing. (?)
     
  50. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Mostly correct :)

    Unity will keep at most N chunks in memory per shader. It will start with a clean state (no chunks loaded at all).

    There's no direct relation between decompression time and chunk count.
    Suppose that the scene ends up using 50 variants from a given shader. If all those variants happen to end up in, say, two 1MB chunks, you'd end up loading only those two chunks. If the chunks are not consecutive, increasing chunk size to 2MB will not be any different from the 1MB case, but it will use more memory and take a bit longer to decompress each chunk.
     
    sameng likes this.