
Scriptable Build Pipeline performance

Discussion in 'Asset Bundles' started by Hyp-X, Jul 18, 2019.

  1. Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    431
    Hi,

    I saw that SBP came out of preview along with Addressables so I decided to give it a try.

    I'm writing this while waiting for my first build using CompatibilityBuildPipeline.BuildAssetBundles.
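    A minimal sketch of the kind of editor script involved, assuming the documented drop-in signature (the output folder and menu path here are illustrative):

    using System.IO;
    using UnityEditor;
    using UnityEditor.Build.Pipeline;

    public static class BundleBuilder
    {
        // CompatibilityBuildPipeline.BuildAssetBundles is intended as a
        // drop-in replacement for BuildPipeline.BuildAssetBundles.
        [MenuItem("Build/Build AssetBundles (SBP)")]
        public static void Build()
        {
            const string outputPath = "AssetBundles"; // illustrative folder
            Directory.CreateDirectory(outputPath);
            CompatibilityBuildPipeline.BuildAssetBundles(
                outputPath,
                BuildAssetBundleOptions.ChunkBasedCompression,
                EditorUserBuildSettings.activeBuildTarget);
        }
    }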

    I have plenty of time because it looks like it's significantly slower than the legacy build system.

    The shader packing stage looks at least 10x slower.

    During the build I see under 10% CPU usage the whole time, dropping below 5% at some points.
    Minimal I/O traffic (the whole project is on an SSD anyway).

    I was hoping that ChunkBasedCompression would get some love, but it looks like it is still single-threaded.
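    For comparison, in the SBP-native path compression is configured on the build parameters rather than through BuildAssetBundleOptions; a rough sketch, assuming the ContentPipeline API (the output folder is illustrative):

    using UnityEditor;
    using UnityEditor.Build.Content;
    using UnityEditor.Build.Pipeline;
    using UnityEditor.Build.Pipeline.Interfaces;
    using UnityEngine;

    public static class SbpBuilder
    {
        public static void Build()
        {
            // Collect the AssetBundle assignments made in the editor.
            var content = new BundleBuildContent(
                ContentBuildInterface.GenerateAssetBundleBuilds());

            var target = EditorUserBuildSettings.activeBuildTarget;
            var parameters = new BundleBuildParameters(
                target,
                BuildPipeline.GetBuildTargetGroup(target),
                "AssetBundles") // illustrative folder
            {
                // LZ4 corresponds to ChunkBasedCompression in the legacy options.
                BundleCompression = BuildCompression.LZ4
            };

            ReturnCode code = ContentPipeline.BuildAssetBundles(
                parameters, content, out IBundleBuildResults results);
            Debug.Log("SBP build finished with " + code);
        }
    }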

    Is there any improvement planned regarding build speed?
     
  2. Hyp-X

    Joined:
    Jun 24, 2015
    Posts:
    431
    Wow, running it again with no asset changes rebuilds everything...
    The old system supported incremental builds and returned almost immediately when nothing had changed.
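    A quick way to sanity-check the incremental path is to time two identical builds back to back; with a working cache the second pass should return almost immediately. A sketch (paths and options are illustrative):

    using System.IO;
    using UnityEditor;
    using UnityEditor.Build.Pipeline;

    public static class RebuildTimer
    {
        [MenuItem("Build/Time Incremental Rebuild")]
        public static void TimeRebuild()
        {
            Directory.CreateDirectory("AssetBundles");
            for (int i = 0; i < 2; i++)
            {
                var sw = System.Diagnostics.Stopwatch.StartNew();
                CompatibilityBuildPipeline.BuildAssetBundles(
                    "AssetBundles",
                    BuildAssetBundleOptions.ChunkBasedCompression,
                    EditorUserBuildSettings.activeBuildTarget);
                sw.Stop();
                // With no asset changes, pass 2 should be close to zero.
                UnityEngine.Debug.Log($"Build pass {i + 1}: {sw.Elapsed.TotalSeconds:F1}s");
            }
        }
    }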
     
  3. Ryanc_unity

    Unity Technologies

    Joined:
    Jul 22, 2015
    Posts:
    332
    Most of Unity's build APIs are main-thread only and don't scale well with CPU count. On the other hand, if you look at the CPU usage during the shader compilation step, that scales really well. We have a few changes coming that make more of the build APIs thread safe so we can scale them up appropriately. One of these is the Archive & Compress API, which in our profiling accounted for nearly a third of total build time.

    As for incremental building, it only rebuilds what is necessary, but unlike the existing BuildPipeline.BuildAssetBundles, the Scriptable Build Pipeline's incremental build / rebuild process is more detailed and thus correctly rebuilds data in all the cases where the existing pipeline failed to do so. What you might be seeing, and mistaking for a full rebuild, is the caching process that validates whether the previous data is still valid or needs rebuilding. There are a few cases that will trigger a genuine full rebuild, such as changing the serialization layout of a class or struct (adding, removing, or renaming a serializable field), as there is currently no way for us to flag a single class or struct as a rebuild dependency.
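
    For example, an edit like the following to any script whose type ends up in a bundle changes the serialization layout and would trigger a full rebuild (hypothetical class, for illustration):

    using UnityEngine;

    public class Enemy : MonoBehaviour // hypothetical example class
    {
        public int health;

        // Adding, removing, or renaming a serialized field such as this
        // changes the class's serialization layout, which currently
        // invalidates the cached build data and forces a full rebuild.
        [SerializeField] private float armor;
    }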

    Overall, performance is something we are working on improving as a whole: more threading support, updated compression libs (the lzma update was a 30% improvement for Archive & Compress), more detailed script dependency tracking, and so on.