
Does the Unity Compile System run single or multi-threaded (or both?)

Discussion in 'Scripting' started by kbm, Apr 10, 2020.

  1. kbm

    kbm

    Joined:
    Aug 7, 2014
    Posts:
    83
    I want to know if the Unity compilation process (meaning: the stuff that happens when I change some script and the Unity Editor reloads) utilises multi-core processors or not. If possible, I would like a definitive answer to this from one of the Unity developers, or from someone knowledgeable enough to answer it.

    I have been doing some googling and have not found a definitive answer from anyone, only vague replies and/or guesswork.

    This is really important, because if the compiler runs on a single thread, the per-core performance of my CPU would be vastly more important than the number of cores. I am buying a new workstation because right now my compile times are up to ~11 seconds, which is just way too high (yes, I am already using .asmdef files), and I would like to know what kind of CPU to invest in.
     
  2. Yoreki

    Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,248
    If it takes 11 seconds, you should have enough time to open the Task Manager and see whether Unity uses considerably more CPU than 1/CoreCount. If it does, it does; if it doesn't, it doesn't.
    I would imagine (which is one of those guesses you mentioned) that it uses one thread per script it compiles, which would also mean it uses multiple threads if multiple scripts are compiled at the same time.
     
    Last edited: Apr 11, 2020
  3. kbm

    kbm

    Joined:
    Aug 7, 2014
    Posts:
    83
    Thanks, I will try that out! I would just very much like to get an answer from someone at Unity so we can all stop guessing what it does.
     
  4. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,021
    The thing is, internal workings/behaviour are often not made public so that, if down the line they find a need to change them, they can do so more easily.

    Altering documented behaviour is harder, because once it's documented, tools may have been written with that documented behaviour in mind. Whereas if it's not documented, tools (well-developed tools, that is) are written under the assumption that the inner workings are subject to change, and take that into consideration.

    This isn't to say this specific case is undocumented for that reason (or undocumented, period... Unity has actually talked publicly about its compiler process many times).

    As it stands, as of Unity 2018.3 they use an incremental compiler (before then you had to include the package yourself), as well as assembly definition files... which you already know about.

    ...

    As for your speed specifically. What are your current specs like?

    I'm not sure how large your project is... but over the years I've worked on projects with LOTS of code in them. We're talking over 1000 code files, in the several megabytes of code. And I've never seen 11 second compile times.

    My current rig, though, is of course a rather new Ryzen 3900X with a max frequency of 4.6 GHz. But this is only a new machine I built a few months ago. My previous build was a 3.5 GHz i7 2700K which I rocked for nearly 10 years, and it is still powering my partner's machine to this day, and we never saw horrendous compile times there either. Even when I've booted up my very old i7-920 (a 13-year-old processor) I still don't get compiles that slow on very large projects.

    The only machines I've ever had slow compile times on were like my wife's laptop when I'd travel. Which, well... being an i3 dual-core clocked in the 2-something GHz range, without a whole lot of RAM, and with a basic platter HDD, it's to be expected.

    NOTE - compiling isn't just processor-dependent. RAM and disk I/O play a role as well. All that code has to be read from disk, and all the compilation has to be pushed in and out of RAM. So keep that in mind when spec'ing out your machine.

    Actually, what is your disk and RAM situation like currently? Because like I said, I've never seen slowness that high (especially when using asmdef files), but I've been on SSDs with a ton of RAM for I don't even know how long now. Heck, even at my day job, my work machine, which uses an HDD, has way slower compile times than when I do my game stuff on my personal rig.
     
  5. PraetorBlue

    PraetorBlue

    Joined:
    Dec 13, 2012
    Posts:
    6,766
    Is it the compilation in your project that's taking a long time, or is it something else, like baking lightmaps?
     
  6. kbm

    kbm

    Joined:
    Aug 7, 2014
    Posts:
    83
    Thanks for your reply!
    My PC specs are as of now:

    CPU: Intel Xeon E3-1231 v3 3.4 GHz
    Graphics: Geforce GTX 970
    RAM: 16 GB DDR3
    Disk drives: SSD Samsung 840 and 850 EVO

    About 4-5 years old, I think... but it shouldn't be that bad with this setup, right?
    My project does indeed have a LOT of files, as I currently use Entitas ECS, which has a (Roslyn-based) code generator that produces 3-4 times as many classes and files as "normal". I guess about 1000 files, sometimes with multiple classes in them, sounds about right for my case.

    I currently use .asmdef files for my external libraries only, but I don't use that many anyway; most of the code is my own.

    Maybe something is seriously misconfigured in my Unity settings?

    Anyway, I am currently thinking about buying exactly the same CPU as you for my new workstation (a Ryzen 3900X), so that seems like a good bet for fast compile speeds?
     
  7. kbm

    kbm

    Joined:
    Aug 7, 2014
    Posts:
    83
    Good question, I have no idea to be honest... how would I find that out? By "compilation" I mean the dotted circle icon in the lower-right corner of the Unity Editor that starts spinning after I change some script. The whole process of the icon spinning, Unity practically freezing for multiple seconds, and it stopping takes about 11-13 seconds each time, which is driving me absolutely nuts.

    When I start a new project with only a handful of script files and maybe one external library, the compile process takes about 2-4 seconds, I guess. Is that normal?
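    One way to measure exactly what that spinning icon is doing, rather than eyeballing it, is to time the compiles from inside the editor. The sketch below is a hypothetical helper (the class name CompileTimer is made up), assuming Unity 2019.1 or newer, where the UnityEditor.Compilation.CompilationPipeline events exist; save it anywhere under an Editor folder.

    ```csharp
    // Editor/CompileTimer.cs -- hypothetical helper, assumes Unity 2019.1+.
    // Logs the total duration of each script compilation pass, plus a
    // timestamp per compiled assembly.
    using System;
    using System.Diagnostics;
    using UnityEditor;
    using UnityEditor.Compilation;
    using Debug = UnityEngine.Debug;

    [InitializeOnLoad]
    public static class CompileTimer
    {
        private static readonly Stopwatch Watch = new Stopwatch();

        static CompileTimer()
        {
            // Fired once when a compilation pass begins/ends.
            CompilationPipeline.compilationStarted += _ => Watch.Restart();
            CompilationPipeline.compilationFinished += _ =>
                Debug.Log($"Scripts compiled in {Watch.Elapsed.TotalSeconds:F1} s");

            // Fired once per compiled assembly (one per asmdef).
            CompilationPipeline.assemblyCompilationFinished += (path, messages) =>
                Debug.Log($"Finished {path} at {DateTime.Now:HH:mm:ss.fff}");
        }
    }
    ```

    If the per-assembly "Finished" timestamps land close together, those asmdef assemblies are overlapping in time; if they finish strictly one after another, the process is effectively serial for your project.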
     
  8. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,821
    Unfortunately, your machine is kind of low-end today: a quad-core Xeon from six years ago (actually on par, performance-wise, with the i7 2700K released in 2011 that was mentioned above), slow DDR3 memory, and the 840 EVO suffered from poor write performance compared to later EVO SATA models. Between your machine and your large number of files to compile, that might explain everything.

    As for lightmap baking, that would occur when making a change to the scene, not to your code files. You'd see a progress bar in the bottom right that says which step it is on. You can also look at your lighting settings ("Auto Generate" checkboxes mean it will bake lighting automatically; uncheck those to do it manually when you want).
     
    Ryiah likes this.
  9. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,021
    I mean, I can't guarantee anything, but I'd hope it would perform well.

    Here's my specific specs and I compile unnoticeably fast all the time:
    [attached image: upload_2020-4-10_20-31-28.png, a Speccy screenshot of the system specs]
    Note - the 850 EVO is where my Unity projects reside. The 970 EVO 2TB is my OS drive, where Unity resides. The 970 EVO 1TB is my Linux dual-boot.

    Also, I have no idea why Speccy sees my single RTX 2060 Super with 8 GB as two cards with 4 GB each...

    I also just got done playing some video games, so my temps are a little higher than usual.
     
  10. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    882
    Sorry, but what you're trying to say here is a bit stupid.
     
  11. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    17,569
    This. I'm going to be lazy, because it's late at night, and just use PassMark here. The Xeon E3-1231 v3 would have been a sound purchase six years ago when it was brand new, but it's a far cry from a good processor now. It scores about 7,000, which would put its single-core score around 1,750.

    https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E3-1231+v3+@+3.40GHz&id=2246

    By comparison, a 2200G (first-generation Zen) scores 6,800 and a 3200G (second-generation Zen) scores 7,200. I was able to pick up the 2200G for $60 several months back, making the Xeon essentially a very inexpensive budget chip.

    https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+3+2200G&id=3186
    https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+3+3200G&id=3497

    Meanwhile, the 3900X scores just under 32,800, which would put its single-core score around 2,733. I'm not going to pretend this is an accurate way to measure performance, but it means a modern processor is essentially twice the speed of the Xeon.

    Furthermore, the available cache on the 3900X and 3950X makes a massive difference. I don't feel like digging right now, but Gamers Nexus has benchmarks showing absurd performance increases thanks to modern compilers having very high cache hit rates. He won't just see a doubling of performance upgrading from that Xeon; he'd likely see triple.
     
    Last edited: Apr 11, 2020
    Joe-Censored likes this.
  12. kbm

    kbm

    Joined:
    Aug 7, 2014
    Posts:
    83
    Thanks for your replies. The consensus seems to be that my CPU is quite underpowered and maybe my older SSD and RAM could be bottlenecks? Still, the issue stands: I would really wish for an illuminating reply by one of the Unity Devs on the topic, so we could all stop speculating on what priorities to have for fast development workstations.
     
  13. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,021
    Hrmmm, my server is an E3-1225 v3, not exactly the same, but close-ish to the same time period.

    I wouldn't really want to develop on that machine, for several reasons, speed being one of them. I mean, it'd outperform my wife's laptop, but I would expect lower performance. At the same time, though, the 2700K that I only recently upgraded from didn't have outrageous compile times, and it ranks lower than the E3-1231 v3.

    If I feel frisky, I may test it out, just to see. Though, best case, if I did, it'd be in a VM, as I won't be installing Unity bare-metal on my server.

    @op - I mean... if you want, I can test your project's compile time on a 3900X, since that's the processor you're looking at. Up to you though, since, you know, it's your source code and all.
     
    Joe-Censored and Ryiah like this.
  14. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    17,569
    My post is only somewhat speculative. My current system (Ryzen 1600X, 32GB DDR4-2666, SATA SSD) was built as a replacement for a five plus year old machine (Phenom II X4, 16GB DDR3-1333, SATA SSD) and my performance saw an increase in the two to three times range.

    A five-year time period is sufficient to see at least two die shrinks, at least one major architecture revamp, and multiple minor architecture revisions. Plus we saw the introduction of M.2 NVMe slots for consumers, and we moved to a faster memory standard. Speaking of which, we're about to make the jump to DDR5 in 2021.
     
    Last edited: Apr 11, 2020
  15. valarus

    valarus

    Joined:
    Apr 7, 2019
    Posts:
    324
    It would be nice if Unity were optimized so it didn't require such high-end hardware resources.
     
  16. Yoreki

    Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,248
    You want it to use less of your hardware? Do you realise that this means everything would take longer?
    Or do you want it to do what it does now, but simply faster? Why would you assume they haven't already optimized it?
     
  17. valarus

    valarus

    Joined:
    Apr 7, 2019
    Posts:
    324
    I mean optimized so that it scales well enough across different configurations.
     
  18. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    28,842
    I just throw scripts that I'm not going to change into the Plugins folder, so Unity creates a separate assembly for them (they don't need to be recompiled). There is no scenario where I'll bother using asmdefs though, since they are an absolute pain.

    In any case, I left this here on the off chance people aren't into upgrading just yet.
     
  19. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,021
    Sooooooooooo.... did you curse me?

    All day yesterday and today, my Unity 2019.3 has been DOG slow to compile. I've NEVER had this before, but it's now taking upwards of 10 seconds to compile, and I don't have thousands of files. (Oddly, my 2018.x version for another project isn't slow, and that one does have thousands of code files.)

    My Unity has been open for a couple weeks... it might need a restart or something. But I can't restart right now... work work work! (my day job, not unity stuff)
     
  20. Waz

    Waz

    Joined:
    May 1, 2010
    Posts:
    274
    To answer the actual question: no, Unity (at least up to 2020.3) doesn't use much threading at all when building your game. For this reason, newer CPUs (which primarily just have more cores) don't have a huge advantage over older CPUs.

    As @Yoreki commented, you can see this by watching CPU performance in the Task Manager.
     
  21. yu2tu

    yu2tu

    Joined:
    Aug 14, 2015
    Posts:
    19
    I read another post here where a Unity mod said that in the IL2CPP process, both the IL-to-C++ conversion and the C++ compilation are multi-threaded after 2020.x (they couldn't remember exactly), so I was expecting it to be multi-threaded. For C# compilation while working in the editor and changing scripts, however, they say it's single-threaded per assembly definition.
     
  22. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    1,694
    Right. Also keep in mind that when you use multiple assembly definition files and you have changed files in different assemblies that depend on each other, they have to be compiled in order anyway, so the compilation process in general cannot really be sped up much. Assembly definition files only make sense when you mainly work inside one of them, so it's the only one that needs to be recompiled. Remember that editing something inside an assembly means every assembly that depends on it also needs to be recompiled; at the very least that includes the top-level default assembly. So only use assembly definition files for packages or systems which are decoupled and don't change often.
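    To make that decoupling concrete, here is a minimal sketch of an .asmdef for frequently-edited game code that references a rarely-edited utilities assembly (both names, MyCompany.Game and MyCompany.Utilities, are made up for illustration; the utilities folder would hold its own .asmdef named MyCompany.Utilities). With this layout, editing game code recompiles only the game assembly, while editing the utilities recompiles both.

    ```json
    {
        "name": "MyCompany.Game",
        "references": ["MyCompany.Utilities"]
    }
    ```

    The point is the direction of the reference: keep the stable code at the bottom of the dependency graph so your day-to-day edits touch as few assemblies as possible.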
     
    yu2tu and hippocoder like this.