
Feedback (Case 1161371) Compiling empty project takes significantly longer

Discussion in '2019.3 Beta' started by Peter77, Jun 9, 2019.

  1. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Compiling C# files in an otherwise empty project takes about 3 times longer in Unity 2019.3 than in Unity 4.6. Please see the provided videos.



    Reproduce
    • Create new project
    • Right-click in Project window and choose "Create C# Script"
    • Observe gears icon in lower right corner as indication when Unity is done
    • Repeat these steps with Unity 4.6 and Unity 2019.3

    Actual
    Creating and deleting C# source files (and probably recompiling in general) in Unity 2019.3 is significantly slower than in previous Unity releases.

    In my test:
    Unity 4.6 = 2 seconds
    Unity 2019.3 = 6 seconds

    Expected
    Newer Unity versions should not be slower than older ones.

    Note
    Please use hardware similar to the PC I submitted the bug report with. The issue probably can't be reproduced on high-end machines.
     
    trombonaut, erelsgl, NotaNaN and 15 others like this.
  2. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    QA replied; unfortunately it's not really what I was looking for:
     
    tankorsmash, wlwl2, Immu and 4 others like this.
  3. Pyromuffin

    Pyromuffin

    Joined:
    Aug 5, 2012
    Posts:
    85
    I have noticed compile times have taken a hit with this release. Didn't @Joachim_Ante say they wanted to get iteration time under 500 ms? This attitude toward performance regressions is frankly unacceptable.
     
    gronkey, Achie1, P_Jong and 6 others like this.
  4. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Today I received another email from QA. It seems they looked at this issue again; here is the reply:
    I've observed that packages often use Assembly Definition Files. I had hoped that if code is compiled via asmdef into a separate assembly, it would not force unrelated code to reload, but it seems that's not true?!

    In this case, the more packages I add to the project, the longer every "recompile" takes. And it doesn't necessarily have to be the actual recompile; it could be the JIT and reloading of every single assembly in the project, even assemblies that have no dependency on the code that changed?

    Is this how it works?
     
    phobos2077 likes this.
  5. pointcache

    pointcache

    Joined:
    Sep 22, 2012
    Posts:
    579
    500 ms compilation. :D
     
  6. HaraldNielsen

    HaraldNielsen

    Unity Technologies

    Joined:
    Jun 8, 2016
    Posts:
    139
    Hi guys, trying to shed some light on how things work :)
    Packages are required to use asmdefs, so package code does not get compiled into user assemblies and, as a result, should add little to compile time once the packages are already compiled.
    During a domain reload, JIT is only a cost when code is executed. But if a package uses InitializeOnLoad (or anything else invoked after a reload) and does something time-consuming, it will slow down the whole flow, whether or not your own asmdef references that package.

    We are working on tooling that will help users understand what is happening and what takes up time during compilations and domain reloads.
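    Harald's point about InitializeOnLoad can be illustrated with a small hypothetical editor script. The class name and the work inside it are made up, but the mechanism is what he describes: the static constructor runs on every domain reload, no matter which assembly changed. (The file would need to live in an Editor folder or an editor-only assembly.)

```csharp
using System.Diagnostics;
using UnityEditor;

// Hypothetical example: any [InitializeOnLoad] class is reinitialized on
// every domain reload (after each recompile and on entering play mode),
// even when the change happened in a completely unrelated asmdef.
[InitializeOnLoad]
static class SlowStartupExample
{
    static SlowStartupExample()
    {
        var stopwatch = Stopwatch.StartNew();

        // Imagine expensive setup here: scanning assets, building caches...

        stopwatch.Stop();
        UnityEngine.Debug.Log(
            $"SlowStartupExample init took {stopwatch.ElapsedMilliseconds} ms");
    }
}
```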
     
  7. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    Tooling will not help if we can't do anything about it; it will only help us make more pointed complaints.

    In the meantime, can you turn TextMeshPro back into an asset? I would then potentially be able to get rid of the Package Manager, which has only brought problems, it seems. The Collaborate team spent so much time turning Collab into a package that they forgot it is also supposed to function.
     
    trombonaut, P_Jong and sicklepickle like this.
  8. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    TBF, you can just import many packages, including TMP, into your project, delete the assembly definition, and remove them from the package manager if you want. Not sure what you'd gain, but you can do it.
     
    Awarisu and phobos2077 like this.
  9. ROBYER1

    ROBYER1

    Joined:
    Oct 9, 2015
    Posts:
    1,454
    Gotta love performance regressions :/
     
    trombonaut, marserMD and phobos2077 like this.
  10. DGordon

    DGordon

    Joined:
    Dec 8, 2013
    Posts:
    649
    I'm very unclear on whether splitting our project into different .asmdef files (engine, editor, game data) should theoretically make a noticeable difference, or whether there's enough other overhead that even if we got rid of the actual "compile" time, it still wouldn't matter much.

    Our project is really split into three areas: the engine, the editor, and game data (.cs files generated by editors). In theory it sounds like it should be split into different .asmdef files, but I'm having a hard time telling whether it's worth refactoring at this point for a speed boost.

    Would putting all the "engine" files into their own .asmdef, which does not call InitializeOnLoad, make their cost essentially zero when a recompile happens because of game data? Or will it still be something like 80% because of other factors I don't know about?

    Does having InitializeOnLoad in editor files mean we incur zero compile time, but still incur however long my InitializeOnLoad code takes to run (which would be obvious)? Or are you saying that InitializeOnLoad will also cause other things (i.e. the JIT cost) to kick in, and if so, what does that actually mean? Should I still expect that putting all editor files in an .asmdef, even with InitializeOnLoad, will cut the time by a large percentage compared to not using an .asmdef? Or do .asmdef and no .asmdef become much more comparable once InitializeOnLoad is factored in?

    Basically, it would be great if I could change a data .cs file and ONLY have the data .asmdef do whatever it needs to during recompile. But I'm not sure that's what .asmdef actually accomplishes.
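    For reference, the split described above could be expressed with asmdef files like the following (all names here are illustrative, not from the thread). Per Unity's replies later in the thread, this limits what gets recompiled, but the domain reload afterwards still reinitializes every assembly.

```json
{
    "name": "MyGame.GameData",
    "references": [
        "MyGame.Engine"
    ],
    "autoReferenced": true
}
```

    With a hypothetical MyGame.GameData.asmdef like this placed over the data folder, editing a data .cs file recompiles only that assembly (plus anything that references it), not the engine or editor assemblies.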
     
  11. willemsenzo

    willemsenzo

    Joined:
    Nov 15, 2012
    Posts:
    585
    Is there any news on this for future releases? It now takes me more than a minute to compile something that used to take about 10 seconds.
     
    trombonaut and phobos2077 like this.
  12. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    Over a minute doesn't sound right at all. Have you profiled the editor to see where the time is actually spent? Sometimes you can be (un)pleasantly surprised...
     
  13. willemsenzo

    willemsenzo

    Joined:
    Nov 15, 2012
    Posts:
    585
    Haven't done that but even in an empty project with a single very simple script it takes much longer than in previous Unity versions.
     
  14. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    I’d still do it.
     
  15. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Tested Unity 2019.3.0f1. It's still the same as I described in the first post. 2019.3.0f1 takes roughly 6s, which is about 3 times slower than Unity 4.6.

    Bug-report updated with this information.
     
    trombonaut, phobos2077 and konsic like this.
  16. jamespaterson

    jamespaterson

    Joined:
    Jun 19, 2018
    Posts:
    401
    For what it's worth, from my limited experience you probably want to go fully asmdef, or fully not. Having both mixed in a project increases compilation time, as a kind of "catch-all" assembly is built in this scenario for the code not already in an asmdef. Given that the Package Manager is the preferred way to get a bunch of very useful standard features (e.g. post-processing, TextMesh Pro, etc.), asmdefs are probably more and more important for larger projects. Disclaimer: don't take my word for it.
     
  17. HaraldNielsen

    HaraldNielsen

    Unity Technologies

    Joined:
    Jun 8, 2016
    Posts:
    139
    Sorry for getting back so late. Yes, the tooling will help with two things:
    - Seeing whether you as a user are doing something in your project that takes a long time. Today, using the profiler can feel like searching through a big haystack if you just want to see why iteration time is slow.
    If you know what is slow, you may be able to take action to fix it.
    - Making much more informative bug cases: the more we know up front, the faster the correct team will get the case and be able to fix it.


    "Would putting all the "engine" files into their own .asmdef, which does not call InitializeOnLoad, make their cost essentially zero when a recompile happens because of game data? Or will it still be something like 80% because of other factors I don't know about?"

    Putting code into asmdefs only helps compile time, since you then only have to compile what has changed.
    But InitializeOnLoad is called everywhere, every time.
    The reason is that a domain reload is basically shutting down the .NET runtime and restarting it (to some extent), so everything needs to be reinitialized. So if some package, or your own code, contains an InitializeOnLoad that is slow, it will hurt domain reloads.

    "Does having InitializeOnLoad in editor files mean we incur zero compile time, but still incur however long my InitializeOnLoad code takes to run (which would be obvious)? Or are you saying that InitializeOnLoad will also cause other things (i.e. the JIT cost) to kick in, and if so, what does that actually mean? Should I still expect that putting all editor files in an .asmdef, even with InitializeOnLoad, will cut the time by a large percentage compared to not using an .asmdef? Or do .asmdef and no .asmdef become much more comparable once InitializeOnLoad is factored in?"

    Yes. For example, opening a project that is already compiled still needs to start the domain, which also calls all InitializeOnLoad methods.
    Even if compile time were instant, InitializeOnLoad would still have a cost.
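    Given that, one mitigation (my own sketch, not something Unity prescribes in this thread) is to keep the [InitializeOnLoad] constructor itself trivial and defer any heavy setup to EditorApplication.delayCall, so each domain reload pays as little as possible up front. The class and method names are hypothetical.

```csharp
using UnityEditor;

[InitializeOnLoad]
static class DeferredInitExample
{
    static DeferredInitExample()
    {
        // Runs on every domain reload: keep this as cheap as possible.
        // Heavy work is deferred to the first editor update instead.
        EditorApplication.delayCall += HeavyInit;
    }

    static void HeavyInit()
    {
        // Hypothetical expensive setup (cache building, asset scanning...).
        // The cost is still paid, but after the editor is responsive again,
        // instead of blocking the reload itself.
    }
}
```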
     
  18. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    After I updated the bug-report, QA replied with the following text. Thanks for the speedy reply!

    ... which is basically the same response I received earlier.

    I'm really questioning the effort I put into reporting all these regressions, as it seems none of these issues is important enough for Unity Technologies to fix.

     
    Last edited: Dec 2, 2019
  19. FcsVorfeed

    FcsVorfeed

    Joined:
    Aug 15, 2014
    Posts:
    50
    This is why I'm so angry at Unity 2019.3 now.
    This version is the slowest and laggiest of any release.
    No matter what I do, everything is slow.
    Don't ask me what I'm doing, because everything is slow:

    when I click in the editor
    when I create a new script
    when I change a script
    when I enter or exit play mode
    when I build the APK
    when I open the editor
    when I select a GameObject
    Can you believe it? Everything is laggy and slow.

    I've sent dozens of feedback reports; 99% of the time QA tells me they cannot reproduce the issue and closes the case.
     
    Last edited: Dec 3, 2019
  20. SugoiDev

    SugoiDev

    Joined:
    Mar 27, 2013
    Posts:
    395
    I know it's not much, but I would like to add my voice to the crowd asking for better editor performance.
    Nothing Unity has done since I started using it in 3.x is more important than a speedy editor for fast iteration.

    Keep up the good fight, @Peter77
    There are dozens of us right now.

    I know there are many more lurkers who have yet to add their voices here. I hope more will come forward soon, in particular the more experienced developers who deal with this death-by-a-thousand-cuts daily in larger projects.


    I hope to see every editor performance regression treated as a high-priority bug instead of "by design".
    My operating system does insanely more than Unity, yet it boots faster than Unity can recompile and reload its assemblies.
    Is it just me, or should that not be the norm?

    Good gameplay design depends a lot on fast iteration for tweaks. I'm loving the assembly-reload-less play mode entering, but I think we can do better.


    Edit: I know Unity is putting more effort into editor performance, and this message is just to express support for those efforts. In particular, I know several key areas are now being monitored for performance and analytics are being gathered.
    There's even some effort toward a .NET Core port of the editor (imagine that!).

    I really think this time we'll get somewhere, so it's worth voicing our opinions now to show that we, the developers using this tool, support this effort.
     
    Last edited: Dec 2, 2019
  21. Xarbrough

    Xarbrough

    Joined:
    Dec 11, 2014
    Posts:
    1,188
    I'm backing everyone who asks for better editor performance and faster iteration times. It's painful to see Unity getting slower each year, especially script reload times. It doesn't help that we can split code into assemblies, because asmdefs overall still make everything slower; the more assemblies, the larger their overhead, it seems. So we settled on mirroring the legacy split: runtime, editor, plugins. I do love the concept of packages, but their assembly reload overhead now makes me sad.

    Not ranting or flaming, I’m still positive that Unity will find good solutions, but they need to know that users care about editor performance. ;)
     
  22. jdtec

    jdtec

    Joined:
    Oct 25, 2017
    Posts:
    302
    Thanks to @Peter77 for investigating this in so much detail!

    I have more or less moved to 2019.3 b12 now, after bailing out and reverting back to 2019.2 a couple of times. I can't really put it off anymore, as I want to use the latest DOTS packages.

    The time for a one line code change, tab to editor and wait until Unity editor UI becomes responsive so I can press play is now ~10 seconds with 2019.3 b12.

    I just tried 2019.2 with the same project again out of curiosity, and it's still quite bad: 2019.2.3f1 takes ~6 seconds, but at least I get the chance to press the Play button first, so I'm just waiting for it to launch rather than waiting to press Play and then waiting an additional second or so for the launch.

    I've seen the threads on the new UI, and even though it feels like I'm beating a dead horse, I have to say my first impression on launching 2019.2 again was very pleasant. The old UI is a lot clearer for picking out icons and text, and feels nicer to look at.

    I can get over the UI but the time from script edit to editor launch is concerning. I hope you guys can start prioritising editor performance more soon.
     
  23. Refeas

    Refeas

    Joined:
    Nov 8, 2016
    Posts:
    192
    I also feel like I should add my 2 cents. First of all, @Peter77 - you sir are the unsung hero of Unity beta-testing. I feel like Unity should pay you for the insane amount of work you put into bug reporting and performance testing. Keep up the great work!

    And now to the concern at hand: the performance of the editor is absolutely mind-boggling at the moment. I spend most of my time just waiting for Unity to do its thing instead of actually developing. We are working on a pretty big project, and even though we are still at the beginning, script recompile and play mode entry are extremely slow compared to even 2019.2 (sure, the new fast enter play mode sounds nice and all, but it requires some specific implementation and it only "hides" the underlying performance issues). We are currently stuck on 2019.3b11, because from b12 to rc1 lighting is completely broken for us at the moment... Just looking at the number of mostly performance-related bug reports in this subforum that are still unresolved, given that 2019.3 is already at RC1, is really scary.

    I like Unity and the way it does things, but I also like my time and don't enjoy wasting it by waiting for a constantly frozen editor.
    If someone like Peter reports a performance regression and QA replies it's by design, I honestly don't know what to think of it. As mentioned above multiple times, performance should be #1 priority and swinging around with buzzwords like DOTS is not gonna make the problems go away. I like the fact that Unity tries new technologies, but the current ones really need a big polish phase.
     
  24. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,338
    @willgoldstone, you just fired off a round of requests for top 5 annoyances. Performance is one of those!

    The replies Peter's getting here are, well, not good enough. If the package system means that you cannot achieve the performance we need from Unity, then the packages have to go. This is exactly what we're talking about when we say Unity is adding bloat: new features that slow down everything else.

    When we say "hey, the editor is too slow", some of the people at Unity indicate that fixing it is of major importance. Others, like whoever wrote the QA reply, seem to say "well, that's too bad". And that gets aggravating.



    TO BE FAIR, my personal experience is that Unity 2019.2 is faster than 2018.4 by quite a lot, but whether that's because the editor improved, or because in the 2018.4 project we bought into the very, very bad idea that is assembly definition files, is anyone's guess.
     
    Last edited: Dec 3, 2019
  25. QA-for-life

    QA-for-life

    Unity Technologies

    Joined:
    Dec 23, 2011
    Posts:
    89
    Hey all,

    Thanks for chiming in and for raising good points.

    First of all, we take performance seriously. Joachim's statements about performance are still our goal and what we strive to achieve. Obviously we are not there yet.

    We have internally had a debate about the decision to close these as "by design" and have come to the conclusion that this is not the correct way to look at these issues. If anything these bugs are an unintended negative consequence (regression, bug), but certainly not something we designed. We will treat them as such.

    That said, the problems stated in the responses are valid. In order to give you the source for a package that you can insert and change in your project, it has to be compiled; on that much we can all agree. But performance is our problem to solve, with whatever trickery we can come up with, and it is something we must pay attention to in all our packages. This is not going to be a simple fix.

    I hope that alleviates some of the concerns, even though it won't solve the problem right now.
     
  26. illinar

    illinar

    Joined:
    Apr 6, 2011
    Posts:
    863
    Here is my anecdote. I was seeing 20-second compilations in this new project in 2019.3. So I put asmdefs everywhere, and it didn't make a dent. Then I read this thread and removed unused packages, which cut the "compile time" by more than half. (URP is probably mostly to blame, and I'm going to need it back...) But it's still 6-10 seconds.

    Another pretty bad thing (I'm not sure if it's a bug) is that the Inspector is locked while compiling. That made adding asmdefs pretty "fun": after each new definition file added, and each change to it, I sat and waited 20 seconds, unable to do anything, when I could have been selecting and editing the next one if it didn't lock me out.
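    Dropping unused packages, as described above, means removing their entries from Packages/manifest.json (the Package Manager UI edits the same file). A trimmed manifest might look roughly like this; the package versions shown are illustrative, not recommendations:

```json
{
  "dependencies": {
    "com.unity.textmeshpro": "2.0.1",
    "com.unity.ugui": "1.0.0"
  }
}
```

    Packages not listed here (and not pulled in as dependencies of listed ones) stay out of the compile and domain-reload pipeline entirely.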
     
    Enzi likes this.
  27. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,363
    Let's start simple: Why is compilation 3x slower than in 4.6?
     
  28. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    Because nowadays Unity compiles against the equivalent of the .NET 4.6 runtime, which is substantially bigger and has a much more sophisticated compiler than 2.0.
     
  29. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    I'm assuming you mean 3.5 instead of 2.0. But really, is it that much slower?
     
  30. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    Enzi and SugoiDev like this.
  31. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    3.5 is deprecated. Unity now uses Roslyn instead of the old Mono compiler, and it's a known fact that Roslyn is slower than the Mono 2.0 compiler.
     
  32. konsic

    konsic

    Joined:
    Oct 19, 2015
    Posts:
    995
  33. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134
    Any source?
     
  34. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    It is deprecated now, but it wasn't when Unity was actually faster to compile.
     
  35. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    Someone did a nice summary here:
    https://github.com/oleg-shilo/cs-script/wiki/Choosing-Compiler-Engine

    Roslyn is an entirely different beast from the Mono compiler; it's apples to oranges. Roslyn is an entire language-analysis framework, which makes it heavier than a simple compiler like Mono, especially compared with earlier framework versions where the language features were much simpler.

    The compile time increase is the product of many small changes within the Unity editor itself, and it would be unreasonable to expect compile times to be smaller than, or similar to, those of past versions.

    It's a similar evolution to the one web pages went through: in the past they weighed around 100 kB to allow fast load speeds. These days networks are much faster, and most web pages weigh at least a few MB.

    What we can count on is that compile time does not grow faster than the speed of our development machines.
     
  36. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    No. When Unity is advertising a goal of 500 ms change-to-effect times, it is completely reasonable to expect compile times to be smaller, or at least similar.
     
  37. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134
    And in that summary I see no benchmarks; moreover, I see this:
    [attached screenshot from the linked wiki page]

    Yes, it would be reasonable. Not only because Unity themselves set the 500 ms goal, but also because compilers evolve, and so does everything else: UE4 iteration times decreased thanks to compiler upgrades, hot reload, and other improvements (PCH). An increase in iteration times is not acceptable unless the software provides something in exchange, something really huge and groundbreaking that genuinely required it.

    No, it's not. The example is wrong and unrelated. We have had a few decades of hardware significantly increasing its computing power every year. Jonathan Blow has an amazing talk about what went wrong and when. Unity has simply come down the same path.
     
    Incode likes this.
  38. Incode

    Incode

    Joined:
    Apr 5, 2015
    Posts:
    78
    The web is a perfect example of bad design, poor optimization, and failure to take advantage of hardware developments. Yes, in the past we had 100 kB pages, but on dial-up modems. Web pages might be a few megabytes now, but they can also take 100x longer to load on download speeds 1000x faster. If the expectation for Unity (or any tool developer) is that ever-growing compile times are just the natural course of things, they will never develop beyond a hobbyist's toy. Luckily that doesn't seem to be the case here, but we can't be lax in our push for better tools. Things rarely improve when people are content with what they have.
     
    erelsgl and SugoiDev like this.
  39. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,618
    If you like that talk, you must watch this one too :)

    In the video, Casey argues that hardware quality is at its historical best, yet software quality is at its historical worst.

    He talks about why software is so much less reliable and so much less pleasant to use today than it was. It should be much more pleasant to use today, the hardware is so much better and it's so much easier to get reasonable performance out of it.

    He talks about why he believes this has happened and provides examples that are easy to follow.
     
    Last edited: Jan 11, 2020
  40. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134
    I know him very well (and I saw that video); he worked on The Witness editor and engine, and I've read his article about semantic compression. He has a lot of good insights, as does Blow; I really like their way of thinking and their effort to steer software development onto a better path.

    Software development relies heavily on workarounds and hacks. It always did, but developers have to evolve too and start writing reliable, cleaner code. I also hate Electron and the overall obsession with web stuff (and I hate Unity for that too).

    And speaking of iteration times: the web came to Unity in the form of the package manager, and that is one of the reasons Unity immediately became slower all around.

    And speaking of legacy: developers overrate the need for legacy code and the fear of cleaning it up. I think software such as game engines should be rewritten when needed and never rely on legacy code (though of course some legacy codebases still work well; maybe they just need a bit of refactoring, like those old unreadable C++ methods written for narrow screens so that every function looks like a skyscraper).
     
    Peter77 likes this.
  41. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    You might want to read the full paragraph first:
    A bland statement like this does nothing to support your cause. Unity can counter it simply by saying the 500 ms is achievable only on the newest 64-core Threadripper with a PCIe 4.0 NVMe drive, and they would be correct.

    If you truly want to compare compilation times, please bring out your old 2014 PC with a crawling HDD and maybe 4 threads, compile your project, and compare it to the multicore-and-SSD monstrosity you are using these days. We will see what's slower, and I have a hunch about the outcome.

    And you can cut your compilation times in half just by dropping most of the skippable packages that are included in a project by default. That's something you could not do without the package manager, so I would call it a net positive. The package mentality is not very well implemented in Unity yet, but the potential is to let users drop the 70% of functionality that their project doesn't use, reducing editor overhead significantly. We are not there yet; it's a transition period.

    Most people did not get the WWW comparison. What matters is the relative drop in performance, not the absolutes. If it took 1-2 seconds to load a page in the past, and that stays the same on the newest hardware and network infrastructure, it's sufficient. It's not a failure of optimization; it's being intelligent about your priorities. A perfect example of "bad design" is focusing on things that only a fraction of your user base will notice, like dropping page load times by 25% (~0.5 s in this example).

    I fail to see how comparing against 5-year-old software, with probably a third of the features of the current iteration, provides any meaningful feedback beyond stating the obvious. If the compile times in your current project are limiting enough to kill your productivity, then YES, please bring them to Unity's attention. But please, don't be the old grandpa: "In my day it took such and such, because of reasons!"

    PS. Complaining about today's software quality, really? You must either be very young, or you don't remember clunky monstrosities like Office '97, old Photoshop, or Visual Studio 2003 for that matter. No, software quality has increased tremendously over the decade, though I agree there are more bugs in general, due to the ease of shipping fixes over the net.
     
  42. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    They can’t though since I highly doubt even that machine can compile in less than 500ms.
     
  43. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    What are you even talking about?
     
    Enzi likes this.
  44. interpol_kun

    interpol_kun

    Joined:
    Jul 28, 2016
    Posts:
    134
    I always do that. And it's still significantly slower than it was before.

    Nothing proves your statement that Roslyn is slower than Mono. Compiler infrastructure initialization on a cold start could be slow, but as I stated before, the article does not provide any side-by-side comparisons of hot starts and hot reload times; it provides no comparisons at all that you could use as proof for your point, which was, I remind you, "it's known fact, that Roslyn is slower than 2.0 Mono".

    It's their bold statement, not ours. BTW, you can read the thread where they claimed a significant (4.5x to 14x) decrease in compilation times. They later dropped that standalone effort and upstreamed the updated incremental compiler into the engine, and, as Joachim said: "18.3 is shipping with a similar but more robust version of the incremental compiler. In many scenarios we see up to 5x speedups in C# compile time in 18.3". We (at least I) see the opposite.

    There are lots of case studies about loading times and revenue conversion for online services. You can google them if you really want to dive deep into the subject instead of making strange comparisons. Just a few quotes (you can google these too):
    • Walmart and Amazon both observed a 1% increase in earnings for every 100 milliseconds of improved webpage speed.
    • Yahoo saw a 9% increase in traffic for every 400 milliseconds of webpage speed improvement.
    • Google loses 20% of its traffic for every additional 100 milliseconds it takes a page to load.
    • Amazon calculated that a webpage slowdown of just one second could cost it $1.6 billion in revenue annually.
    But this is what we are talking about here. People have literally written that the editor is not as enjoyable to work with as it used to be, because of the overall slowness. And you came to fight with everyone in the thread for no reason.

    I personally have no enthusiasm to start a new prototype in Unity on weekends, because I know I will spend hours on SRP problems and an overall slow workflow. It's not that I've lost enthusiasm entirely; I just do my prototyping in UE4 now. The engine that used to be slower than Unity now performs almost identically (it even creates new projects faster than Unity, unbelievable). Unity is not fun for me to work with anymore.


    Dude, I have had the same PC since 2014. Literally the same, except for a second monitor and more RAM (I had 8 GB; now it's 24 GB).

    Yes, new Photoshop got a lot of new features. But the overall quality? Even the oldest versions from the previous decade work faster and better than the CC suite. I can say the same for VS: a lot of companies still use Visual Studio 2013 nowadays, not to mention they are still on Windows 7.

    I fail to see the logic in that excuse, because we have a living example of iteration times decreasing while the software gained a lot of new features. I mentioned UE4 and the way they dealt with extremely painful C++ iteration: they succeeded, and they still have room for improvement (and, more importantly, the will).

    Maybe you can stop attacking people who actually do care about Unity performance and who provide valuable feedback while keeping the discussion alive so the topic actually gets Unity's attention? I see a lot of people concerned about editor performance these days. I am concerned too, and our "bitching", as you called it, did reach UT.

    I don't really like your tone, all those false comparisons, and the overall superficial knowledge of the topics, which is the root of the strange points you make.

    You have a right to be OK with the increased iteration times. But we, and Unity themselves, understand that short iteration times are better in every way, and the goal should be to decrease them, or at least not to increase them.

    Also, I am not a native speaker, so some sentences may come across as more offensive than intended. No offense meant, OK?
     
    Last edited: Jan 11, 2020
    erelsgl, Awarisu, Peter77 and 3 others like this.
  45. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    Neither am I. It was not my intention to attack anyone; if I came across as such, then I'm sorry.

    Let me try to unpack what I actually mean.

    Increased iteration times are bad, and I do agree the overall performance of the editor has gone down; I also agree it's a problem that should be prioritised by UT.

    But, having said that, using a five-year-old Unity version as a benchmark is not reasonable. The editor back then and what we have now are not comparable.

    I absolutely adore @Peter77 and the work he does with the benchmark stuff, when he compares the current beta with versions as far back as 2017 sometimes, but going all the way back to 4.6 is wrong, because the two software packages are too different. Closing the ticket as "By Design" is understandable to me.

    I was speaking about 1-2 second load times, and optimizing those, which I deemed a fool's errand. The above case studies had average load times of 7 seconds against an advertised maximum of 3 seconds, so I think my point stands.


    Sadly, it's true I only have anecdotal evidence; hardly anyone has bothered to actually test this thoroughly.

    But you could look, for instance, at https://dotnetfiddle.net/ and notice that even for the simple example they start you with, the compilation times are 2x as high when switching from .NET 4.7.2 to Roslyn.
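
    For anyone who wants to test this inside Unity itself rather than rely on an online fiddle, here is a minimal editor-script sketch for timing script compilation. It assumes Unity 2019.1 or newer, where `UnityEditor.Compilation.CompilationPipeline` exposes the `compilationStarted`/`compilationFinished` events; the class name `CompileTimer` is just an illustration, and this is a rough measurement, not a definitive benchmark.

```csharp
// Editor/CompileTimer.cs
// Logs how long each full script compilation takes, so editor versions
// can be compared on the same project and hardware.
using System.Diagnostics;
using UnityEditor;
using UnityEditor.Compilation;
using Debug = UnityEngine.Debug; // avoid clash with System.Diagnostics.Debug

[InitializeOnLoad]
static class CompileTimer
{
    static readonly Stopwatch s_Watch = new Stopwatch();

    static CompileTimer()
    {
        // Fired when a compilation pass begins and ends (Unity 2019.1+).
        CompilationPipeline.compilationStarted += _ => s_Watch.Restart();
        CompilationPipeline.compilationFinished += _ =>
            Debug.Log($"Script compilation took {s_Watch.Elapsed.TotalSeconds:F2}s");
    }
}
```

    Drop the file into any folder named `Editor`, then edit or create a script: the compile duration appears in the Console, which gives a more repeatable number than watching the gears icon.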
     
  46. konsic

    konsic

    Joined:
    Oct 19, 2015
    Posts:
    995
    .NET 4.7.2 Compile: 0.172s
    Roslyn Compile: 0.031s
    .NET Core 3.1 Compile: 0.006s
     
  47. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    My bad, I read the results wrongly :( .
     
  48. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,791
    The basic building blocks of 4.x and 2018 are the same; I don't know what people think we gained in exchange for the loss of fast iteration.

    At least for the stuff I'm doing, the two versions are far less different than people pretend. I mean, you CAN zoom in the Animator controller now, but I doubt zooming in on Mecanim is the reason everything else became slower.

    I often use both on a machine from 2012, and 4.x is far more pleasant to use.
     
  49. rastlin

    rastlin

    Joined:
    Jun 5, 2017
    Posts:
    127
    https://unity3d.com/unity/whats-new/unity-5.0

    I don't know; to me the list of features is massive, and that's only from 4.6 to 5.0. Heck, 4.6 did not even have PBR. The fact that you don't use those features does not mean they aren't important to a massive number of other developers.
     
  50. Metron

    Metron

    Joined:
    Aug 24, 2009
    Posts:
    1,137
    Again, this should not result in a 3x-4x slowdown of the editor. I'm not on a huge project, and whenever I switch from VS to Unity after editing 1 (one) file, I have to wait up to 20 seconds... And I have a strong machine.

    Entering play mode results in a crash one time out of ten, and starting a debug session also takes quite some time.

    All this was less present in older Unity versions.

    As some said, it's not even funny anymore.
     
    erelsgl, konsic, bradshep and 4 others like this.