Workstation Specification for Unity Development

Discussion in 'General Discussion' started by leeprobert, Sep 4, 2017.

  1. leeprobert

    leeprobert

    Joined:
    Feb 12, 2015
    Posts:
    49
  2. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    I would go with a 1950X. 16 cores, great for baking lights. I'm on an 1800X with 8 cores right now; it's fast, but you always want more cores!
     
  3. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    Have to second the recommendation to purchase an AMD Threadripper. You simply get far more bang for your buck right now with an AMD system. Below is an example I threw together in a few minutes with a price tag of $3500.

    https://pcpartpicker.com/list/QJbBKZ

    There are a couple of other differences from the Puget Systems build. For starters, I chose Samsung 960 Pro SSDs, which have considerably higher performance (3.5GB/sec read, 2.1GB/sec write) compared to the now rather dated and essentially budget-tier 850 Pro SSDs (550MB/sec read, 520MB/sec write).
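    To put those throughput numbers in perspective, here's a rough back-of-envelope sketch of ideal sequential read times. The 10 GB project size is just an illustrative figure, and real import times depend on far more than raw drive speed.

```python
# Back-of-envelope: time to sequentially read a project folder at each
# drive's rated throughput. The 10 GB project size is a made-up example,
# and latency / filesystem overhead are ignored entirely.
def read_time_seconds(size_gb, throughput_gb_per_s):
    """Ideal sequential read time in seconds."""
    return size_gb / throughput_gb_per_s

project_gb = 10
t_960_pro = read_time_seconds(project_gb, 3.5)    # Samsung 960 Pro, ~3.5 GB/s read
t_850_pro = read_time_seconds(project_gb, 0.55)   # Samsung 850 Pro, ~550 MB/s read

print(f"960 Pro: {t_960_pro:.1f} s, 850 Pro: {t_850_pro:.1f} s")
```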

    Second, I chose a GTX 1080. There wasn't any particular reason for that other than to bring the cost to roughly the same amount as the Puget Systems build.
     
    Last edited: Sep 4, 2017
    Meltdown likes this.
  4. Meltdown

    Meltdown

    Joined:
    Oct 13, 2010
    Posts:
    5,816
  5. grimunk

    grimunk

    Joined:
    Oct 3, 2014
    Posts:
    274
    If you are talking workstation, go as big as you can afford. The time it saves on tasks like baking lighting and compiling, and on generally not hitching up, will pay for itself in the long run.

    We run (now 3-year-old) i7-3930K/4930K processors, with older RX 480s and GTX 980s. These systems have held up very well, and working on anything slower is painful. Right now it looks like Ryzen is going to bring the biggest bang for the buck, so if we get another system in the near future, it will probably be one of those.
     
  6. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    Yeah, they've got the price; Intel still has the performance with their new ones, but that's going to be very temporary (and I'm not talking about Unity). Intel made an architectural choice that favors workloads that don't multithread properly, and overclocking, but as more and more software is written to multithread, that's going to bite them in the backside.
     
  7. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    The 1950X with its 16 cores is very hard for Intel to beat, at least when we're talking content creation. Sure, the 7920X has 12 cores and higher single-core performance, but its lid is glued on with thermal paste instead of soldered like AMD's, so you will have cooling problems etc. And the 1950X is still faster for content creation because of its extra cores.
     
    Meltdown likes this.
  8. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    Only for single-threaded workloads. Intel's i9-7980XE (18 cores/36 threads) has a higher boost clock (4.4 GHz), but the base clock (2.6 GHz) is abysmal, and since only up to two cores can be boosted at once, you'll spend most of your time at base speeds with the heavily multi-threaded workloads that justify purchasing these chips.

    By comparison the AMD Threadripper 1950X may have a lower boost clock (4.0 GHz) but the base clock (3.4 GHz) is significantly higher. We'll have to wait and see the benchmarks to compare the two but I would be surprised if the Intel processor came out ahead in most of the workloads these chips are intended for.
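    A crude way to see why the base clock matters here is to multiply cores by sustained clock. This ignores IPC differences between the two architectures entirely, so treat it as a sketch, not a benchmark:

```python
# Crude model: sustained multi-threaded throughput ~ cores x base clock,
# assuming an embarrassingly parallel workload that keeps every core at
# base speed. Real results also depend on per-architecture IPC.
def aggregate_ghz(cores, base_clock_ghz):
    return cores * base_clock_ghz

i9_7980xe = aggregate_ghz(18, 2.6)   # 46.8 "core-GHz" at base clock
tr_1950x  = aggregate_ghz(16, 3.4)   # 54.4 "core-GHz" at base clock

print(f"i9-7980XE: {i9_7980xe:.1f}, TR 1950X: {tr_1950x:.1f}")
```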
     
    Meltdown likes this.
  9. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    It's a bit unfair to compare against the i9-7980XE if you include the price point
     
  10. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    Someone needing a high-end workstation for development will likely be able to afford the difference. Besides, it's helpful to see the whole picture rather than just looking at a single processor from each company. Just look at the difference in base clocks between each model of the i9 (in the spoiler because the image is huge) and the difference for the Threadrippers.

    AMD:
    ryzen.png

    Intel:
    i9.jpg
     
    Last edited: Sep 7, 2017
  11. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    And like I said, Intel glues the lid on with paste while AMD solders it; huge difference in heat spreading
     
  12. ddelorme

    ddelorme

    Joined:
    Aug 16, 2015
    Posts:
    5
    Thanks for this post. I've been googling whether Unity would benefit from the AMD Threadripper before purchasing it for a new build, but was worried there would be some sort of incompatibility issue. Looks like it would be the best choice from the research I've been doing.

    Have any of you built a Threadripper 1950X rig? If so, how do you like it? Also, which motherboard did you choose: ATX or E-ATX? I know with an ATX board you have more options for mid-size cases, but are they big enough to cool everything sufficiently?
     
  13. 3pns

    3pns

    Joined:
    Sep 5, 2014
    Posts:
    4
    Hello,
    I'm looking to build a hackintosh for Unity3D development, as I'm working on a MacBook Pro (Retina) for now.
    Everybody talks about the benefits of multi-threaded processors for baking lights, but what about compilation time in Unity?
    Exporting an Android APK or iOS project, then compiling the iOS project in Xcode, is where I lose most of my time waiting for nothing.
    Would compilation time be roughly the same on, let's say, an Intel 7700K (4c/8t) and a Threadripper 1950X? Or would the core count of the Threadripper, or any heavily threaded processor, dramatically accelerate compilation?
     
  14. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    Nope, compile times usually run about the same; compiling is compiling. The benefits afterwards are wonderful though.
     
  15. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    I monitored the CPU usage while building our game; it only used all 8 cores / 16 threads very briefly

    upload_2018-2-3_14-15-15.png

    Actually, most of the time the CPU is close to 0%, so I wonder if they couldn't speed things up :D
     
  16. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    Did you zoom that out so that the brief spike was at the end of your compile? That would make sense; it would be a garbage collection spike.
     
  17. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    That is actually a pretty solid spread you've got there. You won't write code that makes all threads run heavy all the time. True multithreaded operations have to weave their loop results in and out in a very controlled and organized way; it's a risky programming task for stability. Luckily Unity is doing it for us, making it so we can just click buttons and checkboxes lol.
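    The "controlled weaving" point can be sketched with a worker pool: each worker computes an independent partial result, and only one place merges them, so no shared mutable state is ever touched concurrently. A minimal Python sketch of the pattern (not Unity's actual implementation):

```python
# Workers compute independent partial results over disjoint chunks;
# only the main thread merges them, so no locks are needed.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker touches only its own chunk of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # The "weave" happens in exactly one place, after all workers finish.
    return sum(partials)

print(parallel_sum(list(range(1000))))  # 499500
```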
     
  18. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    When I bake lights and build the game I want it to be fast; I don't need to use the computer if leaving it alone means the build goes faster. It could also be an option when you build. Using 100% CPU on all cores is not dangerous if your computer has correctly mounted cooling etc.
     
  19. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    I waited for the CPU to spike; it happened around mid-build, so that's where the screenshot is from
     
  20. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    This is how it looks when baking lights, btw

     
  21. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    Ahh, I'd be curious to know what it hit about then. I would assume likely shaders and/or baked meshes
     
  22. Xype

    Xype

    Joined:
    Apr 10, 2017
    Posts:
    339
    Not dangerous as in your computer is going to explode; dangerous as in you can kill the entire thread. Any process using that thread then dies with it while it restarts. Now do that to core 0 by accident: hi, blue screen of death, how are you today?
     
  23. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    You mean you starve a thread? Then you have a bug in your threading code :D

    A blue screen of death only happens if you have faulty hardware or a faulty driver (on NT kernel systems).
     
  24. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    Yeah, I didn't have the Unity progress bar up, so I can't help you there, sorry
     
  25. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    Built again, and monitored more closely

    upload_2018-2-3_16-58-26.png
     
  26. 3pns

    3pns

    Joined:
    Sep 5, 2014
    Posts:
    4
    This is some very valuable data you have posted there, thank you.
    I think the most reasonable option for me is to go for the 7700K, as it's the best CPU compatible with my mobo. I was thinking about waiting to buy a new rig, but it seems like it's not worth it.
     
  27. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    That's only build times. There are other aspects too, like baking lights; that time is basically cut in half with 8 cores
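    A "cut in half with 8 cores" result is roughly what Amdahl's law predicts when only part of the bake parallelizes. The parallel fraction below is an illustrative guess chosen to match a 2x speedup on 8 cores, not a measured value:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the job and n is the core count. A 2x speedup on 8 cores
# implies p = 4/7 (~0.57), i.e. a large serial portion remains.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 2, 4, 8, 16):
    print(cores, round(amdahl_speedup(4 / 7, cores), 2))
```

    Note how doubling from 8 to 16 cores barely helps once the serial portion dominates, which matches the "more cores mostly help baking, not everything" theme in this thread.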
     
  28. Deleted User

    Deleted User

    Guest

    So in essence, besides light baking, neither going up on CPU cores nor on SSD speed will get you the speed boost you'd expect for builds and compiles, am I correct?
     
  29. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    Just go with the 3950X; you get great single-core perf and 16 cores. Can't go wrong. Plus it's more future-proof if Unity decides to offload their build steps to more cores
     
    Deleted User likes this.
  30. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    I haven't read through the thread lately but a fair amount of information here is no longer valid. AMD's third generation Zen-based processors have massive caches compared to the previous generations of Ryzen and Threadripper. Thanks to this compile jobs with them see massive performance improvements. Check the video below for more in-depth info.

    The Ryzen 3900X and 3950X have 64MB L3 cache. The Threadripper 3960X and 3970X have 128MB L3 cache.

     
    Deleted User and codegasm like this.
  31. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    It's pretty insane that the L3 cache on a modern CPU is far larger than an entire hard disk back in the late 80s / early 90s :D
     
    frosted likes this.
  32. orcinus

    orcinus

    Joined:
    May 7, 2013
    Posts:
    15
    I don't think more cores would help, since the actual multi-core load is only a tiny part of the job and is short.
    Not sure what the bottleneck is, because nothing on the 9900K machine ever gets pegged above 50%.
     
  33. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,847
    Have you picked up your 3950x yet? I'm curious how well it really does.
     
  34. MrArcher

    MrArcher

    Joined:
    Feb 27, 2014
    Posts:
    106
    Think he got banned, if you click his profile pic it says this user is unavailable.
     
  35. Deleted User

    Deleted User

    Guest

    I'll be getting the 3900x next year. I'll report if it improves performance.
     
    Joe-Censored likes this.
  36. Joe-Censored

    Joe-Censored

    Joined:
    Mar 26, 2013
    Posts:
    11,847
    Doh!
     
  37. I just ordered the parts for a new computer, looks like this:

    - Sliger SM580 case with perforated side panels for breathing
    - Gigabyte X570 I AORUS Pro WiFi MoBo
    - AMD Ryzen 9 3900X (later I can replace it with a 3950X once supply stabilizes and the price drops back to bearable levels)
    - Corsair SF750 SFX power supply (750 W, Platinum, fully modular)
    - G.Skill Trident Z Neo DDR4-3600, 2x16GB (the ITX MoBo only has two slots)
    - NZXT Kraken X62 CPU cooler (AIO, 280mm, top exhaust)
    - 2xNoctua NF-A14 PWM case fans (bottom intake)
    - Samsung 970 Evo Plus SSD - 1TB (system)
    - Intel 660p SSD - 2TB (data)
    - EVGA Geforce RTX 2080 TI Ftw3 Ultra

    Will see how it turns out.
     
  38. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,965
    Everything looks great to me.

    By the time this happens we might very well have the next generation of chips. Currently the only way to obtain one for a reasonable price is to buy a prebuilt from a company like iBUYPOWER which only charges slightly above the MSRP.
     
    Lurking-Ninja likes this.
  39. iamthwee

    iamthwee

    Joined:
    Nov 27, 2015
    Posts:
    2,149
    That's pretty much the same as my rig (thanks Ryiah, very happy now), except I have an Intel i9 chip because I was kind of wondering if I could virtualise macOS to work on iOS app dev, and I read somewhere that virtualisation was easier on Intel chips.

    Still haven't tried it though.

    P.S the 2080Ti is <3
     
    Lurking-Ninja likes this.
  40. SerkanSebastian

    SerkanSebastian

    Joined:
    Jul 18, 2013
    Posts:
    1
    I'm torn between this setup, waiting for the 4900X, and upgrading to Threadripper. Have you had the chance to test your setup yet?
     
  41. I ended up NOT upgrading to the 3950X, so basically I have it in its basic form. I realized I'm okay with it; this CPU's performance is plenty for my use case. It runs everything without a glitch.
    I usually don't "test" my computers; I don't care about synthetic performance, only real-world performance. And nothing has made it sweat so far, not even these days: it got a little hot here in Northern California for a couple of days, but the cooling is plenty for this system, and the semi-open case helps a lot, I guess. I also ended up not installing one of the Noctuas because of the plentiful airflow, and I'm using that space for the cables. (The case is a little tight for this config, but it ultimately fits. I had to bend the power supply holder a bit to install the video card, though :D but once everything is in there you can bend it back.)
     
    angrypenguin likes this.
  42. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    I love my 3950X but wouldn't buy it now; I would wait for Zen 3. What's nice is that AMD will let us X570 owners upgrade, so I will probably replace my 3950X when those release. But I don't think we should hope for more than 16 cores on AM4: it's only dual channel, plus they would compete with their own 3960X if they gave it 24 cores.

    It's also just fun seeing all those 32 threads at work

     
    Lex4art likes this.
  43. Armynator

    Armynator

    Joined:
    Feb 15, 2013
    Posts:
    66
    I quickly tested an i7-2600, an i7-6700k and a Threadripper 3960X for a few common tasks. (All running at stock speeds with fastest supported RAM from their product page)

    Tests were made in Unity 2020.2.0a15 with a small 2D URP project. (Windows 10 Pro, OpenGL 4.5, a few scripts, some externally compiled C# class libraries as imported DLLs and a few different simple sprite shaders)

    Tested things which benefit from multiple cores are:
    • Shader compilation
    • The CPU lightmapper
    • Texture compression (especially crunching)
    • Build time (IL2CPP)
    Most things didn't really scale with more than 4 cores however:
    • Asset import time (except texture compression)
    • Script compilation time
    • External DLL import/reload time
    • Build time (Mono)
    If you are compiling many shaders really often, more cores are a good idea. An initial build with shader compilation took the i7-2600 107 seconds, the TR 3960X only 43 seconds. Even the 3960X spiked to almost 90% CPU usage for a while.
    However, once the shaders were cached after the first build, the difference was a lot smaller: 20 seconds for the i7-2600, 15 for the i7-6700k and 13 for the TR 3960X. (Mono Build on Windows)


    For IL2CPP builds (with already cached shaders & low code stripping) things are a bit different. The i7s used all 4 cores/8 threads at 100% for the initial build, while the 3960X was at 25% total usage max. The i7-2600 took 207 seconds, the i7-6700k 109 seconds and the 3960X only 47 seconds. (This one almost looks like it was cached already.)
    After the first IL2CPP build was done, future builds were faster again. 76 seconds for the i7-2600, 58 seconds for the i7-6700k and 44 seconds for the TR 3960X. (Tested 3x and took the average, -/+ 2 seconds max difference for all of them)
    The CPU usage was really similar to Mono builds, hitting 100% for a few seconds in the beginning and going down to 15% on all cores except one. (This one core was at around 80% instead of 15% like the others)
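    Condensing the build times quoted above into speedup factors makes the pattern clearer: the cold, parallel phases scale well with cores, while cached builds barely move. A quick sketch using the numbers from this post:

```python
# Speedup of the TR 3960X over the i7-2600, using the build times
# (in seconds) measured above. Cold builds parallelize well; cached
# builds are dominated by serial work and barely scale.
times = {
    "shader compile (initial)": (107, 43),
    "Mono build (cached)":      (20, 13),
    "IL2CPP build (initial)":   (207, 47),
    "IL2CPP build (cached)":    (76, 44),
}

for task, (i7_2600, tr_3960x) in times.items():
    print(f"{task}: {i7_2600 / tr_3960x:.1f}x faster on the 3960X")
```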

    The i7s were pretty close to their expected single-threaded performance difference in most cases. In plain benchmarks the TR 3960X is about 10-15% faster than the i7-6700k in single- and dual-core, while the i7-6700k is 35-40% faster than the i7-2600. I could see about the same difference between them in Unity for DLL import times.
    Reimporting/overwriting an external .NET 4.5 class library (as a DLL) took 15 seconds with the i7-2600, 9 seconds with the i7-6700k (40% faster!) and 7-8 seconds (~15% faster) with the TR 3960X.
    According to the Windows 10 Task Manager, DLL imports are using only 1-2 threads most of the time. (1 second 100% spike at the end) So this task does not benefit from multiple cores at all.

    Similar result for script changes. Adding a simple Debug.Log line in a script took the editor 5 seconds to refresh with the i7-2600, 3 seconds with the i7-6700k and 2 seconds with the TR 3960X.

    Of course there are more things to test. How well do Shader Graph or VFX graph work with more cores for example? What about DOTS?
    But sadly I don't know that much about these tools yet :p

    In conclusion I'd say that more cores are pretty useless in most common, basic cases.
    Yes, IL2CPP benefits from it a bit. But usually you will be using Mono builds anyway for prototyping.
    Yes, the CPU lightmapper can take full advantage of a Threadripper, but the GPU lightmapper is even faster and hopefully replaces the CPU lightmapper soon.
    Yes, texture crunching takes full advantage of all cores as well, but are you really importing that many high-res textures daily?

    If you are using multiple Editor instances at once, or if you are an artist, permanently making changes to textures or shaders, a 3950X or even a Threadripper might be worth the money.

    If you are a solo developer mostly focused on a single, smaller project however, I'd say you should stick to something less expensive.

    At the end of the day all that really matters is your usecase I guess.
    Personally I'm working on 3 networked projects at once, with a Linux VM for web development running in the background, so a TR 3960X with quad channel memory was a good choice.
    If I were still working on smaller singleplayer games, however, I'd definitely choose an i9-10900K for its better single- and dual-core performance today.
     
    Last edited: Jun 29, 2020
    codegasm, Joe-Censored and SugoiDev like this.
  44. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    Has Unity even said that GPU light baking will ever work with bigger-than-demo-sized scenes?

    There is always Bakery, of course
     
  45. RobRab2000

    RobRab2000

    Joined:
    Nov 28, 2012
    Posts:
    29
    So I'm building to WebGL several times a day and it takes a bloody age (especially when I'm debugging and need to make development builds). I also need to build for Windows, so I'm doing regular platform switches (AssetDatabase v2 is a bloody hero!). In addition to this, I'm building about 2 GB of addressable asset bundles (around 300-400 individual bundles), again at least once every few days (but sometimes several times in one day). Lastly, it's not uncommon for me to have three or four separate instances of Unity (and just as many instances of Rider) open at the same time.

    I'm trying to decide between the 3900X, 3950X, or a Threadripper (of course I'm probably going to wait till Zen 3 regardless), and I'm wondering if the extra cores are going to benefit my workload. It seems like, because WebGL has to be built with IL2CPP, there may be a big benefit to more cores (also, more cores are yummy!)
     
    ickydime likes this.