
Unite Austin 2017 - Writing High Performance C# Scripts (C# Job System, new Entity Component System)

Discussion in 'General Discussion' started by Peter77, Oct 28, 2017.

  1. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    Dive further into the upcoming performance optimizations coming to Unity. This talk covers the continued development on delivering high performance C# scripts with a look at the upcoming C# Job System and newly revealed Entity Component System.

    Sign up for the Unity Technical Preview of the C# Job System:
    https://create.unity3d.com/jobsystem


    • 03:40 - Data Oriented Design Overview
    • 31:05 - Native Containers
    • 35:00 - What does a C# job look like?
    • 39:00 - IJobParallelFor
    • 41:40 - Express C# Job dependencies
    • 45:00 - Use C# Jobs in Entity Component System
    • 50:00 - What happens when you make a mistake in C# Job System
    • 55:15 - Simplest way to write a C# Job (IAutoComponentSystemJob)
    • 57:30 - More complex demo to write C# Job
    • 1:00:40 - C# Job Compiler
    • 1:04:35 - New HLSL style C# math library
    • 1:05:50 - When does all of this ship?
    • 1:07:30 - Q & A
      • 1:12:30 - Enemies without GameObject?!
      • 1:14:07 - What Unity Components can you access in a C# Job?
      • 1:14:58 - Integration with the Physics System
      • 1:17:15 - Deterministic Compilation
      • 1:18:44 - Debugging Job dependencies
      • 1:21:50 - Physics, will there be SphereCast, LineCast, Colliders, etc?
      • 1:23:53 - Is there anything that hinders the new system being C# rather than C++?
      • 1:26:49 - Why the new system and not improving MonoBehaviour?
      • 1:32:07 - Is there a new "First Pass" compilation for C# Job code?
      • 1:34:24 - UT introduced an "Archetype", which is a set of Components
      • 1:35:31 - UT exposed the "PlayerLoop"
      • 1:36:29 - How do you get stuff rendered in the new Entity Component System when there are no GameObjects anymore?
      • 1:38:32 - How to debug Entities?



    New and exciting technologies are coming to Unity in the form of Job System, C# Compute Compiler and a new Entity-Component System. Nordeus engineers provide a walkthrough of how they created the epic battle from the Unite Austin keynote and explore working with these new systems first hand.

     
    Last edited: Dec 10, 2017
  2. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I'm glad they chose to go with the best performance first and then figure out how to make it easy, rather than just doing all these half-assed compromises.

    It looks a bit overwhelming at first, but it's just a matter of getting your feet wet. I'll probably start by using it for any slow parts that bring me under 60 fps on the target device (assuming I'm aiming at that), then, as I get used to it, expand it out to the rest.

    From what I'm seeing, the overhead of the existing Unity component stuff, MonoBehaviour etc., is still slower than processing one job, so it's a no-brainer to use where you can, I guess.

    I already know that it's basically impossible to adapt my AI (which does a lot of state stuff with delegates) to this pattern, but that does not mean I can't use it. For example, I can continue doing my AI as it is close up, but in the distance I could be doing this vaguer automated behaviour.

    Any thoughts from others?
     
  3. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    I'm also finding it quite challenging to adapt this for AI, with tons of states and dependencies and whatnot. It will be an interesting exercise to figure out which parts, if any, can be replaced with the new system without sacrificing code maintainability or adding too much complexity.

    I do see where the new system would be highly applicable in my project, though mostly in "lower level systems". But that's fine by me!
     
  4. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's great: real old-school game programming performance boosts wrapped up for a new generation of game programmers.

    Hot Speed Tips: You can use arrays of small structs to process lots of tightly packed data fast on modern hardware.
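
    A minimal sketch of what that looks like in plain C# (nothing Job System specific, names are purely for illustration):

    Code (CSharp):
    // Small blittable struct: instances pack tightly into a contiguous array.
    struct Particle
    {
        public float x, y, z;    // position
        public float vx, vy, vz; // velocity
    }

    static void Integrate(Particle[] particles, float dt)
    {
        // Linear walk over contiguous memory: cache lines are used fully
        // and the hardware prefetcher can stay ahead of the loop.
        for (int i = 0; i < particles.Length; i++)
        {
            particles[i].x += particles[i].vx * dt;
            particles[i].y += particles[i].vy * dt;
            particles[i].z += particles[i].vz * dt;
        }
    }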
     
    AndrewGrayGames likes this.
  5. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,346
    Is there a text-based transcript somewhere? Also, is it already in the engine?
     
  6. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Q1: He mentions HLSL-style trigonometry precision... So will there be a future C# Job to GPU subsystem?

    Q2: Apparently it only has access to the Transform component, so what about Mesh, Texture, NavMesh and occlusion data?
     
    Last edited: Oct 28, 2017
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,346
    Definitely not. Check the actual talk; this is not what HLSL-style math is about.
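
    For reference, the HLSL-style bit is just the syntax of the new math library: shader-like vector types and functions that run on the CPU and play nicely with the job compiler's auto-vectorization. Rough sketch only; the type and namespace names are from the preview shown in the talk and may differ in what ships:

    Code (CSharp):
    using Unity.Mathematics; // preview math library (assumed namespace)

    struct Boid
    {
        public float3 position;
        public float3 velocity;
    }

    static float3 Seek(Boid boid, float3 target, float maxSpeed)
    {
        // Looks like shader code, but this is plain C# running on the CPU.
        float3 desired = math.normalize(target - boid.position) * maxSpeed;
        return desired - boid.velocity;
    }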
     
    dadude123 likes this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I have. What's interesting is that the code is all C#; some of it just has an IJob tag, so instead of being run through Mono it goes to the Job Compiler. For the current version the Job Compiler builds multi-threaded CPU code.

    In theory the Job Compiler could build GPU code (e.g. target CUDA, DX Compute, GPUOpen).

    The Job code is already set up to be batch based and parallel so there is the ideal future opportunity to allow for it to be compiled and run on a GPU.

    After all, modern GPUs are amazing at small, simple batch-based jobs, although I think bandwidth between CPU and GPU can be a problem.

    And if Unity are already using HLSL-style syntax and features, it would be crazy for them not to also be working on a CPU and GPU Job system.

    Also, in the big RTS demo it is mentioned that they are using a GPU animation system; surely if you were working on that you would be tempted to write a GPU Job System?

    I believe a lot of GPU Compute solutions have a C/C++ to GPU build path so in theory adding a C# Job > IL2CPP > GPU path should be possible.
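
    For anyone who hasn't watched the whole thing, a job from the preview looks roughly like this (API names as shown in the talk; details may change before release):

    Code (CSharp):
    using Unity.Collections; // NativeArray<T>, [ReadOnly]
    using Unity.Jobs;        // IJob, JobHandle
    using UnityEngine;

    struct AddVelocityJob : IJob
    {
        public NativeArray<Vector3> positions;
        [ReadOnly] public NativeArray<Vector3> velocities;
        public float deltaTime;

        // Runs on a worker thread; the safety system checks what it reads/writes.
        public void Execute()
        {
            for (int i = 0; i < positions.Length; i++)
                positions[i] += velocities[i] * deltaTime;
        }
    }

    // Scheduling from the main thread:
    //   var job = new AddVelocityJob { positions = pos, velocities = vel, deltaTime = Time.deltaTime };
    //   JobHandle handle = job.Schedule();
    //   handle.Complete(); // wait before reading the results back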
     
  9. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Theoretical Question:

    If the demo shown, which manages about 40,000 units at playable frame rates, were to run on both CPU and GPU, could it do way more?

    I'm assuming that every frame the game processes its updates on the CPU and then passes them to the GPU, so for ~16 ms the GPU is sitting there waiting for input.

    Now consider that a modern GPU can have 1,000 cores; although not as complex, they would be much more powerful for simple batch calculations than the 10-20 cores on a CPU.
     
  10. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,346
    Arowx, in theory we've colonized Alpha Centauri already. In practice we are not even close.

    "In theory" discussions give no practical benefits. The practical part that is available right now is the only one that matters. Talking about what is "possible" "in theory" is a waste of time and nothing more.

    And for the love of Cthulhu, do some GPU programming yourself so you can familiarize yourself with its limitations.
     
  11. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    It's great to see focus on performance. It's a bit frustrating to watch some of the direction they are going in though.

    One of the fundamental design flaws of ECS is that it forces the batching paradigm too high up the stack. You always have to make a trade-off here, but basically you want to avoid coding in this paradigm unless you actually need the performance gains it gives. Which means 90%+ of your game logic is going to be better off outside of their ECS system.

    Which is OK if you understand that. It's the selling of this as if it were a great higher-level abstraction that rubs me the wrong way, because it's not. They don't even try to address some of the better-known shortcomings of ECS, like dependency handling, and how it plays havoc with data at the database level and makes querying data generally a pain. I think the reason for that is that, as engine developers, those are not things they would ever really encounter. But most of them are well known by anyone who has tried to use ECS on a substantial game.
     
  12. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,211
    Yes, and getting an explanation and the accompanying examples in a format that isn't a video. Unfortunately my brain is the type that focuses on typos, grammatical errors, and presenters who are not trained speakers, making it very difficult to really understand after the first viewing, despite waiting till I was wide awake after a good night's sleep.
     
    Socrates and hippocoder like this.
  13. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Good discussion and test of the Unity ECS system and a more old school managed approach...
    https://forum.unity.com/threads/how...ops-in-unity-me-and-my-unorthodox-way.413419/

    It depends on the level of performance and the number of game objects you are dealing with; ECS is fine on modern hardware at scales from a handful up to hundreds of objects.

    If you need 1,000s to 10,000s of game objects then you need a more batch-based or multi-threaded system.

    Unity ECS and the Job System are systems that you can choose to use or not use depending on your project's needs.

    Hopefully the addition of the Job System will open up more Unity systems, e.g. Navigation, Physics, Occlusion, so you can roll your own system.

    PS: You mention databases; to my knowledge this is not a topic raised in most game development discussions, so what types of games tend to use DBs?
     
  14. ZJP

    ZJP

    Joined:
    Jan 22, 2010
    Posts:
    2,649
    This will not be easy to use without many good tutorials. :confused:
     
    Gametyme likes this.
  15. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    It's not intended to be used by beginners, though.
    It's a way to implement high-end optimizations, not a new way to do literally everything.

    I can only speak for myself here but I think the video gives a great overview of what will be possible, and is half a tutorial in itself already.
    For me at least it won't be hard to use, and for the few limited sub-systems where it actually makes sense to use this, it will work great.
     
    angrypenguin likes this.
  16. Zephus

    Zephus

    Joined:
    May 25, 2015
    Posts:
    356
    I couldn't get through this. This would've been 30 minutes shorter if they had gotten someone to present it who is actually capable of speaking in front of people.

    From what I understood this is something nearly nobody will use. It's just too much of a hassle to build your game around this system, but when you actually need that extra performance, you'll probably want to take a look at it.
     
  17. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,211
    Good documentation and tutorials should not be restricted to beginners.
     
  18. Sluggy

    Sluggy

    Joined:
    Nov 27, 2012
    Posts:
    852
    I can agree with that to a point. It's clearly not meant for hobbyists or rapid dev cycles. I doubt I'll find myself using it much since I've spent quite a lot of time building a library of code that is focused on rapid iteration of design and is not at all thread- or cache-friendly. I've tried the 'thousands of mobs' thing before and there were so many factors beyond mere programming limiting me that this wouldn't have helped at all.

    That being said I can see this being very useful for large-budget games to really push the boundaries of Unity. With some specialists that can focus on just these parts of the system they can really leverage a lot. Some smaller devs might be able to try some smaller but unique ideas that were impractical before. As well, Unity Team themselves can use it under the hood for some performance boosts that come to us for free.
     
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,346
    Is there a transcript or code samples? I don't want to sit through 2 hours of video.
     
  20. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I do have one quibble: load balancing...

    For instance, I write a game using the Jobs system and build it on my 8-core FX-8320 CPU, and get it running well with all the enemies, hordes, particles and effects I want at 60 Hz.

    Then someone runs it on a 2, 4, 6 or 16 core machine; what happens?

    Or, in my dynamic game, I take advantage of the Job system and, as in the RTS example, allow players to have archers; some players will opt for an army of nothing but fire archers (more particles and a larger damage radius), and two players might both use this strategy.

    How can I balance the load on the Job system, or will my game need to dynamically scale army size based on the user's CPU cores, and what impact will that have for multiplayer games?

    Or is there a way to do Job LOD'ing, where I can dynamically reduce the precision or logic applied to game objects based on the CPU and GPU* load?

    e.g. Level of Detail of Unit Navigation/Collision Avoidance

    *The flip side of this multi-threaded job system is the potential maximum load a system's GPU can render.
     
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Interesting article on the maximum bandwidth of CPUs depending on cache sizes...

    https://www.techpowerup.com/231268/...improvements-improveable-ccx-compromises?cp=4

    So does this mean that for best performance we should tailor a job's memory footprint to the processor's cache sizes? And if so, will Unity provide system/platform cache size data, or a management system that helps developers build to multiple target platforms with optimal job system settings?
     
  22. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,522
    I've heard of programmers doing exactly that when trying to get the most out of a single target platform. The examples were very specific cases, though - they knew the one, exact CPU model they were targeting.

    What you're talking about there would involve using different data sets and potentially different algorithms on a per-CPU basis. I'd say that the best way to make it multi-platform is the same way we use now - aim for the lowest common denominator, and any extra juice the others happen to get is a bonus for them.
     
    Ryiah likes this.
  23. xCyborg

    xCyborg

    Joined:
    Oct 4, 2010
    Posts:
    628
    This actually made perfect sense until Joachim introduced that bloody TransformAccessArray; it just ruined the whole workflow.

    Why not add a wrapper or a struct version of the Transform component to use with ComponentDataArray<T>? That way we can pass it to a normal IJob or IJobParallelFor like other primitive types and IComponentDatas.

    I hope they won't go with specialized IJob/IJobParallelFor* interfaces and different Execute and Schedule signatures for each component they might jobify in the future, otherwise we'll have a really confusing workflow and the Unity API will turn into a Frankenstein API in no time.

    I hope they change it before shipping, otherwise great work! Applied for the preview, can't wait to test the AutoComponentSystemJob to provide feedback.
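
    For comparison, the "normal" shape being asked for is the plain IJobParallelFor one: a single Execute(int index) that the scheduler calls in batches across worker threads (preview API, sketch only):

    Code (CSharp):
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    struct MovePositionsJob : IJobParallelFor
    {
        public NativeArray<Vector3> positions;
        [ReadOnly] public NativeArray<Vector3> velocities;
        public float deltaTime;

        // Called once per index, potentially from several worker threads at once.
        public void Execute(int index)
        {
            positions[index] += velocities[index] * deltaTime;
        }
    }

    // Schedule over N elements in batches of 64:
    //   JobHandle handle = job.Schedule(positions.Length, 64);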
     
  24. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Well it's not just about threading but also ensuring the data has sequential and linear access as I understand it. That's more efficient than just threading, which Unity could've done a while ago. At least that's how I understood the rationale behind the current setup.

    I'd have to actually code with it to see how it plays out and how complex it becomes.
     
    Ryiah and angrypenguin like this.
  25. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    No different than if you compared a PC running a single fast CPU to one with a single slow CPU, with some caveats.

    Let's assume Unity has a good batching and concurrent queue implementation.

    Then you have to consider you are operating in a shared environment. You can't plan on pushing all of the cores available. High performance concurrent apps pretty much require being the only app running to get the best performance. Because some other app with a different model can just destroy your thread pinning strategies and such, which will impact context switching and cpu caches and so on.

    On the server side where we generally run one app per box, you can scale threads almost linearly quite a ways, like 48 cores and more, without significant context switching or lock contention. But we usually tune those by using strategies that pin threads to workloads based on their io and cpu usage patterns.
     
    angrypenguin likes this.
  26. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK, I'll rephrase the question: how will my game know how busy the cores are, and therefore be able to manage the workload it issues at runtime?

    Will FPS be the only indicator, and would that be a bad one, since the problem could be GPU load?

    Is there a "Task Master" style Unity API that can tell me the state of the system, its cores and the GPU in a running game?
     
  27. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,522
    I've never dealt with that particular problem before (well designed and written single-threaded code hasn't failed me for my particular needs yet) but given that we can see individual core load in Windows' Task Manager, I assume there'd be some way to measure or estimate how busy each core is. Mind you, that's probably not the only metric that matters.

    On that note...
    Your FPS is a long way removed from the thing you're talking about measuring and bears no reliable relationship with the activities of an arbitrary core in a multi-core system. It's kind of like suggesting that we can check how much water is in a dam by turning on a tap in our house.
     
    Ryiah likes this.
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,346
    It is possible to poll the OS for CPU load (on Linux, Windows, Android). The thing is, you don't get to decide which core does which thing. That's the job of the operating system.

    If you need to measure CPU load and change workload based on it, and this is happening in unity project and in a video game, I'd say you have a design problem.

    Your game logic should be abstracted away from hardware and things like CPU and CPU load.
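
    For what it's worth, the polling itself is easy enough with plain .NET, but it only tells you about your own process, averaged over the sampling interval; it says nothing about what other applications are doing. Illustrative sketch for desktop platforms (names are just for the example):

    Code (CSharp):
    using System;
    using System.Diagnostics;

    // Rough own-process CPU utilization between two calls to Sample().
    class CpuSampler
    {
        TimeSpan lastCpu;
        DateTime lastWall;

        public CpuSampler()
        {
            lastCpu = Process.GetCurrentProcess().TotalProcessorTime;
            lastWall = DateTime.UtcNow;
        }

        // Returns 0..1, where 1 means every logical core fully busy with this process.
        public double Sample()
        {
            Process proc = Process.GetCurrentProcess();
            TimeSpan cpu = proc.TotalProcessorTime;
            DateTime now = DateTime.UtcNow;

            double cpuMs  = (cpu - lastCpu).TotalMilliseconds;
            double wallMs = (now - lastWall).TotalMilliseconds;

            lastCpu = cpu;
            lastWall = now;

            return wallMs <= 0 ? 0 : cpuMs / (wallMs * Environment.ProcessorCount);
        }
    }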
     
    Ryiah likes this.
  29. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,043
  30. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I don't think those things will be fine-grained enough for a game to work out what can be done in a frame...

    The Unity Job System also runs across those cores and uses up some time; you can see this in the profiler, so the data is available when developing. Will there be a similar system at runtime, where I can find out how much time I have left to hit a target frame rate?

    I imagine this Job Compiler can be used in two ways: to make massive multi-unit games like Ashes of the Singularity, or, a bit like Nvidia's PhysX APEX, to bring non-essential things like particle effects, leaves and NPCs into a game scene to boost its "ambiance".

    Why do I think of the original and digitally remastered Star Wars scenes when I think of this!

     
    Last edited: Nov 11, 2017
  31. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    The remastered scenes suck true ass though, AND Han fired first.
     
  32. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    How do you decide now how much work you do in a frame? How do you now account for systems with varying types of CPUs or GPUs?

    This is no different really. Unity is just giving you a system that spreads work efficiently over multiple cores, something it wasn't able to do before. It doesn't really change how you decide how far you can push a system. You do that using the same methodology that you do now. Only now you have one more thing that can potentially get ahead of other parts of the system. But we already have things like that we deal with now, this is just one more of those.
     
    angrypenguin likes this.
  33. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Martin_H likes this.
  34. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    The following talk goes into a few details on how to use the FrameTimingManager; it starts at 15:20.
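
    From memory, the usage shown is along these lines; check the docs, as the API needs platform support and may differ between versions:

    Code (CSharp):
    using UnityEngine;

    public class FrameTimingProbe : MonoBehaviour
    {
        readonly FrameTiming[] timings = new FrameTiming[1];

        void Update()
        {
            FrameTimingManager.CaptureFrameTimings();
            if (FrameTimingManager.GetLatestTimings(1, timings) > 0)
            {
                double cpuMs = timings[0].cpuFrameTime; // CPU frame time in ms
                double gpuMs = timings[0].gpuFrameTime; // GPU frame time in ms
                // Compare against your frame budget (e.g. 16.6 ms for 60 Hz)
                // to see whether you are CPU- or GPU-bound this frame.
            }
        }
    }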
     
    Last edited: Nov 12, 2017
  35. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  36. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    I've updated the first post with a new video from Unite Austin 2017, where Nordeus engineers provide a walkthrough of how they created the epic battle from the Unite Austin keynote and explore working with the Job System, C# Compute Compiler and the new Entity-Component System first hand.
     
    Martin_H likes this.
  37. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    You cannot do what you want how you want to do it.

    Sampling the current state of hardware only tells you what it's doing right now. It gives you no insight into other apps that are running. So you have no clue about the patterns of resource usage they have. You can't know what they will do in the future. You can't even know what your own app will be doing in the future at the granularity you need to, because that logic is tucked away in unity where you can't see it.

    So say you are app A, you measure core usage and see you have 50% free, so you assign some work. What you don't know is that app B assigned a crapload of work, but you didn't see it because it was in an IO cycle, or just due to the rather coarse granularity that the .NET-level measurements provide. The result is you cause a huge spike in resource usage.

    You can verify this yourself. Just set the affinity of the main Unity thread to a specific core, and go grab one of the tools that lets you measure CPU usage. You will see measurements that say the CPU is basically idle. Now try doing a bunch more work when you see that. It won't work, because what you aren't seeing is that over, say, a few seconds, the load averages to something completely different. And even then standard deviation comes into play.

    The only thing you can do is measure over a time interval large enough to account for an accurate standard deviation in resource usage. If that doesn't make sense read this:
    https://zedshaw.com/archive/programmers-need-to-learn-statistics-or-i-will-kill-them-all/

    Nobody really tries using a dynamic approach for this even though it is possible to some extent, because in the context of actual use cases it usually doesn't make much sense. I mean, what kind of feature would you have anyway where it could use, say, 6 cores, but the game would still run well on one? What would that even be, given the context of the job system? You are talking at such a hypothetical level that any discussion about actual approaches you could use isn't even possible, really.
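
    The measuring part is mechanically simple; the hard part is acting on it sensibly. Something like this (plain C#, purely illustrative) gives you a windowed mean and standard deviation to reason about:

    Code (CSharp):
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sliding-window mean / standard deviation of frame times (ms).
    class FrameTimeStats
    {
        readonly Queue<float> samples = new Queue<float>();
        readonly int windowSize;

        public FrameTimeStats(int windowSize) { this.windowSize = windowSize; }

        public void AddSample(float frameTimeMs)
        {
            samples.Enqueue(frameTimeMs);
            if (samples.Count > windowSize) samples.Dequeue();
        }

        public float Mean
        {
            get { return samples.Count > 0 ? samples.Average() : 0f; }
        }

        public float StdDev
        {
            get
            {
                if (samples.Count < 2) return 0f;
                float mean = Mean;
                float sumSq = samples.Sum(s => (s - mean) * (s - mean));
                return (float)Math.Sqrt(sumSq / (samples.Count - 1));
            }
        }
    }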
     
    dadude123 likes this.
  38. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I get what you're saying, but aren't device/platform game modes becoming a thing? Windows 10 has one now, and console platforms are, well, always in game mode.

    So we have platforms that can limit the cores/threads the OS uses for background tasks.

    Now, say I want my game to run a couple of different job systems. The ideal way to set this up would be to divide the available cores/threads between Unity and my jobs; then you have dedicated cache access per core, as we are not swapping jobs mid-frame.

    With a more granular system that monitors the cores/threads I can choose available 'free' cores/threads and load them as needed. Even leaving a core/thread free for OS activity.

    How could I manage the cores/threads available with just CPU activity levels?
     
  39. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    Isn't this what the job system and scheduler already do? It's trying to achieve 100% core utilization by scheduling jobs in a clever way, trying to avoid idle and giving cores something to do all the time.
     
  40. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Yes if you somehow limit what the rest of the system is doing you can count on having more resources available. I was really only addressing the ability to dynamically tune it at runtime.

    My guess is that any optimizations that are worth doing, are already done in the job system. Like the batching system would almost have to be abstracted out so various job systems feed into it, and it then organizes all of the work from all jobs in the most efficient way. I don't think you will really have to worry about any of it.
     
  41. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,803
    The biggest bottleneck I have is rendering. The second is physics, as each object will have a MeshCollider, though these are 100% static except for a CharacterController. Since the files I load are parsed and converted at runtime I cannot bake occlusion culling, and when I have baked these scenes with thousands of objects it actually bit into performance instead of enhancing it.

    I could use Culling Groups, as that can be done at runtime, but it is a bit tricky figuring out how to handle a bunch of spheres to turn objects off and on based on occluded visibility. Some objects may be very long and narrow; others, like long hallway floors, ceilings and walls, would not get covered suitably for occlusion by a sphere. Perhaps many spheres in some kind of grid would work, but then that is adding complexity.

    Perhaps that is a candidate for this Jobs System. Can the new RenderLoops stuff be handed off and gain performance? Or is DrawMeshInstanced a candidate, if you can round up all duplicates and run them through somehow? Any thoughts on how to use all these new systems, subsystems and the opening up of the engine to enhance performance on scenes with a massive number of objects?
     
    neginfinity likes this.
  42. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,522
    Unless you're doing really naive or really heavy stuff, or targeting very low power platforms, I expect that will be the case for most people most of the time. I think I've only had one case where I had to really optimise CPU-side stuff, and the goal of that project was directly "simulate as many people in a crowd as you can".
     
    dadude123 likes this.
  43. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,803
    I was going to write "For most of us the bottleneck is rendering". However, I decided I have no business speaking for others and rephrased it. Do you have anything pertinent to add?
     
  44. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    For colliders, what matters is what they will interact with. So basing it on distance from whatever they can interact with is the best approach, versus whether you can see what they are attached to.

    I've created runtime building systems that manage 100k+ dynamic objects using spatial hashing and runtime mesh combining to handle the rendering, and spatial hashing to enable/disable colliders based on distance from the player/NPCs, or whatever might interact with them.

    Even if you could use the job system to make the brute force approach work better, I doubt it would come anywhere near the above approaches.
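
    The core of the spatial hash is small; a simplified sketch (cell size tuned to your interaction radius):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Bucket objects by integer grid cell, then only test the cells around a
    // query point instead of every object in the scene.
    class SpatialHash<T>
    {
        readonly float cellSize;
        readonly Dictionary<Vector3Int, List<T>> cells = new Dictionary<Vector3Int, List<T>>();

        public SpatialHash(float cellSize) { this.cellSize = cellSize; }

        Vector3Int Cell(Vector3 p)
        {
            return new Vector3Int(
                Mathf.FloorToInt(p.x / cellSize),
                Mathf.FloorToInt(p.y / cellSize),
                Mathf.FloorToInt(p.z / cellSize));
        }

        public void Add(Vector3 position, T item)
        {
            Vector3Int key = Cell(position);
            List<T> list;
            if (!cells.TryGetValue(key, out list))
            {
                list = new List<T>();
                cells[key] = list;
            }
            list.Add(item);
        }

        // Everything in the 27 cells around the query point (a superset of the true neighbours).
        public IEnumerable<T> Nearby(Vector3 position)
        {
            Vector3Int c = Cell(position);
            for (int x = -1; x <= 1; x++)
            for (int y = -1; y <= 1; y++)
            for (int z = -1; z <= 1; z++)
            {
                List<T> list;
                if (cells.TryGetValue(new Vector3Int(c.x + x, c.y + y, c.z + z), out list))
                    foreach (T item in list) yield return item;
            }
        }
    }

    // Usage idea: enable colliders only for hash.Nearby(player.position) within the interaction radius.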
     
  45. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What can we do with all the processing power this system makes available? Most games do player input, some enemy AI, and then animation, physics, graphics and sound.

    Will we just see lots of Unity games doing more leaves and grass blowing in the wind on high-core systems, or will NPCs become deeper and more responsive, or will weather systems and deeper simulations become possible?

    e.g. fluid dynamics, object destruction, object creation

    What aspects of our games could benefit most from this multi-core bonanza?
     
    Last edited: Nov 14, 2017
  46. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    Perhaps the following talk gets close to this question...

    Low, medium, and high: Standard fare for GPU settings, but why not CPU? Today the power of the CPU on end users' machines can vary wildly. Typically, developers will define their CPU min-spec, implement the simulation and gameplay systems using that performance target, and call it a day. This leaves the many potentially available cores built into modern mainstream CPUs sitting idle on the sideline. In this talk, Intel shares which systems in Unity are CPU scalable. Learn how to easily determine the power of the CPU in your scripts when enabling and configuring those systems. Giving your players the best experience possible on all levels of hardware is the ultimate goal, and now it's easier than ever.

     
    Arowx likes this.
  47. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    What about thinking about this another way: which games make the best use of multi-core systems, and what do they do with them?

    So, for a game that benchmarks better with more cores, what features does it have, or turn on, with higher-capacity CPUs?

    I think Battlefield 4 was one of the games that started turning up features with its Levolution concept: big-scale triggerable destruction and multiplayer turbulent seas. DICE were often presenting on how to take advantage of all of a platform's CPU/GPU performance.



    What other games take advantage of more cores and multi-threaded rendering to boost their games?
     
  48. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Could the multi-threaded job system, with the right engine API access (mesh/texture/shader/collider), open the way to much larger game worlds that can be streamed into memory on demand on separate threads? And with transforms being multi-threaded, would that allow origin shifting within vast open-world spaces?
     
  49. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    6,465
    Why don't you sign up for the technical preview of the C# job system, give all your ideas a try and report your findings:
    https://create.unity3d.com/jobsystem
     
    dadude123 likes this.
  50. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    I have, but so far no feedback. I was hoping I could apply it to my Unity cube benchmark and really see some performance boosts.