
Will the job system improve networking and multiplayer speeds?

Discussion in 'NetCode for ECS' started by Arowx, Feb 15, 2018.

Thread Status:
Not open for further replies.
  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    So the job system is highly optimised to allow lots of units to move and Unity to do more things over more threads.

    Could networking benefit from the job system, so that updates to and from other players are less dependent on the main thread? Could they even run on separate threads/jobs?
     
    Flurgle likes this.
  2. Wigen

    Wigen

    Joined:
    Aug 31, 2013
    Posts:
    33
    You could receive the data and then pass it off to be processed in jobs, so I don't see why you couldn't.

    But that being said, the issue with "a lot of units" is that you still need to pass the same amount of data with or without threading, and that all comes down to network latency/packet size. If you have a lot of units, you would need to make your physics deterministic so you don't pass a ton of data, just mouse clicks/movements.
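    A rough sketch of that input-only idea (the struct and its layout are invented here purely for illustration, not taken from any networking library):

    Code (CSharp):
        using System.IO;

        // Hypothetical per-tick input command: with deterministic simulation,
        // this handful of bytes replaces streaming every unit's transform.
        public struct InputCommand
        {
            public uint Tick;     // simulation tick this input applies to
            public byte Buttons;  // bitmask of pressed buttons
            public short MoveX;   // quantized movement/cursor axes
            public short MoveY;

            public void WriteTo(BinaryWriter writer)
            {
                writer.Write(Tick);
                writer.Write(Buttons);
                writer.Write(MoveX);
                writer.Write(MoveY);
            }
        }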
     
  3. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I think UNET NetworkTransport is already jobified/multithreaded internally for sending/receiving msgs (@aabramychev might be able to confirm).

    But with the job system, you can jobify the gathering and applying of game state for networking, so there's definitely a speedup to be gained here.
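    For instance, a gather job could look something like this (the quantization scheme and all names are invented for illustration; NativeArray, IJobParallelFor and [ReadOnly] are the real Unity APIs):

    Code (CSharp):
        using Unity.Collections;
        using Unity.Jobs;
        using UnityEngine;

        // Quantize world positions into network-friendly fixed-point ints,
        // in parallel, off the main thread.
        struct GatherSnapshotJob : IJobParallelFor
        {
            [ReadOnly] public NativeArray<Vector3> positions;
            public NativeArray<int> quantizedX; // fixed point, 1/100 of a unit
            public NativeArray<int> quantizedZ;

            public void Execute(int i)
            {
                quantizedX[i] = (int)(positions[i].x * 100f);
                quantizedZ[i] = (int)(positions[i].z * 100f);
            }
        }

        // Main thread: var handle = job.Schedule(count, 64);
        // handle.Complete(); then write the quantized arrays into the packet.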
     
  4. angusmf

    angusmf

    Joined:
    Jan 19, 2015
    Posts:
    261
    Perhaps restating what PhilSA said above, but the job system and ECS could allow (context-specific) packet optimization and compression/decompression algorithms to be developed that wouldn't have been feasible before. So maybe less data will be passed after all.
     
  5. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Networking and serialization are just largely separate from what you do with the data once it's deserialized.

    Efficient serialization/compression and networking won't change at all. The best approaches there are well known already and concurrency is completely orthogonal to the core problems there.

    What you should really be focused on for networking/serialization is GC. Zero GC is possible there and if you are going to spend time optimizing, that is by far the biggest bang for the buck area.
     
    MechEthan and nxrighthere like this.
  6. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    FYI, you can get zero-GC networking/serialization in 2018 via DotNetty along with protobuf-net combined with ArrayPool. Zero GC as in no per-message GC; you still have to allocate the pools initially.
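    The core pattern looks roughly like this (the serializer and transport hooks are placeholders standing in for protobuf-net and DotNetty calls):

    Code (CSharp):
        using System;
        using System.Buffers;

        static class PooledSend
        {
            public static void Send<T>(T message,
                Func<T, byte[], int> serialize, // e.g. protobuf-net writing into the rented buffer
                Action<byte[], int> transmit,   // e.g. handing the bytes to DotNetty
                int maxSize = 1024)
            {
                // Rent from the shared pool: arrays are reused, so no per-message GC.
                byte[] buffer = ArrayPool<byte>.Shared.Rent(maxSize);
                try
                {
                    int written = serialize(message, buffer);
                    transmit(buffer, written);
                }
                finally
                {
                    ArrayPool<byte>.Shared.Return(buffer); // hand it back for reuse
                }
            }
        }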
     
  7. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    I agree with @snacktime. GC pressure is the biggest problem, and you must focus on efficient buffer management and make the code less allocatey while working on networking stuff. Splitting logic across multiple threads/tasks/jobs will not save your application when the GC gets angry and kills the performance.
     
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Wasn't there some mention of the new compiler technology working outside of the GC? In the original video I think they mentioned something about the GC and the new system.
     
  9. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Found it...
    From here https://forum.unity.com/threads/eta-on-c-job-system-and-new-ecs.512032/#post-3349526

    More info here:

     
  10. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    From my point of view, this job system and friends are reinventing the wheel in a way that makes it easier for Unity users to write code that runs in parallel. Personally, I don't see a reason why the job system is better than TPL, except that it works outside of Mono, which should be ditched anyway in favor of .NET Core. There's no quantum mechanics here; with this job system Unity simply gives you basic control over memory allocations, with a bunch of headaches that you will encounter in the development process. At the moment this system doesn't offer the flexibility which you have with TPL. It also involves many new issues and bugs, judging from the reports I'm reading. Whether or not to write GC-friendly code is up to you, and I believe you don't need such systems for that.
     
    Last edited: Dec 19, 2019
  11. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's not .NET ...
    It's compiled to C++.

    See post above yours.
     
  12. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    I know that this system is written in C++; can you be more specific about where I said that it's a part of the .NET runtime? Please read my post above again.
     
    Last edited: Feb 20, 2018
  13. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Then you understand that compiled, vectorised, batched C++ code is significantly faster and more performant than .NET.

    It just sounded like you were comparing the new job system with Tasks in .NET, or by Tasks were you referring to another system?
     
  14. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    To begin with, a language can't be slow or fast. What can be slow or fast is the runtime, the JIT compiler, the interpreter, and so on. The performance of a system depends on the implementation of the technology. Using C++ instead of C# doesn't make your code magically faster and more performant. I can write C# programs/libraries that will outperform similar solutions written in C++ because I know how to utilize the power of the .NET platform using modern solutions (Roslyn is one of them).

    Yes, I'm comparing it with the .NET TPL (not the Mono crap).

    Unity games/applications will still run on Mono/IL2CPP platforms, and the Boehm GC has not gone anywhere. In the right hands, the job system will solve only some of the performance issues. It's not a magic wand and it doesn't solve the core problems. Maybe Burst will change something; time will tell.
     
    Last edited: Mar 7, 2018
  15. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,964
    Right, but it's not .NET, so no point comparing an apple to a giraffe?
     
  16. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    As you wish.

    If someone else (like a woodpecker above, from my ignore list) thinks that comparing them is like comparing an apple to a giraffe:
     
    Last edited: Feb 20, 2018
  17. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    The overhead of C# vs compiled C++ is variable, but for most non-trivial examples C++ is faster (Benchmarks C# vs C++).
    Also, aren't they designed to do different jobs? Task is a multi-threaded asynchronous process system. The C# Job System in Unity is designed to run vectorised, data-driven, batch-based operations quickly (like transform operations or raycasting).

    One is general multi-threading, the other game-oriented multi-threading.
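    Schematically, the difference looks something like this (both snippets are sketches, not benchmarks):

    Code (CSharp):
        using System.Threading.Tasks;
        using Unity.Collections;
        using Unity.Jobs;

        // TPL: general-purpose asynchronous work on the .NET thread pool.
        static class TplStyle
        {
            public static Task RunWork() => Task.Run(() => { /* arbitrary work */ });
        }

        // C# Job System: a batch over blittable data in native memory,
        // scheduled across Unity's worker threads (and Burst-compilable).
        struct AddOneJob : IJob
        {
            public NativeArray<float> values;
            public void Execute()
            {
                for (int i = 0; i < values.Length; i++)
                    values[i] += 1f;
            }
        }
        // Usage: var handle = new AddOneJob { values = data }.Schedule();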
     
  18. Actually yes, and no. I agree it's not a magic wand; performance will still depend on how you write your code, obviously. And on top of that, multi-threading is not good for everything, and neither is data-oriented design.

    But don't forget that if you stay on the managed side and don't open up memory allocation in a transparent way to the C++ side (which Unity is written in) and to the C# side (which your scripts obviously are), you can't really save on the context switching. Which usually involves a bunch of unsafe code and boxing as well.

    I see their point: they're aiming at a lot of targets at once. They're trying to give us a safe way to write multi-threaded code, make it relatively easy to manage unmanaged memory, and compile our code to native code. This way it can run closer to the Unity core. And of course there is the enforcement of data-oriented design thinking.

    Doing this much stuff at once, I think, is not bad. But of course we'll see how they pull this off.
     
    Krajca and nxrighthere like this.
  19. IsaiahKelly

    IsaiahKelly

    Joined:
    Nov 11, 2012
    Posts:
    418
    I think Joachim's response to many of these complaints in another thread is worth reading, if you haven't already. Here's one quote from it:

     
    Alverik likes this.
  20. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    Synthetic benchmarks are one thing, and real-world applications are another. Benchmarks are good for gap-closing, but they don't give a complete vision of what is happening under the hood of your game/application.

    @LurkingNinjaDev That's right, thanks.

    @IsaiahKelly This quote tells me only that the developers have so far made some progress on concurrency. As for race conditions, I'm not afraid of them, because I have full control over the source code in my projects, with a powerful enough code analyzer that flags many other potential concurrency problems. Race conditions are just the tip of the iceberg in multi-threaded/asynchronous/parallel code.
     
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK, why don't we try something: how large a simple cube RTS battle can you run with Task vs the Job System?



    Old thread on this very topic -> https://forum.unity.com/threads/large-scale-rts-battles.196401/

    That covers the basics of how to optimise this type of thing above the standard rigidbody/terrain approach.

    If I throw together a quick test project, or dig out the one I used here, we'd have a standard test platform that we can use to write Task vs C# Job System versions.

    Note the job system has been used to do this:



    Could you achieve this with .NET multi-threading?
     
  22. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    I'll do a couple of tests and come back here with the results.
     
  23. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Or we could use my Simple RTS game as the test-bed with a larger scene and more troops and tanks by default?

    It's a tiny step up from cubes - https://arowx.itch.io/simple-rts
     
  24. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Comparing the Task helper functions to anything, really, is wrong. That's the lowest common denominator that hardly anyone uses for high-performance stuff. If you are going to compare something you are creating today, compare it to the best known current approaches.

    Assuming the challenge is possible, i.e. that the APIs needed work outside of a job context, this is basically how I would approach it.

    If working from scratch, most likely you would use a single-writer design. You would have long-lived threads, most likely using spinwait, receiving batches of transforms (or some equivalent) to update. So you just map out the work to those threads and do the work there. And by map out I mean just signal them to start working on the existing data, it being single-writer: you aren't actually passing anything.

    That won't be as efficient as a ring buffer. I'd most likely use LMAX Disruptor if pushing enough data to matter.

    How would that compare to the job system? Far better than some naive system using async/await or Parallel.For.
    C++ would be marginally better, but not by a large amount. But that assumes Unity APIs designed to work in this context.
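    A bare-bones sketch of that kind of long-lived, spin-waiting worker (the batch "work" here is just a placeholder):

    Code (CSharp):
        using System.Threading;

        class SpinWorker
        {
            private readonly float[] _data; // written only by the single writer thread
            private int _workReady;         // 0 = idle, 1 = batch ready

            public SpinWorker(float[] data)
            {
                _data = data;
                new Thread(Loop) { IsBackground = true }.Start();
            }

            // The single writer fills _data in place, then signals; nothing is passed.
            public void Signal() => Volatile.Write(ref _workReady, 1);
            public bool IsIdle => Volatile.Read(ref _workReady) == 0;

            private void Loop()
            {
                var spin = new SpinWait();
                while (true)
                {
                    while (Volatile.Read(ref _workReady) == 0)
                        spin.SpinOnce();
                    spin.Reset();

                    for (int i = 0; i < _data.Length; i++)
                        _data[i] += 1f; // placeholder for the real transform update

                    Volatile.Write(ref _workReady, 0); // done, back to idle
                }
            }
        }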
     
  25. IsaiahKelly

    IsaiahKelly

    Joined:
    Nov 11, 2012
    Posts:
    418
    @nxrighthere Did you actually read the whole post? He addresses memory allocation too. I am by no means an expert on the subject, but the point of sharing that was to hopefully help explain why Unity is "reinventing the wheel". It's great if you don't need any of this, but Unity is not used by just you, and I think you might be missing the whole point here or philosophy behind it.
     
  26. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    And what is the philosophy behind it? Leave core problems for ages, create solutions to get around some of them? Move away from OOP in favor of DOD? Seriously? Don't waste my time please.
     
    Last edited: Feb 22, 2018
    Deleted User likes this.
  27. Necromantic

    Necromantic

    Joined:
    Feb 11, 2013
    Posts:
    116
    Yes, Joachim said they are working on a special compiler for the Job System that is designed to better optimize engine/math-related tasks.
    Timecoded Youtube link for the compiler info.


    The new ECS itself, if done and used right, can already improve many things compared to the classic Component approach Unity has used so far.

    It can be a paradigm shift. It will be interesting to see how people and workflows adapt to it.
     
    Last edited: Feb 22, 2018
  28. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Alverik likes this.
  29. IsaiahKelly

    IsaiahKelly

    Joined:
    Nov 11, 2012
    Posts:
    418
    Sorry, I did not realize you had such strong prejudices. I mistakenly thought you would be interested in learning more, but I'm afraid this has turned out to be a waste of time for both of us. Apologies.
     
    Last edited: Feb 22, 2018
    ippdev and Alverik like this.
  30. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    It'll take a long time before the job system and friends become mature enough to be ready for use in real games/applications. At the moment, it's still at an early development stage and there's not even documentation for it. And yes, this system is not a thing I want to learn. I never liked data-oriented design, especially in the context of game development.

    In Unity, there are many core problems that need the attention of the tech developers. But over, and over, and over again, instead of focusing on the things that matter most and that the largest number of Unity's users would benefit from, they are developing such Frankenstein systems. What do you have as a result? One obsolete scripting backend/runtime in the dawn of modern cross-platform .NET solutions, and a garbage collector from the stone age.
     
    Last edited: Dec 19, 2019
  31. Well, not to defend Unity over anything, they've made some questionable decisions, but I also think you're wrong. DoD is substantial in the majority of game development. It's crucial to feed the CPU efficiently.
    If you don't like it, that's your problem. I'm curious how they pull it off.

    And I think you're not being fair at all. Just take a look at the Experimental Scripting Previews forum. They're working on a lot of "modern cross-platform .NET" features as well. Why do you think we can't have both?

    There are a lot of engine choices out there; I really don't understand why it's good to slam Unity over these things.
     
    Alverik and angusmf like this.
  32. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    But it's not necessary to push hardware needs up into higher-level abstractions. That's my main beef with ECS. There are known ways to keep a more object-oriented approach at the high level and still feed data to the CPU in a cache-friendly way.

    Now that beef is primarily at the level of looking for the 'best' abstractions for concurrency and performance. ECS works fine, but it's definitely not at the leading edge of design in this area.
     
    nxrighthere likes this.
  33. Yeah, I know. It's not the newest idea, but it's necessary if someone wants to use Unity for any decent high-performance application.

    I don't hate OOP; my day job is as a Java business developer, so I know what we're talking about. I also first learned to code in C64 BASIC and assembly, so I've been around for a long time.
    Which means I learned coding in a non-OOP way: I had to count the _bits_ I put into a sidescroller demo in C64 assembly. I had to be careful about how many bytes a thing took in 64K of memory.
    Feel free to sketch up a system which is OOP, does not translate into a horrible amount of virtual-table usage, and feeds the CPU data efficiently with classes and objects.
    So please, enlighten me: how do you propose to have an OOP system which translates to a tight, low-level system, which can feed the CPU efficiently and can be programmed in an OOP way at the high levels, while also allowing you to write multi-threaded pieces without ANY chance of creating problems out of that? Also, all of this should support the latest .NET libraries and all of the platforms Unity supports. I'm really curious.
     
    Alverik likes this.
  34. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    Yes, they are working on it, I know. I also know it took them 7 years to mark one of the most important things as planned. I hope we will not grow old by the time they bring it into the engine out of experimental status.

    The primary game engine I work with is Xenko, not Unity.

    @Arowx After some tests, I started creating a demo for you, in order to demonstrate how asynchronous tasks with an improved scheduler can be used for such massive-scale simulations with more than fairly good frame rates.
     
    Last edited: Feb 23, 2018
    zhuxianzhi likes this.
  35. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356

    I'm not advocating for a strict OOP design; I'm coming from a more practical angle. I've done half a dozen ECS implementations over the years, and strict implementations are problematic; the problems stem from batching being forced into higher-level abstractions. Systems might need to deal with data that way, but it makes things like querying components and managing component dependencies a real pain.

    So what I would be looking for is to move batching lower by making it smarter. That's been the current trend in this area. Ring buffers work pretty well here and have been adapted to scenarios where queues were once used. Sometimes you need a bit of abstraction over ring buffers to make them work correctly with certain higher-level abstractions, like reserving slots in the ring so you can process data in a pipeline before putting it back on the ring.

    A simple example of how ring buffers might be used with Unity: you have a scoped config where you say which APIs you want to use. Unity allocates the buffers (native buffers, of course) accordingly, and then the various APIs read from the ring. Right there it beats the performance of NativeArray and friends, and IMO makes for a more flexible base to build on. While you would need abstractions for pulling data off the rings, you can hide most of the low-level memory management entirely.

    Pair that with a high-level design based on single writer to get rid of fencing.
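    For the curious, the core of a single-producer/single-consumer ring is tiny. A bare-bones sketch (LMAX Disruptor adds sequencing, batching and cache-line padding on top of this idea):

    Code (CSharp):
        using System.Threading;

        class SpscRing<T>
        {
            private readonly T[] _items;
            private readonly int _mask;
            private long _head; // next slot to read  (owned by the consumer)
            private long _tail; // next slot to write (owned by the producer)

            public SpscRing(int capacityPowerOfTwo)
            {
                _items = new T[capacityPowerOfTwo];
                _mask = capacityPowerOfTwo - 1;
            }

            public bool TryWrite(T item)
            {
                long tail = Volatile.Read(ref _tail);
                if (tail - Volatile.Read(ref _head) == _items.Length)
                    return false; // full
                _items[(int)(tail & _mask)] = item;
                Volatile.Write(ref _tail, tail + 1);
                return true;
            }

            public bool TryRead(out T item)
            {
                long head = Volatile.Read(ref _head);
                if (head == Volatile.Read(ref _tail))
                {
                    item = default(T);
                    return false; // empty
                }
                item = _items[(int)(head & _mask)];
                Volatile.Write(ref _head, head + 1);
                return true;
            }
        }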
     
  36. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194

    You might have a good point: this developer found that a process running via Parallel was faster than the same process running in the Job System -> https://forum.unity.com/threads/job-system-not-as-fast-as-mine-why-not.518374/

    Note that the UT developers mentioned that 18.1's Job System does not have the Burst compiler (due in 18.2+) that will provide the speed boost the presentations have shown.

    Tip: benchmarking or challenges are a good idea!
     
    Alverik likes this.
  37. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    Currently, I'm creating the demo not in Unity but in another engine, which is very well integrated with the TPL. The demo will be compiled with the .NET Core SDK with concurrent Server GC enabled.
     
    Last edited: Mar 10, 2018
  38. Necromantic

    Necromantic

    Joined:
    Feb 11, 2013
    Posts:
    116
    I did not say that ECS is something new. But if it's done properly and you want to handle it correctly, you have to use a different approach than what most people are used to with Unity, or with a lot of other programming paradigms in general.
     
  39. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    @Arowx So, as we remember, Joachim showed Unity's boids simulation with 20,000 agents at ~45 FPS on an 8-core CPU using the job system. Here's my simulation with 20,000 agents at ~67 FPS on a 4-core CPU using the TPL with some neat improvements.



    Each agent follows this logic (a rough skeleton of the update loop is sketched below):
    1. Move and explore in any direction, but stay within the radius of a specified area
    2. Avoid obstacles which prevent movement
    3. Seek for the swarm to join
    4. Match the velocity of other members of the swarm
    5. Stay in a peaceful location after exploration

    It took me about a week to create this demo, but as you can see it was worth it.
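    For reference, the skeleton of the update loop looks roughly like this (types and names are simplified stand-ins for my actual code):

    Code (CSharp):
        using System.Threading.Tasks;

        struct Agent { public float X, Y, VX, VY; }

        static class Swarm
        {
            // One parallel pass per frame: read last frame's state, write the
            // next frame into a second buffer so agents never race each other.
            public static void Step(Agent[] current, Agent[] next, float dt)
            {
                Parallel.For(0, current.Length, i =>
                {
                    Agent a = current[i];
                    // ...apply rules 1-5 above: explore, avoid, seek, match, settle...
                    a.X += a.VX * dt;
                    a.Y += a.VY * dt;
                    next[i] = a;
                });
            }
        }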
     
    Last edited: Mar 6, 2018
  40. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Cool work, and the first step in comparing the two systems, as we need one simulation implemented well using both approaches.

    So can you convert it to the new job system, or give others access to the code so a Job System version can be created and compared?

    A potential benefit of doing this is that other developers can help improve your version and ensure the Job System version is as optimal as possible.
     
  41. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    When the release version with Burst is available, I'll port this demo to Unity.
     
  42. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    Great. Would love to see this benchmark comparing apples to apples by someone who is not Unity.


    Our experience is that with our approach (DoD-style code + Burst/SIMD + Component System + C# Job System) we get >100x speedups compared to traditional object-oriented code.

    I think at the end of the day it comes down to the naked cold hard truth of how fast it runs. It doesn't even matter how well the demos we at Unity make run; the only thing that matters is that guys like you can get unmatched performance that you couldn't reach before. Only doing it yourself will make it real.

    Btw. our latest boid simulation demo running on ECS is now taking 0.9 ms on a MacBook Pro with 25,000 boids...

    Using Burst + Entity Component System + C# jobs:
    [Attached image: upload_2018-3-6_16-26-5.png]
     
  43. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    @Joachim_Ante No doubt, this system is shaping up to be a powerful tool in the right hands. But you guys are developing such systems and talking about high performance while the engine today offers only one obsolete scripting backend paired with the Boehm GC. I've been working with Unity for about 9 years; the first time I touched it was in the days when the Island demo became available (in Unity 2, if I remember correctly). Unity has changed greatly since those times, but games/applications are still utilizing ancient core technologies. I know that you, Joachim, are one of Unity's core developers, and as a person who after all these years migrated to another engine (because of these and other problems), I can tell you that this system is not the thing I've been waiting on for a very long time.

    ECS is a good step forward, and I know that support for .NET Standard is under development. I guess the garbage collector upgrade is also under development, but I can't find any information about this. From my point of view, the thing that will really make the single biggest impact is adding support for .NET Core with concurrent Workstation/Server GC. The time of Mono is almost over; Unity Technologies is a partner of the .NET Foundation, and you should know this.
     
    Last edited: Mar 13, 2018
    dadude123 and Deleted User like this.
  44. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    While garbage collector upgrades are definitely nice to have, I've already been working without requiring any in-game GC allocs (except during initialization) for a while; let's call this "without GC" for readability. Even with a better garbage collector, I'd still be coding without GC, because it's always going to be better than with-GC, no matter how good the garbage collector is.

    Someone can correct me if I'm wrong, but I am under the impression that the intended ECS workflow is without GC too (something about pre-allocating chunks of 64 KB for every entity group? I can't remember the details). Basically, even if you keep instantiating entities non-stop, it never does any memory allocations. It's like a built-in pooling system of sorts. So I don't think a new garbage collector would make a difference with the ECS. Again, I'm super unsure about this, so I'd appreciate clarifications.
     
    Last edited: Mar 7, 2018
    Alverik and Lurking-Ninja like this.
  45. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    We can't always write GC-free code. The GC will be triggered all the time, especially in complex projects with different assets/dependencies/libraries. Even if you are writing GC-friendly code, noticeable pauses, stalls and jitter will still be among the biggest problems in Unity (this is what I experienced in my projects, at least). ECS might be made to work optimally in a garbage-collected environment, but it doesn't solve these problems; they must be resolved at the CLR level.
     
    Last edited: Dec 19, 2019
    ProtonOne likes this.
  46. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    I agree that we need a better garbage collector, which is why one is being worked on by the Platform Foundation team in Unity.

    That said, based on the type of games our users are building with Unity (for example, 90 FPS VR games), the use of a garbage collector while the main game loop is running is just limiting if you are trying to make hard guarantees about framerate. The MS GC does not make hard guarantees about keeping GC stalls to less than 1 ms, and there is no GC in any language that makes such guarantees.

    That's why we are very much aiming in a direction where loading assets/scenes, instantiating entities, etc. can all be done without causing any GC allocations.

    There might be all kinds of reasons why in specific cases you want to go with traditional class/GC approaches, but ultimately the most performant solution is that the majority of Unity games just never allocate GC memory while the game is running.

    So we very much want to enable that.
     
  47. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    See Azul Zing. It doesn't make strict guarantees per se, but in practice it never hits anywhere near 1 ms on the loads a game handles.
     
    nxrighthere likes this.
  48. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    567
    Also, here's an article about Microsoft's concurrent Workstation/Server GC. The conclusion I came to after working on complex projects is there. A pauseless GC like Azul Zing is also mentioned there, in the comments.

    Today's .NET Core with CoreCLR is a mature and powerful cross-platform solution for creating high-performance games/applications. I have had only positive experiences working with its ecosystem, which I can't say about Mono.
     
    Last edited: Mar 9, 2018
    MadeFromPolygons likes this.
  49. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    I can't completely blame Unity for taking the route of just avoiding the .NET runtime, for one reason: other platforms. If we were just talking about PC, I'd argue strongly against that approach.

    The state of garbage collectors is not quite as bad as people think, IMO. We don't really know how good the modern .NET collectors are compared to Java's, because the MS approach of making it a black box is still with us. We can take educated guesses by comparing how they do with default settings, but bottom line, until they expose all the knobs, we won't know.

    It's not terribly difficult to get a well-tuned app in Java doing similar work to a game engine down to sub-4 ms pauses every few minutes, with longer 15-20 ms pauses on the order of every 15 minutes or so. Tuning makes a big difference.

    So if we assume the .NET collectors are themselves basically on par, I think it's fairly safe to assume that the ease of managing memory via custom value types would make .NET do better than the JVM, or at least as good with less effort.
     
    MadeFromPolygons likes this.
  50. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,926
    I'd argue that 4 ms pauses every few minutes are a pretty big deal. At 144 FPS, which is what I'm aiming for in the games I make (and what I hope will become the new standard soon), it results in a very noticeable stutter.

    Besides, indirectly, the GC-less ECS workflow makes you write code that is probably way more efficient than anything people would write in a way that required GC (using lists of classes instead of structures of arrays contiguous in memory, not pooling stuff, etc.). I really think there are only advantages to no-GC approaches.
     