
Any progress or development of DOTS to GPU compute?

Discussion in 'Entity Component System' started by Arowx, Jun 10, 2021.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    DOTS is amazing but is limited to the computing power of the user's CPU cores...

    Sitting next to that CPU will be a GPU with far more computing power, on the order of 10 to 30 times as much (an AMD Ryzen 9 5950X delivers ~1 TFLOP, while a high-end GPU delivers 10-30 TFLOPS).

    Imagine a Mega City demo that could simulate a living, breathing city with factions battling within it.

    So any progress or development of DOTS to GPU compute?

    PS: Think of how amazing it would be to be able to write code that works on the CPU and GPU within a system that can balance the utilisation of both for maximum effect.
     
    bb8_1 likes this.
  2. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    502
    You can use compute shaders all you want already. The cost is in transferring data between CPU and GPU.

    I'm not sure what you are imagining. Perhaps some kind of new language that can be compiled to both C# and Cg/HLSL and used by the job system to schedule work on the CPU and GPU, somehow automatically deciding when it's a good decision to utilise the GPU and pay the transfer cost?

    I wouldn't hold your breath waiting for this one if I were you... :) But DOTS is just a technology stack that adds to your toolbox; you still have compute shaders available to you now.
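
    For anyone curious, a minimal sketch of what dispatching a compute shader from C# looks like today. The kernel name "CSMain" and buffer name "_Results" are placeholders for whatever your .compute file declares; the GetData call at the end is the synchronous readback that pays the CPU/GPU transfer cost:

    Code (CSharp):

    using UnityEngine;

    public class ComputeDispatcher : MonoBehaviour
    {
        public ComputeShader shader;      // assign a .compute asset in the inspector
        ComputeBuffer results;

        void Start()
        {
            results = new ComputeBuffer(1024, sizeof(float));

            int kernel = shader.FindKernel("CSMain");       // kernel name is a placeholder
            shader.SetBuffer(kernel, "_Results", results);  // buffer name is a placeholder
            shader.Dispatch(kernel, 1024 / 64, 1, 1);       // assumes [numthreads(64,1,1)]

            var data = new float[1024];
            results.GetData(data);   // synchronous readback: this stall is the transfer cost
        }

        void OnDestroy() => results.Release();
    }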
     
  3. Mortuus17

    Mortuus17

    Joined:
    Jan 6, 2020
    Posts:
    105
    In my personal opinion, that isn't necessary.

    First off, you can invoke compute shaders from Unity already.
    Secondly, most games today are GPU-bound, whereas they often utilize only very few CPU threads... Offloading a ton of work to the GPU is thus in many, many cases worse for performance. Unless you can max out all threads of a system, there is no use in even thinking about it; and if you do need to do it, you already have the tools at hand.
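
    One of those tools, for what it's worth: AsyncGPUReadback lets you pull compute results back to the CPU without stalling the main thread. A minimal sketch, assuming resultsBuffer has already been filled by a compute shader elsewhere (the callback typically arrives a few frames later):

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    public class ReadbackExample : MonoBehaviour
    {
        public ComputeBuffer resultsBuffer;   // assumed to be filled by a compute shader elsewhere

        void RequestResults()
        {
            // Non-blocking: the callback fires once the GPU -> CPU copy has finished.
            AsyncGPUReadback.Request(resultsBuffer, request =>
            {
                if (request.hasError) { Debug.LogError("GPU readback failed"); return; }
                var data = request.GetData<float>();   // NativeArray<float>, valid inside the callback
                Debug.Log($"First value: {data[0]}");
            });
        }
    }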
     
    Nyanpas likes this.
  4. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    406
    Herein lies the real issue. It's so weird that so many applications out there, even today, are single-threaded, completely disregarding the major advances in new CPU technology. I wish I knew why there is such a massive gap between software and hardware now.
     
  5. Because writing multi-threaded code without advanced support is hard. This is why the Job System is awesome. It makes writing multi-threaded code incredibly safe and easy. Well, easier.

    And engineers and companies are lazy. They do the least amount of work they can get away with.
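
    As a rough illustration of how low the barrier is now, a minimal Burst-compiled parallel job (the job and its data are made up for the example):

    Code (CSharp):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    [BurstCompile]
    struct AddJob : IJobParallelFor
    {
        [ReadOnly] public NativeArray<float> a;
        [ReadOnly] public NativeArray<float> b;
        public NativeArray<float> result;

        public void Execute(int index) => result[index] = a[index] + b[index];
    }

    public class JobExample : MonoBehaviour
    {
        void Start()
        {
            var a = new NativeArray<float>(100000, Allocator.TempJob);
            var b = new NativeArray<float>(100000, Allocator.TempJob);
            var result = new NativeArray<float>(100000, Allocator.TempJob);

            // Schedule across worker threads in batches of 64; the safety system
            // flags race conditions instead of leaving you to debug them by hand.
            new AddJob { a = a, b = b, result = result }
                .Schedule(result.Length, 64)
                .Complete();

            a.Dispose(); b.Dispose(); result.Dispose();
        }
    }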
     
    Nyanpas likes this.
  6. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    406
    C#'s own Thread class is good too, IMHO. I've worked a lot with that in the past.
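
    For comparison, a raw sketch without the job system's safety net: plain .NET threading works in Unity, but no Unity API calls (apart from a few thread-safe ones like Debug.Log) are allowed off the main thread:

    Code (CSharp):

    using System.Threading;
    using UnityEngine;

    public class RawThreadExample : MonoBehaviour
    {
        void Start()
        {
            var worker = new Thread(() =>
            {
                long sum = 0;
                for (int i = 0; i < 1_000_000; i++) sum += i;
                Debug.Log($"Background sum: {sum}");   // Debug.Log happens to be thread-safe
            });
            worker.IsBackground = true;   // don't keep the process alive on quit
            worker.Start();
        }
    }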
     
  7. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Yes, but then I need to know at least two programming languages and APIs, the differences between CPU and GPU multithreading, and the code/platform techniques to combine the two.

    What if I could just write DOTS code, then tick a box or add a [GPU] tag to allow it to run on the GPU?

    Hasn't the transfer cost been reduced by Resizable BAR, which allows more than 256 MB of CPU-to-GPU memory access, and by the arrival of mobile hardware with shared CPU/GPU memory?
     
  8. Guedez

    Guedez

    Joined:
    Jun 1, 2012
    Posts:
    827
    I don't see it being useful outside of very niche cases, since it takes 1~4 frames to get data back out of the GPU. The exception is when the job result is intended to be consumed by the GPU itself, like the positions of a bunch of instanced meshes.

    EDIT: Now, about the onboard GPU: I am unsure how they work, but as far as I am aware, they not only sit on the CPU package (thus no cost of GPU <-> CPU memory transfer), they are also mostly unused when you have a dedicated one. I could see some sort of advancement that would let you run Burst code on the onboard GPU as if it were some extra CPU cores. The last time I asked about such technology, I remember the answer being mostly that the proper drivers to use the GPU that way don't exist.
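
    That "result stays on the GPU" case looks roughly like this: a compute shader writes per-instance positions into a buffer, and the material reads the same buffer when drawing, so nothing ever crosses back to the CPU. The kernel name CSMain and the _Positions property are placeholders, and the material's shader is assumed to read that buffer itself:

    Code (CSharp):

    using UnityEngine;

    public class GpuDrivenInstances : MonoBehaviour
    {
        public ComputeShader moveShader;   // assumed to write one float3 position per instance
        public Mesh mesh;
        public Material material;          // its shader is assumed to read _Positions

        const int Count = 10000;
        ComputeBuffer positions;
        ComputeBuffer args;

        void Start()
        {
            positions = new ComputeBuffer(Count, sizeof(float) * 3);
            args = new ComputeBuffer(1, 5 * sizeof(uint), ComputeBufferType.IndirectArguments);
            args.SetData(new uint[] { mesh.GetIndexCount(0), Count, 0, 0, 0 });

            material.SetBuffer("_Positions", positions);   // placeholder property name
        }

        void Update()
        {
            int kernel = moveShader.FindKernel("CSMain");  // placeholder kernel name
            moveShader.SetBuffer(kernel, "_Positions", positions);
            moveShader.Dispatch(kernel, (Count + 63) / 64, 1, 1);   // assumes [numthreads(64,1,1)]

            // Positions never leave the GPU: the compute result is consumed directly by rendering.
            Graphics.DrawMeshInstancedIndirect(mesh, 0, material,
                new Bounds(Vector3.zero, Vector3.one * 1000f), args);
        }

        void OnDestroy() { positions.Release(); args.Release(); }
    }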
     
  9. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    406
    Pathfinding is not something you need to do every frame; the same goes for other kinds of locational queries, as well as AI, which can be timed to run around once per 200 ms or more. However, the GPU is already being hammered by people's grand visions of imsims and open worlds, so it is best left for rendering in most cases.
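
    A bare-bones version of that timing pattern on the CPU side; the ThinkJob body is just a stand-in for real AI/pathfinding work, scheduled every ~200 ms instead of every frame:

    Code (CSharp):

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEngine;

    [BurstCompile]
    struct ThinkJob : IJobParallelFor
    {
        public NativeArray<int> decisions;                    // placeholder output per agent
        public void Execute(int i) => decisions[i] = i % 3;   // stand-in for real scoring logic
    }

    public class AiTicker : MonoBehaviour
    {
        const float Interval = 0.2f;   // ~200 ms between AI updates
        float accumulator;
        NativeArray<int> decisions;

        void Start() => decisions = new NativeArray<int>(1024, Allocator.Persistent);
        void OnDestroy() => decisions.Dispose();

        void Update()
        {
            accumulator += Time.deltaTime;
            if (accumulator < Interval) return;
            accumulator -= Interval;

            // Runs on worker threads; Complete() here keeps the example simple,
            // but the handle could also be completed later in the frame.
            new ThinkJob { decisions = decisions }
                .Schedule(decisions.Length, 64)
                .Complete();
        }
    }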
     
  10. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Good point, given 4K resolutions and >200 Hz refresh rates. But what about DLSS and FSR, which allow upscaling of the rendering? Maybe we could fit in some DOTS GPU compute cycles there.
     
  11. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    406
    OK, so let's say the CUDA cores are left for anything but rendering on the Nvidia architecture. Then mayhaps they could be the game-"AI" cruncher while the rest does its thing to maintain an image on the screen? I wish I knew more about these things, but my time has been extremely limited recently...