Unified model across Jobs/Compute

Discussion in 'Entity Component System' started by jbooth, Dec 3, 2018.

  1. jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    So, I realize there are much higher priorities right now, but I've been converting a lot of code from OO to Jobs lately on a contract project. What I'm finding is that while the job system is fast, the GPU is still a whole lot faster in many cases. As such, I end up moving systems between CPU and GPU code a lot, or maintaining libraries of routines across both.

    It seems like the natural evolution of this is to write everything in C# and just be able to cross-compile it into a shader when needed. The semantics of writing compute and jobs are not that different, and I could imagine, with a significant amount of work, getting it down to some kind of target attribute at compile time. I could even imagine compiling both options like shader variants and launching them based on some kind of system analysis or load-balancing tech. Certainly there would be a large number of challenges with this, but it seems in the ballpark.

    Even just being able to write everything in C# such that common functions can be shared would be huge. Of course, you might break determinism if you start moving things to GPUs, but for most things that's not an issue.
     
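    To make the point about semantics concrete, here is a minimal sketch of the duplication being described, assuming a simple height-blend kernel. The C# job uses the real Unity.Jobs, Unity.Burst, and Unity.Mathematics APIs; the HLSL kernel in the comment is the hand-maintained GPU twin. No cross-compilation feature like the one proposed exists today.

    Code (C#):

        using Unity.Burst;
        using Unity.Collections;
        using Unity.Jobs;
        using Unity.Mathematics;

        // CPU side: a Burst-compiled job that blends two height maps.
        [BurstCompile]
        struct BlendHeightsJob : IJobParallelFor
        {
            [ReadOnly] public NativeArray<float> heightA;
            [ReadOnly] public NativeArray<float> heightB;
            public float blend;
            public NativeArray<float> result;

            public void Execute(int i)
            {
                // math.lerp mirrors HLSL's lerp(), so the kernel body is
                // nearly identical in both languages.
                result[i] = math.lerp(heightA[i], heightB[i], blend);
            }
        }

        // GPU side: the equivalent HLSL compute kernel, kept in sync by hand:
        //
        //   StructuredBuffer<float>   heightA;
        //   StructuredBuffer<float>   heightB;
        //   float                     blend;
        //   RWStructuredBuffer<float> result;
        //
        //   [numthreads(64, 1, 1)]
        //   void BlendHeights(uint3 id : SV_DispatchThreadID)
        //   {
        //       result[id.x] = lerp(heightA[id.x], heightB[id.x], blend);
        //   }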
  2. Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    Do you mean moving work to compute shaders, for example cloth simulation, physics, culling, AI, etc. cross-compiled to compute shaders? Or do you mean moving rendering work to be written in C# and cross-compiled to the GPU?
     
  3. jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Both, or either? In my current case, I'm doing a ton of mesh generation, procedural texturing, etc. As an example, I have to compute some mesh geometry by combining various height maps with the current view, then texture it based on procedural rules. The geometry must be computed dynamically because it depends on the view, while the texturing can be precomputed. The geometry data is too big to store, but it must exist for physics in an area around the player. If it were just for rendering it could all live on the GPU, but it needs to exist on the CPU for gameplay. So I'm using the job system to do all the geometry generation, and an offline system to do all the texturing, which runs on the GPU because the job system was too slow for it to be responsive in the editor.

    So I end up having the code that computes the geometry positions in both C# and HLSL, and since that system has a number of artist-controlled layers, it can be quite a bit of code to duplicate and keep in sync through changes. Additionally, the resources are currently prepared separately for each target (a Texture2D for the shader, a NativeArray for jobs, though I realize it's possible to read a texture in Jobs). Being able to share this code would be a big win, even if the launch code for each target were different. Being able to share all the code and simply tag it [TargetCompute], well, that'd be crazy sauce.
     
    BenzzzX and Lurking-Ninja like this.
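    As a sketch of how the launch code differs per target today, here is what scheduling versus dispatching might look like, reusing the BlendHeightsJob and BlendHeights kernel from the earlier sketch. The blendShader field and buffer/property names are illustrative, not an existing API; the ComputeShader and job-scheduling calls themselves are the standard Unity ones.

    Code (C#):

        using Unity.Collections;
        using Unity.Jobs;
        using UnityEngine;

        public class HeightBlendRunner : MonoBehaviour
        {
            // Hypothetical asset reference to the hand-written HLSL kernel.
            public ComputeShader blendShader;

            // CPU path: schedule the Burst job across worker threads.
            JobHandle RunOnCpu(NativeArray<float> a, NativeArray<float> b,
                               NativeArray<float> result, float blend)
            {
                var job = new BlendHeightsJob
                {
                    heightA = a, heightB = b, blend = blend, result = result
                };
                return job.Schedule(result.Length, 64);
            }

            // GPU path: dispatch the compute shader; preferable when the
            // data can stay on (or be read back from) the GPU.
            void RunOnGpu(ComputeBuffer a, ComputeBuffer b,
                          ComputeBuffer result, float blend, int count)
            {
                int kernel = blendShader.FindKernel("BlendHeights");
                blendShader.SetBuffer(kernel, "heightA", a);
                blendShader.SetBuffer(kernel, "heightB", b);
                blendShader.SetBuffer(kernel, "result", result);
                blendShader.SetFloat("blend", blend);
                blendShader.Dispatch(kernel, Mathf.CeilToInt(count / 64f), 1, 1);
            }
        }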
  4. I guess something like this was the reason for having the HLSL-style math library.

    It would be great for tasks where we don't need the data back in system memory on the CPU, and maybe sometimes even when we do.
     
    FROS7 likes this.
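    The HLSL-style math library mentioned here is presumably Unity.Mathematics, whose types and intrinsics deliberately mirror HLSL. Below is a small sketch of the kind of shared helper this enables (the function and class names are illustrative): written once in C#, it compiles under Burst, and because float2, saturate, and lerp exist with the same names and semantics in HLSL, the shader port is close to token-for-token.

    Code (C#):

        using Unity.Mathematics;

        static class HeightOps
        {
            // float2, math.saturate, and math.lerp correspond one-to-one
            // with HLSL's float2, saturate(), and lerp(), so this helper
            // ports to a shader almost unchanged.
            public static float BlendedHeight(float2 uv, float h0, float h1)
            {
                float t = math.saturate(uv.x * uv.y);
                return math.lerp(h0, h1, t);
            }
        }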