Question How should the CPU communicate with GPU simulation data (compute shaders)? What's the approach?

Discussion in 'Shaders' started by unity_3KoGAU6lZHRhFA, Apr 17, 2023.

  1. unity_3KoGAU6lZHRhFA

    Joined:
    Jul 26, 2022
    Posts:
    7
    People are constantly saying things like "you could do it on the CPU, but it's faster to compute on the GPU, it's cool!" and so on, but they almost never consider the practical usage of those things, especially in real time.

    I want to have voxel lighting just like in Minecraft, but more dynamic. I want the light to spread smoothly, like a fluid in a cellular automaton: every iteration (maybe every third of a second), the light in each tile depends on the light of the neighbouring tiles. The light can mix and spread to arbitrarily long distances. And all of that in a large, open 3D voxel world.

    Almost everybody will say: GPU. Make a compute shader and all that. I suppose a 3D texture is just perfect for those things. Okay, but...

    The question here is scalability. Not size scalability, but functional scalability! Imagine a certain MonoBehaviour like Plant that uses the light information of a certain voxel. How can this Plant object know whether it has sufficient light or not? How am I supposed to get that information on the CPU if the simulation runs on the GPU? What is good practice for proper communication between the CPU and GPU in real time? People say it's extremely slow to transfer data back and forth like that. So is there any reason to use the GPU in my situation at all??
    This problem has been demotivating me from creating games for so long. GPU computing seems like a godlike gift, and yet, for me it has always felt like a rabbit hole. Just because those objects use the light data computed on the GPU in real time, do Plant, Player and all those GameObjects also need to be on the GPU?? If so, then I want a proper tutorial.
     
  2. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    Sure, you can pass data back from the GPU. It's not slow -- the data bus is bidirectional and quite fast -- the problem is latency. When you say "hey GPU, give me this piece of data", you have to wait for it -- sometimes 5-10 ms. Even a single sync point like that per frame is enough to tank your framerate.
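    To illustrate the sync point (shader, kernel and buffer names here are made up for the example): a blocking `ComputeBuffer.GetData()` right after a dispatch stalls the CPU until the GPU has drained its whole command queue.

    ```csharp
    using UnityEngine;

    public class BlockingReadbackExample : MonoBehaviour
    {
        public ComputeShader lightShader;   // hypothetical light-propagation shader
        ComputeBuffer lightBuffer;
        float[] cpuCopy;

        void Start()
        {
            lightBuffer = new ComputeBuffer(64 * 64 * 64, sizeof(float));
            cpuCopy = new float[64 * 64 * 64];
        }

        void Update()
        {
            int kernel = lightShader.FindKernel("Propagate");
            lightShader.SetBuffer(kernel, "_Light", lightBuffer);
            lightShader.Dispatch(kernel, 8, 8, 8);

            // ANTI-PATTERN: GetData() blocks until the GPU finishes everything
            // queued so far -- a per-frame sync point that can cost 5-10 ms.
            lightBuffer.GetData(cpuCopy);
        }

        void OnDestroy() => lightBuffer.Release();
    }
    ```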

    What you want to do is send some work to the GPU and then get that data back 1-2 frames later with an AsyncGPUReadback. In your example, you might want to have a "light manager" object on the CPU that keeps a copy of the buffer. Every few frames, you tell the GPU to do the calculation, and then the next frame get that data back and copy it to your CPU buffer. GameObjects just access it through this LightManager.
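    A minimal sketch of that "light manager" pattern, assuming a hypothetical compute shader with a `Propagate` kernel and a 64^3 grid:

    ```csharp
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;

    // Dispatches the simulation every few frames and pulls the result back
    // asynchronously with AsyncGPUReadback, so there is no sync point.
    public class LightManager : MonoBehaviour
    {
        public ComputeShader lightShader;    // hypothetical simulation shader
        const int Size = 64;                 // 64^3 voxels, for illustration

        ComputeBuffer lightBuffer;
        NativeArray<float> cpuLight;         // CPU-side copy GameObjects read

        void Start()
        {
            lightBuffer = new ComputeBuffer(Size * Size * Size, sizeof(float));
            cpuLight = new NativeArray<float>(Size * Size * Size, Allocator.Persistent);
        }

        void Update()
        {
            if (Time.frameCount % 20 != 0)   // ~every third of a second at 60 fps
                return;

            int kernel = lightShader.FindKernel("Propagate");
            lightShader.SetBuffer(kernel, "_Light", lightBuffer);
            lightShader.Dispatch(kernel, Size / 8, Size / 8, Size / 8);

            // Non-blocking: the callback fires a frame or two later.
            AsyncGPUReadback.Request(lightBuffer, request =>
            {
                if (!request.hasError)
                    request.GetData<float>().CopyTo(cpuLight);
            });
        }

        // What a Plant script would call -- pure CPU read, no GPU stall.
        public float GetLight(int x, int y, int z)
            => cpuLight[x + Size * (y + Size * z)];

        void OnDestroy()
        {
            lightBuffer.Release();
            cpuLight.Dispose();
        }
    }
    ```

    A Plant just keeps a reference to the LightManager and calls `GetLight(...)` whenever it wants -- the data is at most a few frames stale, which is fine for a simulation that only ticks a few times per second anyway.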

    Although for that particular case, you might be better off doing it in a Burst job on the CPU.
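    For the CPU route, one automaton step fits naturally into an IJobParallelFor. This is just a sketch -- the names and the max-of-neighbours-minus-falloff rule are illustrative, not a drop-in implementation:

    ```csharp
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using Unity.Mathematics;

    // One Burst-compiled cellular-automaton step over a Size^3 voxel grid.
    [BurstCompile]
    public struct LightStepJob : IJobParallelFor
    {
        public int Size;                          // voxels per axis
        [ReadOnly] public NativeArray<float> Src; // light field, previous step
        public NativeArray<float> Dst;            // light field, next step

        public void Execute(int i)
        {
            int x = i % Size;
            int y = (i / Size) % Size;
            int z = i / (Size * Size);

            // New light = brightest neighbour minus falloff (Minecraft-style).
            const float falloff = 1f / 15f;
            float best = Src[i];
            if (x > 0)        best = math.max(best, Src[i - 1] - falloff);
            if (x < Size - 1) best = math.max(best, Src[i + 1] - falloff);
            if (y > 0)        best = math.max(best, Src[i - Size] - falloff);
            if (y < Size - 1) best = math.max(best, Src[i + Size] - falloff);
            if (z > 0)        best = math.max(best, Src[i - Size * Size] - falloff);
            if (z < Size - 1) best = math.max(best, Src[i + Size * Size] - falloff);
            Dst[i] = math.max(best, 0f);
        }
    }

    // Usage: new LightStepJob { Size = 64, Src = a, Dst = b }
    //            .Schedule(64 * 64 * 64, 4096).Complete();
    // ...then swap a and b for the next step. No readback needed at all.
    ```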
     
    unity_3KoGAU6lZHRhFA and Ryiah like this.