Sampling an environment with compute shader

Discussion in 'Shaders' started by arsnyan2002, Jan 4, 2021.

  1. arsnyan2002

    arsnyan2002

    Joined:
    Apr 30, 2019
    Posts:
    30
    Not gonna lie, I don't understand a single thing about compute shaders, so the main question is "can this be done at all", and only then "how do I actually do it".

    I have an object with a compute shader attached to it via a C# script. This object is not a camera; it can be absolutely anything, but I prefer it to be a standard sphere.
    I need to capture a low-res 360-degree image of the environment, including non-static objects, from that game object, in real time of course. A realtime reflection probe does almost exactly what I need my sphere to do, but it doesn't suit my needs.

    I was thinking about casting a small number of rays in all directions and getting the color from the hit objects, but then I realized it isn't possible, or at least I couldn't find any way to get object data from a raycast (which, by the way, works weirdly).

    So, the point is: can it be done, and if yes, then how? Are there at least some examples of how to project the world to a texture without attaching a script to a camera?
    Thanks in advance.
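
    For what it's worth, here is a minimal sketch of the ray idea described above (all names are illustrative, not from any existing project). The important caveat it demonstrates: a raycast hit only gives you back components like the Renderer, so the best you can easily read is the material's base color — no lighting, shadows, or texture detail — which is one reason plain raycasting falls short of a real render:

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: fire a handful of rays out from this object and
    // read the base colour of whatever they hit. Samples Material.color only,
    // so lighting and textures are NOT captured.
    public class RaySampler : MonoBehaviour
    {
        public int rayCount = 64;
        public float maxDistance = 100f;

        public Color[] Sample()
        {
            var colors = new Color[rayCount];
            for (int i = 0; i < rayCount; i++)
            {
                // Roughly uniform random directions over the sphere.
                Vector3 dir = Random.onUnitSphere;
                if (Physics.Raycast(transform.position, dir, out RaycastHit hit, maxDistance))
                {
                    var rend = hit.collider.GetComponent<Renderer>();
                    colors[i] = rend != null ? rend.sharedMaterial.color : Color.black;
                }
                else
                {
                    colors[i] = Color.clear; // treat misses as sky/background
                }
            }
            return colors;
        }
    }
    ```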
     
  2. koirat

    koirat

    Joined:
    Jul 7, 2012
    Posts:
    2,074
    arsnyan2002 likes this.
  3. arsnyan2002

    arsnyan2002

    Joined:
    Apr 30, 2019
    Posts:
    30
    Thank you. That solution isn't bad, but I don't think it's suitable for me: first of all, I will have lots of such objects, and attaching a camera to each of them and then rendering to a cubemap is going to be too slow, since that process cannot be parallelized. That's why I was thinking about using compute shaders.

    I think it's an acceptable solution, but if there are any other ways of solving this that you may know of, I would be glad to hear them.
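
    For reference, the per-object cubemap approach being discussed looks roughly like this (a sketch with illustrative names, assuming a disabled helper camera per sphere; `Camera.RenderToCubemap` is the expensive step that the post above worries about scaling to many objects):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch: a hidden camera parented to the sphere re-renders the
    // surroundings into a small cubemap RenderTexture each frame.
    public class CubemapCapture : MonoBehaviour
    {
        public int faceSize = 64; // deliberately low-res
        RenderTexture cubemap;
        Camera captureCam;

        void Start()
        {
            cubemap = new RenderTexture(faceSize, faceSize, 16);
            cubemap.dimension = TextureDimension.Cube;

            var go = new GameObject("CaptureCam");
            go.transform.SetParent(transform, false);
            captureCam = go.AddComponent<Camera>();
            captureCam.enabled = false; // rendered manually below
        }

        void LateUpdate()
        {
            // 63 = bitmask for all six cube faces; this full re-render per
            // object per frame is what makes the approach costly.
            captureCam.RenderToCubemap(cubemap, 63);
        }
    }
    ```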
     
  4. arsnyan2002

    arsnyan2002

    Joined:
    Apr 30, 2019
    Posts:
    30
    Okay, maybe it's not the best solution ever, but the best I could come up with is attaching a low-resolution camera to every sphere and then doing the rest in compute shaders. The point is, that's still very expensive. At least I think so.
    I really don't think capturing cubemaps is what I need, as my goal is to get something like that.
    Although I need it to run in real time, performance itself doesn't matter much to me. I'm making this not for gamedev purposes but for graphics work.
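
    The "camera plus compute shader" combination mentioned above could be sketched like this on the GPU side: a `.compute` kernel that resamples a captured cubemap into a tiny lat-long texture. Everything here (names, output layout, sizes) is an illustrative assumption, not a known working setup:

    ```hlsl
    // Sketch: resample a captured cubemap into a small lat-long RWTexture2D.
    #pragma kernel SampleEnv

    TextureCube<float4> _EnvCube;      // bound from C# (e.g. the capture RT)
    SamplerState sampler_EnvCube;
    RWTexture2D<float4> _Result;       // e.g. a 32x16 output texture
    float2 _ResultSize;

    [numthreads(8, 8, 1)]
    void SampleEnv(uint3 id : SV_DispatchThreadID)
    {
        if (id.x >= (uint)_ResultSize.x || id.y >= (uint)_ResultSize.y) return;

        // Map each output pixel to a spherical direction (lat-long layout).
        float2 uv = (id.xy + 0.5) / _ResultSize;
        float phi   = uv.x * 6.28318530718;   // longitude, 0..2pi
        float theta = uv.y * 3.14159265359;   // latitude, 0..pi
        float3 dir = float3(sin(theta) * cos(phi),
                            cos(theta),
                            sin(theta) * sin(phi));

        _Result[id.xy] = _EnvCube.SampleLevel(sampler_EnvCube, dir, 0);
    }
    ```

    On the C# side this would be dispatched with `ComputeShader.SetTexture` and `ComputeShader.Dispatch`; note the cubemap still has to be rendered by a camera or probe first, so the compute shader only parallelizes the resampling, not the capture itself.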