
Question: How to relocate/redirect camera pixel rays?

Discussion in 'Shaders' started by Magic-4e, Apr 4, 2021.

  1. Magic-4e

    Joined:
    May 9, 2018
    Posts:
    25
    Hello.

    Shaders can draw pixels on the screen based on rays that are cast from the camera towards objects with those shaders in your 3D scene.
    But I was wondering, can you manipulate those rays?

    Is there a way to redirect them and/or move them to a different location?
    Kind of like this example:
    [Attached image: Portal shader example.png]
    I really wonder if there is a way to do this.
    In my head it makes sense, but is there any actual way in practice?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Unless you're using ray tracing, which if you're using Unity you probably aren't, since that's a still-experimental feature of the HDRP that requires a lot of hoop jumping just to enable, this isn't possible. GPUs render using rasterization; there are no "rays" to speak of, because that's not how hardware rasterization works.

    Instead, the way you'd achieve something like the above would be using render textures or stencil portals. The TL;DR version of both is that you use separate cameras to render from the original and the "redirected" views.
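    As a rough illustration of the render texture version (my sketch, not code from a specific package; portalCamera and portalSurface are placeholder names for a second camera at the redirected viewpoint and the portal's visible mesh):

    Code (CSharp):
    using UnityEngine;

    // Minimal sketch of the render texture approach: a second camera at the
    // "redirected" viewpoint draws into a RenderTexture, and a material on
    // the portal's surface displays that texture.
    public class PortalView : MonoBehaviour
    {
        public Camera portalCamera;     // placeholder: camera at the redirected view
        public Renderer portalSurface;  // placeholder: mesh that shows the portal

        RenderTexture portalTexture;

        void Start()
        {
            portalTexture = new RenderTexture(Screen.width, Screen.height, 24);
            portalCamera.targetTexture = portalTexture;          // camera renders into the texture
            portalSurface.material.mainTexture = portalTexture;  // surface displays it
        }
    }

    For a convincing portal, the surface's shader would also need to sample that texture with screen space UVs rather than the mesh's own UVs, but the above is the core of the redirection.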

     
  3. Magic-4e

    Joined:
    May 9, 2018
    Posts:
    25
    Ah, so that's why I can't find anything on this method.
    But that really makes me wonder how rasterization actually works compared to ray tracing.

    I can't imagine how you would get the surface pixels of an object without casting rays into your scene.

    Do you know of a video that could visually explain this?
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Ray tracing & rasterization are very similar when it comes to triangle rendering. Both use some kind of ray / triangle intersection equation, but the difference is that rasterization exploits the fact that it's working on a regular grid with a linear projection to simplify the math significantly compared to the ray traced version. It becomes a purely 2D position / triangle overlap test for each triangle, using vertex positions that have been pre-transformed into screen space.
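    As a sketch (mine, not how the hardware is literally written), that 2D overlap test boils down to three edge functions, which are just 2D signed areas:

    Code (CSharp):
    using UnityEngine;

    // Sketch of the 2D overlap test: with vertices already transformed into
    // screen space, a pixel is covered by the triangle when all three edge
    // functions (2D signed areas) share the same sign. No rays anywhere.
    static class CoverageTest
    {
        // Twice the signed area of triangle (a, b, p): positive on one side
        // of the edge a->b, negative on the other.
        static float Edge(Vector2 a, Vector2 b, Vector2 p)
        {
            return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        }

        public static bool Covers(Vector2 v0, Vector2 v1, Vector2 v2, Vector2 p)
        {
            float w0 = Edge(v1, v2, p);
            float w1 = Edge(v2, v0, p);
            float w2 = Edge(v0, v1, p);
            // All non-negative or all non-positive means p is inside,
            // whichever way the triangle happens to be wound.
            return (w0 >= 0f && w1 >= 0f && w2 >= 0f) ||
                   (w0 <= 0f && w1 <= 0f && w2 <= 0f);
        }
    }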

    Depth sorting is done in two parts. First is manual sorting of the meshes, and sometimes individual triangles, on the CPU before issuing them to the GPU. Second is using a Z buffer, aka a depth buffer. Depth is determined by interpolating the pre-transformed vertices' depth across the triangle using perspective correct barycentric interpolation. At each pixel that passes the point / triangle overlap test, it checks to see if the interpolated depth is closer to the "viewer" than what's already in the depth buffer, and skips rendering if it's not. If it is, and it's an opaque surface, it'll replace the value in the depth buffer and render that pixel. Note, this isn't part of rasterization strictly speaking, but it is how GPUs overcome the limitations of rasterization vs ray tracing.

    Ray tracing sorting works either by collecting all intersections along the ray at one time and then sorting them to find the closest, or by tracing through some kind of spatial hierarchy, like a BVH, that limits how many triangles need to be tested against.
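    The depth buffer step can be sketched the same way (illustrative names, not a real API; depthBuffer here is just a flat screenWidth * screenHeight array cleared to the far plane value):

    Code (CSharp):
    // Sketch of the per-pixel depth test: compare the interpolated depth
    // against what's stored for this pixel, and replace it if this opaque
    // surface is closer. Again, purely 2D array math, no rays.
    static class DepthTest
    {
        public static bool TestAndWrite(float[] depthBuffer, int screenWidth,
                                        int x, int y, float interpolatedDepth)
        {
            int i = y * screenWidth + x;            // the 2D buffer, stored flat
            if (interpolatedDepth >= depthBuffer[i])
                return false;                       // something closer is already there: skip
            depthBuffer[i] = interpolatedDepth;     // opaque surface: replace stored depth
            return true;                            // caller goes on to shade this pixel
        }
    }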

    I honestly don't know of any good YouTube videos that break it down visually. Only really heavy math blogs / lessons.
     
  5. Magic-4e

    Joined:
    May 9, 2018
    Posts:
    25
    I see.

    So rasterization flattens the scene to calculate all the triangles.
    But if the depth buffer captures depth by sending rays into the scene, how is that not as expensive to do as ray tracing?
     
  6. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    It doesn't. There are no rays being used with rasterization. None at all.

    The depth value is calculated using the barycentric position of each pixel within the triangle to interpolate the pre-calculated screen space depth of the 3 vertices. "Barycentric" is a fancy word for "how close a point is to each vertex of a triangle, relative to the dimensions of the triangle".

    The depth is compared against and saved to a 2D screen space depth buffer (aka a render texture) for each screen pixel. It's still effectively all 2D math!
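    To illustrate (my sketch, not GPU code): the barycentric weights are just ratios of 2D signed areas, so the whole depth calculation looks like this:

    Code (CSharp):
    using UnityEngine;

    // Sketch: barycentric weights are ratios of 2D signed areas, and the
    // pixel's depth is the weighted sum of the three vertex depths.
    static class Barycentric
    {
        // Twice the signed area of triangle (a, b, c).
        static float Area(Vector2 a, Vector2 b, Vector2 c)
        {
            return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        }

        // Interpolated depth of pixel p inside the screen space triangle
        // (v0, v1, v2) with per-vertex depths (z0, z1, z2).
        public static float Depth(Vector2 v0, Vector2 v1, Vector2 v2,
                                  float z0, float z1, float z2, Vector2 p)
        {
            float total = Area(v0, v1, v2);        // the whole triangle
            float b0 = Area(v1, v2, p) / total;    // how close p is to v0
            float b1 = Area(v2, v0, p) / total;    // how close p is to v1
            float b2 = Area(v0, v1, p) / total;    // how close p is to v2
            return b0 * z0 + b1 * z1 + b2 * z2;    // all 2D math, no rays
        }
    }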
     
  7. Magic-4e

    Joined:
    May 9, 2018
    Posts:
    25
    Ah I see.

    Thanks for clearing this up.

    It was hard to find out how rasterization really is different from ray tracing.

    Most articles went way too deep into it too quickly, and I couldn't process all that information.
    Either that, or they were too simple.
    Like: rasterization = cheap and ray tracing = not, but it has better out-of-the-box reflections.

    Anyway, thank you for your comments.
    Hope this helps somebody else too.