Question Converting clip space from one camera to another

Discussion in 'Shaders' started by OscarLouw, Sep 3, 2020.

  1. OscarLouw

    OscarLouw

    Joined:
    Jun 22, 2017
    Posts:
    7
    Hi all! I'm fighting with a strange problem here.

    I am rendering two meshes to a texture (via a command buffer) and setting it as a global texture for use in other shaders. I use this texture to offset vertices or do alpha-clipped cutouts on objects. Everything works great.

    The problem happens in the shadow pass. I'm comparing the vertex position, transformed by ObjectToClipPos, against a screen-space texture, but the shadow pass renders from a different point of view (an orthographic projection for the light rather than the main camera), so the shadows end up completely misaligned from the clipped pixels and offset vertices.

    Some pictures to demonstrate:

    [screenshots]
    How I'm sampling the texture in Amplify Shader Editor:

    [shader graph screenshot]
    The screen position node essentially takes the vertex position, transforms it with ObjectToClipPos (the model-view-projection matrix of the currently rendering camera), and divides by the w component (as far as I understand).
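    A minimal numeric sketch of that divide-and-remap, with clip-space values invented purely to make the arithmetic visible:

```python
# Illustrative clip-space position, as a function like ObjectToClipPos
# would output (values are arbitrary, chosen only for the example).
clip = [2.0, -1.0, 0.5, 2.0]  # (x, y, z, w)

# Perspective divide: clip space -> normalized device coordinates in [-1, 1]
ndc = [clip[i] / clip[3] for i in range(3)]

# Remap NDC xy from [-1, 1] to [0, 1] to get the screen/viewport UV
uv = [(ndc[0] + 1) * 0.5, (ndc[1] + 1) * 0.5]
print(uv)  # [1.0, 0.25]
```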

    Should I be sending the render texture camera's view-projection matrix to a shader global and using that to compute the texture lookup, so it's independent of the camera that is currently rendering?
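    For what it's worth, that seems like the right idea: if you transform the world-space position by the matrix of the camera that rendered the texture (rather than the currently rendering camera), you land on the same texel no matter which pass is running. A toy sketch with a hypothetical orthographic view-projection matrix (row-major, values invented for illustration):

```python
def mul(m, v):
    # 4x4 row-major matrix times 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical view-projection of the camera that rendered the texture
# (a trivial orthographic matrix covering x, y in [-5, 5]; values invented)
rt_vp = [
    [0.2, 0,   0, 0],
    [0,   0.2, 0, 0],
    [0,   0,   1, 0],
    [0,   0,   0, 1],
]

world = [2.5, -2.5, 0.0, 1.0]  # world-space vertex position
clip = mul(rt_vp, world)

# Divide by w and remap to [0, 1]: this UV matches the render texture
# regardless of which camera is currently drawing the object.
uv = [(clip[0] / clip[3] + 1) * 0.5, (clip[1] / clip[3] + 1) * 0.5]
print(uv)  # [0.75, 0.25]
```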

    (p.s. praise be to bgolus!)
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    You should probably look into how shadow maps work, because they solve the exact same problem. They render the scene depth from the "view" of a light, and then, when rendering the scene, compare each point's position against that depth. The trick is that you need to know the transformation from world space into whatever space the depth texture in question was rendered from.
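    A toy version of that shadow-map comparison, with a hypothetical orthographic light matrix and a made-up stored depth value, just to show the same divide-and-remap applied with the *light's* matrix instead of the current camera's:

```python
def mul(m, v):
    # 4x4 row-major matrix times 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical view-projection of the light (orthographic; values invented)
light_vp = [
    [0.5, 0,   0,    0],
    [0,   0.5, 0,    0],
    [0,   0,   0.25, 0.5],  # maps world z in [-2, 2] to depth in [0, 1]
    [0,   0,   0,    1],
]

world = [1.0, -1.0, 0.0, 1.0]  # world-space point being shaded
clip = mul(light_vp, world)

# Same divide-and-remap as for any camera: UV into the light's depth map
uv = [(clip[0] / clip[3] + 1) * 0.5, (clip[1] / clip[3] + 1) * 0.5]
depth = clip[2] / clip[3]

stored = 0.4  # depth the light already recorded at this texel (made up)
in_shadow = depth > stored  # something nearer the light occludes this point
print(uv, depth, in_shadow)  # [0.75, 0.25] 0.5 True
```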