
Is there a way to get screen pos depth in shader?

Discussion in 'Shaders' started by orangetech, Nov 20, 2020.

  1. orangetech

    Joined:
    Sep 30, 2017
    Posts:
    50

    [Images: the depth buffer (left) and the scene object (right).]
    I need the depth at a position on the object on the right.

    As far as I know, after rasterization the GPU can do an "Early Depth Test", so I think the user can't read back an arbitrary position's depth.
    There is a 2x2 screen:
    What I want is to calculate P0's depth.
    Any information is helpful.
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    I'm not sure I understand exactly what you're asking, but if you want to know the Z depth of a specific vertex position, that's trivial. The position value the vertex shader outputs has that already.
    Code (csharp):
    float4 clipPos = UnityObjectToClipPos(v.vertex);
    float zDepth = clipPos.z / clipPos.w;
    The only caveat being that you need to do a little extra if you're on OpenGL/ES, since that graphics API is a little funny in how it handles the clip space to depth compared to everyone else.
    Code (csharp):
    #if !defined(UNITY_REVERSED_Z) // basically only OpenGL
    zDepth = zDepth * 0.5 + 0.5; // remap -1 to 1 range to 0.0 to 1.0
    #endif
    If you want the z depth of a specific 2D screen position, then the only information you have is what's in the camera depth texture. Without a third dimension in the coordinates you have, there is no depth to calculate.
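    For reference, a minimal sketch of reading the camera depth texture at the current fragment's screen position. It assumes the camera actually renders a depth texture and that the vertex shader passes along a screenPos computed with ComputeScreenPos; those names are just for illustration.
    Code (csharp):
    // Minimal sketch, assuming camera.depthTextureMode includes
    // DepthTextureMode.Depth and the vertex shader wrote
    // o.screenPos = ComputeScreenPos(clipPos);
    UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

    fixed4 frag (v2f i) : SV_Target
    {
        float2 uv = i.screenPos.xy / i.screenPos.w;   // perspective divide to get 0-1 screen UVs
        float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
        float eyeDepth = LinearEyeDepth(rawDepth);    // linear depth from the camera, in world units
        return eyeDepth;
    }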
     
  3. orangetech

    Joined:
    Sep 30, 2017
    Posts:
    50
    As I understand it, whatever the vertex position is, when the vertex is passed to the fragment shader its value will be interpolated to fit the pixel positions. For example, the real point is (0.6, 0.7), but the fragment shader receives (1,1), (0,1), (0,0); the point was interpolated.
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    You’re not quite understanding what “interpolated” means in this context.

    The vertex shader is calculating the clip space position (a 4 dimensional version of screen space & depth) of each vertex of the mesh. Sets of three vertices are used by the GPU to calculate the screen area the mesh’s triangles cover. The GPU then runs the fragment shader for the pixels the triangle covers, passing along the interpolated data calculated by using the barycentric coordinates of that fragment’s triangle position to get a blend of the 3 vertices that make up that triangle.
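    As a toy illustration of that blend (made-up numbers, not code you'd write in a real shader):

    Code (csharp):
    // Three made-up vertex shader outputs for one triangle (e.g. positions).
    float4 v0 = float4(-1,  1, 0.5, 1);
    float4 v1 = float4( 1,  1, 0.6, 1);
    float4 v2 = float4( 0, -1, 0.7, 1);
    // Barycentric weights for one fragment inside the triangle; they are
    // positive and sum to 1. The fragment shader receives this blend,
    // not any single vertex's value.
    float3 bary = float3(0.2, 0.5, 0.3);
    float4 fragmentInput = bary.x * v0 + bary.y * v1 + bary.z * v2;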

    It’s not that it’s a vertex position snapped to the pixel center, it’s that the fragment shader doesn’t know anything about the vertices anymore, only the triangle surface, which is what the interpolated data represents.

    There’s no straightforward way to get any specific vertex position from the fragment shader, as the fragment shader doesn’t have direct access to the individual vertices, or any of the data needed to reconstruct their positions.

    You could use a geometry shader to pass along all 3 unique vertex positions to all 3 vertices of each triangle, and use nointerpolation for the output semantic (a rough sketch follows below). But the main thing is there’s never “one” vertex for the fragment shader.*

    * Ignoring the now rare and not even fully supported point rendering mode.
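    A rough, untested sketch of that geometry shader approach (struct and semantic names are illustrative):

    Code (csharp):
    struct v2g { float4 pos : SV_POSITION; };
    struct g2f
    {
        float4 pos : SV_POSITION;
        nointerpolation float4 v0 : TEXCOORD0;
        nointerpolation float4 v1 : TEXCOORD1;
        nointerpolation float4 v2 : TEXCOORD2;
    };

    [maxvertexcount(3)]
    void geom (triangle v2g input[3], inout TriangleStream<g2f> stream)
    {
        for (int i = 0; i < 3; i++)
        {
            g2f o;
            o.pos = input[i].pos;
            o.v0 = input[0].pos; // all three clip-space vertex positions are
            o.v1 = input[1].pos; // passed to every fragment unblended, since
            o.v2 = input[2].pos; // nointerpolation disables the barycentric blend
            stream.Append(o);
        }
    }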
     