
Depth intersection line with constant thickness

Discussion in 'Shaders' started by Johannski, Oct 14, 2019.

  1. Johannski


    Jan 25, 2014
I'm trying to draw vector-like lines on a cube, generated from object intersections.

    My first take was to use the depth of the cube as a comparison to the pixel depth of the objects that should intersect with the cube:


The result is correct lines, but with varying thickness. The cylinder in particular (the circle at the top) has some very thin sections, because its side geometry is viewed almost edge-on by the camera, so only a few pixels have a depth similar to the cube's depth.
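For reference, the comparison in my fragment shader looks roughly like this (a minimal sketch for the built-in pipeline; `_LineWidth` and `_LineColor` are placeholder properties):

```hlsl
// Minimal fragment sketch (built-in pipeline). Assumes the vertex shader wrote
// i.screenPos = ComputeScreenPos(o.vertex), so screenPos.w holds the eye depth
// of the cube surface. _LineWidth / _LineColor are placeholders.
UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);
float _LineWidth;   // line thickness in eye-space units
fixed4 _LineColor;

fixed4 frag (v2f i) : SV_Target
{
    // Depth of whatever the scene rendered at this pixel
    float sceneDepth = LinearEyeDepth(
        SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
    // Depth of the cube surface currently being shaded
    float surfaceDepth = i.screenPos.w;

    // A line appears wherever the two depths nearly coincide
    float diff = abs(sceneDepth - surfaceDepth);
    return diff < _LineWidth ? _LineColor : fixed4(0, 0, 0, 0);
}
```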

Another idea was to use two materials to mark the area of the intersection, and then generate the lines with a post-processing effect.

One material renders with front face culling and a GreaterEqual ZTest, the other with back face culling and a LessEqual ZTest:
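In ShaderLab terms, the relevant render states of the two materials look roughly like this (a sketch of the render states only; the queue and fragment output are placeholders):

```shaderlab
// Sketch: only the culling/depth states matter here; the fragment shaders
// just output a flat marker color.
SubShader
{
    Tags { "Queue" = "Transparent" }

    // Material/pass 1: back faces that pass a GreaterEqual depth test
    Pass
    {
        Cull Front
        ZTest GEqual
        ZWrite Off
        // ... CGPROGRAM outputting the marker color
    }

    // Material/pass 2: front faces that pass a LessEqual depth test
    Pass
    {
        Cull Back
        ZTest LEqual
        ZWrite Off
        // ... CGPROGRAM outputting the marker color
    }
}
```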


    The purple area of the circle looks quite good, but the concave shape at the bottom shows a problem:

There is an overlapping area that is marked purple but lies outside the actual intersection.

Another possibility would be decals/projectors, but I would like the line to break when it wraps around a corner.

The last thought I had was to calculate the intersection on the CPU. However, I fear I'll run into performance issues when calculating everything on the CPU for hundreds of objects.

I'm wondering: is there maybe an interesting use case for geometry shaders here to get a better result? Or is there another combination of stencil shaders with which I can get the intersection for concave objects? The target platform is Windows and I'm quite flexible on how to solve it, so there is a lot of wiggle room here. I hope I'm missing some shader solution I haven't thought of.
  2. bgolus


    Dec 7, 2012
    There are two parts to this.

    For most intersection shaders like this you're calculating the distance between the depths of the surface you're rendering and the depth of the surface in the depth texture. This will change as you move the camera and depends heavily on the facing of both surfaces in relation to the camera. If the two surfaces are very close to the same angle, they're going to be closer together for longer, making a thicker line. If the two surfaces are perfectly perpendicular then the line is going to be very thin as the two surfaces are only close to each other briefly.

    The other part is the width of the line is limited to the geometry you're rendering. In the case of the cylinder for example, you're viewing some of the side geometry edge on, so that's as thick as it can ever show a line.

You can use some math tricks to try and take the normal of the surface you're rendering, the inferred normal from the depth texture (or depth normal texture), and the angle the camera is viewing those surfaces at to calculate a better "depth" to test against, depending on whether you want the line widths to be constant in screen space or in world space. You just need to be mindful of parallel surfaces, as the above math can lead to infinity or NaN.

But you're still limited by the second problem if you want the lines to be on or outside of the intersection. For that you could use a geometry shader to expand each triangle in screen space and pass along the original triangle's vertices. With that you could then do a distance field check against the triangle and the depth buffer to draw your lines however you want at any thickness. The drawback is that it's potentially expensive, and you'll get overlapping lines on each neighboring intersecting triangle's edge.
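As a sketch of the first idea: one cheap way to approximate a screen-space-constant width, without explicitly reconstructing normals, is to normalize the depth difference by its screen-space derivative via fwidth() (built-in pipeline assumed; `_LineWidthPixels` is a placeholder property):

```hlsl
// Sketch: normalize the depth difference by its screen-space derivative so the
// test passes over roughly the same number of pixels regardless of how steeply
// either surface is angled to the camera.
float sceneDepth = LinearEyeDepth(
    SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
float diff = sceneDepth - i.screenPos.w;

// fwidth() is the per-pixel rate of change of diff; clamp it so the
// near-parallel case, where it approaches zero, doesn't blow up to inf/NaN
float grad = max(fwidth(diff), 1e-5);
float lineMask = abs(diff) / grad < _LineWidthPixels ? 1.0 : 0.0;
```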

It really boils down to this: a geometry shader doesn't have any way of knowing where the intersection is. Really, neither does the fragment shader. Unless you test every triangle of one mesh against every triangle of the other, you don't know where the intersections are, and neither of those are things that geometry or fragment shaders are really capable of. It's also way more complex than it seems to get right.

Hence people fall back to what you're already doing and just kind of shrug it off as close enough, or use projectors to get something that is also wrong, but also close enough.
    Johannski and neoshaman like this.
  3. Johannski


    Jan 25, 2014
    Hi Ben,

Thanks a lot for your extensive answer, I highly appreciate your input! At first the problem seemed quite easy to solve, but as you pointed out, it is actually a lot more complex than I anticipated. I think a CPU calculation will be the way to go for me, since the shapes will be quite simple. Too bad there is no simple solution to this seemingly simple problem.