Outlines for such mesh?

Discussion in 'Shaders' started by zerotech15, May 24, 2019.

  1. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Hey guys, does anybody know how to make outlines for such a mesh? It is a single Mesh object consisting of multiple separate polygons built from triangles. The vertex scaling method doesn't work because the triangles are often far from the origin (0, 0, 0), so the shader has no way to calculate the center of each triangle. The effect also has to work on mobile devices.

    [Attached image: upload_2019-5-24_14-57-39.png]
     
  2. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Edit: the projection has to be Orthographic.
     
  3. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,635
  4. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Interesting, thanks mate. I'm curious though: how do I use blur to separate overlapping mesh polygon projections with an outline?
     
  5. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,635
    The whole technique boils down to:
    1. Drawing the mesh to a RenderTexture (RT)
    1.1 Copying the RT
    2. Blurring the second RT
    3. Cutting the initial RT out of the second RT
    4. Rendering the actual result.

    So if something overlaps, it will be cut out by the initial mesh.
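    The steps above can be sketched as a Unity image effect. This is only a rough sketch, not a ready-made implementation: `silhouetteRT`, the blur/cut/composite materials, and the `_MaskTex`/`_OutlineTex` property names are all assumptions you'd supply yourself.

    ```csharp
    using UnityEngine;

    // Rough sketch of the RT-based outline steps described above.
    // Assumes silhouetteRT already contains the mesh rendered in a flat
    // colour (step 1), e.g. from a second camera, plus three hypothetical
    // materials: blurMat (step 2), cutMat (step 3, discards pixels covered
    // by "_MaskTex"), and compositeMat (step 4, blends "_OutlineTex" over
    // the scene).
    [RequireComponent(typeof(Camera))]
    public class RtOutlineEffect : MonoBehaviour
    {
        public RenderTexture silhouetteRT;
        public Material blurMat;
        public Material cutMat;
        public Material compositeMat;

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            // 1.1: a copy of the silhouette that we can blur
            var blurred = RenderTexture.GetTemporary(src.width, src.height);
            Graphics.Blit(silhouetteRT, blurred, blurMat);       // 2: blur

            // 3: cut the original silhouette out of the blurred copy,
            // leaving only the halo around the shape
            cutMat.SetTexture("_MaskTex", silhouetteRT);
            var outline = RenderTexture.GetTemporary(src.width, src.height);
            Graphics.Blit(blurred, outline, cutMat);

            // 4: composite the outline over the rendered scene
            compositeMat.SetTexture("_OutlineTex", outline);
            Graphics.Blit(src, dst, compositeMat);

            RenderTexture.ReleaseTemporary(blurred);
            RenderTexture.ReleaseTemporary(outline);
        }
    }
    ```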
     
  6. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,635
    The end result will look like this:
    [Attached image]

    As you can see, you won't be able to get an outline for the polygons that are inside.
     
  7. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Unfortunately I need every polygon to have an independent outline.
     
  8. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,635
    You can try setting it up as two separate models / RTs, then drawing them independently of each other.
    That would require two extra passes though.
     
  9. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Could you please elaborate? What do you mean by "two separate models"? The issue here is that I might end up having to render hundreds of polygons separately, processing them in real time, with a dramatic increase in draw calls.
     
  10. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,635
    It will. I don't know how to help you otherwise. Maybe some sort of shader magic could do it.
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,667
    Sure, you'd just need to...

    Oh.

    Yeah, that makes things harder.

    There are roughly four different ways to do outlines.

    The shell method, aka doubled geometry:
    1. Two passes, one normal, one drawing back faces and scaled from pivot. Works great on meshes that aren't too complex and have their pivot centered nicely. Totally fails on anything batched, or even mildly more complex than a box or sphere.
    2. Two passes, one normal, one drawing back faces and pushing vertices out based on normal. Works great on meshes that have nice rounded normals and doesn't require a good pivot placement. Fails on anything with sharp edges, like a box or a plane. Can be improved by using a secondary "smooth normal" stored in another vertex data channel (like vertex color, tangent, or a UV set).
    3. Use something like 1 or 2, but use stencils to prevent interior lines from showing. Otherwise same limitations.
    4. Use something like 1 or 2, but use draw order and ZWrite Off on the outline pass to prevent interior lines from showing. Otherwise same limitations.
    5. A mesh with the doubled geometry already built in. No special shader is needed for unlit objects that stay a constant distance from the view; otherwise store some data on the outline part of the mesh to disable lighting or handle it differently, and use pre-smoothed normals as in option 2. Gets around the limitations of 1 & 2, but requires more setup.
    Those options work really well on mobile because they're quite cheap. This style of outline has been used for literally decades. Probably something like 90% of 3D games that have outlines use some variation on this technique.
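    For reference, option 2 above (a second pass drawing back faces pushed out along the normal) is only a few lines of shader code. A minimal sketch, with made-up shader and property names, and with the regular shading pass left out:

    ```shaderlab
    Shader "Custom/NormalExtrusionOutline"
    {
        Properties
        {
            _OutlineColor ("Outline Color", Color) = (0, 0, 0, 1)
            _OutlineWidth ("Outline Width", Float) = 0.02
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }

            // Outline pass: draw back faces only, pushed out along the normal
            Pass
            {
                Cull Front
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                fixed4 _OutlineColor;
                float _OutlineWidth;

                float4 vert (appdata_base v) : SV_POSITION
                {
                    // push each vertex out along its normal in object space
                    v.vertex.xyz += v.normal * _OutlineWidth;
                    return UnityObjectToClipPos(v.vertex);
                }

                fixed4 frag () : SV_Target { return _OutlineColor; }
                ENDCG
            }

            // ...regular shading pass(es) for the mesh itself go here...
        }
    }
    ```

    As noted above, this breaks down on hard edges unless you feed it pre-smoothed normals from another vertex channel.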

    One pass fragment shader methods
    1. Fresnel based edging. Works great on soft shapes, kind of worthless on hard edges again. The roundness of the shape determines the width of the line without significant additional work. Not really an "outline", more like an "in"-line.
    2. Hand drawn lines. Works great on sprites! Good for adding interior lines on 3D objects where there's no actual geometry. Not good for dynamic outlines.
    3. Barycentric based triangle edges. Used for cheap wireframe rendering. Requires geometry shaders, or some special processing of the mesh data beforehand. Works really well, but is also an "in"-line. Also assumes any triangle edge should have an outline, rather than basing it on discontinuities. Can be modified to handle arbitrary polygons and not just triangles.
    Also super cheap, but each method is somewhat limited. The barycentric wireframe technique might be a good option for you here if you're just dealing with triangles.
    https://catlikecoding.com/unity/tutorials/advanced-rendering/flat-and-wireframe-shading/
    The above link uses geometry shaders to set up the barycentric data, but you could pre-process the mesh instead by setting the vertex colors appropriately.
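    That pre-processing step might look something like this (a sketch, not a drop-in utility; note it splits shared vertices, so the vertex count becomes triangle count × 3):

    ```csharp
    using UnityEngine;

    // Bakes barycentric coordinates into vertex colors so a wireframe-style
    // shader can read them without a geometry shader. Each triangle gets
    // its own three vertices coloured (1,0,0), (0,1,0), (0,0,1).
    public static class BarycentricBaker
    {
        public static Mesh Bake(Mesh src)
        {
            int[] tris = src.triangles;
            Vector3[] verts = src.vertices;

            var newVerts = new Vector3[tris.Length];
            var colors = new Color[tris.Length];
            var newTris = new int[tris.Length];

            for (int i = 0; i < tris.Length; i += 3)
            {
                newVerts[i]     = verts[tris[i]];
                newVerts[i + 1] = verts[tris[i + 1]];
                newVerts[i + 2] = verts[tris[i + 2]];

                // one barycentric corner per vertex
                colors[i]     = new Color(1, 0, 0);
                colors[i + 1] = new Color(0, 1, 0);
                colors[i + 2] = new Color(0, 0, 1);

                newTris[i] = i; newTris[i + 1] = i + 1; newTris[i + 2] = i + 2;
            }

            var mesh = new Mesh { vertices = newVerts, colors = colors, triangles = newTris };
            mesh.RecalculateNormals();
            return mesh;
        }
    }
    ```

    The fragment shader can then treat any vertex-color channel approaching zero as "near an edge" and darken it.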

    Fin based methods
    Programmatically find the edges of the shape and draw additional geometry around the object. This can be done offline in C#, using a compute shader that you feed the mesh data into beforehand, or using a geometry shader. Unity doesn't provide all of the data needed to do this technique properly with geometry shaders, but neither does any other game engine. There are some really impressive examples online from academia and film, but this is generally considered too expensive to use for games. And if it's too expensive for desktop, it's definitely not a good option for mobile.

    A similar old school method would be to pre-process your geometry to add fins to it, then manipulate those in a vertex shader or from script. This would actually work on arbitrary single triangles and meshes! But, again, takes quite a bit of setup and vector math know-how.

    Here's the basic technique as used in Doom 3 for glows on lights:
    https://simonschreibt.de/gat/doom-3-volumetric-glow/

    Post process methods
    1. Render out the object to a render texture, blur or edge find, overlay onto scene. This is the technique @xVergilx suggested. Good for cases like highlighting an object, more expensive than a shell or fragment shader method, but also handles arbitrary shapes better with a guaranteed consistent line width outline (as long as the object isn't too small/thin to not get rendered at all). But, it's only a single outline around everything unless you run the whole process multiple times, which gets really expensive fast.
    2. Use scene depth and optionally normals to find discontinuities and draw edges there. This is the technique a lot of modern games use, like Borderlands. Doesn't require any additional setup, works on all kinds of (opaque) geometry, and shows a consistent line width. It also requires you have a depth texture to sample from.
    In general, post process effects are expensive on mobile, and both methods require rendering the geometry multiple times, which adds an additional performance hit. Unity used to have an edge detection image effect as part of the Standard Assets, but it's gone now. There are multiple paid and free options on the Asset Store and elsewhere.
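    The core of technique 2 is just comparing neighbouring depth samples. A bare-bones fragment sketch (the `_Threshold` and `_EdgeColor` properties are made up, and the surrounding pass/vertex boilerplate from UnityCG.cginc is omitted):

    ```shaderlab
    // Depth-based edge detection fragment for a post process pass.
    // Assumes #include "UnityCG.cginc" and the standard vert_img vertex shader.
    sampler2D _MainTex;
    float4 _MainTex_TexelSize;
    sampler2D _CameraDepthTexture;
    float _Threshold;
    fixed4 _EdgeColor;

    fixed4 frag (v2f_img i) : SV_Target
    {
        float2 t = _MainTex_TexelSize.xy;

        // sample depth at this pixel and two neighbours
        float d0 = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
        float dx = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2(t.x, 0)));
        float dy = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv + float2(0, t.y)));

        // a large depth change between neighbours means an edge
        float edge = (abs(dx - d0) + abs(dy - d0)) > _Threshold ? 1 : 0;
        return lerp(tex2D(_MainTex, i.uv), _EdgeColor, edge);
    }
    ```

    Adding the camera normals into the comparison catches edges where depth is continuous but orientation changes sharply.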

    This one looks promising for a basic quick and free option:
    https://assetstore.unity.com/packages/tools/particles-effects/outline-toolkit-98020

    And this one is a port of the legacy one to work with the latest official post processing stack:
    https://github.com/jean-moreno/EdgeDetect-PostProcessingUnity


    So, for what you're looking to do, a post process based method may still be the best option, but be warned it may come with a pretty significant performance hit. It also only works on opaque objects, which doesn't seem like it'll be an issue. Alternatively some modification of the barycentric technique may work for you.
     
  12. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Hi bgolus, thanks for the elaborate reply! The problem with the shell method is that my edges are gonna be sharp and the geometry is gonna be batched, ugh. However, I'm definitely interested in trying a depth-based edge detection approach (which is what I'm doing now), or maybe a fin based algorithm. My game doesn't have too many elements in it, so it's totally affordable to invest some computing power into a decent looking effect.
     
  13. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    Oh no! Just when I'd implemented my depth based outline, I remembered it has to work with polygons that use alpha textures, lol. Is it a dead end?
     
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,667
    If you're using alpha blending, then yes, it's a dead end. If you're using an alpha tested cutout shader, it's still possible.

    The depth texture can only store a single depth per pixel, so soft alpha blended shapes are impossible to handle. The only solution for alpha blending is to go with a shader based technique of some kind, like a combination of the hand drawn edges or barycentric edges for the overall polygon, and then doing in-shader edge detection on the alpha texture you're using. It's possible, but I'd stick with alpha testing.
     
  15. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    I'm using alpha blending, unfortunately, lol. I'm gonna try to find my way around a cutout shader. Is it possible to replace my camera's depth shader pass with one that cuts out the pixels from my mesh using the alpha texture? Sounds hardcore though...
     
  16. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    6,667
    Yes and no. The depth texture is generated by rendering all opaque objects in the scene using each object's shader's shadow caster pass. It ignores any object using the transparent queue, so you're out of luck there. You could change your transparent shader to use the opaque queue, but that'll have several rendering issues; sorting will be backwards, skybox will render on top of it, etc.

    If you're using an outline post process that relies on the depth normals texture, that texture is generated with a replacement shader pass. For that you could just tell your shader to use "RenderType"="TransparentCutout".
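    The change is mostly a matter of the tags plus a clip() in the fragment shader. Roughly (a sketch; the _Cutoff property and the vert_img boilerplate are assumptions):

    ```shaderlab
    SubShader
    {
        // AlphaTest queue + TransparentCutout RenderType so the depth/normals
        // replacement pass treats this as an alpha-tested object
        Tags { "Queue"="AlphaTest" "RenderType"="TransparentCutout" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            fixed _Cutoff;

            fixed4 frag (v2f_img i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                clip(col.a - _Cutoff); // hard-discard pixels below the cutoff
                return col;
            }
            ENDCG
        }
    }
    ```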
     
  17. zerotech15

    zerotech15

    Joined:
    Jan 28, 2016
    Posts:
    94
    That would be too much work for one feature. I guess I should move on instead... Thank you a lot for helping me out though, I learned many things in the process.