
Program a shader that only displays the visible edges

Discussion in 'Shaders' started by elhoblerino, Nov 7, 2019.

  1. elhoblerino


    Joined:
    Oct 16, 2019
    Posts:
    5
    Hey everybody,

    For a current project I have to program a shader that returns only the visible edges of an object as a wireframe model. My idea was to create a wireframe model of my object and then delete all edges between triangles that have the same normal vector. Since I'm a total beginner at programming, I don't know whether this is the right approach or how to implement it in C#.

    I'm grateful for any help.
     
  2. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    8,317
    There are a lot of threads on edge finding / wireframe rendering in the forums. The most common solution is to use the camera depth-normals texture to do edge detection as a post process, but that doesn't work if you want your objects to be transparent, which it sounds like you do.

    The next solution is usually one that highlights all triangle edges using barycentric coordinates, either generated in a geometry shader or prebaked into vertex colors / UVs, but again that doesn't really help you.

    The solution I came up with for a game I worked on was to generate a texture that was a signed distance field (SDF) to the closest edge and use that to render anti-aliased wireframes. It worked because the meshes were already auto-UVed at a constant texel size, with islands split at a low surface-angle threshold. I exported those UV sheets into Photoshop, applied an inner & outer glow layer style, and then used that texture in game. It'd be possible to do something similar inside of Unity if you can't pre-bake the content, but it might be slow.

    Really, your idea of iterating over the surface geometry in C# is fine. Note that the way Unity's meshes are stored, it's already broken up into mesh "islands" where any change in normal across faces will cause shared vertices to be split. It'll also split across UVs and vertex color changes, but as long as those line up with the hard edges of the normals, then it should be pretty easy to find the edges of polygons to draw.

    Just deleting the interior triangles will result in polygon soup, or no edges at all to render, as GPUs can really only render triangles and not arbitrary-sided polygons. So you'd have to construct some kind of new mesh that's just the edges. Or you could use something like Vectrosity to render the edges out as lines manually.
    https://assetstore.unity.com/packages/tools/particles-effects/vectrosity-82
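
    The suggestion above (iterate the geometry in C#, keep only the edges whose adjacent triangles disagree in normal, and build a new lines-only mesh) could be sketched roughly like this. This is an untested illustration, not a drop-in solution: the class name, the angle threshold, and the exact-position edge matching are all my own assumptions.

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch: build a lines-only mesh containing the "hard" edges,
    // i.e. edges used by one triangle (boundary) or by two triangles whose
    // face normals differ by more than a threshold.
    public static class HardEdgeExtractor
    {
        public static Mesh Extract(Mesh source, float angleThreshold = 1.0f)
        {
            Vector3[] verts = source.vertices;
            int[] tris = source.triangles;

            // Map each undirected edge (keyed by its endpoint positions, so
            // vertices split on UV/color seams still match as long as the
            // positions are bit-identical) to the face normals that use it.
            var edges = new Dictionary<(Vector3, Vector3), List<Vector3>>();
            for (int i = 0; i < tris.Length; i += 3)
            {
                Vector3 a = verts[tris[i]], b = verts[tris[i + 1]], c = verts[tris[i + 2]];
                Vector3 n = Vector3.Cross(b - a, c - a).normalized;
                AddEdge(edges, a, b, n);
                AddEdge(edges, b, c, n);
                AddEdge(edges, c, a, n);
            }

            var lineVerts = new List<Vector3>();
            foreach (var kv in edges)
            {
                bool hard = kv.Value.Count == 1 ||
                            Vector3.Angle(kv.Value[0], kv.Value[1]) > angleThreshold;
                if (hard) { lineVerts.Add(kv.Key.Item1); lineVerts.Add(kv.Key.Item2); }
            }

            var mesh = new Mesh();
            mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
            mesh.vertices = lineVerts.ToArray();
            var indices = new int[lineVerts.Count];
            for (int i = 0; i < indices.Length; i++) indices[i] = i;
            mesh.SetIndices(indices, MeshTopology.Lines, 0);
            return mesh;
        }

        // Order the endpoints component-wise so (a,b) and (b,a) share one key.
        static void AddEdge(Dictionary<(Vector3, Vector3), List<Vector3>> edges,
                            Vector3 a, Vector3 b, Vector3 n)
        {
            bool swap = a.x > b.x || (a.x == b.x && (a.y > b.y || (a.y == b.y && a.z > b.z)));
            var key = swap ? (b, a) : (a, b);
            if (!edges.TryGetValue(key, out var normals))
                edges[key] = normals = new List<Vector3>();
            normals.Add(n);
        }
    }

    A mesh built this way can be drawn with any unlit shader via a normal MeshFilter/MeshRenderer, since MeshTopology.Lines tells the GPU to rasterize index pairs as line segments instead of triangles.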
     
  3. Przemyslaw_Zaworski


    Joined:
    Jun 9, 2017
    Posts:
    187
    A geometry shader is my favorite option:

    Code (CSharp):
    Shader "Wireframe (Geometry Shader)"
    {
        SubShader
        {
            Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
            Pass
            {
                Blend SrcAlpha OneMinusSrcAlpha
                CGPROGRAM
                #pragma vertex VSMain
                #pragma geometry GSMain
                #pragma fragment PSMain
                #pragma target 5.0

                struct Data
                {
                    float4 vertex : SV_Position;
                    float2 barycentric : BARYCENTRIC;
                };

                void VSMain(inout float4 vertex : POSITION) { }

                [maxvertexcount(3)]
                void GSMain(triangle float4 patch[3] : SV_Position, inout TriangleStream<Data> stream)
                {
                    Data GS;
                    for (uint i = 0; i < 3; i++)
                    {
                        GS.vertex = UnityObjectToClipPos(patch[i]);
                        // Corners get (0,0), (1,0), (0,1); the third barycentric
                        // coordinate is reconstructed in the fragment shader.
                        GS.barycentric = float2(fmod(i, 2.0), step(2.0, i));
                        stream.Append(GS);
                    }
                    stream.RestartStrip();
                }

                float4 PSMain(Data PS) : SV_Target
                {
                    float3 coord = float3(PS.barycentric, 1.0 - PS.barycentric.x - PS.barycentric.y);
                    // Screen-space derivatives keep the line width constant in pixels.
                    coord = smoothstep(fwidth(coord) * 0.1, fwidth(coord) * 0.1 + fwidth(coord), coord);
                    return float4(0.0, 0.0, 0.0, 1.0 - min(coord.x, min(coord.y, coord.z)));
                }
                ENDCG
            }
        }
    }
    (attached screenshot: resulting wireframe render)
     
  4. elhoblerino


    Joined:
    Oct 16, 2019
    Posts:
    5
    First of all, thanks to both of you for the quick response.

    Since my application should run on the Microsoft HoloLens, I would prefer the geometry shader solution. In addition, the edges between the triangles don't have to be deleted; it is sufficient to color them black.

    @Przemyslaw_Zaworski
    Thanks for your example, but all triangle edges are still drawn here. Is it possible to use this example as a basis to compare the normal vectors of neighboring triangles and, if the difference is below a threshold, color those edges black?
    I have already tried to find a solution for this, but without success.
     
  5. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    8,317
    No. Geometry shaders are explicitly limited to information about a single triangle at a time. There are examples in academic papers of geometry shaders accessing data from neighboring triangles via adjacency information, but this isn't something you can do in Unity, or in any other real-time game engine I know of. Adjacency information simply isn't used outside of academia, as it's way, way too slow to use generically.

    So instead you need to pre-process the meshes in C#, or plausibly in a compute shader (which still requires you to process the mesh in C# to convert it into a form a compute shader can use). If you're going to pre-process the mesh, there's no real reason to use a geometry shader anymore, since any of the data you'd get from one can be pre-baked into the mesh.
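
    The pre-bake described above can be done, for example, by "exploding" the mesh so every triangle owns its three vertices and storing the barycentric corner values in the vertex colors; a plain vertex/fragment shader can then read them where the geometry shader example generated them. An untested sketch (the class name is illustrative):

    Code (CSharp):
    using UnityEngine;

    // Illustrative sketch: duplicate vertices per triangle and bake the
    // barycentric corner values (1,0,0), (0,1,0), (0,0,1) into vertex colors.
    public static class BarycentricBaker
    {
        public static Mesh Bake(Mesh source)
        {
            int[] tris = source.triangles;
            Vector3[] verts = source.vertices;

            var newVerts = new Vector3[tris.Length];
            var colors = new Color[tris.Length];
            var newTris = new int[tris.Length];

            for (int i = 0; i < tris.Length; i++)
            {
                newVerts[i] = verts[tris[i]];
                colors[i] = new Color(i % 3 == 0 ? 1 : 0,
                                      i % 3 == 1 ? 1 : 0,
                                      i % 3 == 2 ? 1 : 0);
                newTris[i] = i;
            }

            var mesh = new Mesh();
            // Exploded meshes easily exceed the 16-bit index limit.
            mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
            mesh.vertices = newVerts;
            mesh.colors = colors;
            mesh.triangles = newTris;
            mesh.RecalculateNormals();
            mesh.RecalculateBounds();
            return mesh;
        }
    }

    In the shader, the fragment stage then uses the interpolated vertex color exactly the way the barycentric interpolator is used in the geometry shader example.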
     
  6. Przemyslaw_Zaworski


    Joined:
    Jun 9, 2017
    Posts:
    187
    Geometry shader - basic approach: generate triangle normals, blur the image, and use derivatives to render contours.
    Black aliased edges and visualised normals - the code still needs to be improved:

    (attached screenshot: visualised flat normals with black outlines)

    Code (CSharp):
    Shader "Wireframe with Silhouette Outline (Geometry Shader)"
    {
        SubShader
        {
            // First pass: render flat (per-triangle) normals as color.
            Pass
            {
                CGPROGRAM
                #pragma vertex VSMain
                #pragma geometry GSMain
                #pragma fragment PSMain
                #pragma target 5.0

                struct Data
                {
                    float4 vertex : SV_Position;
                    float3 normal : NORMAL;
                };

                float3 GenerateNormal(float3 v1, float3 v2, float3 v3)
                {
                    return normalize(cross(v2 - v1, v3 - v1));
                }

                void VSMain(inout float4 vertex : POSITION) { }

                [maxvertexcount(3)]
                void GSMain(triangle float4 patch[3] : SV_Position, inout TriangleStream<Data> stream)
                {
                    Data GS;
                    float3 trianglenormal = GenerateNormal(patch[0].xyz, patch[1].xyz, patch[2].xyz);
                    for (uint i = 0; i < 3; i++)
                    {
                        GS.vertex = UnityObjectToClipPos(patch[i]);
                        GS.normal = trianglenormal;
                        stream.Append(GS);
                    }
                    stream.RestartStrip();
                }

                float4 PSMain(Data PS) : SV_Target
                {
                    return float4(PS.normal, 1.0);
                }
                ENDCG
            }

            // Capture the normals image rendered by the pass above.
            GrabPass { "_BackgroundTexture" }

            // Second pass: blur the captured normals and use screen-space
            // derivatives to draw contours where the normals change.
            Pass
            {
                CGPROGRAM
                #pragma vertex VSMain
                #pragma geometry GSMain
                #pragma fragment PSMain
                #pragma target 5.0

                sampler2D _BackgroundTexture;
                float4 _BackgroundTexture_TexelSize;

                struct Data { float4 vertex : SV_Position; };

                void VSMain(inout float4 vertex : POSITION) { }

                [maxvertexcount(3)]
                void GSMain(triangle float4 patch[3] : SV_Position, inout TriangleStream<Data> stream)
                {
                    Data GS;
                    for (uint i = 0; i < 3; i++)
                    {
                        GS.vertex = UnityObjectToClipPos(patch[i]);
                        stream.Append(GS);
                    }
                    stream.RestartStrip();
                }

                // Spiral blur: rotate the sample offset by a fixed angle each
                // iteration while growing the radius.
                float3 blur(float2 uv, float radius)
                {
                    float2x2 m = float2x2(-0.736717, 0.6762, -0.6762, -0.736717);
                    float3 total = float3(0.0, 0.0, 0.0);
                    float2 texel = float2(0.002 * _BackgroundTexture_TexelSize.z / _BackgroundTexture_TexelSize.w, 0.002);
                    float2 angle = float2(0.0, radius);
                    radius = 1.0;
                    for (int j = 0; j < 64; j++)
                    {
                        radius += rcp(radius);
                        angle = mul(angle, m);
                        total += tex2D(_BackgroundTexture, uv + texel * (radius - 1.0) * angle).rgb;
                    }
                    return total / 64.0;
                }

                float4 PSMain(Data PS) : SV_Target
                {
                    float3 color = blur(PS.vertex.xy / _ScreenParams.xy, 0.05);
                    float3 value = smoothstep(0.0, 50.0, abs(color) / fwidth(color));
                    return float4(min(min(value.x, min(value.y, value.z)).xxx, abs(color)), 1.0);
                }
                ENDCG
            }
        }
    }
    (attached screenshot: blurred normals with contour outlines)
     
  7. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    8,317
    HoloLens was mentioned earlier. AFAIK grab pass has problems on HoloLens, potentially with only one eye working properly? It's also not great for performance.

    Technically the above method is the same idea as the post-processing method I mentioned earlier, just more expensive. Also, I'm not sure why it's using a geometry shader; it seems like overkill when you're just outputting the triangle as-is, or doing flat shading (which I don't think the OP was looking for).

    If you're going to go with the above approach of getting normal edges like this, you're better off rendering your object to a render texture manually and doing edge detection on that when rendering the object rather than using a grab pass. A grab pass is nice because it means you don't need any scripting, but the performance impact of using them, especially on mobile hardware, is not insignificant.
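
    The render-texture setup described above could look roughly like this: a second camera renders the scene's normals into a RenderTexture using a replacement shader, and the material doing the edge detection samples that texture instead of a GrabPass. This is an untested sketch; the component, shader slot, and global texture name are placeholders.

    Code (CSharp):
    using UnityEngine;

    // Illustrative sketch: render a normals pre-pass into a RenderTexture
    // and expose it globally for edge-detection materials to sample.
    public class NormalsPrepass : MonoBehaviour
    {
        public Camera mainCamera;      // camera the object is normally seen by
        public Shader normalsShader;   // e.g. a flat-normals replacement shader

        Camera prepassCam;
        RenderTexture rt;

        void Start()
        {
            rt = new RenderTexture(Screen.width, Screen.height, 24);
            prepassCam = new GameObject("NormalsPrepass").AddComponent<Camera>();
            prepassCam.CopyFrom(mainCamera);
            prepassCam.targetTexture = rt;
            prepassCam.clearFlags = CameraClearFlags.SolidColor;
            prepassCam.backgroundColor = Color.clear;
            // Render everything with the normals shader instead of its own.
            prepassCam.SetReplacementShader(normalsShader, null);
            // Edge-detection shaders can sample this as _NormalsTex.
            Shader.SetGlobalTexture("_NormalsTex", rt);
        }

        void OnDestroy()
        {
            if (rt != null) rt.Release();
        }
    }

    Unlike a GrabPass, the pre-pass resolution and format are under your control, and nothing has to stall the main render to copy the screen.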
     