Possible to HDRP edge detect each object and show overlapping objects? "Floor plan" effect?

Discussion in 'High Definition Render Pipeline' started by dbillings_thehalseygroupllc, Jan 6, 2020.

  1. dbillings_thehalseygroupllc

    dbillings_thehalseygroupllc

    Joined:
    Feb 15, 2016
    Posts:
    23
    Hello,
    I have no experience with writing render passes and could really use some assistance with a problem I'm having. I would like an edge-detect type effect to create a "floor plan" look from an orthographic camera looking straight down onto the scene. I was previously pointed to this repository by Alilievr: https://octolinker-demo.now.sh/alelievr/HDRP-Custom-Passes

    The two effects that seemed like possible solutions are the TIPS effect and the Outline effect. The TIPS effect works GREAT in perspective view but breaks down in orthographic, as it can't outline flat edges that are perpendicular to the orthographic viewing angle, which makes some objects not show up on the "floor plan".
    The Outline effect seems much more promising and almost works the way I need it to; however, it only outlines layer-tagged objects as a whole, not as individual objects on top of each other. Example:
    (how it looks)
    upload_2020-1-6_10-26-22.png
    (how it should look; the horizontal bar is on top of the vertical bars, but currently it looks as if they're one object)
    upload_2020-1-6_10-29-16.png

    Any assistance on the matter or ways to achieve the desired effect shown above would be greatly appreciated!
    Using Unity version 2019.3.0f3 and HDRP.
     
  2. dbillings_thehalseygroupllc

    dbillings_thehalseygroupllc

    Joined:
    Feb 15, 2016
    Posts:
    23
  3. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    262
  4. dbillings_thehalseygroupllc

    dbillings_thehalseygroupllc

    Joined:
    Feb 15, 2016
    Posts:
    23
    Thank you! Is there a line I can edit to change the color of the black background to another color?
     
  5. antoinel_unity

    antoinel_unity

    Unity Technologies

    Joined:
    Jan 7, 2019
    Posts:
    262
    You can lerp the edgeDepth result between two colors (the background color and the edge color), or just assign the colors directly after the comparison with the depth threshold.
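    For example, a minimal sketch of the lerp approach (_BackgroundColor is a new property you would declare next to _GlowColor and bind from the custom pass script; it is not part of the original shader):
    Code (HLSL):
        // Sketch only: _BackgroundColor is a new float3 declared next to _GlowColor.
        // edgeDepth is the value computed in EdgeDetect().
        float3 outColor = lerp(_BackgroundColor, _GlowColor, saturate(edgeDepth));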
     
  6. dbillings_thehalseygroupllc

    dbillings_thehalseygroupllc

    Joined:
    Feb 15, 2016
    Posts:
    23
    Thank you for your help on this, I've almost got it exactly where I want it! Would you happen to know if there is any math I can edit to reduce the stepping on slightly angled faces? The circled faces are flat quads with only about a 7-degree tilt, but they come out like this instead of a nice outline like the 0-degree tilt faces.
    upload_2020-1-9_10-20-59.png
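    One approach that may reduce this stepping (adapted from the common outline-tutorial idea of a view-angle-modulated threshold; treat it as a sketch, not something already in the repository) is to widen the depth threshold on surfaces that tilt away from the camera, using the normal buffer, so shallow slopes need a much larger depth difference before they register as an edge. A rough sketch to drop into EdgeDetect(); _DepthNormalThreshold and _DepthNormalThresholdScale are new tunable properties, and the exact formula and view-direction sign are assumptions:
    Code (HLSL):
        // Sketch only (untested): grow the depth threshold on surfaces that tilt
        // away from the view direction, so slightly sloped faces don't trip the
        // depth edge test. The view-direction sign may need flipping depending
        // on conventions.
        NormalData normalData;
        DecodeFromNormalBuffer(_ScreenSize.xy * uv, normalData);
        float3 viewDir = UNITY_MATRIX_V[2].xyz; // toward the camera for an ortho top-down view
        float NdotV = 1 - saturate(dot(normalData.normalWS, viewDir));
        float t = saturate((NdotV - _DepthNormalThreshold) / (1 - _DepthNormalThreshold));
        float newDepthThreshold = depthThreshold * depth0 * (1 + t * _DepthNormalThresholdScale);
        // then re-enable: edgeDepth = edgeDepth > newDepthThreshold ? 1 : 0;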
     
  7. dbillings_thehalseygroupllc

    dbillings_thehalseygroupllc

    Joined:
    Feb 15, 2016
    Posts:
    23
    Here's the edited shader code in case anyone wants to achieve a similar effect.
    Code (HLSL):
    Shader "FullScreen/TIPS"
    {
        HLSLINCLUDE

        #pragma vertex Vert

        #pragma target 4.5
        #pragma only_renderers d3d11 ps4 xboxone vulkan metal switch

        #include "Packages/com.unity.render-pipelines.high-definition/Runtime/RenderPipeline/RenderPass/CustomPass/CustomPassCommon.hlsl"
        #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/NormalBuffer.hlsl"

        // The PositionInputs struct allows you to retrieve a lot of useful information for your fullScreenShader:
        // struct PositionInputs
        // {
        //     float3 positionWS;  // World space position (could be camera-relative)
        //     float2 positionNDC; // Normalized screen coordinates within the viewport    : [0, 1) (with the half-pixel offset)
        //     uint2  positionSS;  // Screen space pixel coordinates                       : [0, NumPixels)
        //     uint2  tileCoord;   // Screen tile coordinates                              : [0, NumTiles)
        //     float  deviceDepth; // Depth from the depth buffer                          : [0, 1] (typically reversed)
        //     float  linearDepth; // View space Z coordinate                              : [Near, Far]
        // };

        // To sample custom buffers, you have access to these functions.
        // But be careful: on most platforms you can't sample the bound color buffer. It means that you
        // can't use SampleCustomColor when the pass color buffer is set to custom (and the same for the camera buffer).
        // float4 SampleCustomColor(float2 uv);
        // float4 LoadCustomColor(uint2 pixelCoords);
        // float LoadCustomDepth(uint2 pixelCoords);
        // float SampleCustomDepth(float2 uv);

        // There are also a lot of utility functions you can use inside Common.hlsl and Color.hlsl;
        // you can check them out in the source code of the core SRP package.

        TEXTURE2D_X(_TIPSBuffer);
        float _EdgeDetectThreshold;
        float3 _GlowColor;
        float _EdgeRadius;
        float _BypassMeshDepth;

        float SampleClampedDepth(float2 uv) { return SampleCameraDepth(clamp(uv, _ScreenSize.zw, 1 - _ScreenSize.zw)).r; }

        float EdgeDetect(float2 uv, float depthThreshold, float normalThreshold)
        {
            normalThreshold /= _EdgeDetectThreshold;
            depthThreshold /= _EdgeDetectThreshold;
            float halfScaleFloor = floor(_EdgeRadius * 0.5);
            float halfScaleCeil = ceil(_EdgeRadius * 0.5);

            // Compute UV positions to fetch depth information
            float2 bottomLeftUV = uv - float2(_ScreenSize.zw.x, _ScreenSize.zw.y) * halfScaleFloor;
            float2 topRightUV = uv + float2(_ScreenSize.zw.x, _ScreenSize.zw.y) * halfScaleCeil;
            float2 bottomRightUV = uv + float2(_ScreenSize.zw.x * halfScaleCeil, -_ScreenSize.zw.y * halfScaleFloor);
            float2 topLeftUV = uv + float2(-_ScreenSize.zw.x * halfScaleFloor, _ScreenSize.zw.y * halfScaleCeil);

            // Depth from the camera buffer
            float depth0 = SampleClampedDepth(bottomLeftUV);
            float depth1 = SampleClampedDepth(topRightUV);
            float depth2 = SampleClampedDepth(bottomRightUV);
            float depth3 = SampleClampedDepth(topLeftUV);

            float depthDerivative0 = depth1 - depth0;
            float depthDerivative1 = depth3 - depth2;

            float edgeDepth = sqrt(pow(depthDerivative0, 2) + pow(depthDerivative1, 2)) * 100;

            float newDepthThreshold = depthThreshold * depth0;
            //edgeDepth = edgeDepth > newDepthThreshold ? 1 : 0;

            // Normals extracted from the camera normal buffer
            //NormalData normalData0, normalData1, normalData2, normalData3;
            //DecodeFromNormalBuffer(_ScreenSize.xy * bottomLeftUV, normalData0);
            //DecodeFromNormalBuffer(_ScreenSize.xy * topRightUV, normalData1);
            //DecodeFromNormalBuffer(_ScreenSize.xy * bottomRightUV, normalData2);
            //DecodeFromNormalBuffer(_ScreenSize.xy * topLeftUV, normalData3);

            //float3 normalFiniteDifference0 = normalData1.normalWS - normalData0.normalWS;
            //float3 normalFiniteDifference1 = normalData3.normalWS - normalData2.normalWS;

            //float edgeNormal = sqrt(dot(normalFiniteDifference0, normalFiniteDifference0) + dot(normalFiniteDifference1, normalFiniteDifference1));
            //edgeNormal = edgeNormal > normalThreshold ? 1 : 0;

            // Combined
            return edgeDepth; //max(edgeDepth, edgeNormal);
        }

        float4 Compositing(Varyings varyings) : SV_Target
        {
            float depth = LoadCameraDepth(varyings.positionCS.xy);
            PositionInputs posInput = GetPositionInput(varyings.positionCS.xy, _ScreenSize.zw, depth, UNITY_MATRIX_I_VP, UNITY_MATRIX_V);
            float4 color = float4(1.0, 1.0, 1.0, 1.0);

            // Load the camera color buffer at mip 0 if we're not at the before-rendering injection point
            if (_CustomPassInjectionPoint != CUSTOMPASSINJECTIONPOINT_BEFORE_RENDERING)
                color = float4(CustomPassSampleCameraColor(posInput.positionNDC.xy, 0), 1);

            // Do some normal- and depth-based edge detection on the camera buffers
            float3 edgeDetectColor = EdgeDetect(posInput.positionNDC.xy, 2, 1) * _GlowColor;
            // Remove the edge detect effect between the sky and objects when the object is inside the sphere
            edgeDetectColor *= depth != UNITY_RAW_FAR_CLIP_VALUE;
            if (all(edgeDetectColor == float3(0, 0, 0)))
                edgeDetectColor = float3(1, 1, 1);

            // Load depth and color information from the custom buffer
            float meshDepthPos = LoadCustomDepth(posInput.positionSS.xy);
            float4 meshColor = LoadCustomColor(posInput.positionSS.xy);

            // Change the color of the icosahedron mesh (this sets the color of the background)
            meshColor = float4(_GlowColor, 1) * meshColor;

            // Transform the raw depth into eye space depth
            float sceneDepth = LinearEyeDepth(depth, _ZBufferParams);
            float meshDepth = LinearEyeDepth(meshDepthPos, _ZBufferParams);

            if (_BypassMeshDepth != 0)
                meshDepth = _BypassMeshDepth;

            // Add the intersection of mesh and scene depth to the edge detect result
            edgeDetectColor = lerp(edgeDetectColor, _GlowColor, saturate(2 - abs(meshDepth - sceneDepth) * 200 * rcp(_EdgeRadius)));

            // Blend the mesh color and edge detect color using the mesh alpha transparency
            float3 edgeMeshColor = lerp(edgeDetectColor, meshColor.xyz, (meshDepth < sceneDepth) ? meshColor.a : 0);

            // Keep the edge detection effect from leaking inside the icosahedron mesh
            float3 finalColor = saturate(meshDepth - sceneDepth) > 0 ? color.xyz : edgeMeshColor;

            return float4(finalColor, 1);
        }

        // We need this copy because we can't sample and write to the same render target (camera color buffer)
        float4 Copy(Varyings varyings) : SV_Target
        {
            float depth = LoadCameraDepth(varyings.positionCS.xy);
            PositionInputs posInput = GetPositionInput(varyings.positionCS.xy, _ScreenSize.zw, depth, UNITY_MATRIX_I_VP, UNITY_MATRIX_V);

            return float4(LOAD_TEXTURE2D_X_LOD(_TIPSBuffer, posInput.positionSS.xy, 0).rgb, 1);
        }

        ENDHLSL

        SubShader
        {
            Pass
            {
                Name "Compositing"

                ZWrite Off
                ZTest Always
                Blend Off
                Cull Off

                HLSLPROGRAM
                    #pragma fragment Compositing
                ENDHLSL
            }

            Pass
            {
                Name "Copy"

                ZWrite Off
                ZTest Always
                Blend Off
                Cull Off

                HLSLPROGRAM
                    #pragma fragment Copy
                ENDHLSL
            }
        }
        Fallback Off
    }
     
  8. ryisnelly

    ryisnelly

    Joined:
    Oct 28, 2012
    Posts:
    7
    Hello,

    I have been playing around with this shader and I can't seem to get the camera and edge detect working together; I too have little knowledge of writing render passes. I have tried what antoinel said and I can't get it to work properly. Do you have any suggestions as to where I am going wrong?

    Thanks.
     
  9. shank1993

    shank1993

    Joined:
    Dec 31, 2012
    Posts:
    5
    Hello,
    Thank you for the shader. I got this result, but I need just the outline; please guide me.
    Below is the result I am looking for: