
How to get depth from AOV (Arbitrary Output Variables)?

Discussion in 'High Definition Render Pipeline' started by TOES2, Sep 17, 2021.

  1. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
  2. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Btw, do you know if it's possible to grab this info and apply in real-time? Like every frame.
     
  3. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
    Yes. You can use post processing for that.
     
  4. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
    No one knows? Where is the documentation? It seems we are just left to trial and error with HDRP...
     
  5. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
  6. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    You can use the attached script and the following steps to get depth and other AOVs into a render texture of your choice:

    - Create a RenderTexture in the editor that will receive the AOV info.
    - Attach the AOVOutputBuffer script to your camera.
    - In "Output Texture", select the RenderTexture that you created.
    - For depth, select "Depth Stencil" in "AOV Type" and "None" in the other options, like this:
    [screenshot: "AOV Type" set to "Depth Stencil", Material Output and Debug Output set to "None"]

    And here is the AOVOutputBuffer script:

    Code (CSharp):
    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;
    using UnityEngine.Rendering.HighDefinition.Attributes;

    public class AOVOutputBuffer : MonoBehaviour
    {
        public RenderTexture _outputTexture = null;
        public MaterialSharedProperty _materialOutput = MaterialSharedProperty.None;
        public DebugFullScreen _debugOutput = DebugFullScreen.None;
        public AOVBuffers _AOVType = AOVBuffers.Output;

        RTHandle _rt;

        // Allocates (once) the RTHandle that HDRP renders the AOV into.
        RTHandle RTAllocator(AOVBuffers bufferID)
        {
            return _rt ?? (_rt = RTHandles.Alloc(_outputTexture.width, _outputTexture.height));
        }

        // Called by HDRP when the AOV data is ready; copies it to the output texture.
        void AovCallback(
            CommandBuffer cmd,
            List<RTHandle> buffers,
            RenderOutputProperties outProps
        )
        {
            if (buffers.Count > 0)
            {
                cmd.Blit(buffers[0], _outputTexture);
            }
        }

        AOVRequestDataCollection BuildAovRequest()
        {
            var aovRequest = AOVRequest.NewDefault();
            if (_materialOutput != MaterialSharedProperty.None)
                aovRequest.SetFullscreenOutput(_materialOutput);

            if (_debugOutput != DebugFullScreen.None)
                aovRequest.SetFullscreenOutput(_debugOutput);

            return new AOVRequestBuilder().Add(
                aovRequest,
                RTAllocator,
                null, // lightFilter
                new[]
                {
                    _AOVType,
                },
                AovCallback
                ).Build();
        }

        // Start is called before the first frame update
        void Start()
        {
            GetComponent<HDAdditionalCameraData>().SetAOVRequests(BuildAovRequest());
        }
    }

    Edit: Updated the script with additional options.
     


    Last edited: Sep 20, 2021
  7. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    For real-time applications I recommend using the custom pass API, as suggested by passeridaepc.
    The AOV API is more suitable for offline capture, since it internally sets up an additional AOV camera, which has some overhead.
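
    For reference, a minimal sketch of what such a custom pass could look like (the class name CopyDepthPass and the outputTexture field are just placeholders, and depending on texture formats the depth blit may need a dedicated copy shader):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering.HighDefinition;

    // Hypothetical sketch: copy the camera depth into a user-supplied
    // RenderTexture every frame through the custom pass API.
    class CopyDepthPass : CustomPass
    {
        public RenderTexture outputTexture;

        protected override void Execute(CustomPassContext ctx)
        {
            if (outputTexture == null)
                return;

            // ctx.cameraDepthBuffer is the camera's depth RTHandle.
            ctx.cmd.Blit(ctx.cameraDepthBuffer, outputTexture);
        }
    }

    Add the pass to a Custom Pass Volume in the scene and it runs every frame, without the extra AOV camera.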
     
  8. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Can you elaborate on that, please? I'm not much of a programmer, and I want to output directional specular every frame
    to subtract it from some materials. But when I tried the depth capture script I mentioned above, it turned out not to work in real time: it only updates when the camera is moved in the editor. So I'm looking for a minimal example of how to output the directional specular into a render texture every frame.
     
  9. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    It's not very clear what you mean by "directional specular": Do you want to encode the directions of incident light (similar to directional lightmaps)? Or are you just referring to the specular contribution of the directional light?

    Edit: To answer your question regarding the overhead of the AOV API: you should expect that an AOV query will run the HDRP "frame loop" one additional time for every rendered frame. So it can be very expensive, in terms of both CPU and GPU time. With the custom pass mechanism you have full control of the cost, since you can manually do only the rendering steps that you need.
     
    Last edited: Sep 20, 2021
  10. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    I'm referring to the specular contribution of all real-time lights:
    [screenshot: specular lighting contribution of the scene]
    I want this contribution to be rendered into a render texture every frame, so that I can use that texture inside shaders/custom passes. Is this possible/practical?

    I've tested your script and it works perfectly and does exactly what I want, but it has a serious performance impact, as you said.
    Thanks!
     
    Last edited: Sep 20, 2021
  11. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
    Thank you, this is useful. However, when setting Material Output to None, Debug Output to None, and AOV Type to Depth Stencil, my render texture remains black. I tried different RenderTexture formats... The other render options work as expected (albedo, etc.)

    On another note, I found a separate way to do this by making a custom post-processing shader. There I can get the depth, but now albedo is missing... Or at least I do not know how to fetch it this way; maybe it is easy?

    Could you advise, please?

    Code (CSharp):
    float4 CustomPostProcess(Varyings input) : SV_Target
    {
        UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);

        float3 sourceColor = SAMPLE_TEXTURE2D_X(_MainTex, s_linear_clamp_sampler, input.texcoord).xyz;

        // Apply greyscale effect
        float3 color = lerp(sourceColor, Luminance(sourceColor), _Intensity) * 0.5;

        float3 albedoColor = // How to fetch albedo here?

        ...

        return float4(color, 1);
    }
     
  12. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
    Ah, yes, this is exactly what I started to do. But I could not figure out how to get Albedo this way, which is important for my denoiser. How can I grab this in the shader?

    I used this code to grab depth and normals:

    Code (CSharp):
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Common.hlsl"
    #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/Color.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/ShaderLibrary/ShaderVariables.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/PostProcessing/Shaders/FXAA.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/PostProcessing/Shaders/RTUpscale.hlsl"
    #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/NormalBuffer.hlsl"

    ...

    float3 GetNormalWorldSpace(float2 uv, float depth)
    {
        float3 normalWS = 0.0f;
        if (depth > 0.0f)
        {
            NormalData normalData;
            const float4 normalBuffer = LOAD_TEXTURE2D_X(_NormalBufferTexture, uv);
            DecodeFromNormalBuffer(normalBuffer, uv, normalData);
            normalWS = normalData.normalWS;
        }
        return normalWS;
    }

    float3 sourceColor = SAMPLE_TEXTURE2D_X(_MainTex, s_linear_clamp_sampler, input.texcoord).xyz;
    float depth = SampleCameraDepth(input.texcoord);
    float3 normalWS = GetNormalWorldSpace(input.texcoord * _ScreenSize.xy, depth);
    float3 albedo = // ?????
     
    Last edited: Sep 20, 2021
  13. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    Aside from AOVs, which are not designed for real-time use, I guess you could build a custom pass that uses MaterialPropertyBlocks to override the albedo of objects/renderers to black and thus draw only the specular contribution. However, this would still require an extra geometry pass, and the CPU setup for such a pass could be slow.

    Ideally, you would want the specular contribution to be saved after it is computed in the "lightloop" of the lit shader. For this, unfortunately, you will have to modify the HDRP code; I don't think there is an easy and also high-performance way to do it.
     
  14. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    The script that I posted will give you all AOVs (including depth) in the color channel of the render texture. I guess you were trying to read the depth attachment of the render texture; if you read the color, you should find the depth. But if you want depth in a custom post-process shader, your other solution is much better/faster than using AOVs.

    Regarding your other question on albedo: depending on your rendering options (deferred vs. forward), HDRP might or might not internally compute an albedo buffer. For this reason, albedo is not generally available in (post-process) shaders.

    If you want the albedo in a real-time app, you can use a "DrawRenderers" custom pass and override the material with an "Unlit" material that draws only albedo. This should be faster than AOVs.
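
    As a rough sketch of that setup in code (the names AlbedoPassSetup and unlitAlbedoMaterial are placeholders; the material is assumed to be an Unlit material that outputs only the base color):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering.HighDefinition;

    // Hypothetical sketch: add a DrawRenderers custom pass that re-draws
    // the scene with an unlit, albedo-only override material.
    public class AlbedoPassSetup : MonoBehaviour
    {
        public Material unlitAlbedoMaterial; // placeholder: Unlit material sampling only base color

        void Start()
        {
            var volume = gameObject.AddComponent<CustomPassVolume>();
            volume.injectionPoint = CustomPassInjectionPoint.BeforePostProcess;

            var pass = (DrawRenderersCustomPass)volume.AddPassOfType(typeof(DrawRenderersCustomPass));
            pass.overrideMaterial = unlitAlbedoMaterial;
        }
    }

    The same pass can also be configured entirely in the inspector through a Custom Pass Volume component, with no scripting.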
     
    Last edited: Sep 21, 2021
  15. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
    Thank you for your suggestion. I tried out the shader example in that link:

    Code (CSharp):
    float4 FullScreenPass(Varyings varyings) : SV_Target
    {
        UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(varyings);
        float depth = LoadCameraDepth(varyings.positionCS.xy);
        PositionInputs posInput = GetPositionInput(varyings.positionCS.xy, _ScreenSize.zw, depth, UNITY_MATRIX_I_VP, UNITY_MATRIX_V);
        float3 viewDirection = GetWorldSpaceNormalizeViewDir(posInput.positionWS);
        float4 color = float4(0.0, 0.0, 0.0, 0.0);

        // Load the camera color buffer at mip 0 if we're not at the before-rendering injection point
        if (_CustomPassInjectionPoint != CUSTOMPASSINJECTIONPOINT_BEFORE_RENDERING)
            color = float4(CustomPassLoadCameraColor(varyings.positionCS.xy, 0), 1);

        // Add your custom pass code here
        return float4(color.rgb, 1);
    }
    But this only returns black. How can I render albedo here?
     
  16. Enigma229

    Enigma229

    Joined:
    Aug 6, 2019
    Posts:
    135
    I’d like to use AOVs to render out specific passes (alpha, depth, etc.) as an image sequence, where Unity renders out each frame at run time.
     
  17. TOES2

    TOES2

    Joined:
    May 20, 2013
    Posts:
    135
  18. alejobrainz

    alejobrainz

    Joined:
    Sep 10, 2005
    Posts:
    288

    Hi PavlosM. I was trying the attached script and it works great for extracting a single AOV channel. We want to extract two channels, Albedo and Normals. I tried adding two instances of the component, but the second channel renders black. Any suggestions on how to extract two channels to separate RenderTextures?

    Thanks in advance,

    Alejandro.
     
  19. alejobrainz

    alejobrainz

    Joined:
    Sep 10, 2005
    Posts:
    288
    Hi @PavlosM please advise.
     
  20. Enigma229

    Enigma229

    Joined:
    Aug 6, 2019
    Posts:
    135
    When are you guys going to make the non-beauty passes have anti-aliasing? The Alpha pass in particular is pretty useless because it's so jagged at the edges.