Question [2019.3.3f1+7.1.8] Trying to figure out how to create a CustomPass for UI

Discussion in 'High Definition Render Pipeline' started by StealthyMoose, Mar 11, 2020.

  1. StealthyMoose

    Joined:
    Dec 1, 2016
    Posts:
    6
    So, first the problem statement, and then the story so far:

    In the past I used "camera stacking" for UIs: the UI lived under a screen-space camera at the world origin, on the UI layer, with a depth above the main camera. This let me mix non-UGUI elements in with UGUI elements to good effect - MeshRenderers for game visuals and things like maps, particle systems for effects with nice blend modes (wiggly additive glows on buttons and the like). Additionally, because the UI was stationary in world space, the Canvas geometry wasn't regenerated every frame if nothing changed.

    For a new project, HDRP is an absolute requirement due to the need for graphical fidelity and postprocessing. In the old image-effect flow, the main camera's postprocessing applied before the next camera began rendering, so the UI stayed crisp, could sit on top of vignettes, and could be excluded from color grading. When I saw that HDRP supported camera stacking I was pretty excited - however, the caveat is that it uses the Volume of the highest camera, which in my case had a "nothing" Volume layer mask (because I want the UI excluded from postprocessing), so all the world DOF, vignette, etc. went away.

    When searching for solutions I found this thread where SebLagarde suggested using an AfterPostprocess custom fullscreen pass to copy an offscreen RenderTarget to the current target (the thread devolved substantially from there). This is my current solution, but it has a lot of downsides. According to the profiler, the existence of the second UI camera within HDRP adds approximately 2ms of pure setup/teardown overhead to my frame time. Because HDRP ignores the bit format set on a target RenderTexture, getting alpha support means switching the entire pipeline's color buffer from 11/11/10 over to 16/16/16/16 - this took me a while to figure out, since a lot of people hit the same problem before that was supported and their threads ended in a shrug. Additionally, because it copies a separate full color buffer, you lose blend-mode information and only get traditional alpha transparency, which is somewhat limiting.
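    For reference, the copy step of that workaround can be sketched as a CustomPass roughly like the following (HDRP 7.x API; the shader name, texture property, and field wiring are hypothetical placeholders, not the exact code from that thread):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch of the "offscreen UI camera + AfterPostProcess copy" approach.
// Assumes a second camera renders the UI layer into uiTexture (an RGBA16F
// target, so the alpha channel survives), and this pass, set to injection
// point AfterPostProcess, composites it over the finished frame.
[System.Serializable]
class UIOverlayPass : CustomPass
{
    public RenderTexture uiTexture;  // target of the offscreen UI camera
    Material blitMaterial;           // simple alpha-blended fullscreen shader

    protected override void Setup(ScriptableRenderContext renderContext, CommandBuffer cmd)
    {
        // "Hidden/UIComposite" is a placeholder name for a fullscreen
        // shader that samples _UITex and alpha-blends it over the frame.
        blitMaterial = CoreUtils.CreateEngineMaterial("Hidden/UIComposite");
    }

    protected override void Execute(ScriptableRenderContext renderContext,
        CommandBuffer cmd, HDCamera hdCamera, CullingResults cullingResult)
    {
        blitMaterial.SetTexture("_UITex", uiTexture);
        CoreUtils.DrawFullScreen(cmd, blitMaterial);
    }

    protected override void Cleanup()
    {
        CoreUtils.Destroy(blitMaterial);
    }
}
```

    This is exactly where the downsides above come from: the composite only sees the UI camera's final color buffer, so per-element blend modes are already flattened into plain alpha by the time it runs.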

    So after seeing what CustomPasses can do (which, btw, are real dang neat), I wondered whether I could instead use one to render my UI over the top of the game after postprocessing, since I don't need any of HDRP's lighting or other information. Starting from this example is where things got weird.

    First off: when selecting things to be rendered by pass name (the ShaderTagId), I found that HDRP sneakily renames passes it doesn't recognize - the UI material's "Default" pass gets turned into "SRPDefaultUnlit". The same goes for any user-defined name: if you create an unlit shader and name its pass "Hello", the frame debugger will show the pass as "Hello", but a RendererListDesc will only create an entry for it if you query for the ShaderTagId "SRPDefaultUnlit".

    The next big problem is that a Screen Space - Camera canvas will only regenerate its geometry if the supplied render Camera is enabled - which wouldn't be a problem except that (a) there is no way to manually generate the UI mesh, since all the methods Unity uses for that in UGUI are private/internal, and (b) the whole point of using a CustomPass was to avoid the overhead of a second camera (and making the UI screen space to the main camera causes the Canvas to regenerate its mesh every frame). I know there are sneaky ways to get handles to some of the internal methods via reflection, but this project targets consoles, so that is not viable. So, in order for my CustomPass to render the UI, the camera has to be enabled anyway and either sit below the main camera depth-wise or target a RenderTexture - both of which end up with the camera effectively rendering twice. I tried some low-tech sneaky things like enabling the camera, calling Canvas.ForceUpdateCanvases, and then disabling the camera from within the CustomPass, but no dice (and that would carry the same or worse overhead as putting the UI in the main camera's screen space).
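    For anyone walking the same path, here is a minimal sketch of the draw-renderers side, assuming HDRP 7.x's CustomPass API (exact namespaces and the DrawRendererList helper have moved between HDRP versions):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;
using UnityEngine.Experimental.Rendering;

// Sketch: draw everything on a UI layer inside an AfterPostProcess
// CustomPass. This only renders existing meshes - it does NOT solve the
// Canvas mesh-regeneration problem described above.
[System.Serializable]
class UILayerPass : CustomPass
{
    public LayerMask uiLayer = 1 << 5; // built-in "UI" layer

    protected override void Execute(ScriptableRenderContext renderContext,
        CommandBuffer cmd, HDCamera hdCamera, CullingResults cullingResult)
    {
        // Note the gotcha from above: even a pass the shader names
        // "Default" or "Hello" must be requested as "SRPDefaultUnlit".
        var result = new RendererListDesc(new ShaderTagId("SRPDefaultUnlit"),
            cullingResult, hdCamera.camera)
        {
            rendererConfiguration = PerObjectData.None,
            renderQueueRange = RenderQueueRange.all,
            sortingCriteria = SortingCriteria.CommonTransparent,
            layerMask = uiLayer,
        };
        HDUtils.DrawRendererList(renderContext, cmd, RendererList.Create(result));
    }
}
```

    The renderer list will happily pick up MeshRenderers and particle systems on the layer; it's the Canvas geometry that never updates unless its camera is enabled.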

    In summary: as far as I can tell, there is no way to use a CustomPass in AfterPostprocess to render a UI layer on top of the screen more efficiently, because Unity has gated a lot of the UGUI rendering methods away from public use. Ultimately my goal is to do something like what the "Overlay" canvas mode does, but for a chosen LayerMask rather than only UGUI elements. If I am wrong about this, I would like to find a method to do it; if I'm not, I would like to see support for it in the future, as this is my (and many other developers') primary use case for "camera stacking".

    PS: I'd love to see the UGUI methods exposed as public API, especially for rendering. I'd also like to see HDRP optionally support non-HDRP, unlit-only cameras in a step after rendering - once this project stabilizes on Unity and HDRP versions, if this still isn't supported, I'm going to fork HDRP and add that step myself so I don't have to use an extra render target and FP16 buffer just for UI.

    PPS: I know that the RendererList/CullResults work goes deep down into C++ land, but debugging would be WAY easier if there were some way to inspect the actual returned results. As it stands, it's a huge pain to tell whether nothing is drawing because culling found nothing, because the RendererListDesc filtered something out, and so on and so forth.
     