
Question Custom Pass to render an object on a RenderTexture or over UI

Discussion in 'High Definition Render Pipeline' started by alexandre-fiset, Aug 17, 2022.

  1. alexandre-fiset


    Mar 19, 2012
    When the player inspects an object in the environment of our game, it is centered on the screen, like this:


    Then, in the inventory, it moves to the right, on top of all UI, like this:


    Our current approach works, but requires an RGBA16 color buffer format, which hurts performance and is not ideal for Xbox One and PS4.

    My question is: is there a way to render a single object on top of all UI, with transparency, without the RGBA16 color buffer format? Maybe using a custom pass?
  2. cLick1338


    Feb 23, 2017
    Are you sure that RGBA16 is the biggest hit, and not just having an additional camera rendering to a texture? In current versions of HDRP it's key to only ever use a single camera.

    I've used something similar to what I assume you're doing, while also limiting the framerate of the second camera, but the hit was still too big to be worth it (RGBA8, PC).

    Without knowing anything about your project, I would guess that your goal could be achieved with world-space UI: keep text/graphics as an overlay and do the object + dark background in world space. Or, if time pauses while the UI is open, use the old trick of pausing rendering of the background and capturing it as a static image.

    A custom pass can be used for tricks around rendering order, color/depth buffers, material overrides, etc.
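    For reference, that kind of after-post-process trick can be set up from script roughly like this. This is a sketch against recent HDRP versions, not the thread author's actual setup; the "Inspect" layer name is a hypothetical, and the same volume can be configured in the Inspector instead:

```csharp
// Sketch: re-draw everything on an "Inspect" layer after post-processing,
// so it lands on top of the fully processed frame. Assumes a recent HDRP
// version; "Inspect" is a hypothetical layer name.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class InspectOverlaySetup : MonoBehaviour
{
    void Start()
    {
        var volume = gameObject.AddComponent<CustomPassVolume>();
        volume.isGlobal = true;
        volume.injectionPoint = CustomPassInjectionPoint.AfterPostProcess;

        // Draw only the inspected object's layer, clearing depth first so
        // it always appears in front of the rest of the scene.
        var pass = (DrawRenderersCustomPass)volume.AddPassOfType<DrawRenderersCustomPass>();
        pass.layerMask = LayerMask.GetMask("Inspect");
        pass.clearFlags = ClearFlag.Depth;
    }
}
```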
  3. alexandre-fiset


    Mar 19, 2012
    We already capture the background as a static image.

    My first example is fine with a single camera and without a more precise color buffer format, but the second is problematic. Making the UI world space won't work, as post-processing would affect it. We don't want that.

    RGB11 vs RGBA16 might not be a huge difference in performance, but it's noticeable in memory. An alternative would be to assume an opaque black background in the pause menu design, but that's not ideal.

    The ideal scenario would be a way to render an object / a camera on top of everything, including UI. That way we could render the floating object offset to the right.
  4. cLick1338


    Feb 23, 2017
    I think I hadn't fully understood what you're trying to achieve. Your hunch was right that a custom pass is what you need. I'm 90% sure you'll be able to render one object/culling layer after post-processing, override its depth, etc.

    I don't have enough experience with it to give you better pointers, maybe someone else can, but it should be easy to figure out with some poking around.
  5. seoyeon222222


    Nov 18, 2020
    Maybe you already know this,
    but there is the FPS example:
    you can exclude the object you want from post-processing (custom pass Before/After Post Process),
    and draw it on top of the screen using its depth (custom pass).

    I am also struggling with many problems related to this.
    Most answers recommend the method above,
    but it is neither simple nor perfect.
    Several effects that I find hard to control in the rendering pipeline cause side effects.

    In Built-in or URP, the problem is solved simply by using a render texture for camera stacking.
    But in HDRP (in my experience),
    using layers and custom passes turns it into a complex problem that is hard to manage.

    The funny thing is the second camera.
    Officially, Unity recommends not using it.
    Even with every option turned off in the second camera's custom frame settings, its cost is too high.

    So should it "never" be used?

    Yet Unity's own example uses a second camera for camera stacking.
    I'll link you to a related post.

    As you said, one camera seems enough to render your game.
    If you are worried that a "Screen Space - Camera" UI is affected by post-processing,
    try using this.

    Would this method work in your case?
    1. Capture the background as a static image, as in the traditional method
    (can you also tell me which method you used?).
    2. Configure local volumes in specific areas of the scene so that they are not affected by post-processing,
    and use them to render the background image and the 3D object in separate spaces.
    Set the 3D object so that it is not affected by post-processing using a custom pass.

    Let me know if there's any progress.

    Could you tell me more about why RGBA16 is not suitable for PS4 / Xbox One?
    Are there any official guidelines that recommend against it?
  6. alexandre-fiset


    Mar 19, 2012
    Following up on this, as I have mostly solved it.

    First, when using RGBA16, each render texture used by an HDRP camera takes more memory due to the additional channel and the higher precision per channel. We found that RGB11 equals ~90 MB of savings at runtime (it could be more). There should also be performance gains from writing less render data each frame, but I haven't had the time to measure them. For our game, anything that can save 10 MB or 0.1 ms is a godsend on Gen8 hardware.
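    To sanity-check that figure: RGBA16 (half-float) is 8 bytes per pixel while R11G11B10 packs into 4, so the format switch halves each affected buffer. A back-of-envelope sketch (1080p assumed; HDRP allocates many such buffers, e.g. color, color pyramid, and history buffers, so the savings add up):

```csharp
using System;

class BufferMath
{
    // Bytes per pixel: RGBA16F = 8, R11G11B10 = 4.
    static double BufferMB(int width, int height, int bytesPerPixel)
        => (double)width * height * bytesPerPixel / (1024 * 1024);

    static void Main()
    {
        double saving = BufferMB(1920, 1080, 8) - BufferMB(1920, 1080, 4);
        Console.WriteLine(saving); // ≈ 7.9 MB per full-res 1080p buffer
    }
}
```

    At roughly 8 MB per full-resolution buffer, savings on the order of ~90 MB imply the format change touches on the order of a dozen internal buffers.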

    The solution:
    1. Upon interacting with an object, I disable the main camera and render the frame to a screen-sized render texture.
    2. I assign that texture to a material on a plane, using a custom shader that renders after post-processing, writes to depth, and exposes a tint I can use to darken it.
    3. I scale and move the plane so it fills the width and height of my object camera's view.
    4. I render the object + the darkened background plane to a render texture.
    5. I use a volume with depth of field to blur the background.
    6. I use that texture on a RawImage to compose my UI.
    I initially thought that doing all this in quick succession would be jarring or not smooth enough, but the illusion is quite good. In the end I no longer have two cameras rendering at the same time during transitions, and I can move to RGB11 to save some memory and performance. It also allows objects with refraction to see the background image, whereas my old method was incompatible with that. I can also use this technique across render pipelines with some minor tweaks.
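    The capture steps above (1, 4 and 6) could be sketched roughly as follows. All field and class names here are hypothetical, and the darkened backdrop plane (steps 2-3) and the blur volume (step 5) are assumed to be set up elsewhere:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of steps 1, 4 and 6. Field names are hypothetical; the backdrop
// plane (steps 2-3) and depth-of-field volume (step 5) live elsewhere.
public class InspectCapture : MonoBehaviour
{
    public Camera mainCamera;    // scene camera, disabled after capture
    public Camera objectCamera;  // renders the object + backdrop plane
    public RawImage uiTarget;    // composes the result into the UI

    public void BeginInspect()
    {
        // Step 1: freeze the scene into a screen-sized texture,
        // then stop the main camera so only one camera keeps rendering.
        var sceneRT = new RenderTexture(Screen.width, Screen.height, 24);
        mainCamera.targetTexture = sceneRT;
        mainCamera.Render();
        mainCamera.targetTexture = null;
        mainCamera.enabled = false;

        // (Steps 2-3: assign sceneRT to the backdrop plane's material
        // and scale the plane to fill objectCamera's view.)

        // Steps 4 and 6: render object + backdrop into a texture
        // and show it on the RawImage.
        var objectRT = new RenderTexture(Screen.width, Screen.height, 24);
        objectCamera.targetTexture = objectRT;
        uiTarget.texture = objectRT;
    }
}
```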
    seoyeon222222 likes this.