Feedback Camera stacking for URP

Discussion in 'Universal Render Pipeline' started by LeGiangAnh, Aug 29, 2019.

  1. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    488
    Performance will never become a secondary priority in URP :)
    The explicit stacking system allows all stacked rendering to be composited into a single working texture. As long as you don't switch the camera render target, no extra bandwidth is spent resolving to main memory and reading back. (If you do switch targets, you have to resolve/unresolve in any case.)
     
    Lars-Steenhoff likes this.
  2. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    488
    From reading some threads here it seems there are two major concerns:

    1) How stacking works across multiple scenes / dynamic setups
    We would like to hear what you are trying to achieve when you put cameras into different scenes. Based on that we can discuss what approach to take to help you.

    2) Increased complexity of the current stack system
    It's true that the system is more restrictive now. However, it is restrictive in order to keep developers out of workflows that lead them into a performance pitfall. Solving performance problems usually takes a lot of investment (sometimes even specialized developers). Providing a system that is more explicit, more bandwidth-friendly, and more thermal-friendly, in exchange for a small one-time cost to set up a camera stack explicitly, seems like a good trade-off, especially on mobile platforms where bandwidth and thermals are so important.

    Moreover, the new stacking system is more powerful than the built-in one. One example shared here was a world-space UI carousel around a character; that was not possible before. The new system also lets you stack stencil operations: you can render a spaceship in the base camera while writing to the stencil buffer, then render a 3D skybox only into the pixels not touched by the ship, saving a lot of overdraw. You can do this without any code, just set the stencil configuration on the base and overlay cameras, and voila. In the built-in pipeline, you would have had to customize all of your shaders to write and test stencil.


    (this example is available at: https://github.com/Unity-Technologies/UniversalRenderingExamples)
     
  3. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    43
    For us, it was more about workflow first and cameras second. Chunking things off into separate scenes both fit our workflow quite well and structurally matched what we were trying to achieve with our project.
    In terms of workflow, we work in a large team and separate scenes help to avoid merge conflicts, as it is rare that more than one person will be working on a scene at a time.
    In our project, different scenes are loaded based on different criteria at boot. There are several configurations the game can boot into, with some scenes common to all configurations and others unique to each. It is extremely easy to drag all the scenes of a particular configuration from the Project window into the Hierarchy window and see the game in that configuration.
    We also load scenes from an asset bundle that contains cameras. In the built-in render pipeline, all this required was setting the appropriate camera depth on the camera loaded at runtime, and everything worked as expected.

    We've spoken about this in my team a bit, and we agree that hypothetically we could redo what we've done using many prefabs that all live in the same scene and are dynamically turned on and off based on configuration. However, this would be a large amount of effort at this point in the project, and it doesn't solve the issue of needing to hook any cameras loaded at runtime into the base camera's overlay stack.

    After spending more time comparing the two pipelines, my more refined complaint is that URP requires cameras to have explicit knowledge of other cameras in a way the built-in render pipeline did not. This makes cameras in multiple scenes impossible to set up in the Editor, and it means any camera not present until runtime has to be deliberately hooked into the camera stack by some means.
     
    Last edited: Feb 14, 2020 at 3:57 PM
    Immu and Lars-Steenhoff like this.
  4. Arkki

    Arkki

    Joined:
    Apr 4, 2013
    Posts:
    1
    How does camera stacking handle post-processing right now? Can I have a combined stack with a game camera using one set of post-processing and an overlay UI camera using another post-processing config? When I tried this, I noticed that post-processing on overlay cameras can change the exposure, bloom, and tone mapping type, but that's it. If the camera is set as Base, post-processing works as expected.
     
  5. arbt

    arbt

    Joined:
    May 22, 2013
    Posts:
    9
    We are trying to update a project to 2019.3 in which we have cameras in different scenes. For the most part we were able to fix the camera issues in our complex setup, but we still need a way to set up camera stacks at runtime, since cross-scene stacking isn't currently possible.

    For example, each scene has a Main Camera to which all other cameras in that scene are added as overlays, but we also have a Main scene with a camera that renders global modal windows. If that modal-window camera is set as Overlay, it renders even if it isn't added to the Main Camera's stack, so all good there. The issue is that our global modals need canvases in Screen Space - Camera render mode because they contain particle effects (currently the only way to mix and sort particle systems with the UI system). Unless those canvases are set to Screen Space - Overlay, they don't render, because the canvas camera is neither set to Base nor added as an Overlay to the stack of the base Main Camera, which in our case lives in a different scene.

    We could really use a way to set camera stacks at runtime, or an option on the Base camera to automatically render all overlay cameras found in loaded/active scenes. As someone mentioned above, there are workarounds such as using one scene and turning the others into prefabs, but in a large project like ours that would be a huge task.
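    Until an "auto-render all overlays" option exists, the self-registration idea can be approximated in a script. The sketch below is a hypothetical component (not a Unity-provided one): attached to an overlay camera, it adds itself to the base camera's stack when its scene loads and removes itself when unloaded. It assumes URP is installed and that the base camera carries the standard MainCamera tag so `Camera.main` can find it.

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering.Universal;

    // Hypothetical sketch: attach to each overlay camera so it registers itself
    // with the base camera across scene loads. Assumes the base camera is
    // tagged "MainCamera".
    [RequireComponent(typeof(Camera))]
    public class AutoStackOverlay : MonoBehaviour
    {
        Camera baseCam;

        void OnEnable()
        {
            baseCam = Camera.main; // assumption: base camera has the MainCamera tag
            if (baseCam == null) return;

            var stack = baseCam.GetUniversalAdditionalCameraData().cameraStack;
            var self = GetComponent<Camera>();
            if (!stack.Contains(self))
                stack.Add(self);
        }

        void OnDisable()
        {
            // Remove ourselves when this scene (or object) is unloaded.
            if (baseCam != null)
                baseCam.GetUniversalAdditionalCameraData()
                       .cameraStack.Remove(GetComponent<Camera>());
        }
    }
    ```

    The overlay camera must still have its Render Type set to Overlay for the stack entry to take effect.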

    A bit unrelated, but it would be wonderful if the UI, particle, and sprite systems worked better together; better still, if they felt like one intuitive, unified system. Even though they have improved over the years, we still spend a lot of time struggling with these separate systems when making 2D games, and even more when mixing 2D with 3D.
     
    Lars-Steenhoff likes this.
  6. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    43
    You can currently do what you are asking, but it is a bit of a pain. The Universal Additional Camera Data component that is automatically added to every camera under URP contains the overlay camera stack. You can easily access and modify it at runtime; it is up to you to work out how the two cameras locate one another.
    Using this method it is possible to create cross-scene references between your base and overlay cameras; they just cannot be saved when set up this way in the Editor.
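    A minimal sketch of that approach, attached to the base camera: it looks up the overlay camera at runtime and appends it to the `cameraStack` list on `UniversalAdditionalCameraData`. The "OverlayCamera" tag is an assumption for illustration; any lookup mechanism (name, singleton, event) works in its place.

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering.Universal;

    // Sketch: attach to the Base camera. Finds an overlay camera loaded from
    // another scene or asset bundle and stacks it at runtime.
    public class StackOverlayAtRuntime : MonoBehaviour
    {
        void Start()
        {
            var baseData = GetComponent<UniversalAdditionalCameraData>();

            // Assumption: the runtime-loaded camera is tagged "OverlayCamera".
            var go = GameObject.FindWithTag("OverlayCamera");
            if (go == null) return;
            var overlay = go.GetComponent<Camera>();

            // Ensure it renders as an Overlay, then add it to the base stack.
            overlay.GetUniversalAdditionalCameraData().renderType = CameraRenderType.Overlay;
            baseData.cameraStack.Add(overlay);
        }
    }
    ```

    Because the stack is just a `List<Camera>`, the same code can remove or reorder overlays as scenes load and unload.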
     
  7. arbt

    arbt

    Joined:
    May 22, 2013
    Posts:
    9
    Thanks, it works! Definitely not the most user/dev-friendly method, though.
     
    Immu likes this.
  8. weiping-toh

    weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    45
    I sort of gave up waiting for improvements (if they ever arrive) and started implementing my own UI canvas components, sprite renderers, and particle shaders that work with SRP batching.
     
  9. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    11
    After the update, delayed by two weeks, I am extremely disappointed to find that camera stacking doesn't work with the 2D renderer... Either it needs to be supported, or Unity needs to stop locking the new features that are actually useful rather than a detriment (like 2D lights) to the extremely limited 2D render pipeline, and make them available in the rest of URP as well. It feels like the Unity team takes one step forward and two steps back with every update... :(
     
  10. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    11
    Glitchy mess that it is, I managed to get it working by setting my default pipeline asset to URP and manually setting the cameras to use the 2D renderer. The Scene view doesn't play nicely with the 2D renderer this way, so as a workaround for now I switch the default pipeline depending on whether I'm in the Scene view or Play mode.

    I don't see why the Unity team keeps insisting on denying us flexibility, only fixing it after people have been frustrated and turned off by their forced decisions, when there's no reason to: it clearly seems to work perfectly fine with the 2D renderer once you get around the editor disabling it. If it's not tested thoroughly enough to be considered stable, you have options like the "(experimental)" label and other warnings (as on 2D lights itself). Use those more often instead of outright disabling these options, please. Let us make the decisions; don't make biased assumptions about what we do and don't need.
     