Feedback Camera stacking for URP

Discussion in 'Universal Render Pipeline' started by MetaDOS, Aug 29, 2019.

  1. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    Performance will never become a secondary priority in URP :)
    The explicit stacking system renders the whole stack into a single working texture. No extra bandwidth is spent resolving and reading back from main memory as long as you don't switch the camera render target (which you would otherwise have to resolve/unresolve in any case).
     
  2. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    From reading some threads here it seems there are two major concerns:

    1) How stacking works across multiple scenes / dynamic setups
    We would like to hear what you are trying to achieve when you put cameras into different scenes. Based on that we can discuss what approach to take to help you.

    2) Increased complexity of the current stack system
    It's true that the system is more restrictive now however it is restrictive to avoid developers getting into workflows that set them into a performance pitfall. Solving performance problems usually takes a lot of investment (sometimes even specialized developers). Providing a system that's more explicit, more bandwidth friendly and more thermal friendly in favor of some extra one time cost to setup a camera stack explicit seems like a good trade off, especially in mobile platforms that bandwidth and thermal is so important to save.

    Moreover, the new stacking system is more powerful than the built-in one. One example shared here was a world-space UI carousel around a character; that was not possible before. The new system also lets you stack with stencil: render a spaceship in the base camera while writing to stencil, then render a 3D skybox only into the pixels not touched by the ship, saving a lot of overdraw. You can do this without any code, just set the stencil configuration on the base and overlay renderers, and voilà. In the built-in pipeline you would have had to customize all of your shaders to write and test stencil.


    (this example is available at: https://github.com/Unity-Technologies/UniversalRenderingExamples)
     
  3. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    For us, it was more about workflow first and cameras second. Chunking things off into separate scenes both fits our workflow quite well and structurally matches what we are trying to achieve with our project.
    In terms of workflow, we work in a large team and separate scenes help avoid merge conflicts, as it is rare that more than one person is working on a scene at a time.
    In our project, different scenes are loaded based on different criteria at boot. There are several possible configurations of the game, with some scenes common between configurations and others unique to each. It is extremely easy to drag all the scenes of a particular configuration from the Project window to the Hierarchy window and see the game in that configuration.
    We also load in scenes through an asset bundle that contains cameras. In the default render pipeline, all that was required was setting the appropriate camera depth on the camera that would be loaded at runtime, and everything worked as expected.

    We've spoken about this in my team a bit and we agree that hypothetically we could redo what we've done to use many prefabs that all live in the same scene and are dynamically turned on and off based on configuration. However, this would be a large amount of effort at this time in the project and doesn't solve the issue of needing to hook any cameras that are loaded at runtime into the base camera's overlay stack.

    After spending more time comparing the two pipelines, my more refined complaint is that URP requires cameras to have explicit knowledge of other cameras in a way the default render pipeline did not. This makes cameras in multiple scenes impossible, and means any camera not present until runtime has to be deliberately hooked into the camera stack by some means.
     
    Last edited: Feb 14, 2020
  4. Arkki

    Arkki

    Joined:
    Apr 4, 2013
    Posts:
    2
    How does camera stacking handle post processing right now? Can I have a combined stack with a game camera using one set of post processing and an overlay UI camera using a different post-processing config? When I tried this, the post processing on overlay cameras could change the exposure, bloom and tone mapping type, but that's it. If you set the camera as Base, the post processing works as expected.
     
  5. arbt

    arbt

    Joined:
    May 22, 2013
    Posts:
    10
    We are trying to update a project to 2019.3 where we have cameras in different scenes. For the most part we were able to fix the camera issues in our complex setup, but we still need a way to set camera stacks at runtime, since cross-scene stacking is not currently possible.

    For example, each scene has a Main Camera to which all other cameras in that scene are added as overlay stacks, but we also have a Main scene with a camera that renders global modal windows. If that modal window camera is set as Overlay, it renders even when it's not added to the Main Camera's stack; all good there. The issue is that our global modals require canvases in Screen Space - Camera render mode because they have particle effects (currently the only way to mix and sort particle systems with the UI system). But unless those canvases are set to Screen Space - Overlay, they don't render, because the canvas camera is neither set to Base nor added as an Overlay to the stack of the Base Main Camera, which in our case lives in a different scene.

    We could really use a way to set camera stacks at runtime, or an option on the Base camera to automatically render all overlay cameras found in all loaded/active scenes. As someone mentioned above, there are workarounds such as using one scene and turning the others into prefabs, but in a large project like ours that would be a huge task.

    A bit unrelated, but it would be wonderful if the UI, particle and sprite systems worked better together; even better, if they felt like one intuitive unified system. Even though they have improved over the years, we still spend a lot of time struggling with these different systems when making 2D games, and even more when mixing 2D with 3D.
     
    Lars-Steenhoff likes this.
  6. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    You can currently do what you're asking, but it's a bit of a pain. The Universal Additional Camera Data component that is automatically added to every camera under URP contains the overlay camera stack. You can easily access and modify it at runtime; it's up to you to figure out how the two cameras locate one another.
    Using this method it is possible to create cross-scene references between your base and overlay cameras; they just can't be saved when set up like this in the Editor.
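    A minimal sketch of that runtime hookup (assumes URP 7.2+, where `UniversalAdditionalCameraData.cameraStack` is available; how the two cameras find each other is up to you, so the public fields here are just placeholders):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class StackOverlayAtRuntime : MonoBehaviour
{
    public Camera baseCamera;    // the Base camera in this scene
    public Camera overlayCamera; // a camera loaded at runtime (e.g. from another
                                 // scene), with its Render Mode set to Overlay

    void Start()
    {
        // The overlay stack lives on the Universal Additional Camera Data
        // component, not on the Camera itself.
        var data = baseCamera.GetComponent<UniversalAdditionalCameraData>();
        if (!data.cameraStack.Contains(overlayCamera))
            data.cameraStack.Add(overlayCamera);
    }
}
```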
     
    Infenix likes this.
  7. arbt

    arbt

    Joined:
    May 22, 2013
    Posts:
    10
    Thanks, it works! Definitely not the most user/dev friendly method.
     
    Immu likes this.
  8. weiping-toh

    weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    192
    I sort of gave up waiting for improvements (if they ever arrive) and started implementing my own UI canvas components, sprite renderers, and particle shaders that work with SRP batching.
     
    Rich_A and Lars-Steenhoff like this.
  9. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    43
    After the two-week-delayed update, I am extremely disappointed to find that "camera stacking" doesn't work with the 2D Renderer... Either it needs to be supported, or Unity needs to stop locking the genuinely useful new things (like 2D lights) inside the extremely limited 2D render pipeline and make them available in the rest of URP as well. It feels like the Unity team takes one step forward and two steps back with every update... :(
     
    rboerdijk likes this.
  10. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    43
    Glitchy mess that it is, I managed to get it to work by setting my default pipeline asset to URP and manually setting the cameras to use the 2D Renderer. The Scene view doesn't play nicely with the 2D Renderer this way, but I guess it will do as a workaround for now: I switch the default pipeline depending on whether I'm in Scene view or Play mode.

    I don't see why the Unity team keeps insisting on denying us flexibility, only fixing things after people have been frustrated and turned off by forced decisions, when there's no reason to: camera stacking clearly works fine with the 2D Renderer once you get around the editor disabling it. If it's not tested thoroughly enough to be considered stable, you already mark some options (like 2D lights itself) with "(experimental)" and other warnings; use those more often instead of outright disabling the options, please. Let us make the decisions; don't make biased assumptions about what we do and don't need.
     
    Viole, rboerdijk and Lars-Steenhoff like this.
  11. lloydhooson

    lloydhooson

    Joined:
    Apr 7, 2008
    Posts:
    77
    Hi, I know I'm not the original poster, my apologies, but I'd like to share my multi-scene setup as well.

    I generally use multiple scenes for splitting up workflow and controlling scene management. There is a general data-holding scene (no camera here); then the level scene (game-view cameras), possibly several of them, sometimes with their own UI (not Unity UI); then a UI scene (UI camera). I don't use the built-in Unity UI, as I need a much more powerful and complete solution. From a development view this has benefits: the UI can be implemented once and managed easily, with game-level data passed to the UI without any manual scene setup on the level scene. Simple things like animated loading screens can also be achieved this way.

    [EDIT] In the past I have also made projects with stacked frustums to get around Z-fighting; these were also spread across multiple scenes.
     
    Lars-Steenhoff likes this.
  12. NotEvenTrying

    NotEvenTrying

    Joined:
    May 17, 2017
    Posts:
    43
    Thought I'd mention this for anyone else trying to use camera stacking with the 2D Renderer (or any non-URP renderer, really). There's actually a really simple fix: go to the URP package folder, open Runtime/ScriptableRenderer.cs, and you'll find a nested class near the beginning called RenderingFeatures containing a "cameraStacking" bool (it should be on line 36 of the file). Set this bool to default to true instead of false and the problem is solved!
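    For reference, the change described above looks roughly like this (the surrounding class is abbreviated here; only the default value is flipped):

```csharp
public abstract partial class ScriptableRenderer
{
    public class RenderingFeatures
    {
        // Defaults to false in the shipped package; setting it to true lets
        // renderers other than the Forward Renderer expose camera stacking.
        public bool cameraStacking { get; set; } = true;
    }
    // ...
}
```

    Note that edits made inside the package cache can be reverted whenever the package is re-resolved, so this is only a temporary hack.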
     
    rboerdijk and lingondricka like this.
  13. marcarasanz

    marcarasanz

    Joined:
    Aug 23, 2019
    Posts:
    20
    I'm using the 2D Renderer in URP, but I can't find the line of code you suggest.
     
  14. lingondricka

    lingondricka

    Joined:
    Oct 5, 2019
    Posts:
    3
    It works after upgrading to version 7.2.1. I still can't get post processing to work per camera, though, i.e. it affects the whole screen minus the UI.
     
    Last edited: Feb 20, 2020
  15. TheGreenBeret

    TheGreenBeret

    Joined:
    Feb 8, 2017
    Posts:
    2
    Unity 2019.3.1 with URP 7.2.1 - Stereo Rendering Mode: Single Pass (NOT instanced).
    Although the documentation says that camera stacking supports VR, it seems that applying post processing to the Base Camera breaks something in the render pipeline, and the Base Camera viewport is doubled per eye. I'm currently using a base camera and a single overlay camera (on different layers).
     

    Attached Files:

    zanq likes this.
  16. marcarasanz

    marcarasanz

    Joined:
    Aug 23, 2019
    Posts:
    20
    Yep, the problem is that I can't update to 7.2.1. I'm on 7.1.8 and the package manager says it's up to date. I'm on Unity 2019.3.0f3... weird.
     
  17. lingondricka

    lingondricka

    Joined:
    Oct 5, 2019
    Posts:
    3
    In the package manager press the arrow to the left of the name of the package.
     
  18. marcarasanz

    marcarasanz

    Joined:
    Aug 23, 2019
    Posts:
    20
    Yep, I know; nothing happens. Take a look:
     

    Attached Files:

  19. yougotdirked

    yougotdirked

    Joined:
    Nov 2, 2017
    Posts:
    5
    Hi,

    I'm trying to switch which camera draws on top by setting its priority from code. However, this property does not seem to be accessible from code, or am I missing something? I can't find anything in the documentation except for this:

    https://docs.unity3d.com/Packages/c...amera-component-reference.html#base-rendering

    Is this possible with the new system or is it only changeable in the inspector?

    Edit: To clarify, I'm using Unity 2019.3.0f6 and URP 7.2.1
     
    Last edited: Feb 20, 2020
  20. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    When using URP, the overlay camera stack is contained in the Universal Additional Camera Data component that gets automatically added to each camera. You can access and modify the stack at runtime through this component.
     
  21. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    792
    I think you need at least Unity 2019.3.0f6
     
  22. marcarasanz

    marcarasanz

    Joined:
    Aug 23, 2019
    Posts:
    20
    I thought so too, but that version isn't available for me to download from Unity Hub. Maybe from some link on the web.
     
  23. yougotdirked

    yougotdirked

    Joined:
    Nov 2, 2017
    Posts:
    5
    Yes, I know. But my issue is that I need to overlay base cameras, so I can't use the overlay stack. There is a "Priority" property on base cameras which decides which one renders on top, but I can't seem to find how to modify it from a script. It's not accessible through the Universal Additional Camera Data... :(
     
  24. marcarasanz

    marcarasanz

    Joined:
    Aug 23, 2019
    Posts:
    20
    Solved; Unity Hub now lets me download version 2019.3.2f1.
     
  25. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    Digging into the URP code a bit shows that the "Priority" field is simply how the camera's depth field is displayed now. You can change it by accessing your Camera component and setting "depth".
    Curious what you're trying to achieve here, as the topmost base camera will always fill the screen and cover whatever cameras are underneath.
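    In other words, a minimal sketch (the camera fields and method name here are hypothetical; the only real API used is `Camera.depth`):

```csharp
using UnityEngine;

public class SwapTopCamera : MonoBehaviour
{
    public Camera cameraA;
    public Camera cameraB;

    // "Priority" in the URP camera inspector is just Camera.depth,
    // so giving cameraB a higher depth makes it render on top.
    public void BringBToTop()
    {
        cameraB.depth = cameraA.depth + 1;
    }
}
```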
     
  26. yougotdirked

    yougotdirked

    Joined:
    Nov 2, 2017
    Posts:
    5
    Ah, I tried digging through the code but couldn't find anything. Thanks for the info, I hadn't found it myself. Where did you find it? It might be useful for me to take a look at it :)

    I want to make a seamless portal with a third-person camera, which means I need to do some weird render work whenever there is a portal between the player and the main camera.
    The portals themselves use render textures and proxy cameras to create the illusion that the world continues seamlessly through the portal. Apparently only base cameras can render to render textures.
    So when there's a portal between the player and the main camera, the portal's proxy camera needs to overlay the main camera, and the main camera needs to render to the corresponding render texture (if that makes sense).
    Overwriting the main camera's position would cause input issues, as inputs depend on the camera's rotation relative to the player character.

    Anyhow, I had it all figured out in my head before I found out that the camera stack had been removed temporarily; it was already quite difficult to get my head around, and I've forgotten half of it by now :p
     
  27. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    It's in the DrawPriority method of UniversalRenderPipelineCameraEditor in the URP package.
     
  28. enzofrancesca

    enzofrancesca

    Joined:
    Mar 7, 2015
    Posts:
    3
    Same problem here, also with the 7.2.2 update. The Base Camera viewport is doubled, and it depends on post processing being enabled on the Base Camera. Any news on this?
     
    zanq likes this.
  29. abuki

    abuki

    Joined:
    Nov 14, 2013
    Posts:
    40
    Hello, can anyone please point me in the direction of how PhysicsRaycaster (specifically Physics2DRaycaster) should work with camera stacking? I need to raycast from the base camera against some layers and from the foreground camera against other layers. Putting a raycaster on each camera is not working for me.
     
  30. enzofrancesca

    enzofrancesca

    Joined:
    Mar 7, 2015
    Posts:
    3
    I'm really sorry to bump this topic, but it's quite important (for me at least) to understand whether the issue that TheGreenBeret and I reported is a bug or just something due to particular conditions.
    Just to summarize:

    Unity Version: 2019.3.1
    URP Ver: 7.2.1 or 7.2.2
    Technique: Camera Stacking + VR
    Stereo Rendering Mode: Single pass (NOT instanced)
    Condition: post process is applied to the Base Camera
    Results: the viewport is doubled per eye

    Thanks in advance for any hint you can give us.
     
    zanq likes this.
  31. GunLengend

    GunLengend

    Joined:
    Sep 24, 2014
    Posts:
    54
    I'm seeing camera stacking double the render and calculation times (ms) even though the overlay camera only renders a gun. Can anyone explain?
     

    Attached Files:

  32. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    Catching up on the thread:

    Camera Stacking and 2D Renderer
    We are working on it and will give an update once the work is almost done.

    Camera Stacking and Multiple Scene
    Is it correct to assume that most workflows here are about drawing a screen-space UI camera? If so, we can discuss a solution for that case that makes everyone's life easier.

    Camera Stacking and Priority
    Priority is the same as camera.depth. We changed the name in the UI because "depth" would be easily confused with the overlay camera's depth setting, which refers to the depth buffer.

    Additional Camera Data
    This is something we want to fix. It's not nice to have this as a separate component with two API entry points. We acknowledge the pain of figuring out which settings come from the Camera component and which from the Additional Camera Data component. I believe there's now a path to fix this with polymorphic serialization; there's planned work to investigate and improve it.
     
  33. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    584
    @enzofrancesca could you submit a report for this?
    @GunLengend I've replied to you on the other thread. If you can submit a bug for the performance we will put priority into it (as we do for any performance issue)
     
    GunLengend likes this.
  34. KMacro

    KMacro

    Joined:
    Jan 7, 2019
    Posts:
    48
    We have a few World Space UI elements, but I'm sure we can retool if it gives us URP and multi-scene cameras.

    EDIT: Sorry, I read this incorrectly. While our workflow does include screen space UI elements, it is not primarily what we are using the multi-scene setup for.
     
    Last edited: Mar 2, 2020
    MadeFromPolygons likes this.
  35. enzofrancesca

    enzofrancesca

    Joined:
    Mar 7, 2015
    Posts:
    3
    @phil_lira I've just sent a report with a very simple scene attached. By enabling or disabling the Post process checkmark on the base camera you can see / don't see the issue.
    The case is 1224444.
     
    TheGreenBeret likes this.
  36. Kriszo91

    Kriszo91

    Joined:
    Mar 26, 2015
    Posts:
    181
    Hello!

    I was just checking the 3D Skybox demo, and after I turned on the ship's afterburner particle systems, they overlay the planets and stars and don't render correctly. Any solution? It's a bit hard to use camera stacking without any particle system effects.

    Thanks!
     

    Attached Files:

  37. zanq

    zanq

    Joined:
    Jan 13, 2020
    Posts:
    1
    I'm having the same issue. I'm trying to add a post-process effect on the Base camera (lowered saturation) while the Overlay camera renders a GameObject without any effects, so that it appears in color within a grayscale scene.

    I tried to change the Stereo Rendering Mode to Single Pass Instanced and the Left Eye stops rendering when I play the scene.

    Is there some other way to achieve this effect without Camera Stacking? Can someone please help me? :(
     
    ryzeonline likes this.
  38. GunLengend

    GunLengend

    Joined:
    Sep 24, 2014
    Posts:
    54
    FYI, about the camera stack the bug case is 1224525.
     
  39. EyeMD

    EyeMD

    Joined:
    Feb 28, 2015
    Posts:
    12
    I just upgraded my project from the built-in renderer to URP. I am using 2019.3.1f1. I previously had two cameras: one using a culling mask to display only UI, the other using a culling mask to display everything else.
    Both now have a Universal Additional Camera Data component attached, with the UI camera set to a priority of 1 to draw over the base camera.
    I am using a forward renderer.
    The Stack option is not showing up in the Inspector for my cameras, and Render Mode only offers Base; Overlay is not an option.
    With both of them being base cameras, the UI camera completely overdraws the base camera and nothing else can be seen.

    1) How can I make my UI camera an overlay?
    2) Or, how can I stop higher-priority cameras from overdrawing lower-priority ones? I've looked at the documentation and it's scarce. Any help would be appreciated, thanks.
     
  40. weiping-toh

    weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    192
    What is your version of URP? Camera stacking is only available from 7.2.0.
    In short, if you want to achieve a similar effect on earlier versions, create your own render features that do a post-process blit from the base camera onto a persistent render texture, and a pre-render blit from that render texture into the UI camera.
     
  41. weiping-toh

    weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    192
    My sample post-render blit:
    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Experimental.Rendering; // DefaultFormat
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class BlitPostProcessRenderPass : ScriptableRenderPass
    {
        public RenderTexture rt = null;
        Rect screenRect;
        Rect viewPortRect;
        Vector2 scale = Vector2.one;
        Vector2 offset = Vector2.zero;
        RenderTargetIdentifier dst;
        RenderTargetIdentifier src;

        public void Setup(Rect viewPortRect)
        {
            // Run after the base camera's post processing has been applied.
            this.renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing + 1;
            this.viewPortRect = viewPortRect;
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            // Convert the normalized viewport rect into pixel coordinates.
            screenRect = new Rect(
                viewPortRect.x * cameraTextureDescriptor.width,
                viewPortRect.y * cameraTextureDescriptor.height,
                viewPortRect.width * cameraTextureDescriptor.width,
                viewPortRect.height * cameraTextureDescriptor.height);
            scale = viewPortRect.size;
            offset = viewPortRect.position;
            rt = RenderTextureHolder.GetRenderTexture(screenRect);
            dst = new RenderTargetIdentifier(rt);
            src = BuiltinRenderTextureType.CurrentActive;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("BlitPost");
            cmd.Blit(src, dst, scale, offset);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public class RenderTextureHolder
    {
        static Rect screenSizeRect = new Rect(0, 0, 1f, 1f);
        public static Rect CurrentScreenRect => screenSizeRect;
        static RenderTexture customBlitTarget = new RenderTexture(256, 256, 1, DefaultFormat.LDR);

        public static RenderTexture GetRenderTexture(Rect screenRect)
        {
            // Reallocate only when the requested size changes.
            if (screenSizeRect != screenRect)
            {
                screenSizeRect = screenRect;
                customBlitTarget.Release();
                customBlitTarget = new RenderTexture((int)screenSizeRect.width, (int)screenSizeRect.height, 1, DefaultFormat.LDR);
            }
            return customBlitTarget;
        }

        public static RenderTexture ActiveTexture => customBlitTarget;
    }

    public class BlitPostProcessRenderFeature : ScriptableRendererFeature
    {
        public Rect referenceViewPortRect = new Rect(0f, 0f, 1f, 1f);

        BlitPostProcessRenderPass pass = null;

        public override void Create()
        {
            pass = new BlitPostProcessRenderPass();
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            pass.Setup(referenceViewPortRect);
            renderer.EnqueuePass(pass);
        }
    }
    My sample pre-render blit:
    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class BlitPreProcessRenderPass : ScriptableRenderPass
    {
        RenderTargetIdentifier src;
        RenderTargetIdentifier dst;

        public void Setup()
        {
            // Copy the stored base-camera result in before this camera draws.
            // renderPassEvent = RenderPassEvent.BeforeRenderingOpaques - 1;
            renderPassEvent = RenderPassEvent.AfterRenderingSkybox - 1;
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            RenderTexture srcTex = RenderTextureHolder.ActiveTexture;
            src = new RenderTargetIdentifier(srcTex);
            dst = BuiltinRenderTextureType.CurrentActive;
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("BlitForBack");
            cmd.Blit(src, dst);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public class BlitPreProcessRenderFeature : ScriptableRendererFeature
    {
        BlitPreProcessRenderPass pass;

        public override void Create()
        {
            pass = new BlitPreProcessRenderPass();
            pass.Setup();
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            renderer.EnqueuePass(pass);
        }
    }
     
  42. EyeMD

    EyeMD

    Joined:
    Feb 28, 2015
    Posts:
    12
    Ahh I had 7.1.8 of the URP, not 7.2, thank you!
     
  43. Rheonry

    Rheonry

    Joined:
    Jan 26, 2014
    Posts:
    2
    Hello all!

    Is there a way to keep local post-processing from affecting both the base camera and the overlay, when volume masks and the like are set to only work on the base camera?

    In my scene I have an overlay camera with a slight amount of bloom (ortho 2D game foreground) and a base camera with other effects such as white balance / color adjustments (perspective 3D background). I was able to achieve the desired look using blend weights, but this seems far from efficient for mobile devices.

    Any help would be appreciated!
     
    Last edited: Mar 26, 2020
  44. AHFontaine

    AHFontaine

    Joined:
    Aug 3, 2017
    Posts:
    19
    Hi! I don't really know if it's related to this topic, but I might have a use case that needs to be considered for camera stacking.

    Here's what I want to achieve :
    I use a camera rig with 5 different perspective cameras that render to textures applied to a mesh. This mesh is then seen by an orthographic camera to produce a 360-degree image, with a 210-degree fulldome output.

    In the built-in renderer, one trick I found was to create two Post-Processing Volumes on two different objects: the render-texture cameras used a volume on one layer for depth of field, while everything color related, bloom and so on, was handled by the orthographic camera on another layer. This way the DoF changes were accurate to the perspective; adding DoF to the orthographic camera would not work at all, since it's rendering a mesh that displays images.

    Right now, URP doesn't seem to allow culling the post-processing stack per layer. I suppose that's what we're discussing in this topic, am I correct?
     
  45. beardedrangaman

    beardedrangaman

    Joined:
    Jul 22, 2017
    Posts:
    9
    You previously mentioned stencil on layers. I currently have a UIRenderer in my renderer list; the UI camera is set to Overlay and is in the base camera's stack. However, I can't figure out how to get the UI to respect depth. Since it's world-space UI, I want it to render correctly behind objects that are in front of it. How do I achieve this? Thanks!
     
  46. beardedrangaman

    beardedrangaman

    Joined:
    Jul 22, 2017
    Posts:
    9
    If I do a Render Objects pass on just my UI after rendering transparents, with depth testing set to LessEqual, the UI is culled correctly. But if I select "after post processing" (I don't want post processing on my UI elements), the UI isn't culled and artifacts appear.
     
  47. beardedrangaman

    beardedrangaman

    Joined:
    Jul 22, 2017
    Posts:
    9
    I seem to have a temporary fix for my problem after reading through the forward renderer. After changing the anti-aliasing on the base camera to FXAA, the overlay UI camera (which renders the UI with no post processing) is able to use the depth buffer and correctly avoids drawing over objects that are in front of the world-space UI. So if anyone stumbles across this: use a camera stack, with the second camera as an Overlay with a culling mask for UI (a Render Objects feature set to the UI layer mask), Clear Depth set to false on the overlay camera, and FXAA plus post processing turned on on the base camera.
     
  48. rxmarccall

    rxmarccall

    Joined:
    Oct 13, 2011
    Posts:
    353
    @phil_lira
    Any update on Camera Stacking and 2D Renderer?
     
    harusame- and rboerdijk like this.
  49. elhongo

    elhongo

    Joined:
    Aug 13, 2015
    Posts:
    47
    Hi,

    I am using Unity 2019.3.5f1 + URP 7.2.0 + ARFoundation 3.0.0 for an AR project on Android/iOS
    I am using post processing on some 3D objects, and I want the post processing to not affect the camera feed from ARFoundation. How would I accomplish that?
    The suggestions here didn't work for me: https://forum.unity.com/threads/urp...essing-exclude-the-arcamerabackground.811254/

    I see we could do something similar to this (https://github.com/Unity-Technologies/UniversalRenderingExamples/wiki/3D-Skybox), rendering the cameras in reverse order. But would that work with ARFoundation's ARBackground renderer feature?
    [EDIT]
    It seems it doesn't work: on mobile, the ARBackground renderer feature (on the Overlay camera) renders everything and overwrites the result of the Base camera, no matter what the stencil buffer holds or what the ForwardRenderer does with it.

    Thanks!
    Pablo
     
    Last edited: Apr 10, 2020
  50. Ofx360

    Ofx360

    Joined:
    Apr 30, 2013
    Posts:
    155
    Hey,

    So, using camera stacking, I was caught off guard to see that UI sorting is only respected within each camera of the stack; you need to manage the order of the stack to get your UI to show correctly if you're doing anything unusual with your UI + stack setup.

    Is there anything coming down the line that will help fix this? Respecting sorting between all canvases would be nice.

    If that's not possible, would it be feasible to change the direction the stack renders? Instead of the main camera rendering first with everything else stacked on top of it, could we have the main camera render last with everything stacked below it?