Question: Enabling the Depth Texture in URP on Quest 2 destroys the framerate

Discussion in 'Universal Render Pipeline' started by HRDev, Aug 5, 2022.

  1. HRDev

    HRDev

    Joined:
    Jun 4, 2018
    Posts:
    58
    Hello everybody. I am developing a VR game for Quest 2 using Vulkan as the graphics API. I need to activate the Depth Texture in the URP settings to do some fog effects. But when I activate the Depth Texture, the framerate goes from 90 to 45/50... without doing anything else, without adding anything to the scene, just activating the Depth Texture. Is this normal? Is there a way to optimize it?

    I noticed that it is better with OpenGLES3. Should I switch to OpenGLES3? I'd prefer to avoid that...
     
    FaberVi likes this.
  2. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    There is indeed an issue with Vulkan and depth, which is really annoying. I assume you are using Unity 2021? (2020 doesn't have this issue, afaik.)
    If so, have you updated to the latest release already?
    OpenGLES can definitely have better performance, so if your game runs with all your requirements on it, definitely use it. Oculus is pushing Vulkan with new features like Phase Sync, but it's not required.
     
  3. rjonaitis

    rjonaitis

    Unity Technologies

    Joined:
    Jan 5, 2017
    Posts:
    115
    The issue is known and already being worked on. The Vulkan performance drop is caused by URP's depth copy pass when MSAA is enabled.
     
    FaberVi and DevDunk like this.
  4. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    I think someone already found the PR causing the issue a while back. Is there any ETA for the fix? Will it be fixed in the 2021 LTS, or will it just be in 2022?
     
  5. rjonaitis

    rjonaitis

    Unity Technologies

    Joined:
    Jan 5, 2017
    Posts:
    115
    The issue was caused by an optimization change preferring a depth copy over a depth prepass. The fix has already passed testing and is waiting for code merge. I can't give an ETA or promise a backport; I'm not working on this case.
     
    Koboct-Denis and DevDunk like this.
  6. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    726
    So the testing shows that just regenerating the depth buffer is faster than copying it in the general cases the team tested?
     
  7. Koboct-Denis

    Koboct-Denis

    Joined:
    Jul 20, 2022
    Posts:
    22
    Hello, any news on the fix?
     
    FaberVi likes this.
  8. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    727
    On Quest 2, the GPU uses a tile-based rendering architecture. To copy the depth buffer you need to resolve this texture, and depending on your resolution and MSAA level that can take several ms per copy. If regenerating a new one costs less than that, it's worth it. Really, though, you should avoid needing the depth buffer at all on Quest.
     
    Last edited: Oct 2, 2022
  9. FaberVi

    FaberVi

    Joined:
    Nov 11, 2014
    Posts:
    146
    Same problem.
     
  10. trojant

    trojant

    Joined:
    May 8, 2015
    Posts:
    89
    @rjonaitis
    Hi,
    Has this 2021 URP MSAA problem been solved? Is there a link? What is the current status? Thanks.
     
  11. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    I think it has been fixed, yes.
     
  14. trojant

    trojant

    Joined:
    May 8, 2015
    Posts:
    89
    I tested 2021.3.16 and didn't see any performance improvement; the 2021 URP + MSAA 4x problem is still there.
     
  15. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    Yes, it was a noticeable improvement for me.

    If you have a project which performs more than 5% faster on the latest 2020 LTS vs 2021 LTS with the same settings etc., file a bug report including both projects so they can fix it.
     
  16. Rustamovich

    Rustamovich

    Joined:
    Sep 5, 2014
    Posts:
    36
    Same here. Let's stay in touch. I want to have nice fog in the game, but I can't. Maybe the 2022 version solves the problem?
     
  17. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    727
    You can have fog on Quest 2 hardware, you just need to do it in your fragment shader.

    [screenshot: upload_2023-1-20_10-11-47.png]

    Here's a screenshot of our recent game (minus the shadows and bloom, it looks the same on Quest). Fog and a few other "post process"-like effects are possible, but you'll need to be writing HLSL.
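    As a rough illustration of the kind of per-fragment fog funkyCoty is describing (a minimal sketch, not code from this thread; `_FogColor`, `_FogDensity`, and `ApplyDistanceFog` are made-up names), exponential distance fog can be computed directly in the fragment shader from the interpolated world position, so URP's Depth Texture option can stay off entirely:

    ```hlsl
    // Hypothetical material properties for the fog effect.
    half4 _FogColor;
    half _FogDensity;

    // Minimal exponential distance fog, evaluated per fragment.
    // No camera depth texture needed: the distance comes from the
    // interpolated world-space position passed in by the vertex shader.
    half3 ApplyDistanceFog(half3 color, float3 positionWS, float3 cameraPosWS)
    {
        float dist = distance(positionWS, cameraPosWS);
        // fogFactor is 1 near the camera and falls toward 0 far away.
        half fogFactor = saturate(exp2(-_FogDensity * dist));
        return lerp(_FogColor.rgb, color, fogFactor);
    }
    ```

    You would call this at the end of your fragment function on the lit color. The exact falloff curve (exp vs. exp2, squared distance, etc.) is a stylistic choice; the point is that it costs a few ALU ops per fragment instead of a depth texture resolve.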
     
  18. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    110
    If you are having performance issues on Quest 2 due to the extra Copy Depth pass introduced in 21 (which replaced the old default depth prepass), the suggested setup is to force the prepass, which is cheaper on that specific platform most of the time, depending on your scene complexity. To do that, select "Force Prepass" as the "Depth Texture Mode":

    [screenshot: upload_2023-1-23_10-7-13.png]
     
    Rustamovich and DevDunk like this.
  19. Rustamovich

    Rustamovich

    Joined:
    Sep 5, 2014
    Posts:
    36
    I've tried that one, but it didn't help me. Thanks anyway, though.

    My setup is as follows:
    Unity 2021.3.16f1
    URP 12.1.8
    XR Plugin 4.2.1

    I would love to hear your thoughts on my setup.
     
  20. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    110
    Unfortunately, without knowing the details of your application it's not possible to figure out what the problem is.

    My previous post is specifically for users who upgraded to 21 and found an extra copy depth pass in the frame.

    The reason is that, before 21.2, URP always did a depth prepass when a depth texture was required. This caused performance issues in vertex-bound applications and was reported by users, so we decided to remove the costly depth prepass and copy the depth instead.
    But while removing the depth prepass improved perf in vertex-bound situations (lots of geometry to draw), it caused regressions in fragment-bound ones.
    Quest 2 games are more often fragment bound than vertex bound, so this is the most likely cause of regressions when upgrading to 21.

    Because of the wide range of platforms and application types URP supports, we decided to expose the option to force the prepass vs. the copy depth, because only users will know, after profiling, what is best for their specific use case. So that is the suggested best practice for (most) Quest 2 and VR games, unless they are vertex bound because of unoptimized content.

    To go back to your issue: without profiling and frame captures it is impossible to know what your bottleneck is. It could be a depth prepass vs. depth copy issue, too many post-processing passes, wrong quality settings, etc.

    If you know what the specific issue is and URP doesn't provide a way to optimize your frame, we would be happy to hear about it and consider a fix if possible.
     
    CodeRonnie likes this.
  21. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Hmmm, I'm seeing a very noticeable performance hit when toggling the depth texture in my URP settings. Vulkan performance is especially abysmal. My application might be unusual in that I use camera stacking extensively (4 cameras in the camera stack plus 1 render-to-texture camera, for a total of 5 cameras in a given scene).

    Could this cause perf issues in VR?
     
  22. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    110

    Camera stacking is very expensive on mobile GPUs: you are basically doing a full-screen store + load for each camera. Minimizing the number of cameras and using render features/custom passes instead is recommended.
     
  23. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    Is there any reason for the difference a lot of people mention between OpenGLES and Vulkan?
     
  24. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Manuele, I was wondering how I should best handle some of my camera stacking use cases:

    - Base camera (skybox): 3D elements in the skybox. I guess I could replace this with a flat texture for every map, but it's rather laborious to create a 2D skybox for every map... This exists mainly to render far-off objects without needing a massive draw distance.
    - Avatar camera: the camera attached to the spaceship, always synced with a camera in the cockpit.
    - Target box camera: renders the target boxes so they always appear in front of the avatar camera.
    - Cockpit camera: a camera attached to the XR rig itself. Its motion is always synced with the avatar camera.
    - Display camera: renders a small display inside the cockpit showing a readout of the enemy. I use a render pass here with a fresnel shader to both stylize it and reduce rendering strain.

    I suppose it might be possible to do the target box camera as a render pass, but I'm not sure exactly how, since each box has a UI sprite on it which presumably needs some kind of custom shader. Can render passes fix my problem here? Wouldn't they take even more resources to achieve the same effect?

    I do feel like I need all these cameras. The setup (just barely) works on the Quest 1 and can work on the Quest 2; I can tune down my graphics to get 72 fps on the latter platform, but it is difficult, to say the least.


    Edit: after investigating a little, it seems a separate camera for the target boxes is unnecessary. I can set the layer for target box rendering to -2 and ditch the camera.
     
    Last edited: Jun 7, 2023
  25. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    110
    @RogueStargun looks like your use case is very similar to the upcoming cockpit VR demo, which should be a good implementation example from the VR performance point of view. It does very similar things: a custom pass for rendering a 3D skybox, an offscreen HUD, etc.

    It should be released soon (in 23.2) as part of an effort to improve documentation and samples.
     
    zenbin3d and DevDunk like this.
  26. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Upcoming cockpit VR demo? I hope y'all didn't build it off my debug sample from 2 years ago!

    Edit: Can y'all please not release a cockpit VR demo? I haven't released my game yet...
     
  27. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Follow-up: I created a shader for the target boxes, and it seems sprite layering does not work at all with custom sprite shaders implemented in URP/Shader Graph, so I had to go back to adding a camera for target box camera stacking.
     
  29. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    110
    You can see a quick preview in the GDC talk by @Jonas-Mortensen, around the 1:40 mark.

    Tagging Jonas here, as he might be able to give some suggestions on how to avoid using multiple cameras.
     
  30. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Ok, that cockpit demo looks quite nice. I can also see now that you guys are doing a great job of toning down the graphics for the demos (I don't say this sarcastically... that thing looks like it can actually run on lightweight SoCs).
    I look forward to swiping those skybox assets and that cool capital ship for Rogue Stargun VR.
     
  31. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    That demo looks visually so much better than my solo-developed VR starfighter game. I'm getting a bit self-conscious, even. I kind of hope y'all hold off on releasing that demo for another few months, although I am curious about the techniques the team came up with for lighting, shading, and camera synchronization.

    I'm actually shocked that such a complicated shader with edge detection runs smoothly on the Quest 2. Is it expected to perform better than Simple Lit?
     
    Last edited: Jun 10, 2023
  32. wwWwwwW1

    wwWwwwW1

    Joined:
    Oct 31, 2021
    Posts:
    766
    AFAIK, Unity sample scenes are not commercial games. They're provided to demonstrate new features of the render pipeline.
     
    DevDunk likes this.
  33. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,043
    Don't worry. Instead, learn from it, improve your own game, and use your own creative input.
    How something looks alone won't sell a game.
     
  34. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    My concern is that someone will take the demo, add like 2 levels to it, and there will be many clones of it on App Lab...

    But in either case, kudos. When I started with URP in 2020, camera stacking didn't even work properly for VR on the Quest 1 (the camera stack would disappear in the left eye). The tech has come a long way (though it may still need improvement).

    Edit: looking at the presentation, it also looks like the nodes used in that demo are not currently in URP. Anyway, I definitely look forward to trying out these shaders. I wanted to use an edge-detection shader for the targeting display in my game, but ended up going with fresnel (which is probably computationally cheaper).
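    For reference, the fresnel term mentioned above is indeed very cheap per fragment. A common Schlick-style form looks like this (a generic sketch, not the poster's actual shader; `Fresnel` and its parameters are illustrative names, and the formula matches the usual view-angle rim approximation):

    ```hlsl
    // Cheap fresnel/rim factor from the view angle.
    // normalWS and viewDirWS are assumed to be normalized.
    half Fresnel(half3 normalWS, half3 viewDirWS, half power)
    {
        half ndotv = saturate(dot(normalWS, viewDirWS));
        // Approaches 1 at grazing angles, 0 when viewed head-on.
        return pow(1.0 - ndotv, power);
    }
    ```

    Compared to screen-space edge detection, this needs no neighboring samples or depth/normal textures, which is why it tends to be the cheaper choice on tile-based mobile GPUs like the Quest 2's.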
     
    Last edited: Jun 10, 2023
  35. RogueStargun

    RogueStargun

    Joined:
    Aug 5, 2018
    Posts:
    296
    Also, I wanted to follow up on the sprite layering issue with URP/Shader Graph. When you create a custom sprite shader, it does not respect sorting layers, and I believe this is a bug. I want the target boxes in my HUD to always be in front, and right now the only way to do that is a dedicated camera to force the rendering order.