
MSAA / Image Effects / Clarification

Discussion in 'General Graphics' started by plmx, Jul 28, 2017.

  1. plmx

    plmx

    Joined:
    Sep 10, 2015
    Posts:
    308
    Hi all,

    this is an assertion/question combined into one.

    Assertion: In VR development (Vive/Rift) we are dependent on MSAA (1) and thus on Forward Rendering (2). Since image effects in Unity are applied after MSAA (3), their output may overwrite already anti-aliased pixels, which again leads to aliasing (such as happens in this otherwise great asset).

    Question: a) Are image effects, in general, not intended for forward mode, and thus, as a user, should I avoid all of them when in forward/MSAA mode? If not, how do I tell which are safe? b) Given the importance of VR development: as an image effect developer, what are my options, if any, for re-engineering my image effect for MSAA use?

    Thanks.

    (1) According to various sources (also in this forum) as well as my own tests, MSAA is not only the best, but pretty much the only AA solution for VR. That includes comparing it against setting renderScale to 3f.

    (2) MSAA and deferred are mutually exclusive, as this screenshot of the Camera inspector in 2017.1 shows:
    msaa deferred.png

    (3) Own research. If you have any other information (I don't think it's in the official docs, or is it?) please let me know.

    Edit: Actually added "various" sources ;-)
     
    Last edited: Jul 29, 2017
  2. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,002
    Not really (not sure what you mean, but "replace" is for sure the wrong word).

    Some depend on Deferred, others not really.
     
  3. plmx

    plmx

    Joined:
    Sep 10, 2015
    Posts:
    308
    Hm, I am probably missing something here. I was going by https://docs.unity3d.com/Manual/PostProcessingWritingEffects.html.
    What I was trying to say was that MSAA happens before any post-processing effect, and that an image effect may then overwrite some of the MSAA-influenced pixels in the RenderTexture source it is given. If that is incorrect, please let me know!

    For example: let us assume an effect simulates fog behind some object A. Object A is rendered with MSAA against the background. Then the image effect places fog behind it, without MSAA, since the resolve already happened. This means the anti-aliased edge no longer blends correctly with the new background, which looks bad. Or isn't this the problem?
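    To make the ordering concrete, here is a tiny numeric sketch (plain Python, not Unity code; the coverage counts, depths, and colors are all invented for illustration):

    ```python
    # A 1D strip of 4 pixels straddling the edge of object A (depth 0.3)
    # against the background (depth 1.0). All values are illustrative.

    # How many of each pixel's 4 MSAA subsamples object A covers:
    coverage = [4, 3, 1, 0]          # object -> background across the edge
    obj_color, bg_color = 1.0, 0.0   # grayscale stand-ins

    # 1) MSAA resolve: color is averaged over subsamples -> smooth ramp.
    resolved_color = [(c * obj_color + (4 - c) * bg_color) / 4 for c in coverage]
    # -> [1.0, 0.75, 0.25, 0.0]  (anti-aliased edge)

    # 2) The depth texture a post effect samples is single-sample: each
    # pixel is either the object's depth or the background's, no blend.
    depth_tex = [0.3 if c >= 2 else 1.0 for c in coverage]

    # 3) A depth-keyed "fog" post effect that fully fogs the background.
    fog_color = 0.5
    final = [fog_color if d > 0.9 else col
             for col, d in zip(resolved_color, depth_tex)]

    print(resolved_color)  # [1.0, 0.75, 0.25, 0.0]
    print(final)           # [1.0, 0.75, 0.5, 0.5]
    ```

    The resolved color ramp is the anti-aliasing MSAA bought us; the single-sample depth the fog keys on has no such ramp, so the fog stamps a hard step right back through the previously smooth edge.
    
    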

    I would be interested in the cutoff point between the two. Can you provide more information on which do depend on the Deferred rendering mode, and which do not, and why?
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    Since all of those "various" sources are links to responses by the same person ... me ... I'll reiterate that the problem is the depth texture and depth normals texture don't use MSAA. Any effect that relies on these will inherently not be MSAA friendly, including real time directional shadows. There are plenty of image effects that do not rely on either of those, like bloom or color grading, and those work without any issues. Unfortunately effects like screen space ambient occlusion, fog, and several volumetric effects rely on these, and there's not really a good solution.

    Super sampling is kind of the only answer here for using these effects "out of the box". For future VR work in Unity, assuming they don't add support for sampling MSAA textures directly (the lack of which is part of the reason this is still a problem), I've considered doing several of these "post process" effects before rendering the main view and using something like my directional shadow hack to fix up the results.

    https://forum.unity3d.com/threads/fixing-screen-space-directional-shadows-and-anti-aliasing.379902/
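    To put rough numbers on the super sampling option: the cost grows with the square of renderScale, since it multiplies both eye-buffer dimensions. A quick sketch (plain Python; the base per-eye resolution is just an example, not a specific HMD's):

    ```python
    # Relative fill-rate/bandwidth cost of super sampling via renderScale.
    # The base per-eye buffer size below is illustrative only.
    base_w, base_h = 1512, 1680

    costs = {}
    for scale in (1.0, 1.5, 2.0, 3.0):
        w, h = int(base_w * scale), int(base_h * scale)
        costs[scale] = (w * h) / (base_w * base_h)  # pixels relative to 1.0

    print(costs)  # {1.0: 1.0, 1.5: 2.25, 2.0: 4.0, 3.0: 9.0}
    ```

    A renderScale of 3 means nine times the pixel work per eye, which is why brute-force super sampling only goes so far.
    
    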
     
  5. plmx

    plmx

    Joined:
    Sep 10, 2015
    Posts:
    308
    Thanks, that is helpful. So the issue is not forward mode vs. deferred mode, but rather compatibility with MSAA, and (internal) dependence on depth texture/depth normals texture? So it really depends on the image effect implementation. Ok.

    Yes, that is what I ended up doing with multiple layered cameras in my specific case which prompted this post, in which the effect to be used was behind everything in the scene (so an easy fix there).

    Interesting, thanks for the link!
     
    Last edited: Jul 29, 2017
  6. DmitryAndreevMel

    DmitryAndreevMel

    Joined:
    Mar 7, 2017
    Posts:
    14
    This is not entirely correct. During rendering the depth buffer is also MSAA (otherwise MSAA could not anti-alias intersecting polygons), but before you can sample it, it is resolved to a non-MSAA texture where each pixel contains the lowest depth value of all subsamples.
    So the depth texture does use MSAA, but you cannot benefit from it in any effects or rendering techniques.
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    The main scene's depth buffer is using MSAA. The camera depth texture is rendered as a pre-pass of the scene to a non-MSAA buffer that is resolved to a texture.

    By default a resolved MSAA depth texture would contain the average depth, not the lowest, unless a custom resolve is used. Even if the default resolve used the lowest depth, Unity couldn't make use of that, since it uses an inverted depth buffer on DX11. However, Unity neither uses nor supports custom resolves, as it does not support directly sampling a multisampled buffer pre-resolve. As you noted, all MSAA buffers are resolved to a non-MSAA texture before a shader gets a chance to sample them.
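    For illustration, here is how the different resolve strategies compare for a single edge pixel (plain Python sketch; the four subsample depths are invented, with the object at 0.3 and the background at 1.0):

    ```python
    # Four depth subsamples of one edge pixel: two hit the object (0.3),
    # two hit the background (1.0). All values are illustrative.
    subsamples = [0.3, 0.3, 1.0, 1.0]

    average = sum(subsamples) / len(subsamples)  # default, color-style resolve
    lowest  = min(subsamples)                    # "nearest wins" (non-inverted depth)
    single  = subsamples[0]                      # pick one subsample

    print(average)  # ~0.65 -- a depth that belongs to NO surface in the scene
    print(lowest)   # 0.3
    print(single)   # 0.3
    ```

    The averaged value is the problematic one: no surface in the scene actually sits at that depth, so depth-keyed effects misbehave exactly along edges. And with an inverted (reversed-Z) depth buffer, "nearest" would be max() rather than min(), which is why a min-resolve wouldn't help on DX11 anyway.
    
    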
     
  8. DmitryAndreevMel

    DmitryAndreevMel

    Joined:
    Mar 7, 2017
    Posts:
    14
    Thanks for clarification!

    It also seems to vary depending on the hardware: on a Google Pixel smartphone, the depth buffer is resolved to the value of one (the upper-left) of the subsamples.

    By the way, on those devices the depth buffer gets resolved after a render target switch during command buffer execution, between opaque and transparent rendering... which makes it really hard to implement some special effects in VR with MSAA on. According to the documentation, a render target switch can be treated as a signal to resolve the previous render target; it's strange, though, that on other Daydream devices the buffer resolve does not happen until the end of the frame.
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    That's new! It was a huge pain point for mobile VR that anything requiring depth needed a second pass; it effectively halved the polycount you could use if you needed depth. I've been asking for a post-opaque depth resolve for a few years now, so I'm happy it got implemented for Daydream at least. I've strictly avoided anything that required the depth texture on mobile VR, as it was such a significant hit early on. Generally speaking, if I can't do it with blend modes, destination alpha, and/or draw order, I don't do it for mobile VR; anything that would normally require depth I faked. I'm unlikely to work on mobile VR again for a while (though the last title I worked on for Daydream is still in production), but it's good news that it's a plausible option now!

    I'm assuming Unity is still relying on the built-in hardware resolve, and every device manufacturer is going to do whatever they want for resolves. Picking a single subsample is definitely better than averaging for depth, and faster too. But I don't know of any OpenGL specification for how the resolve is handled (there could be one, I just don't know of it), so there's no guarantee all hardware will do that. My memory from my pre-Unity PC experience is that it averaged both color and depth, but it's been several years and my memory could be wrong.
     
  10. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    I don't fully agree with this assertion. TAA, combined with the high framerates required by VR, offers the option of Deferred Rendering. It's not the holy grail, of course, because of the special care required for anything with transparency. But MSAA with Forward Rendering is not the only option. (It is if you want it out of the box, because the TAA in the Unity Post Process Stack still doesn't work correctly with VR.)
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    I loathe TAA in VR. There are some amazing-looking VR games out there that use deferred and look great in screenshots, but the artifacts of TAA are super painful for me: ringing and/or aliasing drifting behind everything. Many games just become a constant aliased mess when you start doing anything significant.

    There's some research over at Nvidia where they're doing VR at 1000 fps which removes the need for TAA, or really any anti-aliasing, but of course then you have to render and display at 1000 fps. ;)
     
  12. DmitryAndreevMel

    DmitryAndreevMel

    Joined:
    Mar 7, 2017
    Posts:
    14
    Hmm... it seems you got it wrong: the depth texture that you can use in shaders is still rendered in a separate pass before the frame. I was talking about the hardware depth buffer resolve: in my case I added a command buffer (with RT switch commands and an off-screen MSAA RT resolve command) between opaque and transparent rendering, and in this (and only this) case the hardware depth buffer of the current frame (which at that moment contains only the opaque queue) gets resolved, just before the transparent queue is rendered! As a result I get an ugly aliased mess where transparent objects intersect opaque geometry. It also seems that the non-MSAA depth buffer is used for all further rendering in the frame, because if I render transparent objects with high alpha in front of opaque ones, I get an aliased edge on the transparent objects. It's very sad, but I haven't dug into it deeply enough to be 100% sure it works as I described, or whether it's a bug or a feature :)
     
  13. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    Oh, yeah, I totally read that wrong. That's no good at all. Now I'm curious if a grab pass shader causes the same issue. If not that sounds like a bug ... if it does then that also sounds like a bug, though a different one.
     
  14. DmitryAndreevMel

    DmitryAndreevMel

    Joined:
    Mar 7, 2017
    Posts:
    14
    No, grab pass works fine on the same device. But it's a good question... it seems I'm doing something wrong in my command buffer. I'll investigate a little further next week.