Receive Shadow flag in deferred mode

Discussion in 'Shaders' started by beatdesign, Jun 2, 2020.

  1. beatdesign

    beatdesign

    Joined:
    Apr 3, 2015
    Posts:
    137
    Could someone explain to me why the Mesh Renderer component in deferred mode has to have the "Receive Shadows" flag turned on?

    Thanks
     
  2. Bordeaux_Fox

    Bordeaux_Fox

    Joined:
    Nov 14, 2018
    Posts:
    589
    It's because of the architecture of Deferred Rendering:

    "Deferred shading requires GPU support, and has some limitations. It does not support semi-transparent objects (Unity renders these using forward rendering), orthographic projection (Unity uses forward rendering for these Cameras), or hardware anti-aliasing (although you can use a post-process effect to achieve similar results). It has limited support for culling masks
    , and treats the Renderer.receiveShadows flag as always true."

    https://docs.unity3d.com/Manual/RenderingPaths.html
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    To further explain, it's because deferred rendering has no knowledge of individual game objects at the time it's doing the rendering.

    The short overview of deferred rendering is this: opaque objects render out their surface data to several full screen textures: albedo color, specular color, smoothness, world normal, AO, depth, and emission / lightmaps. Those are called gbuffers. When doing lighting, instead of lighting the original objects, deferred lights those screen space textures. It reconstructs the world position from the depth, does the dot product against the world normal, and then applies the lighting calculations to the color texture values and adds the result to the screen buffer.

    This has a ton of advantages which I'm not going to talk about. But there is one big disadvantage: rendering out 4-5 gbuffers greatly increases the memory bandwidth and usage on the GPU compared to more traditional rendering. So a big part of deferred rendering is minimizing how many values you have to store, and finding a balance between quality and precision to squeeze out as much as you can. This means dropping support for things like per-object shadow receiving, which would have required one more value to render out and store. It's also why deferred only supports the Standard shading model. The legacy shaders, when used with deferred rendering, are actually approximated using the Standard shading model and don't exactly match what they look like when forward rendered!
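
    To make that concrete, here's a rough sketch of a deferred directional light pass. The gbuffer and depth texture names are Unity's built-in ones; the v2f struct, ray interpolant, and light uniforms are simplified stand-ins for what Internal-DeferredShading.shader actually does, not the exact internal code.

    Code (ShaderLab):
    #include "UnityCG.cginc"

    sampler2D _CameraGBufferTexture0; // rgb: albedo,   a: occlusion
    sampler2D _CameraGBufferTexture1; // rgb: specular, a: smoothness
    sampler2D _CameraGBufferTexture2; // rgb: world normal, packed to 0..1
    sampler2D _CameraDepthTexture;
    float4 _LightDir;   // directional light direction, world space
    float4 _LightColor;

    struct v2f
    {
        float4 pos : SV_POSITION;
        float2 uv  : TEXCOORD0;
        float3 ray : TEXCOORD1; // camera-to-far-plane ray for this pixel
    };

    half4 frag (v2f i) : SV_Target
    {
        // Reconstruct the world position from the depth buffer and the
        // interpolated camera ray. No game objects exist at this point,
        // just full screen textures. (The position is what the real shader
        // uses for shadows, cookies, and non-directional lights.)
        float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        float3 worldPos = _WorldSpaceCameraPos + i.ray * Linear01Depth(depth);

        // Unpack the surface data written during the gbuffer pass.
        half3 albedo = tex2D(_CameraGBufferTexture0, i.uv).rgb;
        half3 normal = normalize(tex2D(_CameraGBufferTexture2, i.uv).rgb * 2 - 1);

        // Light the screen space textures: basic N·L for one directional light.
        half ndotl = saturate(dot(normal, -_LightDir.xyz));
        return half4(albedo * _LightColor.rgb * ndotl, 1);
    }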

    Now, there is actually a little bit of free space in the existing gbuffers to store an extra flag or two of data, and some people make use of that with custom shaders and by replacing the internal deferred shading shader. However, I'm not sure anyone has made the "receive shadows" option one of the features they support. Even if they did, it'd have to be implemented as a custom shader property and wouldn't use the checkbox on the renderer. Something like the sketch below.
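
    If someone did want to wire that up, the plumbing would look roughly like this. This is a sketch only: _ReceiveShadows is a hypothetical material property, and the second half assumes you've replaced the built-in deferred shading shader (under the Graphics project settings) with a modified copy of Internal-DeferredShading.shader.

    Code (ShaderLab):
    // In the object's gbuffer pass: stash a flag in the mostly unused
    // alpha channel of gbuffer2. _ReceiveShadows is a hypothetical
    // material property holding 0 or 1, not a built-in.
    half _ReceiveShadows;

    void fragGBuffer (float4 pos : SV_POSITION,
        out half4 outGBuffer0 : SV_Target0,  // albedo (rgb), occlusion (a)
        out half4 outGBuffer1 : SV_Target1,  // specular (rgb), smoothness (a)
        out half4 outGBuffer2 : SV_Target2)  // packed world normal (rgb)
    {
        outGBuffer0 = half4(1, 1, 1, 1);     // placeholder surface data
        outGBuffer1 = half4(0, 0, 0, 0.5);
        outGBuffer2 = half4(0.5, 0.5, 1, 1);
        outGBuffer2.a = _ReceiveShadows;     // 1 = receive, 0 = ignore
    }

    // And in the modified copy of Internal-DeferredShading.shader, mask the
    // shadow attenuation with that flag before lighting:
    sampler2D _CameraGBufferTexture2;

    half ApplyReceiveShadowsFlag (half shadowAtten, float2 uv)
    {
        half receiveShadows = tex2D(_CameraGBufferTexture2, uv).a;
        // Where the flag is 0, treat the pixel as fully lit.
        return lerp(1.0, shadowAtten, receiveShadows);
    }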
     
  4. beatdesign

    beatdesign

    Joined:
    Apr 3, 2015
    Posts:
    137
    Thank you very much for your explanation bgolus.
    A few more questions:
    • You said that there is actually a little bit of free space in the existing gbuffers. A single bit would be enough to know whether a fragment should receive shadows or not. Why isn't this implemented by default?
    • Transparent objects are not rendered in deferred mode, because they could not write their depth to the depth buffer: forward rendering is used in these cases. But how can forward rendering handle this? If I think about the depth buffer problem, even forward rendering should have the same problem. Why isn't that the case?
    • Why can't deferred handle an orthographic camera? Isn't it just a matter of using a different MVP matrix in the vertex shader?
    • There is no MSAA. Is this why, to perform MSAA, the forward rendering path renders the frame buffer at a larger scale and then downsamples it? In deferred rendering it would be impossible to render every render target at a larger scale. Is this correct?
    • Is the depth buffer considered a render target? If yes, the render targets rendered in deferred in Unity are 5: albedo, specular+smoothness, normals, lightmap+emission+reflection probes, and depth. Is this correct?
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    Because it wasn’t considered an important feature? Because at one point in development there wasn’t any space? Because they forgot? Who knows!

    The gbuffers can only hold a single set of values per pixel. Transparent objects would mean possibly two sets of values: the opaque object and the transparent one above it. And that’s just one layer of transparency. So yeah, transparent objects are rendered using forward rendering.

    This is also why transparent objects don’t support receiving shadows in Unity’s default renderer. Unity’s main directional shadows are always rendered in a “deferred” style, even when not using deferred rendering: opaque forward rendered objects are rendered to the camera depth texture, and the directional shadows are cast onto that depth texture to create a screen space shadow texture. And you’re actually right in a way; even opaque forward rendered objects set to not receive shadows still receive shadows in that screen space shadow texture. The difference is that when the object renders again for the forward rendering pass, it’s rendered with a keyword that changes the shader so it simply doesn’t sample that shadow texture when calculating the lighting (sketched below).
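
    On the shader side that mechanism looks roughly like this. SHADOWS_SCREEN is the real keyword Unity toggles per renderer, and the macros come from AutoLight.cginc; the rest of the pass is abbreviated.

    Code (ShaderLab):
    // Forward base pass sketch (needs #pragma multi_compile_fwdbase so the
    // SHADOWS_SCREEN variant exists).
    #include "UnityCG.cginc"
    #include "AutoLight.cginc"

    struct v2f
    {
        float4 pos : SV_POSITION;
        SHADOW_COORDS(0) // screen space shadow UVs when SHADOWS_SCREEN is on
    };

    half4 frag (v2f i) : SV_Target
    {
    #if defined(SHADOWS_SCREEN)
        // "Receive Shadows" on: sample the screen space shadow texture.
        half atten = SHADOW_ATTENUATION(i);
    #else
        // "Receive Shadows" off: Unity disables the keyword for this
        // renderer, so the shadow texture is never sampled at all.
        half atten = 1;
    #endif
        return half4(atten.xxx, 1);
    }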

    Deferred can’t do that since the object was already rendered, and the keyword isn’t applied to deferred objects when they’re rendering to the gbuffer, hence why it has to be a custom shader with a material property.

    Rendering the objects into the gbuffer isn’t the issue. Unity just never added support for deriving the world position from an orthographic depth texture. Doing so would create more variants, or slow down the deferred rendering. It’s totally possible to support, they just ... didn’t.
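
    For reference, the difference between the two reconstructions is small; something like this (illustrative only, the ray / nearPos / farPos interpolants are hypothetical, not Unity’s internal code):

    Code (ShaderLab):
    #include "UnityCG.cginc"

    sampler2D _CameraDepthTexture;

    struct v2f
    {
        float4 pos     : SV_POSITION;
        float2 uv      : TEXCOORD0;
        float3 ray     : TEXCOORD1; // camera-to-far-plane ray (perspective)
        float3 nearPos : TEXCOORD2; // this pixel's point on the near plane
        float3 farPos  : TEXCOORD3; // this pixel's point on the far plane
    };

    float3 ReconstructWorldPos (v2f i, bool orthographic)
    {
        float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);

        if (!orthographic)
        {
            // Perspective: depth scales a per pixel camera ray.
            return _WorldSpaceCameraPos + i.ray * Linear01Depth(rawDepth);
        }

        // Orthographic: depth is already linear in view space, so just
        // interpolate between the near and far plane points for this pixel.
        #if UNITY_REVERSED_Z
        rawDepth = 1.0 - rawDepth;
        #endif
        return lerp(i.nearPos, i.farPos, rawDepth);
    }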

    Nope. Not even close. That’s not how MSAA works, and rendering at a higher resolution and scaling down (aka super sampling) was the standard way of doing anti-aliasing for deferred rendering before post process based AA became standard. Unreal Engine even still defaults to using super sampling along with their TAA since TAA isn’t enough on its own.

    MSAA is explicitly not super sampling, it’s why it’s faster than super sampling. It only renders an object’s color once per pixel, but has coverage samples (ie: depth buffer) rendered at a higher resolution to detect geometry edges. But sampling the individual samples from a multi-sample texture is pretty slow. Usually MSAA is resolved to a non-MSAA texture by the hardware for sampling, but that doesn’t work for deferred since blended values would be wrong. This is the same problem as transparent objects.
    https://mynameismjp.wordpress.com/2012/10/24/msaa-overview/
    https://docs.nvidia.com/gameworks/c.../d3d_samples/antialiaseddeferredrendering.htm
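
    To illustrate why that’s slow, this is what reading the individual samples of an MSAA texture looks like in HLSL. A deferred renderer with real MSAA support would have to do this for every gbuffer; _GBuffer0MS and ShadeSample here are hypothetical stand-ins, Texture2DMS.Load is real.

    Code (ShaderLab):
    Texture2DMS<float4, 4> _GBuffer0MS; // hypothetical 4x MSAA gbuffer

    float4 ShadeSample (float4 gbufferData)
    {
        // Stand-in for the actual lighting calculations.
        return gbufferData;
    }

    float4 LightAllSamples (uint2 pixelCoord)
    {
        float4 sum = 0;
        [unroll]
        for (uint s = 0; s < 4; s++)
        {
            // Each sample is fetched and lit individually. This is the slow
            // part, and why you can't just hardware resolve (average) the
            // gbuffers first: averaged normals and depths light incorrectly.
            sum += ShadeSample(_GBuffer0MS.Load(pixelCoord, s));
        }
        return sum / 4.0; // resolve after lighting, not before
    }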

    https://docs.unity3d.com/Manual/RenderTech-DeferredShading.html