Unity 5 Custom Deferred shader

Discussion in 'Shaders' started by MaT227, Dec 14, 2015.

  1. MaT227

    MaT227

    Joined:
    Jul 3, 2012
    Posts:
    628
    Hi there,

    I am setting up the base of my new surface shader to have access to everything. In forward rendering there is no problem, but in deferred I have some difficulties concerning the lighting and the shadows. I am using the half4 LightingFunction_Deferred function, but where are the shadows applied?

    I've seen that everything is applied in the Internal-DeferredShading.shader file, but that is shared by the whole deferred pipeline, right?

    How can I control the way the shadows and the lighting are applied in the deferred rendering pipeline for a specific shader?

    Thank you very much.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    You don't, that's explicitly not how the Unity deferred pipeline works. Part of the deferred-ness is that the lights (and shadows) are handled completely separately from the shader that draws the object, so an individual shader cannot do any custom lighting and still be deferred. If you want it to be different, it has to be forward.

    Note: Other deferred pipelines will encode some additional information about which lighting model to use in the gbuffers, then the lights read that and change which calculation to use on each pixel. If you really want this functionality and want to keep things deferred, you'll have to modify large parts of the Unity deferred pipeline to accommodate it, as right now there's no space in the gbuffer layout to store extra data.
     
  3. MaT227

    MaT227

    Joined:
    Jul 3, 2012
    Posts:
    628
    Hmm right, thank you for the clarification. :)
     
  4. boundingbox

    boundingbox

    Joined:
    Mar 31, 2013
    Posts:
    30
    If you want, you can store a value in the alpha of GBuffer2 (the world normal buffer) and use it as a mask in Internal-DeferredShading to apply different lighting effects, like transmissive foliage or something. GBuffer2 is a 10,10,10,2 buffer, so the alpha only has 4 possible values, but that works for masks.

    You could also re-arrange things a little bit: you can pack the metallic and smoothness into the red and green channels of GBuffer1 instead of converting metallic to a specular color in the pixel shader, which takes up 3 channels. This would free up the blue and alpha channels for other things. I used them for screen-space motion vectors.

    The documentation also says that the alpha of GBuffer0 is unused so you could use that out of the box.

    It's not that massive an undertaking but it is more to keep track of.
     
    Last edited: Dec 15, 2015
  5. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    Oh hey, I'm also interested in utilizing the built-in g-buffer to render a custom BRDF. Currently there's only a small free channel in the built-in buffers (if we want to keep the current configuration intact).
    I don't understand this part, what do you mean by the alpha only having 4 values?
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    The Gbuffer0 alpha is actually used, the documentation is unfortunately inaccurate.
     
  7. MaT227

    MaT227

    Joined:
    Jul 3, 2012
    Posts:
    628
    Thank you for your answers everyone.

    In a more general way, I think that the shader documentation and examples are a bit out of date, except for the list of available variables.
     
  8. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yep, @Aras was working on new shader docs recently (or at least making them suck a bit less) - so maybe he will want to hear more about where everyone is having problems.
     
  9. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    Hmm, we really need a way to extend deferred shading to support additional BRDFs. . . .
     
  10. MaT227

    MaT227

    Joined:
    Jul 3, 2012
    Posts:
    628
    @hippocoder, from what I remember, the custom lighting documentation for surface shaders is a bit outdated. There are nice examples for vertex and fragment shaders.
    I don't know if something like explaining the default Standard shader is possible, but it might be interesting.

    Just an off-topic question: is there a good coding tool for shaders? Visual Studio is nice, but shader programming tooling could really be improved.
     
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Nope, no coding tool. There are visual node-based ones, as you're no doubt aware.
     
  12. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    To extend a deferred rendering system you generally have to update all of the shaders on both sides to respect any alterations to the gbuffer layout. There's definitely space in the existing gbuffer layout to pack more data in. The alpha in RT2, as already mentioned, is free, but it can likely only store the values 0.0, 0.33, 0.67, and 1.0. It's the only channel you can use without making extensive changes elsewhere, as all of Unity's shaders appear to write 1 to it; if you write 0 you could use that to tag your own pixels. RT3 also has a free alpha channel, but since that's the texture the lighting operations write to, it can't be read from when doing lighting (generally you can't read from and write to the same texture within a shader).

    Other fat to trim would be to store only the world x and y normals and reconstruct the z (like two-channel normal maps already do), possibly using that low-precision alpha to store the sign of z (whether it points up or down). This would leave a full 10-bit channel free to do with as you want. Other deferred renderers I've used have removed support for specular color and store only the specular brightness, specular power (or roughness / glossiness), and maybe a single bit for "metalness", which multiplies the specular by the albedo color. For a deferred system I worked on, I packed that extra metalness bit into the roughness, as it doesn't need a ton of precision. Any of these would require modifying all of the deferred shaders to write to and read from the gbuffers in the new way. Another method to increase the apparent number of lighting models: some deferred renderers bend and tweak the normals that are written out to mimic the intended lighting model without having to branch in the lights.


    Every deferred renderer is about trading some amount of flexibility for speed. Unity's deferred system is fairly straightforward and "fat", but allows for a decent amount of flexibility within the one lighting model they support. Others choose to give up a lot of precision for greater flexibility, or just go even fatter. Destiny, for example, uses a specially packed gbuffer layout that only uses 96 bits per pixel (vs Unity's 128 for the same information; Bungie doesn't include the a) by packing nearly all of the material information into a single 8 bits plus the normals. Some late-generation Xbox 360 games stored the different gbuffers at different resolutions to save space, or packed all of the color information into only two channels by converting to YCoCg and storing only Y at full resolution.
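    For reference (plain C, not from the thread), the YCoCg transform mentioned above is just a cheap, exactly invertible rotation of RGB; Y carries most of the perceptual detail, which is why the Co/Cg chroma channels can survive being stored at lower resolution:

```c
/* RGB -> YCoCg: Y is luma, Co/Cg are chroma offsets. */
void rgb_to_ycocg(const float rgb[3], float out[3]) {
    out[0] =  0.25f * rgb[0] + 0.5f * rgb[1] + 0.25f * rgb[2]; /* Y  */
    out[1] =  0.5f  * rgb[0]                 - 0.5f  * rgb[2]; /* Co */
    out[2] = -0.25f * rgb[0] + 0.5f * rgb[1] - 0.25f * rgb[2]; /* Cg */
}

/* YCoCg -> RGB: exact inverse of the above, only adds/subtracts. */
void ycocg_to_rgb(const float ycocg[3], float out[3]) {
    out[0] = ycocg[0] + ycocg[1] - ycocg[2]; /* R */
    out[1] = ycocg[0]            + ycocg[2]; /* G */
    out[2] = ycocg[0] - ycocg[1] - ycocg[2]; /* B */
}
```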
     
    brokenm and hippocoder like this.
  13. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    @bgolus don't we just need to change Internal-DeferredShading and UnityStandardCore to fit the new gbuffer layout?
     
  14. boundingbox

    boundingbox

    Joined:
    Mar 31, 2013
    Posts:
    30
    There are only 2 bits in the alpha channel, which only allows it to store 4 values: 00, 01, 10, 11, or 0.0, 0.33, 0.67, 1.0 once mapped to a float. Right now I'm using 0.0 as a background mask, 0.33 as foliage (Battlefield-style transmission), 0.67 is unused for now, and 1.0 is solid opaque geometry.

    If you used hemi-octo normals, you could store the vertex or soft normal in 2 channels for Uncharted-style skin rendering:
    Gbuffer0 rgb: diffuse
    Gbuffer0.a: occlusion
    Gbuffer1.r: metallic
    Gbuffer1.g: smoothness
    Gbuffer1.ba: hemi-octo soft normal
    Gbuffer2.rgb: world normals
    Gbuffer2.a: brdf chooser mask (background, foliage, skin, opaque)

    However, if you do anything like this, you will not be able to use surface shaders; you will have to write fully custom shaders, AFAIK. You can show the full generated code of a surface shader to see how much more work that is.
     
    brokenm likes this.
  15. boundingbox

    boundingbox

    Joined:
    Mar 31, 2013
    Posts:
    30
    rea asked for more info on this, but maybe it'll help other people too. Here's a simple deferred fragment shader I made for foliage; notice the value on the last line.
    Code (csharp):

    void frag (v2f IN, out half4 outDiffuse : SV_Target0, out half4 outSpecular : SV_Target1, out half4 outNormal : SV_Target2, out half4 outEmission : SV_Target3) {

        float2 Tex1UV = IN.uv.xy;

        half4 TexDiffuse = tex2D(_MainTex, Tex1UV);
        clip( TexDiffuse.w - 0.3 ); // alpha test
        half4 TexNormal = tex2D(_NormalTex, Tex1UV);
        half4 TexProp = tex2D(_PropertyTex, Tex1UV);

        half4 FinalDiffuse = TexDiffuse;
        half3 FinalNormal = unifyNormals( TexNormal ); // custom normal decoder
        half4 FinalProp = TexProp;

        // tangent-to-world transform
        half3 FinalNormalWorld;
        FinalNormalWorld.x = dot(IN.tSpace0.xyz, FinalNormal.xyz);
        FinalNormalWorld.y = dot(IN.tSpace1.xyz, FinalNormal.xyz);
        FinalNormalWorld.z = dot(IN.tSpace2.xyz, FinalNormal.xyz);

        half3 FinalEmission = half3(0,0,0);

        ReturnOutput ( FinalDiffuse, FinalProp, FinalNormalWorld, FinalEmission, outDiffuse, outSpecular, outNormal, outEmission );
        // set the brdf type
        outNormal.w = 0.33;
    }
    and here is an excerpt from Internal-DeferredShading. This replaces the last 2 lines of the CalculateLight function.

    Code (csharp):

    half4 res = 0;

    // map the values 0, 0.33, 0.67, 1.0 to 0, 1, 2, 3
    int mask = gbuffer2.w * 3;

    // if the normal buffer's .w holds the foliage value, shade it some other way
    if( mask == 1 ){
        res = float4(1,1,1,1); // do some other lighting calculation
    }else{
        res = UNITY_BRDF_PBS (baseColor, specColor, oneMinusReflectivity, oneMinusRoughness, normalWorld, -eyeVec, light, ind);
    }

    return res;

    Attached is the result; it renders white wherever it's touched by a light.
     


    Voodoocado, marcb152 and Reanimate_L like this.
  16. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
  17. MaT227

    MaT227

    Joined:
    Jul 3, 2012
    Posts:
    628
  18. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    Finally I can make the DICE translucency run in deferred, thanks a lot @boundingbox
    upload_2015-12-17_18-8-34.png
     
  19. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,511
    Sorry to tag along on the old thread, but I was also recently wondering about ways to optimise / repack the deferred buffers.

    My usage is HDR turned on without using any lightmaps. I wonder whether not using lightmaps can lead to some savings, producing some performance benefit overall. I think packing tightly for my specific needs could lead to better performance due to the limitations of the platform I am working on (bandwidth weakness).
     
  20. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    Light maps certainly can put additional pressure on bandwidth, but they also have little bearing on deferred rendering. If you're on a bandwidth constrained system, like mobile, you likely shouldn't be using deferred rendering in the first place. It's explicitly designed to use extra bandwidth to reduce computation.

    Also, while you can modify Unity's gbuffer layout in terms of what the buffers hold, the actual number and format of the render textures can't be changed (with the exception of the emissive buffer, whose format changes with the HDR setting on the camera). So while you can repack the textures to fit more / different data into them, the bandwidth usage is effectively fixed.

    An alternative would be to write your own deferred renderer from scratch in which case you could pack as much or as little as you need. But that means replacing almost all of Unity's rendering systems. You wouldn't be able to use Unity's lights or any built in shaders for example.