
Full Specular Color in Deferred Rendering Concept

Discussion in 'Shaders' started by sonicether, Nov 8, 2013.

  1. sonicether (Joined: Jan 12, 2013 · Posts: 265)
    There is one fatal flaw in deferred rendering in Unity3D for games that make heavy use of dynamic lights: specular highlight color is stored in only one channel (monochrome) and then reconstructed from the diffuse light color. This reconstruction works well when a scene is lit by a single light color, but it produces horrendously wrong specular colors when multiple light colors affect a given pixel. I have taken a screenshot so you can see what I'm talking about: http://i.imgur.com/yTM2fV5.jpg

    Notice how the specular highlights are all just purple instead of being blue and red/orange.
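
    To make the failure concrete, here is roughly what the reconstruction amounts to (a hedged sketch of Unity's prepass lighting functions, not the literal source; s.Gloss and _SpecColor are the usual surface shader names):

    Code (csharp):
        // Light buffer after the lighting pass:
        //   light.rgb = sum of the diffuse light colors
        //   light.a   = monochrome specular intensity
        // The final pass tints the specular term with the SUMMED diffuse color:
        half3 spec = light.rgb * _SpecColor.rgb * (light.a * s.Gloss);
        // With a blue light and a red/orange light hitting the same pixel,
        // light.rgb is purple, so both highlights come out purple.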

    If you take a look at half4 CalculateLight (v2f i) in Internal-PrePassLighting.shader (in the built-in shaders), which handles the lighting calculation in deferred rendering, you'll notice that the RGB components of diffuse color are stored in the first three channels of the limited 4-channel output. That leaves only one channel for specular highlights.

    How can this issue be solved without needing more render textures, which would cost more memory bandwidth?

    I was reading a presentation on Crysis 3's rendering pipeline, which is indeed deferred. I was fascinated to read that they stored albedo color in only two channels. How did they do this? They converted albedo into YCbCr space (http://en.wikipedia.org/wiki/YCbCr), stored Y (luminance) in the first channel, and combined Cb and Cr by interleaving them horizontally into the second channel. This method exploits chroma subsampling (http://en.wikipedia.org/wiki/Chroma_subsampling), which takes advantage of the fact that the human visual system has lower acuity for color differences than for luminance differences.
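
    For reference, the color-space conversion itself is just a linear transform. Here is a minimal sketch in Cg using the standard BT.601 constants (an assumption; the presentation doesn't say which constants Crytek used):

    Code (csharp):
        // RGB -> YCbCr; chroma comes out signed, in (-0.5, 0.5)
        float3 RGBToYCbCr(float3 rgb)
        {
            float y = dot(rgb, float3(0.299, 0.587, 0.114));
            return float3(y, (rgb.b - y) * 0.565, (rgb.r - y) * 0.713);
        }

        // YCbCr -> RGB (inverse of the above)
        float3 YCbCrToRGB(float3 ycc)
        {
            return float3(
                ycc.x + 1.403 * ycc.z,
                ycc.x - 0.344 * ycc.y - 0.714 * ycc.z,
                ycc.x + 1.770 * ycc.y);
        }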

    Thus, by putting diffuse and specular lighting into YCbCr space and interleaving the Cb and Cr channels into a single channel, four channels are just enough to store full color information for both diffuse and specular highlights.

    This is my concept. However, since I am very new to programming shaders with Cg/ShaderLab in the context of Unity3D, I need help executing it!

    Here's a basic outline of the proposed procedure:

    1. Put diffuse and specular color into YCbCr space. This is pretty straightforward.

    2. Interleave the Cb and Cr channels horizontally. This will be done in CalculateLight in Internal-PrePassLighting.shader. That is, alternating columns of pixels will store Cb and Cr information. I'm guessing this is possible by reconstructing the screen-space position of each pixel and then doing something like

    float interleave = fmod(floor(screenSpacePosition.x * renderWidth), 2.0);

    Cb *= interleave;        // odd columns keep Cb
    Cr *= 1.0 - interleave;  // even columns keep Cr

    float CbCr = Cb + Cr;

    where float2 screenSpacePosition is the normalized (0 to 1) screen-space position of each pixel, and renderWidth is the width in pixels of the render window.

    3. The return statement in CalculateLight should look somewhat like this

    return half4(diffuseY, diffuseCbCr, specularY, specularCbCr);

    4. This is where I have no clue where to start, and it may be impossible given how fixed Unity's rendering pipeline is. Basically, the full YCbCr channels of both specular and diffuse lighting need to be reconstructed, by de-interleaving the Cb and Cr channels with a simple pixel shader, before the lighting information is combined with the rest of the scene. Is this possible in Unity? Is there anyone out there who can point out the source files that handle the rendering step performed directly after Internal-PrePassLighting.shader?
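
    For illustration, the de-interleaving step could look something like this (a sketch only; _LightBuffer, renderWidth, and the channel layout from step 3 are assumptions):

    Code (csharp):
        // Recover both chroma channels for a pixel by also sampling its
        // horizontal neighbor, which stores the other channel (see step 2).
        // Reads .y (diffuse CbCr per step 3); use .w for the specular pair.
        float2 DeinterleaveCbCr(float2 uv)
        {
            float here     = tex2D(_LightBuffer, uv).y;
            float neighbor = tex2D(_LightBuffer, uv + float2(1.0 / renderWidth, 0.0)).y;
            float odd      = fmod(floor(uv.x * renderWidth), 2.0); // matches step 2
            float Cb = lerp(neighbor, here, odd);  // odd columns kept Cb
            float Cr = lerp(here, neighbor, odd);  // even columns kept Cr
            return float2(Cb, Cr);
        }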


    This may cause some color bleeding, but I think it's worth a shot. If this could be pulled off, it would be a great workaround for what is, in my opinion, the biggest drawback of using deferred rendering in Unity. Is there anyone out there who can point me in the right direction? Thank you in advance for your assistance.


    EDIT: Yeah, after a couple of hours of messing around with this, I just don't think it's going to work with what Unity exposes for adjustment.

    Is there any other way of getting full proper specular color in deferred rendering?
     
    Last edited: Nov 8, 2013
  2. WGermany (Joined: Jun 27, 2013 · Posts: 78)
    Hey, you're the SonicEther from Minecraft? Great to see you in Unity! I've read through their paper, and I haven't a clue how they achieved albedo color in two channels; you seem to be onto something interesting. Your concept seems possible in Unity. While I'm not much help, since I've also only recently started with Unity, I'm sure Aras will poke his head in soon enough, along with a few other mentionable, experienced users of Unity's graphics pipeline. I wish you the best of luck, my friend!

    Here are some links I've dug up while doing a little research; I don't know how much of a help they'll be to you:
    http://forum.unity3d.com/threads/134412-Specular-color-based-on-light-color
    http://www.m4x0r.com/blog/2010/05/specular-color-in-light-pre-pass-renderer/
     
  3. Dolkar (Joined: Jun 8, 2013 · Posts: 576)
    I actually think what you said is possible to implement... Unity compiles surface shaders into classic vert/frag shaders (which you can view by adding the "#pragma debug" line). So it should be easy to modify the generated code to support your encoding, but you'd have to convert every single surface shader in the scene to a vert/frag shader and change it, which is quite a tedious task... Let's try modifying the surface shaders instead. This is the relevant code in a compiled surface shader's PrePassFinal pass:
    Code (csharp):
    SurfaceOutput o;
    surf (surfIN, o);
    half4 light = tex2Dproj (_LightBuffer, UNITY_PROJ_COORD(IN.screen));
    return LightingLambert_PrePass (o, light);
    Now, you need to access IN.screen in the surface function (surf). You can get it by defining screenPos in your Input structure. It would make more sense to sample just the neighboring pixel there, but it would be hell to coordinate that with the sample Unity takes outside of the surface function, so let's just sample both; as long as the coordinates are exactly the same, the duplicate should get compiled out. So you decode the diffuse and specular color, but what now? Where do you pass it? I'd suggest computing the lighting yourself and outputting the color in the emission channel. To prevent the lighting from being applied twice, you also have to override the lighting model function to make it return nothing.
    Untested code:
    Code (csharp):
    #pragma surface surf Blank

    sampler2D _MainTex;

    struct Input {
        float2 uv_MainTex;
        float4 screenPos;
    };

    // Null lighting model: returning black keeps Unity's own lighting
    // application from adding anything, so we apply the decoded light ourselves.
    inline half4 LightingBlank_PrePass (SurfaceOutput s, half4 light) {
        return half4(0.0, 0.0, 0.0, s.Alpha);
    }

    // Note the inout: Cg passes structs by value, so without it the change
    // to s.Emission would be lost.
    void applyLightBuffer(inout SurfaceOutput s, float4 screenPos) {
        #ifdef UNITY_PASS_PREPASSFINAL
            float4 projPos = UNITY_PROJ_COORD(screenPos);
            float4 light1 = tex2Dproj(_LightBuffer, projPos);
            // float4 light2 = tex2Dproj(_LightBuffer, projPos + something);
            // Meh, too lazy... I'll just leave this math to you, good luck :)
            float3 diffuseLight;
            float3 specLight;

            s.Emission += s.Albedo * diffuseLight + specLight * s.Gloss;
        #endif
    }

    void surf (Input IN, inout SurfaceOutput o) {
        half4 c = tex2D (_MainTex, IN.uv_MainTex);
        o.Albedo = c.rgb;
        o.Alpha = c.a;

        applyLightBuffer(o, IN.screenPos);
    }
     
  4. sonicether (Joined: Jan 12, 2013 · Posts: 265)
    Yep, that's me, "Sonic Ether" of "Sonic Ether's Unbelievable Shaders". :p

    Dolkar, wow, this really points me in the right direction!

    I spent some time last night simply converting diffuse RGB into YCbCr and seeing if I could, in my lighting model, convert back into RGB from YCbCr. I ran into a few problems.

    CalculateLight is applied multiple times, additively, to the scene. That's fine and as expected, but it throws the resulting color buffer off: the YCbCr information must be normalized before converting back to RGB, so I have to divide the light buffer by the number of lights affecting each pixel, and that causes problems. It would be straightforward to count the lights in the scene and divide the light buffer by that number, but Unity masks off light contributions geometrically, so if I reduce the radius of any of my four lights, things get crazy.

    http://i.imgur.com/L94KIA5.jpg Here, the conversion works perfectly: all lights are set to a large range (I have adjusted the falloff to obey the inverse-square law) and rendering is in HDR. I have to divide the light buffer by 4 before doing the conversion or things go horribly wrong, because each pixel in the scene is affected by all four point lights, so the light information must be normalized.

    http://i.imgur.com/2o1CaoM.jpg Things get a little wacky in LDR mode. It definitely has something to do with how Internal-PrePassLighting.shader handles CalculateLight: "Lighting encoded into a subtractive ARGB8 buffer". Whoa... subtractive? The return line is as follows:

    return exp2(-CalculateLight(i));

    I won't lie, I'm a self-taught programmer, and I may have a hard time inverting this exp2(-x) encoding.

    Since the editor renders in LDR mode, this plagues the editor, which is not pleasant.

    http://i.imgur.com/WlvuTTx.jpg If I reduce the radius of one or more of the lights in the scene, you can see how it throws off the conversion, because the YCbCr color information is no longer normalized across all pixels. Here I have reduced the radius of the yellow light on the right. You can see how Unity's additive light passes affect the normalization of the information in the light buffer: left of the line, the buffer is affected by only three lights; right of it, by four. I have absolutely no idea how to address this.

    Dolkar, I'll play around with the information you provided me and see if I can get this working in HDR with all lights set to enormous ranges so none of the above problems are present. Once I get this working, I guess I'll have to figure out a way to tackle the LDR and local light buffer normalization. Thanks for your help so far!
     
  5. WGermany (Joined: Jun 27, 2013 · Posts: 78)
    Great to see progress! I've just read through a bunch of slides that deal with deferred lighting and the kinds of situations other people have had to deal with. I'm a self-taught programmer myself, but I can't do all the incredible things you have done yet! For example, I literally just learned about the inverse-square law a few hours ago, after reading these slides:

    http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_slides.pdf

    I'm actually still in the 11th grade, so I think I'm making lots of progress. People think I'm some kind of math genius :p. Just to mention a few of the things I have implemented:

    -HDR rendering from the DirectX SDK
    -Kawase's light streak filter
    -Physically based rendering model using Cook-Torrance specular

    What are some of the books or sources of information that you use to acquire such knowledge? You are my inspiration to even start programming graphics :)
     
  6. Dolkar (Joined: Jun 8, 2013 · Posts: 576)
    From what I understand from articles about the YCbCr color model, it should support additive blending without normalization. The Y component is naturally additive, and so are the Cb and Cr components, though they need to be stored in the (-0.5, 0.5) range instead of (0, 1). I don't think I ever tested it, but as long as the render target is in a floating-point (HDR) format, it should be possible to write negative values as well.
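
    The reason no normalization is needed: with signed chroma, RGB-to-YCbCr is a plain linear transform, so it commutes with additive blending. Schematically, using the conversion sketch from the first post (light1RGB and light2RGB are hypothetical per-light colors):

    Code (csharp):
        // What the blend unit accumulates...
        float3 sum = RGBToYCbCr(light1RGB) + RGBToYCbCr(light2RGB);
        // ...decodes to the accumulated RGB, with no per-light divide:
        float3 total = YCbCrToRGB(sum);   // == light1RGB + light2RGB
        // A per-light 0.5 chroma offset (to fit an unsigned buffer) breaks this.
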
    I doubt you can make it work in LDR, though... I think the best thing to do there is to fall back to Unity's standard lighting buffer:
    In the PrePassLighting shader, the second pass runs only with an HDR buffer. So paste something like #define HDR_SPEC_ENCODE after the pragmas there, and then just wrap your code inside CalculateLight in #ifdef HDR_SPEC_ENCODE.
    For the decoding part, Unity defines a keyword named HDR_LIGHT_PREPASS_ON for you that you can use the same way.
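
    Schematically, that wrapping might look like this (a sketch; the diffuseY/CbCr names are from step 3 in the first post):

    Code (csharp):
        // Internal-PrePassLighting.shader, HDR pass only:
        #define HDR_SPEC_ENCODE

        // ...and at the end of CalculateLight:
        #ifdef HDR_SPEC_ENCODE
            return half4(diffuseY, diffuseCbCr, specularY, specularCbCr);
        #endif
        // otherwise fall through to the original monochrome-specular return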

    By the way, they seem to be decoding the subtractive buffer with a simple -log2(light).
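
    In other words, the LDR round trip is just:

    Code (csharp):
        half4 encoded = exp2(-CalculateLight(i));  // what the lighting pass writes
        half4 light   = -log2(encoded);            // recovers CalculateLight(i) on readback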
     
  7. sonicether (Joined: Jan 12, 2013 · Posts: 265)
    Thanks for the additional help, Dolkar! Yeah, you're right. I was a little surprised, but you can definitely write a negative value to the light buffer in HDR mode. That makes the YCbCr color space naturally additive and solves the conversion issues. I ran into problems because I thought it would be necessary to remap Cb/Cr from (-0.5, 0.5) to (0, 1).

    I must be doing something wrong, though, in trying to use _LightBuffer. If I don't declare uniform sampler2D _LightBuffer, the compiler tells me 'undefined variable "_LightBuffer"'. Okay, that's fine. But when I declare it, it tells me it was redefined, as if it were declared twice... I'm a little confused by this. I set up a very simple shader that should just show me the light buffer as emissive, and I get errors whether I declare _LightBuffer or not.

    Code (csharp):
    CGPROGRAM
    #pragma surface surf Blank
    #pragma target 3.0
    #define SAMPLE_INTERPOLATION

    sampler2D _MainTex;
    //uniform sampler2D _LightBuffer;

    struct Input
    {
        float2 uv_MainTex;
        float4 screenPos;
    };

    inline half4 LightingBlank_PrePass (SurfaceOutput s, half4 light)
    {
        return half4(0.0, 0.0, 0.0, s.Alpha);
    }

    void surf (Input IN, inout SurfaceOutput o) {
        half4 c = tex2D (_MainTex, IN.uv_MainTex);
        o.Albedo = c.rgb;
        o.Alpha = c.a;

        half4 light = tex2Dproj(_LightBuffer, UNITY_PROJ_COORD(IN.screenPos));

        o.Emission = light.rgb;
    }
    ENDCG
    What is going on??
     
  8. Dolkar (Joined: Jun 8, 2013 · Posts: 576)
    Uff... that's a hard one. This is how the compiled shader looks for the PrePassFinal pass:
    Code (csharp):
    <Your code>
        sampler2D _LightBuffer;
        ...
    <Unity code>
        ...
        sampler2D _LightBuffer;
        ...
    So if you declare the light buffer in your code, the compiler raises an error when it encounters Unity's second declaration. I first thought this could be fixed by wrapping the code that uses the light buffer in #ifdef UNITY_PASS_PREPASSFINAL, which would make it run only in the pass where the light buffer declaration is already present. But that didn't work either: a variable has to be declared before the code that uses it, even when the use is inside a function, and you can't declare it yourself because that leads to the redeclaration. The only way is to somehow invalidate Unity's declaration later in the code.

    Let's hack, then! You can declare macros in shaders, which replace all occurrences of a string with another, kind of like find-and-replace in a text editor: #define find replace replaces "find" with "replace". The joke is that it replaces only the occurrences AFTER its definition. So if you write something like this in your surface shader:
    Code (csharp):
    sampler2D _LightBuffer;
    ...
    #define _LightBuffer _NonexistingBuffer
    It gets turned into:
    Code (csharp):
    <Your code>
        sampler2D _LightBuffer;
        ...
    <Unity code>
        ...
        sampler2D _NonexistingBuffer;
        ...
    Now, that makes all of Unity's lighting code useless... but who cares? :D
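
    Put together with the earlier sketch, the ordering in the surface shader would be something like this (untested, same caveats as above):

    Code (csharp):
        sampler2D _LightBuffer;   // our own declaration, visible to surf()

        void surf (Input IN, inout SurfaceOutput o) {
            // ...free to sample _LightBuffer here...
        }

        // Every occurrence of _LightBuffer after this point, including Unity's
        // generated declaration appended below ours, gets renamed away, so the
        // redeclaration error disappears.
        #define _LightBuffer _NonexistingBuffer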

    You see... this is why I don't use surface shaders :)
     
    Last edited: Nov 9, 2013
  9. Ikaros (Joined: Nov 7, 2013 · Posts: 7)
    I suggest moving away from surface shader compilation entirely and simply writing out a PrePassBase and a PrePassFinal pass manually :)
    That way you are in full control, and no magic will get you down.

    The PrePassBase pass naturally needs to write world-space normals in xyz and specular glossiness in w.
    The final pass just needs to decode said light, like you want to do.

    Pasting #pragma debug into your surface shader should help a lot in getting a manual vert/frag shader up and running.
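
    As a sketch of that (v2f, worldNormal, and _Shininess are assumed names, not necessarily what #pragma debug will give you):

    Code (csharp):
        // Manual PrePassBase fragment: world-space normal packed into rgb,
        // specular glossiness in a.
        half4 fragBase (v2f i) : COLOR
        {
            half3 n = normalize(i.worldNormal);
            return half4(n * 0.5 + 0.5, _Shininess);
        }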

    How are you encoding/decoding right now? Still interleaving? If it weren't for blending, you could simply pack your channels: 8 bits each for Cb and Cr, or 16 each, depending on whether the HDR format is float or half (I don't know of a 16-bit signed format). Does the added option of signed data change this limitation?

    Asking because I'm also in the middle of making a better light/general shading model.
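
    For reference, the byte packing mentioned above could look like this (a hypothetical scheme; it needs a full-float channel and, as said, survives no blending):

    Code (csharp):
        // Pack two 8-bit values (remapped to 0..1) into one float channel.
        float PackCbCr(float cb, float cr)
        {
            return floor(cb * 255.0) * 256.0 + floor(cr * 255.0); // 0..65535, exact in float32
        }

        float2 UnpackCbCr(float packedCbCr)
        {
            return float2(floor(packedCbCr / 256.0) / 255.0,   // Cb
                          fmod(packedCbCr, 256.0)  / 255.0);   // Cr
        }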
     
  10. kru (Joined: Jan 19, 2013 · Posts: 452)
    That #define trick is gold...