
Amplifying the light probe contribution

Discussion in 'Shaders' started by llavigne, Jun 6, 2012.

  1. llavigne

    llavigne

    Joined:
    Dec 27, 2007
    Posts:
    977
    I use only light probes, no direct light. They are computed by the lightmapper, so they match the environment lighting.
    Now, for stylistic reasons, I want the light probes to contribute much more than they do, and also to have more contrast.
    I could not find a way in a surface shader to change how much light gets mixed in. Is there an easy way to do this?
     
  2. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    Indeed there is!

    I've ripped out a bunch of stuff not related to the SH light, and this is a two-pass shader (which may not be necessary), but you'll have what you need here. I got a lot of inspiration/stuff to copy from the Shadowgun assets (love you guys!), so you might want to check those out for further tidbits.

    Unreal Engine 4 is proof this sort of lighting model is here to stay in the future. I hope to see the lightprobes system improved to match it some day (octree datastructure for sh probe locations / dynamic update plz kk thx!)

     
  3. llavigne

    llavigne

    Joined:
    Dec 27, 2007
    Posts:
    977
    This works, but it doubles the pass count, and on iPad that's not great.
    I know nothing of this CG stuff; we do everything with surface shaders. Do you have that as a surface shader?
     
  4. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Try adding this and multiplying it with your shader properties.
    Or maybe you can add it to the albedo output.
    Code (csharp):
    o.Emission = ShadeSH9 (fixed4 (worldN, 1.0));
     
  5. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    If you want this change to be global (i.e. affect all the objects using light probes) you can modify the coefficients directly and no change in shaders will be needed:
    http://unity3d.com/support/documentation/ScriptReference/LightProbes-coefficients.html (that also shows you which coefficient corresponds to which color, in case you'd like to affect them differently).
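    For illustration, a minimal sketch of that global approach (the script name is made up, and the exact access path is an assumption based on the linked LightProbes.coefficients page — check the scripting reference for your Unity version):

    ```csharp
    // Hypothetical sketch: globally boost light probe lighting by scaling the
    // baked SH coefficients. Assumes the era's API where coefficients is a flat
    // float[] holding 27 values per probe (9 coefficients x 3 color channels).
    using UnityEngine;

    public class BoostProbes : MonoBehaviour {
        public float amount = 2.0f;

        void Start () {
            float[] coeffs = LightmapSettings.lightProbes.coefficients;
            for (int i = 0; i < coeffs.Length; i++)
                coeffs[i] *= amount;
            // Assign the array back so the modified values take effect.
            LightmapSettings.lightProbes.coefficients = coeffs;
        }
    }
    ```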

    If you'd like to affect only certain objects, the easiest would be to create a new surface shader.
    First you need to make sure that the shader does not evaluate SH the default way, since you don't have access to the evaluated value. Providing the noambient param will remove the ShadeSH9() call from the shader.

    Then you can specify a custom vertex function (which will be called by the generated vertex shader) that will evaluate SH lighting and pass it to the surface function (called from the fragment shader).

    Note: the interpolated coefficients that ShadeSH9() accesses contain ambient, so it will not be lost.

    You can modify SH lighting in the vertex shader (if the modification is a simple function that's easy to describe with ALU operations) or in the surface function (e.g. if you need to sample a look-up texture), as it's done in the example below.

    Code (csharp):
    Shader "Custom/ModifyLightProbes" {
        Properties {
            _Color ("Main Color", Color) = (1,1,1,1)
            _MainTex ("Base (RGB)", 2D) = "white" {}
            _Amount ("SH scale", Float) = 1
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            LOD 200

            CGPROGRAM
            #pragma surface surf Lambert noambient vertex:vert
            #pragma debug

            sampler2D _MainTex;
            fixed4 _Color;
            float _Amount;

            struct Input {
                float2 uv_MainTex;
                float3 shLight;
            };

            void vert (inout appdata_full v, out Input o) {
                // evaluate SH light
                float3 worldN = mul ((float3x3)_Object2World, SCALED_NORMAL);
                o.shLight = ShadeSH9 (float4 (worldN, 1.0));
            }

            void surf (Input IN, inout SurfaceOutput o) {
                half4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
                o.Albedo = c.rgb;
                o.Alpha = c.a;

                // modify the SH lighting any way you want,
                // here it's just simple scaling
                float3 shLight = _Amount * IN.shLight;

                // emission is just added to the final color,
                // so SH light needs to be multiplied by albedo
                o.Emission = o.Albedo * shLight;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }
    Cheers!
     
  6. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    1: Awesome support Kuda, thanks!

    2: I see a whole bunch of lightprobe script stuff! Is this new? How did I miss it?

    Expect rad real time light probe octree experimentation as soon as I have time! ;D
     
  7. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    Er, Kuba, not Kuda...

    I noticed the latest light probe stuff isn't in the Flash export support yet (I noticed when I broke the Flash build... ;D). Breaking Flash is for sad pandas, but I think it shows that this is a feature definitely undergoing active dev, and I'm sure support will come along soon enough.

    But, as dev is active, can I ask some questions to someone who might know? =D

    1. directional lightmaps + animated lightprobes + magic = realtime indirect lighting?
    2. how is lightprobe information handled by the system (especially if you know how this works in Flash...) Are they read and calculated cpuside and then uploaded as constants, lights, or texture, to the gpu? Or is it gpu side? I assume constants, but how's that go down exactly?
    3. how do they compare performance wise - are there cases to avoid or is the cpu side probe traversal trivial and having hundreds is okay bc you still upload only a few constants?
    4. can animated lightprobes somehow factor into a "nolight" shadowing system?
    5. can the lightprobes be doubled up - could I be putting arbitrary information on them rather than SH (maybe I want wind or transparency info...) If I, for example, baked probes with a directional light and then took in the vectors+intensity on cpu side to act as a force I could see wind happening.
    6. Are lightprobes (outside of the text file) stored in a particular datastructure (b, kd, oct?) and is this traversal gpu or cpu?

    Etc etc... =)
     
    Last edited: Jun 15, 2012
  8. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    The lighting baked into directional lightmaps is static. The normal maps used when sampling those lightmaps can be dynamically changed, but that doesn't change the fact that the information about the incoming light direction and intensity is static.

    Animated lightprobes -- what you mean perhaps is Light Propagation Volumes or Deferred Radiance Transfer Volumes. If so, then yes, those are real-time GI techniques, but they have quite serious constraints and are not really similar to the approach we're using for static lightprobes.

    An interpolated lightprobe is calculated on the CPU and passed down to the shader as constants. See ShadeSH9() in UnityCG.cginc to see how the lighting is evaluated in the shader.
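    For anyone without the shader source handy, here's a rough Python sketch of what a ShadeSH9-style evaluation does for one colour channel: nine scalar coefficients dotted with the second-order spherical harmonic basis of the world-space normal. The basis constants below follow the standard real SH convention; UnityCG.cginc folds the constants and signs slightly differently, so treat this as illustrative only.

    ```python
    def sh_basis(n):
        """Second-order (9-term) real SH basis evaluated at unit direction n = (x, y, z)."""
        x, y, z = n
        return [
            0.282095,                        # Y(0,0): constant ambient term
            0.488603 * y,                    # Y(1,-1)
            0.488603 * z,                    # Y(1,0)
            0.488603 * x,                    # Y(1,1)
            1.092548 * x * y,                # Y(2,-2)
            1.092548 * y * z,                # Y(2,-1)
            0.315392 * (3.0 * z * z - 1.0),  # Y(2,0)
            1.092548 * x * z,                # Y(2,1)
            0.546274 * (x * x - y * y),      # Y(2,2)
        ]

    def shade_sh9(coeffs, n):
        """Dot nine per-channel coefficients with the SH basis (one channel of ShadeSH9)."""
        return sum(c * b for c, b in zip(coeffs, sh_basis(n)))
    ```

    A probe lit by pure ambient has only the first coefficient non-zero, so the result is the same for every normal direction.
    
    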

    The algorithm that figures out which tetrahedron contains the position (for which you want the interpolated lightprobe) is really fast, and its best case is when the object either stays within the same tetrahedron between frames or just moves to a neighbouring one. In other words, if the object moves "normally" then the evaluation is quick; if the object teleports then the evaluation is slightly slower and depends on the distance (the number of tetrahedra between the old and the new position).
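    To illustrate the interpolation step itself (a sketch of the general technique, not Unity's actual implementation): once the containing tetrahedron is known, the four corner probes are blended with the barycentric weights of the sample position.

    ```python
    def det3(m):
        """Determinant of a 3x3 matrix given as three row tuples."""
        (a, b, c), (d, e, f), (g, h, i) = m
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

    def tetra_weights(p, a, b, c, d):
        """Barycentric weights of point p in tetrahedron (a, b, c, d).

        Solves p - a = u*(b-a) + v*(c-a) + w*(d-a) by Cramer's rule and
        returns (1-u-v-w, u, v, w); all four are >= 0 iff p is inside."""
        cols = [tuple(x - y for x, y in zip(v, a)) for v in (b, c, d)]
        rhs = tuple(x - y for x, y in zip(p, a))
        m = tuple(tuple(col[r] for col in cols) for r in range(3))
        det = det3(m)
        uvw = []
        for k in range(3):
            # Replace column k with the right-hand side.
            mk = tuple(tuple(rhs[r] if j == k else m[r][j] for j in range(3))
                       for r in range(3))
            uvw.append(det3(mk) / det)
        u, v, w = uvw
        return (1.0 - u - v - w, u, v, w)

    def interpolate_probe(p, corners, probe_coeffs):
        """Linearly blend the SH coefficient lists of the four corner probes."""
        w = tetra_weights(p, *corners)
        return [sum(wi * probe[i] for wi, probe in zip(w, probe_coeffs))
                for i in range(len(probe_coeffs[0]))]
    ```

    Because every coefficient is blended with the same weights, anything you store in the probes (as mentioned below, not just SH lighting) gets the same linear interpolation.
    
    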

    I'm sorry, but I don't understand the question.

    You can write arbitrary coefficients into lightprobes, but remember that each coefficient will be interpolated linearly.

    Light probe positions are tetrahedralized when you move/add/remove light probes in a Light Probe Group and also at the beginning of a bake. The data structure is quite specific and you can read more about it in Robert's GDC'12 Light Probe Interpolation Using Tetrahedral Tessellations talk.
     
  9. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I think it's worth travelling down the road where light probes can individually be moved around during gameplay, perhaps with a coroutine to minimise computation time. For example, a light comes on. I guess the coefficients modification is realtime?
     
  10. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    Kuba, thank you so much for all the answers, they are greatly appreciated!

    Although the directional lightmap information is static, it IS a lot of information about light propagation and the space. I have had success with exploiting that in ways you guys are not (that I know of) and think there is a lot more potential left to explore. Will you be in Amsterdam for Unite? It'd be an honor if you might have some time to discuss it.

    //

    Hippo, I think you're definitely on to something...
     
    Last edited: Aug 5, 2012
  11. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    Yes, I'll be there at Unite. It's probably easiest if you book a time slot at the Hands-On Labs. :)

    Cheers!
     
  12. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    It just occurred to me that the lightmaps store their coefficients as rgbrgbrgb...

    Is this ordering storing light intensity/color in the rgb, and then the direction in their ordering (ie, rgb(top left corner), rgb(top mid corner), rgb(top right corner))?

    Further, are the probes themselves in an order regarding space (ie, top left most probe in worldspace, then top most, second left most, top most, third left most, etc.)? If so, I find this to be a curious alignment. I suppose you're aware of the potential - have you been exploiting it?
     
  13. Kuba

    Kuba

    Moderator

    Joined:
    Jan 13, 2009
    Posts:
    416
    I don't think I fully understand the question. Could you try rephrasing?


    No, they are not stored in any particular order.
     
  14. alleycatsphinx

    alleycatsphinx

    Joined:
    Jan 25, 2012
    Posts:
    57
    To clarify my first question, I'm trying to understand how the light information is applied to the spherical harmonics. In the math it's clear to see what is multiplied into which component, but I'm trying to visualize the object. Any ideas?

    The latter is whether you're implementing morton ordering for the data.
     
  15. Lulucifer

    Lulucifer

    Joined:
    Jul 8, 2012
    Posts:
    358
    How does a probe sample darkness? I have tried putting all 8 probes, which form a cube, in the shadow area of another object, but they always seem to receive full light from the baked light; there is no darkness cue at all. Can probes only sample darkness formed by a light's attenuation, not shadows?