How do I incorporate per-vertex lighting data into a surface shader?

Discussion in 'Shaders' started by OswaldHurlem, Oct 7, 2018.

  1. OswaldHurlem

    OswaldHurlem

    Joined:
    Jan 6, 2017
    Posts:
    40
    Hello, I've posted a question on Unity Answers. I'm reposting it here since the two sites have somewhat divergent communities.

    Context

    I am working on a game for modern PCs, but with stylized graphics somewhat reminiscent of older 3D games. One goal for my team is rendering that computes lighting on a per-vertex basis while still leveraging Unity's Global Illumination, baked lights, etc.
    I want to write this as a surface shader, and as a starting point I've written one which exposes most of the opportunities for customization: there's custom per-vertex data, a custom lighting model, and a custom function to return the final color of a shaded surface.
    https://hastebin.com/cakodidinu.cs
    One way to accomplish my visual goal might be to compute the UnityGI value on a per-vertex basis, pass that into the Lighting Model, and use these per-vertex GI values in lieu of the per-pixel values provided. This would create a very artifact-heavy look which varies depending on how detailed the geometry is. That's something that would be good for this game.
    Code (csharp):
        struct custom_per_vert_data {
            // Other fields like UVs
            UnityGI vertexInterpolatedGI;
        };
        struct custom_surface_output { /* Albedo, Normal, etc */ };
        void CustomVert(inout custom_input_data v, out custom_per_vert_data o)
        {
            // Assign other fields
            o.vertexInterpolatedGI = GET_GI_AT_VERTEX(v);
        }
        void CustomSurface(custom_per_vert_data v, inout custom_surface_output o)
        {
            o = GET_SURFACE_PROPERTIES(/* UVs and such from v */);
        }
        half4 LightingCustomModel(
            custom_surface_output s,
            custom_per_vert_data v,
            float3 viewDir, UnityGI gi)
        {
            // Ignore gi param as it is not per-vertex
            return PerformSomewhatStandardLighting(s, viewDir, v.vertexInterpolatedGI);
        }
    However, this doesn't work because the function signature for custom lighting models cannot have the custom_per_vert_data parameter. I'll have to use a less elegant solution instead.
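    For reference, as far as I can tell the surface shader compiler only accepts a fixed set of lighting-function signatures, roughly the ones below (from memory of the Unity docs; the "CustomModel" name is mine and the exact set may vary by Unity version). None of them take the per-vertex Input struct, which is why the "ideal" version above won't compile.
    Code (csharp):
        // Classic forward forms:
        half4 LightingCustomModel (custom_surface_output s, half3 lightDir, half atten);
        half4 LightingCustomModel (custom_surface_output s, half3 lightDir, half3 viewDir, half atten);
        // Unity 5+ GI-aware forms:
        half4 LightingCustomModel (custom_surface_output s, half3 viewDir, UnityGI gi);
        void  LightingCustomModel_GI (custom_surface_output s, UnityGIInput data, inout UnityGI gi);
        // (Plus _Deferred / _PrePass variants for the other rendering paths.)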

    Question
    What is the best way to incorporate per-vertex lighting data into a surface shader? Something as close to the above "ideal" solution as possible would be nice. I have identified two options, though there may be others.
    The first solution is to transport the per-vertex lighting data to the lighting model function via fields in the custom_surface_output struct.
    Code (csharp):
        struct small_light_properties { /* Color, maybe one other field */ };
        struct custom_per_vert_data {
            // Other fields like UVs
            small_light_properties vertexLight;
        };
        struct surface_properties { /* Albedo, Normal, etc */ };
        struct custom_surface_output
        {
            surface_properties surfaceProperties;
            small_light_properties vertexLight;
        };
        void CustomVert(inout custom_input_data v, out custom_per_vert_data o)
        {
            // Assign other fields like UVs
            o.vertexLight = GET_GI_AT_VERTEX(v);
        }
        void CustomSurface(custom_per_vert_data v, inout custom_surface_output o)
        {
            o.surfaceProperties = GET_SURFACE_PROPERTIES(/* UVs and such from v */);
            o.vertexLight = v.vertexLight;
        }
        half4 LightingCustomModel(custom_surface_output s, float3 viewDir, UnityGI gi)
        {
            // gi parameter is per-pixel and is ignored
            return PerformLessStandardLighting(s.surfaceProperties, viewDir, s.vertexLight);
        }
    This isn't great because it makes custom_surface_output -- a struct which is ostensibly just for surface-related data -- also carry lighting-related data. I'm not sure what the consequences of this would be. Additionally, the number of interpolators allowed in custom_surface_output is limited by Shader Model 3.0 -- this means I can't have all the GI-related data in it.
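    For context on why it doesn't fit: the full UnityGI struct is fairly heavy. As I recall from UnityLightingCommon.cginc it looks roughly like this (exact fields vary between Unity versions):
    Code (csharp):
        // Approximate contents of UnityGI as defined in UnityLightingCommon.cginc
        // (from memory; some versions also carry an ndotl term in UnityLight):
        struct UnityLight
        {
            half3 color;
            half3 dir;
        };
        struct UnityIndirect
        {
            half3 diffuse;
            half3 specular;
        };
        struct UnityGI
        {
            UnityLight light;
            UnityIndirect indirect;
        };
        // Four half3s is already a few float4 interpolators once packed, on top of
        // the UVs, normals, and lightmap data the generated surface shader needs.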
    ----------
    Another solution is to compute the lighting in the CustomColor function, which executes last. The lighting model then only serves to pass the view direction forward.
    Code (csharp):
        struct small_light_properties { /* Color, maybe one other field */ };
        struct custom_per_vert_data {
            // Other fields like UVs
            small_light_properties vertexLight;
        };
        struct surface_properties { /* Albedo, Normal, etc */ };
        struct custom_surface_output
        {
            surface_properties surfaceProperties;
        };
        void CustomVert(inout custom_input_data v, out custom_per_vert_data o)
        {
            // Assign other fields like UVs
            o.vertexLight = GET_LIGHT_AT_VERT(v);
        }
        void CustomSurface(custom_per_vert_data v, inout custom_surface_output o)
        {
            o.surfaceProperties = GET_SURFACE_PROPERTIES(/* UVs and such from v */);
        }
        half4 LightingCustomModel(custom_surface_output s, float3 viewDir, UnityGI gi)
        {
            // Smuggle the view direction out through the "color"
            return half4(viewDir, 0);
        }
        void CustomColor(
            custom_per_vert_data v,
            custom_surface_output o,
            inout fixed4 colorFromLightingModel)
        {
            half3 viewDir = colorFromLightingModel.rgb;
            colorFromLightingModel = PerformLessStandardLighting(o.surfaceProperties, viewDir, v.vertexLight);
        }
    The downside to this is that it makes the lighting model totally phony... again, I'm not sure what the ramifications of this are.
    ----------
    Anyway, I'm interested to hear which of these solutions is better, and if there's another one I haven't thought of.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    No consequences really. Shaders don't really support structs within the body of the shader; they're a construct of the high-level shader language for the purposes of code organization. In the real shader assembly it's just writing values to statically assigned registers and then reading from them. It doesn't matter if you have 100 different structs, or none; in the end it's the same registers.

    The real thing you need to be mindful of is the amount of data being passed from the vertex shader to the fragment shader. This can have a surprising amount of impact on the performance of the shader. Passing per-light information from the vertex to the fragment, for example, is likely overkill if the same information can be calculated in the fragment shader. The cost of a single float4 worth of data passed from the vertex to the fragment can be the equivalent of a surprising amount of math done in the fragment shader. I worked out the approximate equivalent cost a while ago, but I can't remember anymore for sure. It was something like 10 ALU instructions for every float4 you don't pass... 10 years ago. It's likely only more these days as GPUs have gotten faster at math, but their memory speeds have not increased as quickly.
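    As a rough illustration of the "same registers" point (plain vertex-to-fragment HLSL, names made up):
    Code (csharp):
        // Both of these interpolator structs compile to the same packed TEXCOORD
        // registers; the nested struct only exists in the source for organization.
        struct light_data
        {
            half3 color : TEXCOORD1;
        };
        struct v2f_nested
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;
            light_data vertexLight;   // semantics live on the leaf members
        };
        struct v2f_flat
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;
            half3 vertexLightColor : TEXCOORD1;
        };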
     
    OswaldHurlem likes this.
  3. OswaldHurlem

    OswaldHurlem

    Joined:
    Jan 6, 2017
    Posts:
    40
    Yeah, I think it's still the same vertex->fragment bandwidth no matter which way I do it. But maybe it trips up the lightmap computations or some of my debugging capabilities? Well, I'll try it out, I think.