
I wrote Surface Shaders 2.0 so you don't have to deal with SRPs anymore

Discussion in 'General Graphics' started by jbooth, Jan 20, 2021.

  1. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    This is really well done. My use cases are mostly grounded in laziness: I want to make fast URP shaders that use URP lighting without any extra work.

    But importantly, I need to access the final pixel before it gets written. You can do this in code, but not in any shader graph, so in VR it's impossible to do proper dithering or tone mapping in the shader rather than in post, and so on, which matters for perf.

    These glaring omissions from Unity in shader graph and the confusing bloat of their lit shader source code really just add a huge amount of work for any dev.

    So much so that one dev decided to build surface shaders again, but better. Nice one Jason!

    Edit: actually, you could probably just make a graph generate code for the places you define in this system and voila (with expression nodes), a better shadergraph with fecking ace performance is born.
     
  2. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    5,994
    So lighting is whatever PBR math runs on the target RP, and the rest is hack and slash. Seems like a sensible scope, and it's good to know the project boundaries.
    What is that packing option you mention in your video?
     
  3. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,445
    It's released already:

    https://github.com/slipster216/ShaderPackager
     
  4. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    5,994
  5. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,445
    Oh, just packing metal/ao/detailmask/smoothness into a texture. That's not something specific to the system, just how I set up that particular shader.
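    To illustrate the idea (not necessarily the exact layout that shader uses): a packed map just stores one grayscale mask per channel, e.g. the HDRP-style R = metallic, G = occlusion, B = detail mask, A = smoothness. A minimal sketch of reading one in a surface function - the _MaskMap property name, the channel order, and any output fields beyond the ones shown elsewhere in this thread are assumptions:

    Code (CSharp):
    BEGIN_PROPERTIES
       [NoScaleOffset]_MaskMap("Mask (Metal/AO/Detail/Smooth)", 2D) = "white" {}
    END_PROPERTIES

    BEGIN_CODE
       sampler2D _MaskMap;
       void SurfaceFunction(inout LightingInputs o, ShaderData d)
       {
          // assumed channel layout: R = metallic, G = occlusion,
          // B = detail mask, A = smoothness
          half4 mask = tex2D(_MaskMap, d.texcoord0.xy);
          o.Metallic   = mask.r;
          o.Occlusion  = mask.g;
          o.Smoothness = mask.a;
       }
    END_CODE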
     
    Prodigga likes this.
  6. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    I really like the idea of stacking. Come to think of it, most of our level shaders might fit into this system, if I'm understanding it right:

    - Root: Most shaders need to interpret position/rotation/scale passed the same way from the instancing system and output at least basic placeholder values to all surface outputs
    - Root/Prop: Subset used only for props: needs to interpret prop-specific packed vector containing things like prop HP, progress of destruction animation, visibility multiplier, etc (applying dither for visibility, applying random vertex offset + dissolve based on destruction property and so on). Every prop also uses 1 simple set of textures (albedo, normal, packed metalness/smoothness/emission/occlusion map) and just outputs that directly to standard surface output. Albedo color is modified by two HSB offset properties that are applied in areas masked by albedo alpha (which stores 2 tint masks going away from 0.5). Used directly by simple small props.
    - Root/Prop/Crushable: Some props like cars need additional vertex effects and flatten themselves based on a new property (useful when a unit runs over a car etc.)
    - Root/Prop/Vegetation: Vegetation props need other additional vertex effects (for wind, for flattening when they fall to the ground etc.)
    - Root/Level: Subset used only for blocks making up the level. All blocks are on a regular voxel grid and need to respond to the voxel grid being damaged, so this set of shaders needs to interpret 2 additional properties packing 8 corner states and add noise, ramping burn overlay, clipping etc. accordingly.
    - Root/Level/Terrain: Natural terrain tiles use one set of PBR textures but need to sample it stochastically, or use several arrays of textures sampled based on a simple noise splatmap for variety.
    - Root/Level/Manmade: Manmade tiles use texture arrays to emulate multi-materials (e.g. concrete + metal + ground + brick + detail atlas with windows and doors), sampled based on material index available per vertex.

    This is vastly simplified, but even with basic differences like these it's very tempting to think of everything as a stack where each subsequent shader never has to repeat what was defined in a higher-level shader. Right now we get by with includes, but the idea of a stack feels a fair bit cleaner and easier to maintain. If I want to give all my props and level shaders support for snow and rain, I can build it into the Root, or make all of them inherit from a Root + Weather stack.
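    As a rough sketch, one rung of that hierarchy could look like the subshader stacking jbooth shows below - the .surfshader file names and the crush property here are hypothetical, and output fields beyond the ones shown in this thread are assumed:

    Code (CSharp):
    // Crushable.surfshader - hypothetical file names; Prop.surfshader would
    // itself pull in Root.surfshader the same way
    BEGIN_SUBSHADERS
       "Prop.surfshader"
    END_SUBSHADERS

    BEGIN_PROPERTIES
       _CrushAmount("Crush Amount", Range(0,1)) = 0
    END_PROPERTIES

    BEGIN_CODE
       half _CrushAmount;
       // only the crushable-specific work lives here; everything Root and
       // Prop already set up runs earlier in the stack
       void SurfaceFunction(inout LightingInputs o, ShaderData d)
       {
          o.Albedo *= lerp(1.0, 0.5, _CrushAmount); // e.g. darken crushed areas
       }
    END_CODE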
     
    Last edited: Jan 25, 2021
  7. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,445
    Here's a stacked shader with a Lit shader, moss shader, and puddles shader. Basically I just created a new stacked shader and dragged the three shaders into the list.



    This works wonderfully for surfaces, but tessellation gets a bit tricky. In this case, we want the tessellation offset to be based on both shaders - we want to sample the terrain, then compute the puddles, and blend/clamp the displacement at the puddle height. In the current system, that means each module has to know something about what the other module is doing, which you can do, but it's not ideal. So yeah, I'm trying to think of ways to make the API support this kind of thing without making it very complex. That said, you can always do it via more traditional code techniques, and it's even easier in this system than it would be normally.

    For instance, for a specular version of my Lit shader, all you have to do is:

    Code (CSharp):
    // suck in the lit shader and make it specular..

    BEGIN_SUBSHADERS
       "Lit.surfshader"
    END_SUBSHADERS

    BEGIN_OPTIONS
       Workflow "Specular"
    END_OPTIONS

    BEGIN_PROPERTIES
       [NoScaleOffset]_Specular("Specular", 2D) = "black" {}
    END_PROPERTIES

    BEGIN_CODE
       sampler2D _Specular;
       void SurfaceFunction(inout LightingInputs o, ShaderData d)
       {
          o.Metallic = 0;
          o.Specular = tex2D(_Specular, d.texcoord0.xy * _AlbedoMap_ST.xy + _AlbedoMap_ST.zw).rgb;
       }
    END_CODE

    Essentially it pulls in all the properties, cbuffers, and code from the main shader via the subshader block, and my surface function gets sequenced after its surface function, so I sample the specular and that's it.

    Now, if we look at the HDRP shader, it lets you set most of this stuff via properties. This is super nice for the user, but it means you're tied to having a custom editor (or having a bad UI), and your code gets littered with a bunch more #if _SPECULAR blocks everywhere, which, when you get to really complex shaders, makes everything hard to reason about and test.

    Note that there is no difference between a subshader and a regular shader - they both compile fine. In fact, you can put nothing into the file and it will compile into a shader. Or just some defines, or properties, or code.

    So overall I'm really liking this, but it does have some limits, especially when the code is completely modular and cannot easily know about the other bits in the stack. Note though that you can set defines in the stack, such that when you add the "Foosle" shader into a stack, it sets _HASFOOSLE or whatever you want at the top of your shader. (Also, it doesn't take keywords to do that, since it's all compile time.)
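    For example - the exact block name for declaring defines isn't shown in this thread, so treat the BEGIN_DEFINES syntax below as an assumption; the point is just that the flag resolves at compile time rather than costing a keyword:

    Code (CSharp):
    // Foosle.surfshader - announces its presence to the rest of the stack
    BEGIN_DEFINES
       #define _HASFOOSLE 1
    END_DEFINES

    // ...and some other module in the same stack can react to it:
    BEGIN_CODE
       void SurfaceFunction(inout LightingInputs o, ShaderData d)
       {
       #if _HASFOOSLE
          // Foosle-aware path; compiled in only when the Foosle shader
          // is part of the stack, no shader keyword required
       #endif
       }
    END_CODE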
     
    Last edited: Jan 25, 2021
  8. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,445
    Also, further out I'd want to consider how we could write custom material editors for each shader and have them get combined. This would mean the shader itself would have to be set to use a custom editor, but then it could figure out which editors are needed based on what's in the stack and call the various functions on them.
     
  9. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,445