Making SRP shaders easier to write..

Discussion in 'Shaders' started by jbooth, Nov 9, 2019.

  1. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    So, surface shaders have been discontinued, and Unity has shown no interest in writing a system to abstract shaders between render pipelines unless they go through a shader graph. While I could rant for hours about how short-sighted this is, this post is about how to make the current situation for hand-writing shaders better.


    Ideally what we want is to be able to write shader code which is:

    • Is portable across pipelines (URP/LWRP, HDRP, Standard)
    • Is upgradable across changes Unity makes (SRP/Unity versions, new lighting features)
    • Abstracts away the complexity of managing all the passes required
    • Is reasonably well optimized
    • Hides the code we don’t care about

    I recently finished writing an adapter for both URP (aka LWRP) and HDRP, allowing my product MicroSplat to compile its shaders for all three pipelines. MicroSplat generates shader code, similar to a shader graph, and was modified to support an interface so that each pipeline could decide how to write that code. I wrote the URP adapter first, and since it was similar to the standard pipeline it was relatively easy to understand what was needed. What the adapter does is:
    • Write out the bones of the shader (properties, etc)
    • For each pass
      • Write out the pass header, includes, pragmas and such
      • Write out macros and functions needed to abstract the differences between URP and standard, such as the WorldNormalVector function, or defining _WorldSpaceLightPos0 as _MainLightPosition
      • Write out my code and functions in surface shader format
      • Write out the URP code for the vertex/pixel functions
        • This code packs its data into the structs that I use and then calls my code

    With this, my surface shader code runs in URP. Given that shaders do not actually have structures, all this copying of data into new structures essentially compiles out, making it equivalently efficient in most cases.
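    To make that concrete, the generated URP fragment entry point ends up doing something roughly like this (the names Varyings, Input, and SurfImpl here are simplified stand-ins for illustration, not the literal generated code):

    Code (CSharp):
    // purely illustrative: "Input" is the surface-shader style struct my code expects,
    // SurfImpl stands in for the generated surface function
    Input BuildInput(Varyings IN)
    {
        Input i = (Input)0;
        i.worldPos = IN.positionWS;
        i.worldNormal = IN.normalWS;
        i.viewDir = normalize(_WorldSpaceCameraPos - IN.positionWS);
        i.uv_Control0 = IN.uv0;
        return i;
    }

    half4 frag(Varyings IN) : SV_Target
    {
        SurfaceOutputStandard o = (SurfaceOutputStandard)0;
        SurfImpl(BuildInput(IN), o);   // the original surface-shader style code, untouched
        // o.Albedo, o.Normal, etc. are then fed into URP's lighting functions
        return half4(o.Albedo, 1);
    }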

    I tried to take this same approach porting to HDRP, and after many false starts came to the conclusion that actually understanding the HDRP code in this manner would not only be extremely difficult, but make it a compatibility nightmare when changes occurred. I wanted something that could be easily updated when new versions of the HDRP changed things, so instead went with another approach.

    Rather than writing out all of the passes and such, I’d export a shader from Unity’s shader graph which contains various insertion points for my code. The same basic issues arise: I need to add functions and macros to reroute missing surface shader functions and conventions, like:

    Code (CSharp):
    #define UNITY_DECLARE_TEX2D(name) TEXTURE2D(name);
    #define UNITY_SAMPLE_TEX2D_SAMPLER(tex, samp, coord)  SAMPLE_TEXTURE2D(tex, sampler_##samp, coord)
    #define UnityObjectToWorldNormal(normal) mul(GetObjectToWorldMatrix(), normal)

    Then copy their structs to mine:

    Code (CSharp):
    Input DescToInput(SurfaceDescriptionInputs IN)
    {
        Input s = (Input)0;
        s.TBN = float3x3(IN.WorldSpaceTangent, IN.WorldSpaceBiTangent, IN.WorldSpaceNormal);
        s.worldNormal = IN.WorldSpaceNormal;
        s.worldPos = IN.WorldSpacePosition;
        s.viewDir = IN.TangentSpaceViewDirection;
        s.uv_Control0 = IN.uv0.xy;

        return s;
    }
    And in each pass on the template, call my function with that data.
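    Concretely, the insertion point inside each pass boils down to something like this (a sketch; surf stands in for my actual generated function):

    Code (CSharp):
    // at the insertion point in the template's SurfaceDescriptionFunction
    SurfaceDescription surface = (SurfaceDescription)0;
    Input i = DescToInput(IN);            // copy the graph's inputs into my struct
    SurfaceOutputStandard o = (SurfaceOutputStandard)0;
    surf(i, o);                           // my surface-shader style code
    surface.Albedo = o.Albedo;            // copy the results back out for HDRP's lighting
    surface.Normal = o.Normal;
    surface.Smoothness = o.Smoothness;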

    While doing this I learned a lot about the internal way Unity is abstracting these problems in HDRP, which lets them write less of the code in each shader graph shader. I think this approach could be improved to make hand-written HDRP shaders much easier to write. With a bit more work, you could have a .surfshader file type which uses a scripted importer to inject the code inside of it into a templated shader like the one I use, and essentially have a large chunk of what surface shaders provide. Further, if LWRP were to follow the same standards, then porting from one pipeline to the other could also be automatic. To understand this, let’s look at how an HDRP shader graph’s code is written:



    A series of defines is used to enable/disable things needed from the mesh:

    Code (CSharp):
    #define ATTRIBUTES_NEED_TEXCOORD0
    Then code in the vertex shader can use this define to filter which attributes are used in the appdata structure. The same trick is used for things needed in the pixel shader:

    Code (CSharp):
    #define VARYINGS_NEED_TANGENT_TO_WORLD
    Then any code which works with these things can check these defines to see if they exist.
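    The struct declarations themselves can then be filtered by those same defines, along these lines (a sketch of the idea, not the actual HDRP source):

    Code (CSharp):
    struct AttributesMesh
    {
        float3 positionOS : POSITION;
    #ifdef ATTRIBUTES_NEED_NORMAL
        float3 normalOS : NORMAL;
    #endif
    #ifdef ATTRIBUTES_NEED_TEXCOORD0
        float2 uv0 : TEXCOORD0;   // only read from the mesh when a pass asks for it
    #endif
    };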

    This allows them to abstract many internals and determine whether various chunks of code need to be run. However, the graph does not take this far enough, if you ask me. For instance, the graph writes out various packing functions to pack data between the vertex and pixel shader which, if this convention were fully followed, could be entirely #included instead of written out. It also writes functions to compute commonly needed things in the pixel shader, such as the tangent to world matrix, but writes these functions out each time instead of relying on the defines to do the filtering. For instance:

    Code (CSharp):
    SurfaceDescriptionInputs FragInputsToSurfaceDescriptionInputs(FragInputs input, float3 viewWS)
    {
        SurfaceDescriptionInputs output;

        ZERO_INITIALIZE(SurfaceDescriptionInputs, output);

        output.WorldSpaceNormal =            normalize(input.tangentToWorld[2].xyz);
        // output.ObjectSpaceNormal =           mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_M);           // transposed multiplication by inverse matrix to handle normal scale
        // output.ViewSpaceNormal =             mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_I_V);         // transposed multiplication by inverse matrix to handle normal scale
        output.TangentSpaceNormal =          float3(0.0f, 0.0f, 1.0f);
        output.WorldSpaceTangent =           input.tangentToWorld[0].xyz;
        // output.ObjectSpaceTangent =          TransformWorldToObjectDir(output.WorldSpaceTangent);
        // output.ViewSpaceTangent =            TransformWorldToViewDir(output.WorldSpaceTangent);
        // output.TangentSpaceTangent =         float3(1.0f, 0.0f, 0.0f);
        output.WorldSpaceBiTangent =         input.tangentToWorld[1].xyz;
        // output.ObjectSpaceBiTangent =        TransformWorldToObjectDir(output.WorldSpaceBiTangent);
        // output.ViewSpaceBiTangent =          TransformWorldToViewDir(output.WorldSpaceBiTangent);
        // output.TangentSpaceBiTangent =       float3(0.0f, 1.0f, 0.0f);
        output.WorldSpaceViewDirection =     normalize(viewWS);
        // output.ObjectSpaceViewDirection =    TransformWorldToObjectDir(output.WorldSpaceViewDirection);
        // output.ViewSpaceViewDirection =      TransformWorldToViewDir(output.WorldSpaceViewDirection);
        float3x3 tangentSpaceTransform =     float3x3(output.WorldSpaceTangent, output.WorldSpaceBiTangent, output.WorldSpaceNormal);
        output.TangentSpaceViewDirection =   mul(tangentSpaceTransform, output.WorldSpaceViewDirection);
        output.WorldSpacePosition =          GetAbsolutePositionWS(input.positionRWS);
        // output.ObjectSpacePosition =         TransformWorldToObject(input.positionRWS);
        // output.ViewSpacePosition =           TransformWorldToView(input.positionRWS);
        // output.TangentSpacePosition =        float3(0.0f, 0.0f, 0.0f);
        // output.ScreenPosition =              ComputeScreenPos(TransformWorldToHClip(input.positionRWS), _ProjectionParams.x);

        output.uv0 =                         input.texCoord0;
        // output.uv1 =                         input.texCoord1;
        // output.uv2 =                         input.texCoord2;
        // output.uv3 =                         input.texCoord3;
        // output.VertexColor =                 input.color;
        // output.FaceSign =                    input.isFrontFace;
        // output.TimeParameters =              _TimeParameters.xyz; // This is mainly for LW as HD overwrite this value

        return output;
    }

    If, instead of commenting and uncommenting these lines, they were simply wrapped in define checks, this function would not need to exist in the top-level pass at all, and could instead be #included from some file:

    Code (CSharp):
    #ifdef VARYINGS_NEED_WORLD_SPACE_POSITION

    float3x3 tangentSpaceTransform =     float3x3(output.WorldSpaceTangent, output.WorldSpaceBiTangent, output.WorldSpaceNormal);
    output.TangentSpaceViewDirection =   mul(tangentSpaceTransform, output.WorldSpaceViewDirection);
    output.WorldSpacePosition =          GetAbsolutePositionWS(input.positionRWS);

    #endif
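    The same treatment would work for the pack/unpack functions the graph currently writes into every shader; a shared include could look roughly like this (the interp_ field names are just illustrative):

    Code (CSharp):
    PackedVaryingsMeshToPS PackVaryingsMeshToPS(VaryingsMeshToPS input)
    {
        PackedVaryingsMeshToPS output = (PackedVaryingsMeshToPS)0;
        output.positionCS = input.positionCS;
    #ifdef VARYINGS_NEED_TANGENT_TO_WORLD
        output.interp_normalWS  = input.normalWS;
        output.interp_tangentWS = input.tangentWS;
    #endif
    #ifdef VARYINGS_NEED_UV0
        output.interp_texCoord0 = input.texCoord0;
    #endif
        return output;
    }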
    The same would be true of things like the structure definitions:

    Code (CSharp):
    struct SurfaceDescriptionInputs
    {
    #ifdef VARYINGS_NEED_WORLD_SPACE_POSITION
        float3 WorldSpacePosition; // optional
    #endif
    #ifdef VARYINGS_NEED_UV0
        float4 uv0; // optional
    #endif
    };
    If this were done, very little code would have to exist in the actual output shader: only some defines that say what you are using from the included code, and the code you actually care about.

    There's some squirminess about whether we even have to #if around any of these: if the data is only computed in the pixel shader, then any of these values we don't use get stripped by the compiler anyway. So we don't really need a VARYINGS_NEED_WORLD_SPACE_POSITION define at all, since the compiler will strip those values and calculations if we don't use them. In reality, we only need to define what goes across the vertex->pixel stages (hull, domain, etc. too), but Unity seems to output code that's super specific here, so I'm following that pattern.

    With that, a pass might look something like this:

    Code (CSharp):
    #define ATTRIBUTES_NEED_POSITION            // allow position in AttributesMesh struct
    #define ATTRIBUTES_NEED_UV0                 // allow uv0 in AttributesMesh struct
    #define VARYINGS_NEED_UV0                   // allow/copy to SurfaceDescriptionInputs struct
    #define VARYINGS_NEED_WORLD_SPACE_POSITION
    #define HAS_MESH_MODIFICATIONS              // call my custom vertex function

    AttributesMesh ApplyMeshModification(AttributesMesh input, float3 timeParameters)
    {
        input.uv0 += timeParameters.x;
        return input;
    }

    TEXTURE2D(_MainTex);
    SAMPLER(sampler_MainTex);

    SurfaceDescription SurfaceDescriptionFunction(SurfaceDescriptionInputs IN)
    {
        SurfaceDescription surface = (SurfaceDescription)0;
        surface.Albedo = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, IN.uv0.xy).rgb;
        return surface;
    }
    That now looks really manageable for a pass, right? There’s nothing about the code we have written that couldn't run in URP as well as HDRP. There aren't pages of code around it, slightly modified for every shader. Just what we care about. And none of that requires anything but some refactoring of the existing code that the shader graph writes out.

    Where it gets really interesting:

    So if we take this a bit further, we could write a ScriptedImporter which takes this code and inserts it into each pass of a templated shader file, very similar to what the graph does anyway, but without all the commenting and uncommenting of code and structure declarations. The one issue here is that some passes don’t require computing all of the code. For instance, if you're doing a shadow caster pass, you don’t care about albedo/normals/etc., unless those components have something to do with whether that pixel should be clipped or not.

    Luckily, in many cases the shader compiler strips most of this code for us, so it doesn’t matter so much if it’s in there. The internal functions could provide dummy data to these structures when they generally aren’t needed, with defines available to override that behavior when needed. Something like PASSSHADOWCASTER_NEED_TANGENT, if for some reason you really want a real tangent in your MeshAttributes and SurfaceFragmentInput structures instead of dummy data the compiler can use and strip.
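    As a sketch of that idea (the pass and override define names here are placeholders, not existing HDRP defines):

    Code (CSharp):
    // shadow caster passes get dummy tangents unless a shader explicitly asks for real ones
    #if defined(SHADERPASS_SHADOWCASTER) && !defined(PASSSHADOWCASTER_NEED_TANGENT)
        output.WorldSpaceTangent   = float3(1, 0, 0);   // dummy data the compiler can strip
        output.WorldSpaceBiTangent = float3(0, 1, 0);
    #else
        output.WorldSpaceTangent   = input.tangentToWorld[0].xyz;
        output.WorldSpaceBiTangent = input.tangentToWorld[1].xyz;
    #endif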

    So at this point, if both the LWRP and HDRP shaders followed these semantics, we'd have a shader that gives us most of the benefits we want. We can write something simple without thinking about passes and such, we can tell it what we need in terms of mesh and pixel data and be efficient about it, and we have something compatible with both pipelines, assuming we’re not using things which don’t exist in both. Additional defines could be used to select which template is used (SSS, decals, etc.) and enable/disable attributes of the structure and packing routines accordingly. You’d have to wrap your assignment of those in the same checks, but that seems reasonable. You lose the ability to name things in your structures, but that honestly seems like a win to me.
     
    Last edited: Feb 26, 2020
    riveranb, A132LW, Wattosan and 40 others like this.
  2. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,666
    I would also note that it's different for every case. In one case you may want to replace material properties but leave lighting/fogging/etc. untouched. In another case you want essentially a fully working original Standard/Lit shader, but replace, say, the forward fogging code, or even just apply some fancy transformation to the final color (or final GBuffer values).
    That's why, in my opinion, surface shaders only fixed one part of the problem without making the other part easier.

    This is actually extremely similar to the chunk system I'm currently using (I wouldn't call it a pass, because this word already means too many things in gfx).

    So in the chunk system you have templates and chunks.
    Template for a forward renderer can look like this (pixel shader body only to keep it short):

    Code (CSharp):
    getAlpha(IN);

    getPerPixelWorldNormal(IN);
    getPerPixelViewDir(IN);
    getAlbedo(IN);
    getEmission(IN);
    getSpecular(IN);
    getGlossiness(IN);

    getAmbientLighting(IN);
    getDirectLighting(IN);
    getReflection(IN);

    combineColor(IN);
    addFog(IN);
    The template defines the general flow and order of execution. For a deferred renderer, for example, it can be different. Templates are high-level and must be easy to read and understand, so you don't have to jump over 30 include files to get the general idea. You should be able to implement one for a custom RP as well.

    Each function is included from a separate "chunk" file, and you can override chunks. If you only care about overriding albedo, you write an albedo chunk, leaving other parts as default:

    Code (CSharp):
    //@V_float2_TexCoord0
    //@PARAM _Color ("Color", Color) = (1,1,1,1)
    //@PARAM _MainTex ("Texture", 2D) = "white" {}

    float4 _Color;
    sampler2D _MainTex;

    void getAlbedo(in VSData IN)
    {
        pAlbedo = tex2D(_MainTex, IN.TexCoord0).rgb * _Color.rgb;   // sample with the TexCoord0 varying requested above
    }
    Here I'm using these "//@" comments as shader generator hints (same as #pragma usage in surface shaders). "//@V" means it requires a specific varying (VS->PS) parameter. "//@PARAM" adds stuff to the UI.
    Shader generator also does some minimal "smart" stuff, like removing duplicate variable declarations.

    Now if you have a chunk interface like this, you could plug it into any RP without caring about its tech details. Any forward/deferred/whatever pipeline has albedo, so it could work. Some types of chunks, however, are not universal, like the "fog" one, which won't apply in a deferred pipeline*.

    Vertex shader chunk template code is slightly trickier, because it depends on the pixel shader "demand":

    Code (CSharp):
    #ifdef V_Normal
        getWorldNormal(IN);
    #endif

    #ifdef V_Tangent
        getWorldTangentAndBinormal(IN);
    #endif

    #ifdef V_TexCoord0
        getTexCoord0(IN);
    #endif
    It is basically the same, but there are these V_ ifdefs which correspond to "//@V" mentions in PS (which are conceptually similar to your ATTRIBUTES_NEED_TEXCOORD0).

    Each vertex shader chunk can be replaced, just like the pixel shader chunk:

    Code (CSharp):
    //@ATTRIB float2 TexCoord0 : TEXCOORD0;

    void getTexCoord0(in MeshData IN)
    {
        vUV0 = IN.TexCoord0;
    }
    And they also use the "//@ATTRIB" hint that extends VS input structure.
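    The generated MeshData struct then only contains what the hints asked for; roughly something like this (the exact layout the generator emits differs, this is just the idea):

    Code (CSharp):
    // roughly what the generator emits for MeshData once the hints are collected
    struct MeshData
    {
        float4 Position  : POSITION;
        float2 TexCoord0 : TEXCOORD0;   // added by the //@ATTRIB hint above
        // ...plus whatever other //@ATTRIB hints the selected chunks declare
    };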

    For this kind of shader generation I currently have some very minimalistic UI:

    [screenshot: the minimal shader generator UI]


    Here I just set the chunks, the states and optional toggleable defines. I hit Generate and I get this:


    [screenshot: the generated shader]



    * Now that I think about it, deferred renderers can be fully chunk-driven as well: we'd just have different templates for materials and the full-screen lighting shader, but they could still accept compatible chunks (just reuse your fog chunk in the full-screen shader instead of the material).
     
    Last edited: Feb 6, 2020
  3. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Rarely am I concerned about albedo without being concerned about the other inputs to the lighting equation. The data setup and needs of, say, triplanar texturing can be shared between all components, but I guess if you can easily extend your VSData struct with new data then you can compute and pass along what's needed in later stages. The biggest issue with this is that templates are order-defining: for instance, if you were doing POM you'd want to sample your height maps before you get to albedo, so adding POM to an existing shader requires producing a new set of templates with the new ordering. In MicroSplat I do something similar to this, but it's much more function-specific, since it's layering splat maps, snow, global texturing, dynamic streams, etc., and those all have huge ordering dependencies.

    Anyway..

    For me this issue is less about extending an existing shader with a bit of custom code, and more about abstracting the parts of the code you rarely want to change. And even more important than that, it's making it future compatible. During the 5.0 -> 5.6 cycle, Unity changed the lighting system's specular response (to GGX), changed how shadows worked, added Enlighten for realtime GI, added all the different modes for VR, added new platforms, etc. Each of these caused existing vertex/fragment shaders to break, while a surface shader written in 5.0 still renders perfectly in 2019.3 today and automatically inherits all these features. That's massive, especially for my use cases, where I'm supporting users on many different versions of Unity and across multiple pipelines.

    For access to the extra interpolators in the VS->PS (or VS->Hull->DS->etc.) stages, I'd just predefine a bunch of them which you can opt into with something like #define _NEEDS_VS_TO_PS0.
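    Something like this, where the spare interpolator names and the _NEEDS_VS_TO_PS0 define are just hypothetical:

    Code (CSharp):
    // hypothetical pre-declared spare interpolators, opted into per shader
    struct Varyings
    {
        // ...the usual interpolators...
    #ifdef _NEEDS_VS_TO_PS0
        float4 custom0 : TEXCOORD7;   // pack whatever you need across VS->PS here
    #endif
    #ifdef _NEEDS_VS_TO_PS1
        float4 custom1 : TEXCOORD8;
    #endif
    };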

    The fact that we even need to have this conversation always astounds me (and Unity doesn't really seem to engage in it: once they decided Twitter wasn't the place for it and said to move it to the forums, they never showed up on the forums to talk about it). This is basically like Unity introducing a new visual scripting system and declaring that you'd have to write all your C# in a different language per platform (Swift for iOS, Java for Android, C++ for PC) because they didn't want to maintain an abstraction layer anymore. It's just silly, it could have been easily avoided since they are already doing a lot of this work in their shader graph, and after 4 years of telling them I'm beyond frustrated. This type of work is far more important than adding a shader graph, as it's the work which people license Unity to avoid doing.
     
    fherbst, grizzly, protopop and 4 others like this.
  4. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,666
    In this case what I do is:
    - template: always call the heightmap chunk before other map-reading chunks.
    - (before any PS code at all) always read varyings into temporary static variables which chunks are instructed to use.
    - the heightmap chunk can alter the variable; if it's not altered, the compiler may as well omit it (roughly as in the sketch below).
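    A sketch, using the same static p-variable convention as the chunks above (_HeightMap and _Parallax are just placeholder properties):

    Code (CSharp):
    sampler2D _HeightMap;
    float _Parallax;

    static float2 pUV0;          // filled from varyings before any chunk runs
    static float3 pViewDirTS;

    void getHeight(in VSData IN)
    {
        // the heightmap chunk shifts the shared UV, so every later chunk samples the offset value
        float h = tex2D(_HeightMap, pUV0).r;
        pUV0 += pViewDirTS.xy * (h - 0.5) * _Parallax;
    }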

    Having worked in an engine company for some time, I assume the reason is there is already way too much work to do :D
    But hopefully the priority of this issue will become more apparent.
     
  5. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Having done the same, I actually think it’s a combination of existing momentum and the lack of a vision holder who is intimate with the issue and can make the call. There is always too much to do, but every day they don’t address this they are digging a deeper hole that will be harder to get out of. Having been in numerous situations like this, very rarely has there ever been a time where correcting architecture issues as soon as possible was a mistake (only close to shipping a game, really). Compared to writing their own compiler for C#, this is a tiny piece of work, and is mostly just a text-based parser for the work they are already doing with the shader graph.
     
    Noisecrime and TheSmokingGnu like this.
  6. nsxdavid

    nsxdavid

    Joined:
    Apr 6, 2009
    Posts:
    476
    I think it's more than just "too much work to do"... sometimes it takes someone to step back and give something like this the cognitive priority to sort it all out. The whole scriptable render pipeline effort has been difficult waters to navigate, and the incompatibilities make for distinctly hard decisions for users of Unity.

    Jason (@jbooth) is definitely the person to mentally corral all of this. In more than one instance, Unity has recognized when they need to bring in outside resources to make something happen. This seems like an ideal situation for such a thing. I'd feel a lot better about the future direction of Unity if this particular part of the puzzle were more under control, like proposed here. It remains but one part, but a very important part, to get right.
     
    LooperVFX, NotaNaN, grizzly and 5 others like this.
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Unity seems to have an organizational structure that keeps them stuck in bad decisions for a long time, which they then have to patch hastily, making the whole idea crumble in the end. A whole lot of organizational inertia. I see signs of them trying to work through that, but they aren't there yet.
     
  8. nsxdavid

    nsxdavid

    Joined:
    Apr 6, 2009
    Posts:
    476
    I think it's more that they are moving in so many directions at once. Compatibility between the different streams has taken a back seat.
     
  9. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    The fact that you cannot switch between pipelines mid-project without throwing away all your shaders is a major flaw that will, IMO, heavily stunt SRP adoption for years and cause a major ruckus when Unity inevitably tries to "fix" it by suddenly removing the built-in pipeline in a hasty decision (as we've seen them do multiple times already).

    Even shader graph itself doesn't properly work across pipelines. They have their minds set on matching UE4's material graphs, but are failing to replicate its reusability (using the same material all the way from mobile to ray tracing) while at the same time throwing away the flexibility Unity's shaders and rendering had over UE4's.
     
    Noisecrime, fherbst, protopop and 2 others like this.
  10. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    I'd love to see a response from Unity on this as well. The approach outlined at the top of this thread feels like the next best thing to surface shaders and doesn't seem to require fundamental changes to what's already been done in the new pipelines. As a small team, we reaped enormous benefits from surface shaders over the years, with a ton of our custom effects surviving with zero issues all the way from Unity 5.2 to Unity 2019.2. If we had started with lower-level shaders, we would likely never have made the jump to each new Unity release, or at the very least would've had to compromise on engineering time elsewhere, missing critical milestones and opportunities.

    This level of convenience and future-proofing also has incredibly important chaining effects. We adopted the first iteration of instanced rendering back in 5.x as soon as it came out, jumped to each new Unity release that was required for updated DOTS, heavily leveraged DOTS, recently started leveraging the new terrain system, etc., and directly provided feedback on them - we wouldn't have had a chance to engage with those technologies if the migration to each subsequent Unity release involved reworking many of our shaders. I would even argue that the surface shader framework is one of the most important parts of Unity that enabled us to realize our vision and get development started - without the ease of iteration it provided, we wouldn't have been able to prototype as quickly, wouldn't have stumbled on an effective way of achieving some very specific functionality we needed, and would have likely failed to secure the future of the project.

    And of course, this isn't just about teams making games directly. I'd hate to see the Unity ecosystem become more and more sparse over the years due to lack of attention to this area. Over the course of the development, we have relied on a great number of third-party assets, including ones from @jbooth. As far as we're concerned, ease of asset development is as important as ease of game development, because third-party assets can allow small teams to punch far above their weight - be it with an amazing terrain shading (as if you had another full-time tech artist) or with great inspection and validation systems (as if you had a full-time tools developer), or with postprocessing, etc.

    I hope this gets the attention it deserves. The things that have been developed by Unity over the past couple of years are incredibly impressive, but sometimes it's hard not to be frustrated and impatient when small changes like ones proposed above could push that work so much closer to greatness.
     
    riveranb, Wattosan, Turniper and 15 others like this.
  11. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    779
    @phil_lira or @Tim-C Can you please advise whether Jason's suggestions above are too much to ask? Maybe throw us a bone in light of all the other pain we're going through with URP?
     
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I wonder if it's not time to start a community pluggable render pipeline project. There seems to be enough need for such a project to sustain itself, and we could probably salvage the current Unity RP sources to kickstart it.
     
    Gekigengar likes this.
  13. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,666
    I'm currently implementing a custom SRP, integrating it with the shader chunk system described above. I wasn't planning to make it "for everyone"; I just want it to be:
    - Mostly forward.
    - Not complicated.
    - Extremely flexible.
    - Optimized to render from multiple viewpoints in a frame.
    Basically, being able to bend the pipeline to implement any weird effect (non-linear camera projections, portals, unique lighting responses for different objects, lots of custom shadows, etc.) is important to me. The shader system needs to be extremely flexible as well, to combine various global/per-material/per-view effects without rewriting shaders.
    Maybe some day it will be useful for someone else too, but I'm not sure.
     
  14. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    I have been thinking about this. Make our own SRP, with blackjack, hookers, and surface shaders.

    I think a truly flexible SRP needs to be done using something like a render graph, so passes and dependencies are organized dynamically instead of manually. Otherwise the complexity will get out of hand due to feature permutation. Same for shader generation, like the chunk system @guycalledfrank proposed.

    There's the nasty problem of shader graph being hard-coded to URP and HDRP, tho.
     
    Last edited: Mar 2, 2020
    neoshaman likes this.
  15. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Graph-based systems are inherently limited by both UI constraints and abstraction constraints. You couldn’t write MicroSplat in any shader graph invented; you might be able to write a system that could do individual permutations of what MicroSplat can create, but it would require a lot of custom nodes.

    Regardless, an abstraction around lighting would allow shaders to be written once and work on any SRP that still uses Diffuse/Normals/etc.
     
  16. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    No, no, no. Not that kind of graph! This kind:
    https://ourmachinery.com/post/high-level-rendering-using-render-graphs/

    And this:
    https://docs.unrealengine.com/en-US/Programming/Rendering/RenderDependencyGraph/index.html

    Basically, a dependency graph built from the current frame that is used to determine which passes get rendered, in what order, and how async tasks are executed.
     
    Last edited: Mar 2, 2020
    neoshaman and guycalledfrank like this.
  17. guycalledfrank

    guycalledfrank

    Joined:
    May 13, 2013
    Posts:
    1,666
    There was recently a discussion in the graphics community about the pros and cons of render graphs, started with this post:
    https://twitter.com/longbool/status/1219438349527724032
    http://alextardif.com/RenderingAbstractionLayers.html

    I did something similar but simpler while working at PlayCanvas: a "layer" system, where each layer was basically a render pass, and they could be partially managed in UI (in a data-driven way). Similar to render graphs, it sometimes had to analyze the whole structure to know the resource dependencies, where which temporary RTs go, etc. Even though it was finished, it left me with doubts. Too much logic around users' graphs can bring more bugs/limitations. Today I'm leaning towards the SRP approach (a single scripted function where you can do ANYTHING), and I think in the case of a "flexible SRP" it'd be cool not to build a render graph system on top of it, but still allow writing simple render scripts, maybe just adding some helper include files with boilerplate to call.
     
    Last edited: Mar 2, 2020
    neoshaman likes this.
  18. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Yes, there's nothing stopping you from building dynamics into an SRP. I think SRPs are the right approach and the wrong execution, much the same as the shader graph. What should have happened is that they built a shader abstraction layer which takes input (code, properties, etc.) from the user and compiles it into the current SRP; this layer should have been part of the SRP itself. Then the shader graph outputs to this layer, as does some parser which parses a surface shader representation. That layer shouldn't care where those user functions come from; it should only care about abstracting the platform/lighting/pass system details away from the user. Instead they built that entirely into the shader graph and closed it so no one else can extend it.

    Ironically, that is exactly how surface shaders came about: Aras built a text-based version for a graph to be built on top of, then the graph was never completed. A graph could be built on top of a custom SRP.
     
  19. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
    I just decouple my render passes into separate scriptable objects and put them into a list. So I can change the pass order, add new ones, or replace an entire specific pass (i.e. the post process or shadow passes). For me it works fine.
     
    Last edited: Mar 3, 2020
    guycalledfrank and neoshaman like this.
  20. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    I agree. The idea of making it easier to fully customize your rendering pipeline is great. Having no way to re-use high-level shader logic across customizations, not great at all. SRP needs a robust shader abstraction/generation system that can take user-provided snippets of high-level shader logic (graph or written) and insert them in the correct places in the SRP, while providing implementations for functions that said snippets can use without having to know SRP-specific details.

    This way user-provided shaders could have a higher chance of working across SRPs, to the point a game could be built using different SRPs for different platforms, selected at build time, without having to maintain separate projects.
     
    weiping-toh, bac9-flcl, cxode and 2 others like this.
  21. oobartez

    oobartez

    Joined:
    Oct 12, 2016
    Posts:
    163
    In one form or another, SRP needs to support writing shaders (or fragments of shaders) in code. In their current state, URP and HDRP are unusable in any real-world project, and Unity is increasingly becoming a nice little tool for prototyping.
     
  22. oobartez

    oobartez

    Joined:
    Oct 12, 2016
    Posts:
    163
  23. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,891
    Thing is, you're supposed to pick a pipeline for each project and stick to it. Jumping between SRPs mid-development is supposed to be hard, even unfeasible. So once you've picked URP, HDRP, or written your own SRP, you can write shaders for it manually just fine, like you do for the built-in pipeline.

    The main issue is that content/asset developers that need to support all SRPs and the built-in pipeline have been thrown in shader hell. Shader Graph is just not a solution in its current state (I doubt it will ever be). So if you meant there must be a way to write shaders not just for one SRP, but shaders that are compatible across all SRPs, I wholeheartedly agree.

    Unfortunately most people expect all assets to work across all pipelines seamlessly, so jbooth's concerns and his quest for solutions remain valid.
     
    A132LW likes this.