Search Unity

  1. Welcome to the Unity Forums! Please take the time to read our Code of Conduct to familiarize yourself with the forum rules and how to post constructively.
  2. Dismiss Notice

Can Unity please try documenting changes to the shader system? This is absurd.

Discussion in 'Universal Render Pipeline' started by jbooth, Feb 24, 2020.

  1. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    I've likely already made myself known on this, but for posterity I'll answer.

    Examples:

    Examples should be both minimal and complete. For instance, the current examples for standard will show you how to do something like a basic rim light shader, but will leave out all the other crap you need to make a shader work across a real production environment. It won't have the various hooks you need for VR, shadows, etc. It's fine to have stripped down examples to show just what you're talking about, but not if the larger context isn't also made clear some how. And that larger context is what changes constantly. A complete shader that works in in Unity 5.2 with all lighting models and devices is totally different in 2019.3, and no where is this documented. At no time do I want to write a shader that doesn't support VR, doesn't work with light mapping, etc, etc. I want that stuff to Just Work. Also, I want to be able to diff the changes between URP 7.18 and 7.21 and know exactly what needs to be done to my shader template to support the new features.

    Surface Shaders

    Surface shaders are about forward compatibility and support management. If a user asks me "Hey, shadows aren't working on your shader", I basically know it's something with their project and not my shader. Why? Because all of that code is handled by the surface shader system, and if they are on Unity 5.6, or 2019.3, I know that whatever features those versions of Unity have in there are supported automatically. I don't have to chase down some minor change to the vertex function, a missing macro, or a new pass that was added for raytracing. Everything just works, and I can count on it.

    Surface shaders are about knowing that my code runs on hardware I don't have access to. This isn't always the case, of course, as the PS3 doesn't like non-square matrix's, etc. But for the most part I don't need to own 7 VR devices and 8 console platforms to test my code, and it's not even possible for me to get access to them anyway.

    Surface Shaders are about not having to understand complex and ever changing internals, spread across many different passes and Unity versions. Having them free's Unity to make radical changes to the lighting model, support new modes, new shadow types, new devices, without me having to know or understand all the stuff that comes along with them.

    Surface Shaders are about having more capability in a better workflow. I love graphs, they have a place in any production, but they hide complexity and limit your ability to write performant code. Say I want to branch around some texture samples - a real branch using Flow Control. Nope. Say I want to add some code to take data from a compute shader? Nope. Redirect some existing code with a Macro undef/def? What about those new fangled mesh shaders? Even if you add all the stuff needed, you will never catch up to what being able to type the code in directly will do. And right now, your shader graph is non-extensible anyway. Also, as an interface, it sucks for large code bases, and certain types of things just don't collapse into node graphs well.

    Surface shaders are about focusing on what I want to do with my shader, and I think this is a big area Unity can improve with them. Separating concepts like "How to I compute my albedo, normal, etc" from "How do I light that pixel". Being able to write a new template, which defines all the passes and lighting model, separate from writing a new "shader" which computes albedo/normal/etc, would allow lighting models to be plugged into existing shaders, a kind of mix and match concept which is difficult in todays surface shaders. Got a new SRP you want to try? Just define the new template and all the shaders just work. While these concepts of lighting and shading are intimately linked, they often are not in actual practice of how you write the code. The difference between URP and HDRP, for the most part, is about how you light a pixel, not what data do you need to compute for lighting.

    Shader Graph

    You will never achieve what surface shaders provided in a shader graph. So please stop trying to figure out how to do that, because it's not only impossible but an increasingly smaller return for the amount of work as you go further down that route. Development time is always limited, and you should focus the graph on things that graphs do well. In my view, all graph based languages are about domain specific problem solving. A shader graph is about being able to quickly develop shaders which tie closely to the art- custom shaders for custom stuff. They are not great at producing shader systems - there's a reason you didn't develop the Lit shader in them. Note, however, that you could develop something like the Lit shader in a surface shader just fine.

    Each of these - SG, Surface Shaders, and vertex/fragment shaders, serve a different audience and use case, from high level to low level. If you want to increase the capability of a higher level system, you allow it to be augmented by knowledge of the lower level system. For instance, if you wanted to make the current surface shader system more powerful, you'd expose the code which generates the vertex/fragment in C#, and allow it to be extended with new templates, such that new lighting model pragma's could be defined, etc. If you want to make shader graph more powerful, you open up it's API so that coders can easily write new nodes for it, modify it's templates, and extend it's capabilities. Unity has an incredibly smart and powerful user base, and increasingly Unity seems to want to lock them out- making the SG and VFX graphs non-extensible, making it hard to modify SRPs settings without using the editor (I'd like to add this custom pass, and now the user of my package has to do it all in the editor? Support request nightmare).

    Unity's power comes from it's ability to be easily extended, modified, and maintained - having the ability to download the package from GitHub and change it is great - but that's not easy to extend, modify or maintain - it's signing yourself up to own a million lines of code, and not being able to share those changes easily. Unity needs to get back to those original concepts that made it what it is today.
     
  2. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    Oh, and speaking of breaking changes between URP versions, why doesn't supplying occlusion to the UniversalFragmentPBR do anything in URP 7.2.1? It used to, AO works when I generate a shader graph shader, but doesn't in my custom one. Wish I had some documentation so I didn't have to spend the rest of the day crawling through your files to figure out what you changed, or god forbid an abstraction layer so I didn't have to worry about this kind of stuff breaking constantly.
     
  3. Le_Tai

    Le_Tai

    Joined:
    Jun 20, 2014
    Posts:
    430
    The most important thing for me is a comprehensive list of everything I need to do to make my shaders compatible with *all* of Unity features, such as single/multi pass VR, different render pipelines, paths and so on, and keep being compatible for as long as possible.

    It is important that these compatibility features are in form of code macro or function, as shader graph is:
    - unergonomic (what mgear said)
    - doesn't allow for low-level optimization
    - does not scale well with complexity

    I understand graph will improve, but as the project nature is complex, it would take way too long, if ever, to be competitive with code. I need to write the shaders now, not digging through your code base after every update for the next two years.
     
    neoshaman and arkano22 like this.
  4. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    So, unlike Unity, asset store developers have to support old versions of the shaders as well. So for something like this, what I'd personally like to see is Unity having to support a version of the shader which is forward compatible through all versions, much like we have to do. So if you have some code in the vertex stage that keeps getting changed you have something like:

    Code (CSharp):
    1. #if URP_7_1_8
    2.     v.shadowCoord = ComputeShadowCoord(i);
    3. #elif URP_7_2_1
    4.     #if defined(_NOCASCADE)
    5.         // coordinates now computed in pixel shader when cascaded shadow maps are used
    6.         v.shadowCoord = ComputeShadowCoord(i);
    7.     #endif
    8. #elif URP_8_0_0
    9. etc..
    This would encourage them to better encapsulate these changes and deal with the realities we face trying to support users across multiple pipeline versions, and I suspect very quickly they'd stop asking the question "Why do you need surface shaders" after doing this with every change.
     
    hopeful, OCASM, neoshaman and 2 others like this.
  5. Meceka

    Meceka

    Joined:
    Dec 23, 2013
    Posts:
    420
    I'm a programmer that also used surface shader system to create optimized shaders targeting mobile platforms and Nintendo Switch. I'm not a professional in shaders and I usually worked with starting with an available surface shader and modifying it to get what I need.

    Note that I couldn't yet try Shader Graph because we didn't switch to URP yet. I don't know if I can recreate these with shader graph in an optimized way considering that shader graph doesn't have a "simple lit" master node.

    I will share some examples. Most are made by modifying legacy shaders. Most use multi compiles as LOD or something.

    A specular shader that uses luminosity as the specular map instead of alpha.
    Self-illuminating shader with a 4 channel mask with 4 sliders that can control 4 illumination values to access with code.
    Lot's of planar reflection custom shaders.
    Shaders that has rim lighting or fresnel reflections which can be enabled using multi compile.
    Using vertex color as alpha (transparency) to fade out roads end.
    A shader similar to standard that utilizes blue channel of the mask texture as a heightmap.
    A detail shader that has 2 different detail maps that switch based on main textures alpha.
    Using vertex color as baked shadow/AO.
    A glass shader that keeps its reflectivity no matter how transparent it is.
    A grayscale transparency shader that uses textures red channel for reflectivity and green for transparency.
    Waterdrop on glass particle shader that refracts using blit and normal map.
    Shader with reflection and emission, in which ratio between emission/reflection is adjustable with multi compile, we used it for day/night illumination change.
    A road intersection shader that has lines painted with vertex color.
    Vertexlit rain particles.
    Fake volumetric light shafts without a texture that use vertex color to fade out.
    Chrome shader with just a normal map and no other texture slot or color.

    Now it's very hard for us to switch to URP because we will either need to use less optimized default workflows on mobile with modifying most models/textures. Or I will need to completely rewrite most of these shaders.

    It was nice to see that shaders I created 5 years ago magically continue to work after major unity updates.
     
  6. CDF

    CDF

    Joined:
    Sep 14, 2013
    Posts:
    1,274
    I'm no shader programmer or graphics wizard, but I do write code, and on my team that basically means: You do everything and make sliders, buttons and fields for others to tweak.

    So using Shader Graph in its current state I find this hard/impossible to do.
    Things that I frequently miss are:

    - Controlling tags (Blend, ZWrite, Culling, Depth, Stencil etc)
    - Passes
    - Vertex and Fragment programs
    - Preprocessor directives (#if, #ifdef etc.)

    I get some of those things don't make much sense in a graph and could become quite convoluted if implemented. Simple stuff like Tags would be great though. Something that has always bugged me about Unity shaders is if I want to just change the ZWrite or Blend of a material, I need to duplicate the entire shader and change 1 line.

    A vast majority of my issues would be solved if I could tell my shader graph to expose a Tag and allow user to change.

    So given that a lot of that stuff probably will never make it into shader graph, having some documentation on how to write new SRP shaders would be immensely helpful.
     
  7. eizenhorn

    eizenhorn

    Joined:
    Oct 17, 2016
    Posts:
    2,652
    +1
     
    arkano22 likes this.
  8. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    779
    @phil_lira so a lot of it has to do with ease of use, rather than what can x do that y can't?

    I'm installing an asset store package now... Spare a thought for the developer who has to maintain this...

    Screen Shot 2020-04-30 at 2.06.17 pm.png
     
  9. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    Notice multiple versions for 2019 URP? Yup.. Patch releases require new asset store versions now..
     
    Le_Tai likes this.
  10. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,511
    Unity's own features were and still is a mess and had been lacking since the beginning. But there was a fallback called asset store. However, with SRP and DOT still going on, the biggest advantage and life line fallback been sort of "deprecated"... which makes Unity not that attractive anymore.
     
    OCASM, Le_Tai and arkano22 like this.
  11. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,611
    I think one of the main issues with modern Unity is fragmentation (kind of funny, isn't it?). Not just for us Asset Store developers, this also affects engine users. I might be repeating what's already been said, so bear with me.

    The SRP concept is great, many of us have asked for it for a lot of time. So are DOTS, the new unified UIElements system, the new networking system, etc. They're great, elegant and flexible tools with enormous potential. It's not that we want them gone, or replaced by something else. Imho, they have all been deployed way early in their development, there's many different version combinations, with way too few examples and lackluster documentation, quite some deal-breaking bugs.... because of this, any potential benefits are overshadowed by frustration. Newcomers don't know where to start, seasoned developers don't know where to stay, it's just a mess. All this contributes to the idea that Unity is hard to use, hard to develop add-ons for, hard to maintain projects in, etc. Exactly the opposite it used to be in the non too distant past.

    The way I see it, new systems should be "ready to use" when they are ready to replace the old ones. Then, deprecate the old ones. This makes the intent clear, users know what they're supposed to be using, reduces the need to support multiple mutually-incompatible systems and mostly avoids version hell. Add clear, easy-to-find guidelines on what to use, how to use, how to extend, sample use cases, and you're golden. Currently we have none of this. We have 3 different UI subsystems, one of them only for the editor, another one that works only at runtime, and the third one works for both. What-the-heck.
     
    Last edited: Apr 30, 2020
    kexei, protopop and OCASM like this.
  12. StaggartCreations

    StaggartCreations

    Joined:
    Feb 18, 2015
    Posts:
    2,109
    I personally prefer to write shaders, rather than use Shader Graph. This is primarily because I'm targeting the asset store, and want to exert full control over what's happening under the hood. Being able to make Unity/SRP version or platform specific code ensures I can cover a wider range of use cases. The usages of keywords and macros also aids in this, in SG I can't tell how keywords are interpreted exactly. Nor am I am able to do specific things on a per-vertex basis.

    That being said, surface shaders would make it far easier to work with lighting. If you're doing a shader that's only modifying UV's, implementing all the default lighting code seems convoluted, and prone to breaking in future versions. If this were more abstract like surface shaders, all the lighting code would just get compiled based on the currently installed SRP library. Ideally, this would also allow people to create shaders that work across URP and HDRP (+ forward/deferred rendering).

    Another point would be vertex animations, which is quite complex to implement right now, as it immediately launches you into a situation where you have to implement your own version of the GetVertexPositionInputs function. Next to having to create a basic DepthOnly and ShadowCaster pass that also animate.

    I think self-documenting the shader libraries would be a good first step. The built-in renderer didn't have much of a shader documentation, besides the vert/frag examples page and that worked out fine for me personally.

    Though, so far I found writing vert/frag shaders (past the skeleton code) for the URP has been easier than for the Built-in renderer, mainly because all the lighting functions are more compact. That large chunk of GI code that used be necessary is no longer a thing! Kudos!
     
    neoshaman and arkano22 like this.
  13. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    583
    Thanks everyone for sharing your feedback. There's a lot of feedback to process here so I'm going to summarize a few action points that as I understand would alleviate most of your pain:

    1) Add documentation + bite sized shader examples. We are converting the following page and examples to URP: https://docs.unity3d.com/Manual/SL-VertexFragmentShaderExamples.html , our tech writer is working on these pages now and we are helping him with the gaps. I've also prototyped a few other examples like MatCap and Unlit + realtime shadows as this seems to be very frequent uses. We are going to share these examples, but here's a preview of unlit + shadows shader (I'll share example project soon-ish)
    https://gyazo.com/efe6386cfe551966d15e873ea45879fa

    2) Add a menu in Unity to create a new unlit URP shader. This would make easier to start writing new URP shaders, I've already made the unlit example shader, I'm working on the template + menu.

    3) Shaders need to be compatible between patch version (and minor package) of Unity. I agree. we are putting a lot of effort to enforce this now. There was in 7.2.1 a change in how shadows are resolved that caused a lot of pain for developers. This was a tough decision, it was done to fix several issues and we needed to do in a minor version of URP as it was the only way to get the fixes in LTS. We wrote an upgrade guide about it, but we have no good way to bring awareness to all users about these upgrade guides now. So, in the end it's probably better to only cause changes that would require a manual upgrade for shader in major versions even if this means not getting a backport of annoying bug to previous versions.

    4) Expose ability to write passes for ShaderGraph. This requires you to write and maintain custom hlsl shaders and it's harder as they grow in complexity / features. Action point for me is to sync with shader graph to see if we can add this to the public roadmap: https://portal.productboard.com/8ufdwj59ehtmsvxenjumxo82/tabs/7-shader-graph

    5) We need to improve shader writing for developers writing and maintaining shader
    I see a lot of good feedback and justification from this here.
    - You want to be able to write a surface function and expect it to work without having to get you head on multiple lines of shader code.
    - Same way you want to be able to write custom lighting function (for toon, non-PBR, artistic choise, or even a specific higher fidelity lighting evaluation)
    - You want to be able to just write a shader to change the render state (depth func, blending, etc)
    - You want a easier way to write multiple shader passes (outline, shadows, etc)

    I think these are all valid feedback we need to address. I've started prototyping how to do this and how to improve this situation for you. I'm writing up a "sand box" custom lighting + examples project in my personal github. This sandbox is place that I can share early and collect feedback (better than pasting gist shaders here for sure :) ).

    I'm going to share this soon-ish. But here's a preview of what a "surface" shader could look like. You include a file and write a SurfaceFunction. It's more verbose than the SurfaceShader, but with the template from 2) this could be autogenerated and you would only have to write the SurfaceFunction. I've already got a "CustomLighting" function working as well that you can do something like #define CUSTOM_LIGHTING_FUNCTION MyCustomLighting. There are a lot of things that I need to figure out yet and I need to put this with the project and examples in the github sandbox to collect better feedback. You will probably find many issues/shortcoming but at least we can evolve it together.
     
  14. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    First, thank you for engaging with this- I feel like I've been screaming into a void for 3 years.

    I don't think you're going to get this good enough without a parser of some sort, but they are a pain to write so let's see how far we could get without one.

    Some questions I would ask myself while working on something like this.

    1. How does this protect me from Unity adding new passes? HDRP just added new passes for DXR, URP is about to add deferred.
    2. How much do I have to know to write this? For instance, why are there all these keywords I have to write? Why would I want a shader that doesn't support certain shadow types? Why do I have to know about these internals unless I want to change something about them?
    3. How do I control the data passed between shader stages?
    4. How does this abstract across multiple SRPs?
    5. How does it work when Unity adds a new light mapper, new keywords, etc?

    Right now your example computes fresnel and such right in the surface function. This combines the concepts of computing inputs to the lighting equation and the lighting into the same function, and will not scale across some lighting models well. These should be separate concerns. Something like this:

    Code (CSharp):
    1. Shader "Universal Render Pipeline/Custom/Lit"
    2. {
    3.     Properties
    4.     {
    5.         [MainColor] _BaseColor("Color", Color) = (1, 1, 1,1)
    6.         [MainTexture] _BaseMap("Albedo", 2D) = "white" {}
    7.         [Normal]_NormalMap("Normal", 2D) = "bump" {}
    8.         _Metallic("Metallic", Range(0, 1)) = 1.0
    9.         _AmbientOcclusion("AmbientOcclusion", Range(0, 1)) = 1.0
    10.         _Smoothness("Smoothness", Range(0.0, 1.0)) = 0.5
    11.      
    12.         [Header(Emission)]
    13.         [HDR]_Emission("Emission Color", Color) = (0,0,0,1)
    14.     }
    15.  
    16.     SubShader
    17.     {
    18.         Tags{"RenderPipeline" = "Shared" "IgnoreProjector" = "True"}
    19.  
    20.         Pass
    21.         {
    22.             Tags{"LightMode" = "Lit"}
    23.  
    24.             HLSLPROGRAM
    25.  
    26.             #pragma vertex LightingVertex
    27.             #pragma fragment LightingFragment
    28.  
    29.             #include "SurfaceShading.hlsl"
    30.  
    31.             // -------------------------------------
    32.             // Material variables. They need to be declared in UnityPerMaterial
    33.             // to be able to be cached by SRP Batcher
    34.             CBUFFER_START(UnityPerMaterial)
    35.             float4 _BaseMap_ST;
    36.             half4 _BaseColor;
    37.             half _Metallic;
    38.             half _AmbientOcclusion;
    39.             half _Smoothness;
    40.             half4 _Emission;
    41.             CBUFFER_END
    42.  
    43.             // -------------------------------------
    44.             // Textures are declared in global scope
    45.             TEXTURE2D(_BaseMap); SAMPLER(sampler_BaseMap);
    46.             TEXTURE2D(_NormalMap); SAMPLER(sampler_NormalMap);
    47.  
    48.          
    49.  
    50.             void SurfaceFunction(Varyings IN, out LightingInputs li)
    51.             {
    52.                 float2 uv = TRANSFORM_TEX(IN.uv, _BaseMap);
    53.              
    54.                 li.Albedo = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, IN.uv) * _BaseColor;
    55.                 li.NormalTS = UnpackNormal(SAMPLE_TEXTURE2D(_NormalMap, sampler_NormalMap, IN.uv);
    56.                 li.Metallic = _Metallic;
    57.                 li.AO = _AmbientOcclusion;
    58.                 li.Smoothness = _Smoothness;
    59.                 li.Emission = _Emission.tgb;
    60.                 li.Alpha = 1;
    61.  
    62.             }
    63.             ENDHLSL
    64.         }
    65.     }
    66.  
    67.  
    68. }
    So this is pretty close to a surface shader. Note that I've targeted "Shared" for render pipeline and "Lit" here, which means I don't care what SRP I'm in, just use the standard template of that pipeline to do my lighting. We'd obviously need the template for this to be in both pipelines for this to work, etc.

    Now taking it a bit closer to your example, where we are targeting a specific pipeline and custom lighting function:

    Code (CSharp):
    1. Shader "Universal Render Pipeline/Custom/Lit"
    2. {
    3.     Properties
    4.     {
    5.         [MainColor] _BaseColor("Color", Color) = (1, 1, 1,1)
    6.         [MainTexture] _BaseMap("Albedo", 2D) = "white" {}
    7.         [Normal]_NormalMap("Normal", 2D) = "bump" {}
    8.  
    9.  
    10.         _Metallic("Metallic", Range(0, 1)) = 1.0
    11.         _AmbientOcclusion("AmbientOcclusion", Range(0, 1)) = 1.0
    12.         _DieletricF0("Dieletric F0", Range(0.0, 0.16)) = 0.04
    13.         _Smoothness("Smoothness", Range(0.0, 1.0)) = 0.5
    14.      
    15.         [Header(Emission)]
    16.         [HDR]_Emission("Emission Color", Color) = (0,0,0,1)
    17.     }
    18.  
    19.     SubShader
    20.     {
    21.         Tags{"RenderPipeline" = "UniversalRenderPipeline" "IgnoreProjector" = "True"}
    22.  
    23.         Pass
    24.         {
    25.             Tags{"LightMode" = "UniversalForward"}
    26.  
    27.             HLSLPROGRAM
    28.  
    29.  
    30.             #pragma vertex CustomLightingVertex
    31.             #pragma fragment CustomLightingFragment
    32.  
    33.             #include "CustomShading.hlsl"
    34.  
    35.             #define CUSTOM_LIGHTING_FUNCTION MyCustomLightingFunction
    36.  
    37.             // -------------------------------------
    38.             // Material variables. They need to be declared in UnityPerMaterial
    39.             // to be able to be cached by SRP Batcher
    40.             CBUFFER_START(UnityPerMaterial)
    41.             float4 _BaseMap_ST;
    42.             half4 _BaseColor;
    43.             half _Metallic;
    44.             half _AmbientOcclusion;
    45.             half _DieletricF0;
    46.             half _Smoothness;
    47.             half4 _Emission;
    48.             CBUFFER_END
    49.  
    50.             // -------------------------------------
    51.             // Textures are declared in global scope
    52.             TEXTURE2D(_BaseMap); SAMPLER(sampler_BaseMap);
    53.             TEXTURE2D(_NormalMap); SAMPLER(sampler_NormalMap);
    54.  
    55.          
    56.  
    57.             void SurfaceFunction(Varyings IN, out LightingInputs li)
    58.             {
    59.                 float2 uv = TRANSFORM_TEX(IN.uv, _BaseMap);
    60.              
    61.                 li.Albedo = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, IN.uv) * _BaseColor;
    62.                 li.NormalTS = UnpackNormal(SAMPLE_TEXTURE2D(_NormalMap, sampler_NormalMap, IN.uv);
    63.                 li.Metallic = _Metallic;
    64.                 li.AO = _AmbientOcclusion;
    65.                 li.Smoothness = _Smoothness;
    66.                 li.Emission = _Emission.tgb;
    67.                 li.Alpha = 1;
    68.  
    69.             }
    70.  
    71.             void MyCustomLightingFunction(Varyings IN, LightingInputs li, out SurfaceData s)
    72.             {
    73.                 surfaceData.diffuse = ComputeDiffuseColor(li.Albedo, li.Metallic);
    74.  
    75.                 // f0 is reflectance at normal incidence. we store f0 in baseColor for metals.
    76.                 // for dieletrics f0 is monochromatic and stored in dieletricF0.              
    77.                 surfaceData.f0 = ComputeFresnel0(li.Albdeo, li.Metallic, _DieletricF0);
    78.                 surfaceData.ao = li.AO;
    79.                 surfaceData.roughness = PerceptualSmoothnessToRoughness(li.Smoothness);
    80.  
    81.                 surfaceData.normalWS = GetPerPixelNormal(li.NormalTS);
    82.  
    83.                 surfaceData.emission = li.Emission;
    84.                 surfaceData.alpha = li.Alpha;
    85.             }
    86.  
    87.             ENDHLSL
    88.         }
    89.     }
    90. }
    Note that neither of these include #defines for things like light mapping, etc, or information about passes. Those should just work, and I should only care about breaking them, not keeping them working. It's quite likely that unity will add, remove, or change these things in the future, so if I don't want my shader to work with Lightmapping, I should have a way to turn it off - rather than having to include it in every shader.

    The basic concept of redirecting more functions into the top level shader is solid, such that I can compute as much or as little as I need of the lighting model. Ideally the language would just support override functions, which would make this a whole lot easier, but macro redirection can work.

    Where this will get squirly is passing data between stages, across tessellation pipelines, etc, and that's pretty important.

    Also note, I'm only really focusing on the top example right now. In my view, the bottom example is where all the maintenance goes, and the more we can push overrides up to the top example, and the more the bottom example is maintained by people writing SRPs, the better.
     
    Last edited: Apr 30, 2020
  15. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    One thing to note, that by giving this it's own extension and writing a CustomAssetImporter for it, we could get a lot of benefits of having a parser without having to write a real lex/yacc or AST kind of thing.

    For instance, if the MyCustomLightingFunction above was in a cginc file, I could just use an include to suck it in on our shader. But with a CustomAssetImporter, it could actually be a setting you could set on the .sshader file, so a user could switch lighting models on a shader that wasn't even written for a specific lighting model.

    Further, simple comment based search and replace functionality in the template could go a long way to making the system easily extensible with new hooks, allowing for code injection into the template code rather than always having to bubble things up. Care would need to be maintained if you use any of that in the users code, but the template I feel can be more picky about it's construction.
     
    OCASM, phil_lira and neoshaman like this.
  16. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    176
    I want to provide suggestion to this problem, I know maybe it is a bit late now, but still, I want to write a new solution here.
    ---------------------------------------------------------------------------------
    First, some definition for ease of reading.
    Whenever I say surface properties, I mean:
    -albedo
    -emission
    -normal
    -position
    -AO
    -metallic
    -smoothness
    .etc

    things that are common between URP/HDRP
    ---------------------------------------------------------------------------------
    I will make the following as short as possible, but it will be still a very long read.
    ---------------------------------------------------------------------------------
    If I am a unity staff, and I need to improve the shader system for SRP now, I will do the following decision.

    Compare to just .shader, I will add 2 more new file types:
    (1) .surfaceShader (usually just a short text file defining surface properties only, imagine this is exactly a text version of any lit shader graph)
    (2) .templateShader (usually a long and complete vert/frag shader file for one RP, except surface properties are not yet defined, just like an abstract class in C#)

    unity will detect all (1) * (2) pairs in your project, and produce usable shader behind the scene which user can pick from material's shader drop table, just like a regular .shader.

    while regular .shader will not be affected, they will work the same as now, no change at all.

    (1) .surfaceShader
    What is the role of a .surfaceShader file?
    from a .surfaceShader's perspective:
    "Hey, I am a .surfaceShader file, I really don't care/understand how lighting or shadow is done, and I don't even care about if you are using URP/HDRP or customRP, I only define how surface properties are, and that's it, no more shader lines from me. If you want to make uv+vertex animation or mixing textures creatively for albedo/normal/smoothness....., do it here!"
    ---------------------------------------------------------------------------------
    (2) .templateShader
    What is the role of a .templateShader file?
    from a .templateShader's perspective:
    "Hey, I am a .templateShader file, I really don't care what surface properties are, just give the final surface properties value to me, and I will do the rest for one RP. Imagine I am a real and complete vertex frag shader, except all surface properties are not yet defined, just like an abstract class in C#"

    In URP, unity's staff will develop 2 official .templateShader for their users, they are:
    -Unlit .templateShader
    -PBR .templateShader

    so, if a user created a completely new URP project in UnityHub, and then created only 1 tri-planar-mapping .surfaceShader in the project,

    SurfaceShader"MyShaders/MyTriplanarMapping"
    {
    .....
    }

    now in any material's shader drop table, the user can already see 2 usable shaders provided by Unity automatically:
    -MyShaders/MyTriplanarMapping(official Unlit)
    -MyShaders/MyTriplanarMapping(official PBR)

    And of course, user can read these .templateShader and write their own .templateShader!

    SRP will always detect every .surfaceShader and .templateShader in your project,
    if you have 100 .surfaceShader and 3 .templateShader.
    SRP will auto combine every pair and produce 100x3 = 300 usable shaders behind the scene, these shaders will appear in Material's shader drop table.

    If the user created
    -1 tri-planar-mapping .surfaceShader
    -1 custom toon lit .templateShader
    in a new URP project,
    Unity will provide 3 usable shaders automatically in any material's shader drop table:
    -MyShaders/MyTriplanarMapping(official Unlit)
    -MyShaders/MyTriplanarMapping(official PBR)
    -MyShaders/MyTriplanarMapping(my custom Toon Lit)

    If a user wrote a good enough custom toon lit .templateShader for URP, he can go sell it on the AssetStore (yes, just sell 1 file only). Other people using URP can buy that single .templateShader and let Unity auto combine it with their own .surfaceShader which already exists in their project.

    -BigCompanyA/OldSurfaceShader00001(new custom Toon Lit from asset store)
    ....
    -IndieCompanyB/OldSurfaceShader001(new custom Toon Lit from asse tstore)
    ....
    -Anyone/AnySurfaceShader(any .template shader)
    ---------------------------------------------------------------------------------
    That's the whole auto .surfaceShader * .templateShader pairs design.
    ---------------------------------------------------------------------------------
    How will this change the asset store environment?
    -Imagine you have thousands of working .surfaceShader in your project already, but suddenly your boss doesn't like the looks of official PBR .templateShader?
    Go buy more .templateShader from the asset store, try them quickly, see how they work when auto combining with your own .surfaceShader.

    -Imagine you are very good at writing .templateShader, but you want much more creative .surfaceShader for your project's case by case needs?
    Go buy more .surfaceShader from the asset store, try them quickly, see how they work when auto combining with your own .templateShader

    -Imagine you don't know anything about writing shaders, but your boss tells you to produce a TriplanarMapping shader with toon lit and outline, and there is no such thing in the asset store?
    -Go buy all
    "TriplanarMapping .surfaceShader" and
    "Toon Lit with outline .templateShader"
    from the asset store, let Unity auto combine every pair, see if any combination looks good enough for your use case.

    Expect to see asset store's shader transaction increase a lot, because now everyone don't need an exact solution in the asset store, instead, you just buy missing parts of the shader, and combine with the others parts thats you have already in your project.
    ============================================================================
    Question from you:
    "There are 2 new types of files, it is very complex compare to just a .shader, what is the benefit behind it?"

    Answer:

    (1) decoupling surface properties and lighting
    surface properties and lighting are not related in most cases. If they are separated in SRP, then:

    -tech artists(or even artists using shader graph) can now focus only on .surfaceShader for case by case project needs, which is what they are good at, high level creative case by case visual stuff. They should not need to worry any low-level stuff, they should not be affected by any URP upgrade.

    -graphics programmer can now focus only on their .templateShader, which is what they are good at, low level performance/custom BRDF/platform compatibility stuff.

    because surface properties and lighting are now separated, now tech artists and graphics programmers can both work separately(even in a different company), solving problems that they are good at, and without worrying each other's job!

    (2)mix and match
    Imagine you have 100 .surfaceShader in your hand, by combining with URP's official Unlit .templateShader and PBR .templateShader, now you have 100 x 2 = 200 ready to use shaders!

    and if you buy 3 more .templateShader from the asset store, now you have 100 x (2+3) = 500 ready to use shaders!

    Try different .templateShader from the asset store, there usually exist 1 that suits your need.

    (3)free shader developer from upgrade/platform hell
    I think I don't need to explain this, surface shader is the only solution. No asset store developer wants to fix their shader for every URP version upgrade * every platform * every Unity version. They just want to write shader ONCE that "Just Works" in the future.
    To support SRP in the asset store, if the profit is not big enough to cover the time cost spent, there is no reason to sell shader on the asset store for SRP at all. Some of my shader friends just quit assset store when SRP enters asset store environment, their comment is "I give up, things changing too fast, it works now and I am sure it will not after 2 months. The customer support effort is 10x for SRP, but profit is not. Quit asset store and do freelance is just better for mental health"

    ====================================================
    Extra comment:
    Shader graph is going to a strange direction
    IMO Shader graph should always only generate .surfaceShader, it should be just a visual tool to make creation of .surfaceShader easier/faster/more artist-friendly. It is really not the right tool to make something similar to a .templateShader, and to be honest, it is just not possible at all using node. Unity staff write their URP default lit shader 100% in code but not shader graph, there is a reason behind it.
    ====================================================
    I believe If unity did the above design when they started SRP, unity will bring asset store back to life for SRP, people can sell .surfaceShader and .templateShader separately and no shader developer will suffer. Artist(shader graph)/Tech Artist(shader graph/surface shader)/graphics programmer(template shader) can now do their own job, everyone is happy.

    And not just Unity user are happy, Unity's staff are happy also, because most of the user will write surface shader or produce surface shader using shader graph, only graphics programmer will still want to write .templateShader or raw vert/frag .shader, now Unity can do whatever change they want to SRP/URP/HDRP, only limited people writing .templateShader or .shader will suffer from each version upgrade, while all .surfaceShader will not be affected ,which greatly reduces the URP version upgrade impact to a minimum.
    ======================================================
    The above suggestion is highly subjective, I am sure not everyone will agree with this design, but I still need to explain my exact thought here, because I will use SRP for years in the future, I want it to be as perfect as possible.

    To sum up (TLDR):
    by providing 2 extra file types:
    - .surfaceShader
    - .templateShader

    everyone(both unity user, and unity staff) will be happier when developing shaders.

    Thanks for your time reading the whole thing.
    ======================================================

    SRP has so much potential that it can easily destroy UE4 in terms of rendering flexibility and customization possibility multiply with asset store power, I was so excited about this in the past.
    I remember when SRP just came out, the official presentation always sells this idea
    "go write your own SRP, sell it on the asset store, there will be lots of SRP in the asset store for everyone to try out!"
    In the end, it failed in terms of the asset store part. But it is actually a very difficult task for any game engine, no one can blame Unity for this.
    The "scriptable" part is very good already, but the "asset store" part is just not there, due to many different reasons, shaders are only a tiny part of the reason.

    but if we reduce the scope, if we now just focus on the shaders only, just by providing an extra official abstraction layer (auto .surfaceShader + .templateShader), maybe there is still some chance to save the "shader part" of the current asset store situation?
    It may be late already, but I am sure it is not too late!
     
    Last edited: May 1, 2020
  17. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    So, some additional thought I have about the approaches outlined above.

    - Unity should make templates for the old SRPs versions as well as standard pipeline. Normally I'd say start where we are and more forward, but the types of changes we've seen from standard to LWRP to URP and to HDRP represent the types of changes we will likely see in the future. If an abstraction layer is written which works across those changes (Standard to SRPs, LWRP->URP, VR added, Raytracing added, massive structural changes), then it's likely it's going to provide a decent abstraction for the future.
    - HDRP/URP compatibility is super important. Both teams need to buy into this, and realize that keeping the templates updated is a crucial part of the job.
    - a method for handling optional functions and data should be developed if lighting functions are going to exist outside of the template file. For instance, if I'm in URP and want to adjust the fresnel, like in the example above, then what if that depends on some custom data in my surface function? In surface shaders you could define a custom struct with whatever you needed and pass it there. Further, properties and cbuffers might have to be injected so the fresnel value in the original example can be included, otherwise the main shader would have to provide those.
    - data should be passed from encapsulated template functions as inout. As an example, to blend things with the terrain I need to edit the TBN matrix used for lighting (or I guess just the N if we are going surface gradient?), so that needs to be passed in an editable manner. Internal functions should use inout, and thus if I declare the input on my function inout, it should be something I can edit and makes its way back to the lighting function. Right now, the packing functions unpack the data into local variables and pass it to us, which prevents us from changing this data in a surface function.
    - As little as possible of the internals should be exposed to the user, except when they want to take control over that area of the shader. That's why in my rewrite I removed the fresnel, conversion of albedo into base color, etc, from the surface function. Those are SRP specific functions, and do not belong in the surface shader part of the code.

    Note that I'm happy to spend time working on this- as it will eventually save me time. MicroSplat uses a ton of the surface shader system, and already has a system that makes cross compiling its code to HDRP/URP possible. Writing one adapter which makes those obsolete, and outputs this new format, would be a good test.
     
    Last edited: May 1, 2020
    hopeful, protopop, OCASM and 5 others like this.
  18. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    583
    Thanks @jbooth and @colin299 that's very useful. There's a lot of feedback here to process.

    Quick clarification note: this discussion is not intended to design a system to replace ShaderGraph.
    ShaderGraph is an amazing tool. A node based editor shader tool brings a lot of flexibility, it suits a good slice of developers/tech art. Some of the things that you mentioned as missing are being worked on and ShaderGraph has a public roadmap that you can submit ideas and improvements.
    If in the beginning of SRP ShaderGraph was at the stage it is today I'd have written SimpleLit and Lit in ShaderGraph and this would have saved me so much maintanance time :p (also I would need terrain and particle targets which are not yet supported)

    Now, I guess this discussion is important because even with the most feature completes shader node based tool there are some people that still prefer to write hlsl shader, depending on the shader sometimes it's faster/easier, it's easier to debug, diff etc.

    Now back to feedback:

    I like your suggestion of opt-out for things other than opt-in. It's not yet clear if all passes should be opt-out I guess we need to look at several scenarios, but in general opt-out will hide several implementation specific details and stay backward compatible upon adding new passes/keywords.

    I'm interested in hearing some use cases? If we allow to define custom interpolator data on top of some predefined Varyings solving some of your use cases?

    It depends on the solution we take. It will require alignment/parity between all shader data / interface to make this work.

    In my example above, one of the things that are input to a lighting function is the LightingData struct. This contains environment lighting (irradiance) and environment reflection (pre-filtered reflections). This way the internals if we are sampling environment lighting from SH, or lightmap, or probe volume or what kind of mixed light type is, it's abstracted.

    Now with the environment lighting you can choose the lighting model you want to apply, if you want to apply AO and so on.

    Maybe there's a confusion of what f0 here is. This is not lighting computation, but physical properties of a material. These values are measured and there are guides/databases with different values for metals and non-metals. I used the term f0 as it's common in PBR literature and other engines (https://google.github.io/filament/Filament.html#materialsystem/parameterization/standardparameters) but we could replace that with reflectance/specular or something similar to be more generic.

    Having diffuse + reflectance/f0/specular is much more generic and cross pipeline than Albedo. Albedo in practice causes much more confusion and don't map to what the real thing is. The material "pack" things into data and the surface function should "unpack" them. The surface function should populate things that are workflow agnostic. (otherwise you will need to define a SurfaceOutput struct for each workflow (metallic and specular) like SurfaceShaders do).

    I could have a shader that store diffuse and reflectance textures. But due to performance and knowing that non-metals have black diffuse I can choose to pack with a single texture (baseColor) and then get the diffuse and specular from it, this is "unpacking" and I should do in the surface function as it's specific to how I pack my material data.

    As for PerceptualSmoothness vs Smoothness, same thing. We don't store real smoothness or roughness in the material. In theory, we could "unpack" this implicitly, but from my experience it's better to do things explicitly and explain what things mean.

    I like these points. They are a good summary of what we should aim for. These points are solved with ShaderGraph today, but still open problems for custom made shaders. I guess point 3 might be the most painful feedback here.

    There are a lot of things to consider here. (SRP batcher requires same material cbuffer for all passes, multiple types of shaders (terrain, particles, 3D, 2D Lit, UI, PostProcessing), etc.
    I'd like to see first how far it could be pushed for a solution that doesn't require shader generation and still it's a good fit for points above. The shader generation / template proposition would require much more investment and cross team commitment to support + I personally dislike if we have two competing shader generation systems.
     
    LordSwaggelore and valarus like this.
  19. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
    >ShaderGraph is an amazing tool
    Nope. SG is just tool to stuck users within unity pipes(URP/HDRP). On one team i was refused to do custom SPP cause "our artist can not do green wobbly lines for healing effect on player without Shader Graph". In my opinion, SG is kill any opportunity for custom SRP's.
     
    hopeful, NotaNaN, nxrighthere and 2 others like this.
  20. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    In MicroSplat's vertex painting workflow and Digger integration, weights are packed into the UV channels (4 weights in color.R, 4 weights in color.G, etc), and then these are unpacked in the vertex shader and passed as 28 weights to be used in the pixel shader rather than sampling a texture for splat maps. In MegaSplat, barycentric coordinates are stored in the color channels, with 2 texture indexes and a blend weight. These are sent to the pixel shader as is, and the barycentric coordinates are used to un-interpolate the texture indexes so I can recover the 6 texture indexes for the triangle and blend them.

    I could imagine simply padding out the structure to it's maximum size, and having #defines that 'expose' it for use. So you have something like:

    Code (CSharp):
    1. strict v2f
    2. {
    3.    float3 position : SV_POSITION
    4.    ...
    5.    #if _CUSTOMV2F0
    6.    float4 userData0 : TEXCOORD7;
    7.    #endif
    8. }
    These would need to be correctly brought across all the stages, and if I want to do something custom in the domain stage or something I can. I guess a thing here is that there are, occasionally, cases where you want to bring this data across some stages but not all, but defining that for each stage seems a little too much, IMO.


    Well, within reason. HDRP can have an SSS input that URP simply ignores, and compiles out, for instance. But honestly having all the code in the surface function just work for either is a massive win, and if there are specific cases I don't see a problem with #if HDRP. And with some of these additional inputs, I suspect URP will eventually get versions of them, even if the implementations are different.

    Ok, I can get by this assuming these values map across all SRPs. There was a bit too much magic in surface shaders, like what space is viewDir going to be in, that I wouldn't want to repeat. I would, however, suggest a single packing function for all of this stuff which returns the structure you need, which just calls all of these functions. Something like:

    Code (CSharp):
    1. return GetLightingDataSurfaceDescription(s);
    Where s is your standard structure of basecolor, normal, etc. If the user wants to unroll that function and do it themselves, they can, but anyone familiar with a shader graph will be familiar with the simplified model and only has to add one function call. The other option being splitting it into a separate function, of course.


    Well ideally there would only be one generation system and the shader graph would have output code that was converted by that system. But by not making that choice early, and trying to fix the problem at the end, you're likely going to end up maintaining two separate systems of templates. In the long run this is going to cost you far more than if you had the right abstraction and your SG writing out to it instead. If I was in charge of the team, I'd be making that argument right now, building out this abstraction layer, and then having the SG write to that, in the interest of long term maintenance being easier. I'd have made the same argument between URP/HDRP, as now you're having to wrangle two teams to make things similar, and manage two sets of templates (which will soon be 4). Simply trying to bandaid this problem with a new system will only allow it to grow more from here and cost more in the long run, increase bugs, increase compatibility issues, etc. I can only imagine that getting buy in on a change of direction like that is increasingly difficult as time goes by though, and thus the cost of the original sin gets greater and greater.

    The nice thing about code is that you don't really have to worry about if the shader is for, say, terrain or not. Surface Shaders don't have different semantics for terrain, where as in a SG you have to add custom support for each of these things. Not to mention doing terrain in a graph is a crime against performance- so much room for optimization on terrain shaders, and graphs hide most of the tools needed there.

    I think the number 1 thing you are going to run into without a parser is the desire to combine Properties and CBuffer entries from multiple sources. However, you don't need much of a parser to do that, and I wager that a simple ScriptableAssetImporter can solve most of those needs. The SG template system is super granular, because it's converting settings in a file to individual lines of code (tags, etc), where as this system can just pull that all in whole sale.
     
  21. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    583
    Ok. To see if understood, the gist of your feedback/proposition is to unify shader authoring system and use a common generation backend. The generation consumes 2 files and generate the final .shader file.

    .template file is somewhat similar to what shader graph has today (describe passes, shader stages, keywords, etc).
    .surface (surface + custom lighting + vertex modification?) file can be generated by shader graph as intermediate step or written in hlsl manually.
     
  22. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    176
    Yes, it is! I love it; I use it for VFX a lot in URP. Shader graph = quick and easy development, and you don't need to worry about URP version upgrades; it will always work in the future. Really good job!

    But to be honest, it is only good if the currently available master nodes already suit your needs.
    If you want something just a bit different (say you want a toon/cel-lit master node), or if you are working on a custom RP, shader graph is a dead tool, because you can't create your own master node = you can't use shader graph to do anything for that task.

    I understand those .surfaceShader + .templateShader ideas above are almost unrealistic now, because it is just too late.
    And you don't want two competing systems; I totally understand that.
    So let's set them aside and discuss what important changes must be made based on the current system, without going too far.

    I feel like you really want shader graph to be the main tool of shader development, so if I think from your perspective,
    there is one thing that would immediately make shader graph great for everyone's project: open up shader graph's master node API and let people create their own master nodes.
    If someone wants to create a toon-lit master node which renders cel shading and an outline, let them write it.
    Even if there are breaking changes to the master node API in the future, as long as the shader graph assets themselves (shader graphs that are using that custom master node) don't break, it is fine.

    I understand Unity decided to close that custom master node API out of worry that the tech debt would grow too much, so I waited, but it feels like it will not open again even in 2022.
     
    Last edited: May 1, 2020
  23. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    176
    The .template file IMO can only be long vert/frag shader code (the base/outline/shadowcaster/depthonly/raytracing/deferred... passes all belong to this file; you can't do it purely in a graph). See ASE's shader template solution; the idea is really the same.
    A .template file is almost a complete vert/frag shader, except it leaves the surface property logic to the .surface file / shader graph, just like an abstract class in C# (see the sketch below).
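
    In C# terms, the analogy would be something like this (illustrative only):

    Code (CSharp):
    // The template is the abstract base: it owns all the passes.
    public abstract class LitTemplate
    {
        public void ShadowCasterPass() { /* owned by the template */ }
        public void DepthOnlyPass()    { /* owned by the template */ }
        public void ForwardPass()      { /* calls DescribeSurface() */ }

        // The one hole the .surface file (or a graph) has to fill in.
        protected abstract void DescribeSurface();
    }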

    The .surface file is usually what a lit shader graph is today (just the graph itself), but in code. .surface only defines what the surface properties are, and nothing more. Most of the use cases are just vertex UV animation or blending textures creatively, case by case.
    ---------------------------------------------------------
    To me, I only care about URP, and I am not selling shaders on the asset store, so simply a shader graph with a custom master node is enough for all my needs already.

    But for @jbooth, a surface shader is a must, and this surface shader must be safe (not breaking) in the future, and must work in any RP. The design difficulty for his needs is way higher than for mine.
     
    Last edited: May 1, 2020
  24. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    176
    What I wrote here

    https://forum.unity.com/threads/can...tem-this-is-absurd.834742/page-2#post-5787781

    is the "ideal shader development workflow in SRP", which I believe this change can bring the post-SRP asset store back to life, and destroy UE4 in terms of shader development workflow by multiplying the "mix and match" possibility from the asset store.

    If the work to do this is too big for your team / the direction is not what you want, of course just forget it and go for other "more practical" solutions; we all know time is limited, and we must pick the practical path.
     
    OCASM, MP-ul and MadeFromPolygons like this.
  25. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    That won't do a single thing to help my case, at all. It would be impossible to develop something like MicroSplat in a shader graph: even one variant of the shaders it creates could not be built in any shader graph currently available in any engine I know of. The best you could do is augment the shader graph with a few dozen nodes that hide massive complexity from the user, and internally those nodes would be little code generators (i.e. they can't be done in the graph themselves and need a C# backing), and even then you'd miss out on certain features and whole-shader optimizations because of the node design, while incurring brutal complexity and graph management.

    Shader graphs, like all abstractions, make trade-offs for workflow, paid for with performance and scale limitations. You are also paying a massive tooling cost to replace typing with nodes, and paying that cost with every new type of shader or feature you add, because they require abstracting into nodes. Those trade-offs make a ton of sense when you are designing a custom shader for a unique asset and empowering artists; they don't make sense when you are designing large shader systems and empowering programmers.

    That said, I'd prefer we move this to a thread of its own, about designing a new surface shader system - not about improving shader graph, or the merits of shader graph vs. code. They each serve a place, have their own audiences and use cases, and Unity is weakened if it doesn't have one of them. However, I fear that will also mean the end of the conversation, as has happened so many times in the past.
     
  26. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    176
    I believe this time Unity should listen to @jbooth. Maybe open a new official thread,
    title = "We are considering supporting a new unified surface shader for all RPs. What are your thoughts?",
    and invite some asset store shader developers to share their "supporting SRPs" experience inside this thread.
    We should be able to find the common needs of different developers, which is a good start for designing this unified surface shader.
     
    Rich_A, protopop, OCASM and 3 others like this.
  27. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    That thread was already created and abandoned once ;)

    But if Phil is really going to be moving forward with this, then it's really about what's most useful for him.

    Personally, once there is something working at all, I'd be happy to start writing a MicroSplat adapter and basically make changes or proposals for the features needed in this layer to make everything work. I've written a shader graph and three shader generation systems at this point, as well as a layer to compile my surface shader code into LWRP/URP and HDRP, so I'm reasonably well versed in what has to be done. While there are some technical challenges here, I think the majority of the challenge is getting both SRP teams bought in on having this layer and supporting it, and covering all the edge cases that only become clear when you have solid use cases early in the process. I have no doubt we could round up a few other people to represent use cases which MicroSplat doesn't really stress, such as new lighting models. And I'm sure Amplify would want to write to this layer as well, and would bring a huge load of potential use cases with them.
     
  28. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    779
    Sounds like this would be the best of both worlds, with options to satisfy both devs and end users.

    Time to assemble a crack team of interested devs and take this conversation away from public forum threads so it doesn’t get side-tracked!

    What’s needed now is for @phil_lira, @JasonBooth and a few others (Amplify of course) to come up with a quick prototype to get buy-in from the SRP teams. Don’t involve the rest of us mortals until you’ve got something officially happening, I say.
     
  29. TheOtherMonarch

    TheOtherMonarch

    Joined:
    Jul 28, 2012
    Posts:
    782
    I would like to see more alpha blending modes for the decal graph.
     
    Last edited: May 3, 2020
  30. tatoforever

    tatoforever

    Joined:
    Apr 16, 2009
    Posts:
    4,323
    I simply refuse to create shaders in ShaderGraph or any other visual tool. The shader graph is a good tool for artists.

    The benefits of custom-written CG/HLSL or surface shaders are currently impossible with shader graphs, not to mention that my assets cannot be re-created with ShaderGraph. Look at some of the monstrosities created with the Material Editor in Unreal Engine; they can easily be avoided by writing shaders in text. Artists don't mind having blueprints from hell; I do. But I don't want to get into a flamewar of visual vs. text, just stating my POV/preferences/needs.
    Now, if you ask me, the ideal world here is:
    - A similar workflow to the built-in renderer for shader creation: we need to be able to create HLSL shaders with the help of SRP core macros, where the final result will be a sort of surface shader (or a CG + built-in macros shader). Each default renderer (HDRP / URP) should provide macros/inlines to help create shaders faster, like we do with the built-in renderer.
    - The shader graph should be able to create a text shader that can be further optimized by hand (similar to what Amplify Shader does; it literally creates a surface shader with all the node states commented on one line of code).
    - And of course, with the benefit of still being able to mod/change/create SRP renderers.

    Doing so, you will support new technical artists willing to create shaders, your current user base of asset store developers, and engine users.

    PS: It's never too late to do the right thing.
     
    hopeful, NotaNaN, tyrot and 2 others like this.
  31. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    You can right-click on the master node and have it output the actual shader code already. The code it writes is the usual gobbledygook you get from a graph, but it will spit out a "template" of a shader for you. The problem is that this template is both too complex and too simple to use as a template.

    Here's what I mean by that: the code it outputs has a ton of abstractions, #define tricks, and #included structures and code. So if you need to modify anything at that level, you have to pull the includes in and modify them as well. For instance, the raw vert and frag functions and the appdata/v2f structures are hidden in includes, and much of the data you do get has been packed into structures and passed to functions at this level, making it effectively read-only state. Meanwhile, the code is littered with tons of packing and unpacking functions which effectively abstract the low-level code from the high-level code. So modifying anything is difficult, and requires modifying a lot of framework in the actual shader it outputs, or modifying the framework it relies on via include files.

    I kind of view it as a broken abstraction, since half of the high-level shader is trying to abstract the low level, and half of the low-level shader is trying to abstract things from the high level. Neither side wins. But that's fine for a shader graph, which is just generating code no one looks at anyway.

    A real template shader would look a lot different. It wouldn't pack data up into neat structures to pass to a "SurfaceDescription" function; instead it would write a vertex/fragment function for each stage that shows how data is passed between them, with those structures inlined for modification.
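
    Something like this, say (a sketch: the Transform* helpers are SRP core functions, while SurfaceFunction is a placeholder for the user's code):

    Code (CSharp):
    struct Attributes
    {
        float4 positionOS : POSITION;
        float3 normalOS   : NORMAL;
        float2 uv         : TEXCOORD0;
    };

    struct Varyings
    {
        float4 positionCS : SV_POSITION;
        float3 normalWS   : TEXCOORD0;
        float2 uv         : TEXCOORD1;
    };

    Varyings Vert(Attributes v)
    {
        Varyings o;
        // Every transform is visible and editable right here,
        // not hidden three include files deep.
        o.positionCS = TransformObjectToHClip(v.positionOS.xyz);
        o.normalWS   = TransformObjectToWorldNormal(v.normalOS);
        o.uv         = v.uv;
        return o;
    }

    half4 Frag(Varyings i) : SV_Target
    {
        // The surface function plugs in here; everything around it
        // stays plain, diffable HLSL.
        return SurfaceFunction(i);
    }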

    The one thing I do like about the code the shader graph outputs is that it computes a bunch of things you might need. You'll see a section like:

    Code (CSharp):
    o.WorldSpacePosition = mul(v.vertex, Obj2World);
    //o.LocalSpacePosition = v.vertex;
    //o.AbsoluteWorldPosition =

    Basically, it uncomments anything you use in your shader. (Though I'm of the understanding that if it doesn't contribute to the final shader output, it would all get stripped and the comments are unnecessary.) Anyway, I do like the idea of being able to call one function and initialize all the crap your shader might need into a nice structure for use if you need it. That's pretty cool, and could replace a lot of the magic keyword stuff you have in surface shaders nicely.
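
    Hypothetically, something as simple as this, where unreferenced fields just strip out during compilation:

    Code (CSharp):
    // ShaderData and InitializeShaderData are made-up names for illustration.
    half4 Frag(Varyings v) : SV_Target
    {
        // One call fills in every derived value a shader might want.
        ShaderData d = InitializeShaderData(v);
        // Read only what you use; the rest is dead code and gets stripped.
        return half4(saturate(dot(d.worldSpaceNormal, d.viewDirectionWS)).xxx, 1);
    }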
     
  32. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    801
    Oops, I originally posted in the abandoned earlier Surface Shader thread. As this one here still seems alive I'll post here again with some more context:

    I agree with @phil_lira; I think ShaderGraph is a really great tool for artists.
    It has increased the number of people making shaders by 5x at my company, which is great. However, there is, as for every larger team, the desire to create proper templates and structure and custom modifications that are specific to the project at hand, or to rendering techniques we use. Some of these can be achieved by carefully wrapping code with CustomFunction nodes; others need crazier hacks and are thus unmaintainable right now, as @jbooth and everyone else has pointed out.

    While I totally agree that the ability to write custom surface shaders should come back in one way or another, I think there are options to improve the "system capabilities" of ShaderGraph in the short term:

    - Injection points for custom code at different places in the graph. Currently custom functions are added at the top, and to make matters worse they are added in different places in URP and HDRP. Injecting code through custom functions works, but not being able to define where it is injected is bad.
    (Example: I was able to add procedural instancing support to vanilla URP through injected code in ShaderGraph; see the sketch below.)

    - The ability to change the input and output types of vert/frag directly. This, together with injection points, is necessary for e.g. geometry/hull/domain shaders and, in the future, mesh shaders. (Currently I have to do that manually with "Copy Code", etc.)

    - Whoever thought that all those scope checks when connecting nodes would be a good thing was wrong. They prevent so many things that actually work. I had to go into graph files and manually connect nodes just to work around this bad UX decision. I understand this might help beginners, but please just add a "turn scope checks off" toggle; the compiler can tell me just fine what I did wrong.

    - I'd like ShaderGraph to succeed - our artists were able to use it very flexibly. However, as tech lead I need the ability to adjust the "template" they work in without having to go all-in and modify the SRPs directly. This would be my preferred way - I could properly create custom masters, and they could work on graphs for those. (There are commercial tools like ShaderGraph Essentials that hack in their own master nodes and are in maintenance hell.)

    Just one example - procedural instancing support in ShaderGraph (for GPU Mesh Particles):

    [Attached image: ShaderGraph with injected procedural instancing code]

    This is using some fully undocumented ways to inject code. But after spending hours figuring that out, I can give it to artists and they can continue to work in ShaderGraph as before, producing great shaders that work with my custom modifications. Ultimately, having the ability to do stuff like this would be very desirable - with the knowledge that Unity will not just randomly break it on an engine or URP or Core or ShaderGraph update (any of these can break it).
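
    For reference, the injected include is essentially the standard procedural instancing pattern from hand-written shaders, something like this (a sketch: the buffer name and matrix setup are whatever your C# dispatch binds):

    Code (CSharp):
    // Compiled in via: #pragma instancing_options procedural:ConfigureProcedural
    StructuredBuffer<float3> _Positions;

    void ConfigureProcedural()
    {
    #if defined(UNITY_PROCEDURAL_INSTANCING_ENABLED)
        // Rebuild the object-to-world matrix per instance from the buffer.
        float3 p = _Positions[unity_InstanceID];
        unity_ObjectToWorld = float4x4(
            1, 0, 0, p.x,
            0, 1, 0, p.y,
            0, 0, 1, p.z,
            0, 0, 0, 1);
    #endif
    }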
     
    Rich_A, OCASM and colin299 like this.
  33. MP-ul

    MP-ul

    Joined:
    Jan 25, 2014
    Posts:
    226
    Well, it has been almost 5 years since Unity 5 was released, and yet we still have no refraction or translucency out of the box in the URP/Standard port. What's the holdup? (Not to mention the water shader, proper time of day, and so on.)
     
  34. Titangea

    Titangea

    Joined:
    Oct 5, 2013
    Posts:
    7
    I hate URP and the HD pipeline; you are just breaking the whole Unity workflow. A lot of developers are telling you that these tools stink. Don't be prideful: hear your users' needs and stop with this route, or you are headed for a crash.

    Stop focusing on URP and the HD pipeline. You need to improve the built-in render pipeline and make it more powerful and optimized: just create a manager to turn render features on or off on top of built-in (that's a simplification of the SRP concept) and add an option to add custom features to the general pipeline. That is what Unity needs, not this.

    I didn't choose Unity because it has visual tools for noobs who are too lazy to learn how to code. I chose Unity because it has a fast workflow, a simple and efficient programming flow, fast shader compiling at runtime, and tools that help increase productivity. But all these things you are making do not help with any of that.

    We are getting an unstable and chaotic engine that is divided into two incomplete and buggy render pipelines, which also broke all compatibility with previous tools without any official alternative; and you keep showing a lot of unfinished things without any good documentation. This is a shame.

    Almost one day a week I think about using another game engine, because I am really tired of this situation with Unity; you act deaf.

    I have been using Unity for the last 7-8 years.
     
    OCASM likes this.
  35. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,611
    Actually, I don't think this is a great idea. Creating a huge monolithic render pipeline with a bazillion toggleable features will inevitably make the whole thing bloated, buggy, hard to maintain, impossible to optimize for specific use cases, and hard to use and develop for. The KISS principle is a good thing.

    Making specialized render pipelines for specific cases makes sense, but only if you thoroughly document them. A thin abstraction layer on top of all that, which lets you perform common operations in a unified way and write (note the emphasis on "write") basic shaders for all pipelines, is also a very nice thing to have. One would expect getting the color of the main directional light in a shader to be simple no matter how complex the underlying pipeline is, right?
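
    Today that one operation needs a different spelling in each pipeline (as of current versions; exact names may drift over time):

    Code (CSharp):
    // Built-in:
    half3 c0 = _LightColor0.rgb;

    // URP (Lighting.hlsl):
    half3 c1 = GetMainLight().color;

    // HDRP (directional lights live in a structured buffer):
    half3 c2 = _DirectionalLightDatas[0].color;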

    IMHO this is what SRPs are missing as of today: documentation is abysmal (when not nonexistent), and even the most basic things work differently across them for no reason: different naming conventions in shader code, different render callbacks in C#, etc. Making any tool, no matter how simple, work more or less the same way across all SRPs is a close-to-impossible task currently. At the same time, this is what most people (users and store publishers alike) take for granted. The net result is frustration galore.
     
    Last edited: Sep 21, 2020
  36. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    3,290
    Well, I think they've reached the limit of how far the built-in pipeline could be extended.
    It's pretty much as old as Unity itself, and probably a nightmare to support across 25+ platforms.

    SRPs are the future, like it or not; otherwise there will be no future at all.
    (Look at Bethesda's Creation Engine, for example (and Fallout 76's visual glitches); it's pretty horrible in 2020, and that is because they chose to never change or rewrite it.)

    Documentation is a must-have though. And shader template documentation / an overview is required.
     
  37. castor76

    castor76

    Joined:
    Dec 5, 2011
    Posts:
    2,511
    I can't fault Unity for trying to improve their rendering pipeline, via SRP or any other way; things have got to move on someday. However! The change needs to be swift. This saga has been going on for way too long! The pain comes when there is change, and it has to be quick and (relatively) painless, not spanning over... (I even forgot) years? No man...
     
    arkano22 likes this.
  38. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,750
    The problem with built-in is mostly it being a black box accessible only to those with lots of cash for a source license, not it being a "monolithic" render pipeline. SRP was initially proposed as a way to open access to the internals of rendering, so we wouldn't need hundreds of injection points to customize things.

    But the vast majority of Unity users don't have the resources to write a whole pipeline from the ground up, or replace a forward renderer with a deferred one, or add SSAO to a pipeline that doesn't have it. They want checkboxes to turn things on and off, not having to fork the entire pipeline off GitHub and grok its entire source code. Or a 3rd-party asset they can just drop in for the features they want, which is anathema to a system built around source code modification and hostile to being extended.

    The whole crusade to avoid the "combinatorial explosion" via "specialized" SRPs was a fool's errand. It just moved the complexity from Unity's engineers' backs onto Unity developers'. URP is now (veeeery slowly) on track to getting all the features built-in has, which shows Unity themselves realized it was a naive idea. If they were going to have "specialized" pipelines, they would need way more than just two.
     
  39. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,611
    I can see why you think this, but I disagree. From my own experience, a single render pipeline cannot accommodate forward, clustered forward, deferred, <insert your rendering path here>, and scale from low-end mobile to high-end desktop computers without compromising quality, flexibility, performance, or all three to some extent.

    The jumps across the whole spectrum are often qualitative. For instance, you can't make a modern compute-based render pipeline work on platforms that don't support compute shaders. It's not just a matter of bolting new stuff on top of your existing pipeline. I suppose you could find hacky ways to bridge the missing stuff, but by the time you got it working properly, you'd have a huge system that no one can extend, maintain, or even understand with reasonable effort. Also, due to the presumably high amount of boilerplate code needed to support such a complex architecture, performance won't be great.

    Getting URP up to or close to feature parity with the built-in one also makes sense to me (that's what the "Universal" label means), but a high-end, high-quality renderer that does not need to concern itself with being able to run on an electric toaster is also a must for Unity's survival as an engine, hence HDRP.

    I think SRPs are a necessary concept, even though the transition requires replacing the ship's hull while in deep-sea waters, so to speak. It's a risky operation, but I think it will be worth it in the long run. If only there were better docs...
     
    Last edited: Sep 21, 2020
    xVergilx likes this.
  40. MP-ul

    MP-ul

    Joined:
    Jan 25, 2014
    Posts:
    226
    CryEngine got its best game ever ported to the Nintendo Switch with realtime GI, while Unity sells you a damn ambient occlusion post-processing effect; and CryEngine was bankrupt not too long ago. Unity is stagnating; nothing is improving. Five years since SRP came into existence, yet zero progress on fully integrating it.
     
  41. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,450
    You forgot that goddamn shadow hack on Switch: using DOF techniques to make shadows more realistic AND hide their low resolution is the kind of genius hack you really need.
     
    MP-ul likes this.
  42. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    What I think happened, essentially (and this is a lot of speculation), is that Unity ramped up its graphics team quickly, with the goal of providing the kind of flexibility and performance needed for larger studios to continue to embrace Unity. SRPs are the right concept, but fail in matching the implementation details to what most of the Unity user base needs. For instance, if I were still working at a mid-sized developer targeting consoles, I'd want to own the whole render pipeline, top to bottom, and would want something like SRPs so I could efficiently customize things for the game I was working on. We effectively did this for the last mobile game I worked on by hacking at Unity and its source code, and when SRPs were first introduced we quickly wrote an SRP to replace all of that stuff, and it was quite useful.

    So what was delivered was, for the most part, what someone working on a 30-person team with several graphics engineers would want. This makes sense, as a large portion of the people hired likely worked on those types of teams before joining Unity. But this doesn't fit how most existing Unity studios work: most of the studios I contract for have no in-house graphics engineers. They rely on a combination of assets and simple ways to inject slight modifications into the renderer, instead of wanting to own it fundamentally. If you look back at the initial reaction Unity had to people asking for features like Grab Pass, it was essentially "Oh yeah, super easy, just modify the SRP to have another pass and do whatever you want." And if you're going to own the rendering stack top-down, that makes total sense. But it doesn't work at all for the existing community, which wants to inject extra passes without modifying the SRP, or wants to buy an asset that does. Owning the graphics stack is at least a $200k-a-year proposition, and that's only if you can find and convince a good graphics coder to work on your project.

    The other big misconception was that people wouldn't work across pipelines, or wouldn't care about the APIs being different across them, or wouldn't want to move things between them. This is abundantly clear when you learn that getting a camera's background color requires different code in HDRP than in URP. The only code really pulling things together comes from the shader graph team, which is the only team truly focused on cross-compatibility, and the SRP team writing the low-level framework. Where those teams were not involved, the pipelines went in completely different directions. Sure, they likely realized they were completely screwing over asset developers by doing this, but thought we were a small portion of the developer population and wrote us off as a sunk cost. But what they didn't realize is that it also screws nearly every studio working under the old Unity paradigm, which relies on these developers to some degree or another; that the actual appetite for working across SRPs is much larger than just our little area; and that the complexity of maintaining code across upgrades of even a single SRP is quite high.

    Every month I get a studio contacting me about if they should move to SRPs or not. Usually these are large, well funded studios. My standard line of questioning is:

    - Do you have a full time graphics coder you can spare to the process of both converting and maintaining the pipeline?
    - Do you use assets from the store?
    - Do you actually need to customize the renderer or have features from one of the SRPs you need (ie: VFXGraph)?
    - Will you need to upgrade Unity at any time during development or after release? (The answer is always yes)

    95% of the time, the first question is the only one I need to ask, because the answer is no. Without a proper abstraction layer that allows changes in SRPs to happen without heavily disrupting projects, and without clear pathways for people to modify rendering without modifying the SRPs, they are simply too costly for most studios using Unity.

    Now, Unity is starting to move in the right direction to clean some of this up; how much, we will see. Even after many conversations, I'm personally not optimistic: I suspect we're going to be stuck with this mess until they decide to redo it all again under some new initiative.
     
  43. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,750
    That is pretty much what is happening to URP now that it's getting deferred rendering, which is compute-based, BTW, according to the commits on GitHub (tile-based deferred lighting), which means it is getting features that do not work at all on low-end mobile (as it should).

    You're right, you can't. That's why you abstract the pipelines behind a common interface, which is how it works in engines like Unreal, whose premier game runs all the way from the iPhone 6S to the RTX 3080. It does not use the same "pipeline" on both ends: the differences between the code paths running on the former (forward rendering) and the latter (deferred rendering with raytracing) are as large as you'd find between URP and HDRP. But from the application's POV, it doesn't matter as much: you don't use a completely different set of light components with completely different values that behave differently, or have to author a completely different set of incompatible materials for each platform.

    You cannot escape the complexity. Not in today's world of wildly different performance profiles. Unity tried and learned that the hard way; they acknowledged it and are now turning the ship around in several ways. Here are things they have publicly stated they are now working on:

    - URP getting deferred, SSAO, point light shadows, and more, so it can actually replace built-in, ceasing to be the specialized low-end renderer LWRP was.
    - Making URP and HDRP interchangeable at run time. No more "pick a SRP and pray it's the right one".
    - Cross-SRP shader graph and maybe a sane cross-SRP shader API.

    All these moves will make URP and HDRP merely two "implementation details" behind a single interface.
     
  44. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,450
    Let's employ some big words for fun:
    Unity got hit by the Einstellung effect, by hiring skilled and competent people who didn't get the Unity ecosystem.
    https://en.wikipedia.org/wiki/Einstellung_effect
    They thought they knew better because of years of working in AAA dev; they are smart, but they lacked context and therefore missed the goal. They probably thought the people arguing against their method had Dunning-Kruger and therefore couldn't understand the brilliance of their ideas.
    /poke fun
     
    Rich_A likes this.
  45. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,611
    I had no idea about this. It makes no sense to me, as this breaks pipeline specialization :confused:.

    This is what it should've been like from the start, I think. That's why I was blabbing about a "thin abstraction layer" on top of both (thickness may vary), so I guess we're talking about the same thing. Essentially a large strategy pattern. Right now it feels like the URP and HDRP teams didn't even know about each other, which for me is the main pain point, along with the lack of documentation.
     
    Last edited: Sep 22, 2020
    Neto_Kokku and OCASM like this.
  46. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    5,980
    Regarding SRP being an elite's solution: I think it could be made usable by small outfits with visual scripting.
    SRP is C#, after all.

    I couldn't script my way through URP, but show it as a vertical pipeline like VFX graph and I'll be able to remove what I don't need.
    Show me a 2DRP and I might even cobble together new visuals.
    Add an 'ns heatmap' overlay and I might even make it performant.

    Many people have many ways of creating: some words, some sound, some images.
     
  47. ArminJohansson

    ArminJohansson

    Joined:
    Jan 8, 2019
    Posts:
    14
    Shader Graph isn't even a good shader editor. You have hardly any control over compiler/pipeline settings like ZWrite, blending, pragmas, and so on. On top of that, you can't easily add whatever settings are missing in the editor onto the shader code, like I usually do. The editor is unpolished and not user-friendly. There are a ton of quality-of-life downgrades compared to Shader Forge and Amplify: no hotkeys for commonly used nodes; you can't easily tweak values after turning them into parameters; subgraphs turn the preview windows into mesh representations (which are unreadable spheres instead of planes). Why do I have to use three nodes to split and combine the UVs to get the standard UV vector2 that you use in 90% of cases?
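
    For comparison, all of that render state is trivial in hand-written ShaderLab; a sketch of the per-pass settings the graph UI hides:

    Code (CSharp):
    Pass
    {
        // One line each in ShaderLab; buried or missing in Shader Graph's UI.
        ZWrite Off
        ZTest LEqual
        Cull Front
        Blend SrcAlpha OneMinusSrcAlpha

        HLSLPROGRAM
        #pragma vertex Vert
        #pragma fragment Frag
        // ...vertex/fragment stages as usual...
        ENDHLSL
    }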

    Shader Graph feels like a kind-of-pretty but unfinished product, and is miles behind Amplify Shader Editor in terms of features and usability. The only good thing about Unity's Shader Graph compared to any other shader editor I've used (Unreal, Amplify, and Shader Forge) is that you can place multiple instances of a single parameter in the graph.
     
  48. DNS-Gabe

    DNS-Gabe

    Joined:
    Aug 4, 2020
    Posts:
    4
    Joining this conversation pretty late, but I guess I'll start off by saying that I think Unity's visual Shader Graph system is welcome on my end. I'm a software dev who has had zero experience working with HLSL code, and it's definitely not a simple thing to wrap one's head around. Granted, much of it is the same as coding in any other language, but it is still a language one must learn. So as a newcomer to the shader world, when I look at HLSL and the visual Shader Graph system, the visual system just feels like an easier introduction. At least for me, and that's all I can really speak to.

    What I will say though is the following:

    1.) Documentation for Shader Graph and this visual system has been scarce. I have noticed Unity putting in more effort to explain the visual system, to their credit, and there are definitely a good number of tutorials covering a variety of things Shader Graph can do. Even so, for the longest time it was suuuuuper hard to find anything that could possibly help my understanding (and even still there could be more resources and documentation for Shader Graph). Lack of documentation is 100% a turn-off for those who are new, but also for the vets, as shown in this forum.

    2.) Backwards compatibility and allowing developers to work by their preference is a MUST. Absolute must. I know Unity recently started coming out with "Bolt", the visual scripting asset for creating game logic and all that jazz. All I can say is that if Unity were to ever completely replace C# scripting, I would leave the platform in a heartbeat. Letting developers work as they would like should be supported not just for the happiness of the devs, but also because writing code (whether HLSL or C#) DOES allow for greater control of what is happening under the hood (at least in my opinion). Bringing it back to Shader Graph specifically: there were a few times I tried writing my own custom shader files, because I had found solutions on the internet for the problems I was facing. To my surprise, Unity was not even allowing me to attach these custom shaders, which I found super odd. As far as I could tell, Unity was basically forcing me to work only through the Shader Graph system. It is likely I missed something that would allow custom shaders to be used, but I just ended up giving up and forcing my way through with Shader Graph. If I were EVER to be FORCED to use Bolt and only Bolt... I would definitely be leaving Unity. That's all I can say, and I'm sure that is the frustration many of the devs in this forum are facing. I just don't really understand why there can't be use of both custom shader scripts and the visual system. I can pretty much guarantee that, even as a complete newb when it comes to shader languages, I would use both, simply due to my background.

    Overall, I ain't mad at Unity. They are doing A LOT over there, and to be fair to them, they are building Unity out to be a tool that can be used in industries other than games. This inherently means they have to make their tool as accessible as possible. Keep it up Unity, but at the same time, take this feedback seriously. There are definitely many aspects of Unity that can be tightened up.
     
    FM-Productions likes this.
  49. najaa3d

    najaa3d

    Joined:
    Jan 22, 2022
    Posts:
    29
    It's been almost 2 years since @jbooth commented here. I'm trying to assess the situation with URP now, 2 years later. I see that @jbooth made a URP-based release of MicroSplat (Oct 2021), but the comments therein still sound frustrated, as though the URP version of MicroSplat is worse than the previous non-URP versions.

    I'd like to hear @jbooth's assessment of how Unity did in responding to his suggestions/rants. How does it all look now? Is it still the SAME maintenance nightmare it was 2+ years ago, or has it gotten any better? Should those who want to make use of assets from the store move to URP, or stay with the original/standard pipeline?

    We're just getting our feet wet here, and looking for recommendations on which way to go: URP or the previous built-in pipeline.

    Also, I noticed that the Unity Hub is pushing version 2020.3.32f1 by default as the "best release", rather than showing me options for 2021/2022 versions. What's the deal here?
     
  50. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,441
    Maintaining hand-written shaders in SRPs has not changed much. You basically have to reverse-engineer changes in the pipeline, which can come at any time, in any form, be large or small, or be tucked away in includes and hard to notice. There is still zero documentation on any of it. So if you have any plans to write shaders by hand, then I'd suggest my own Better Shaders to stop you from pulling your hair out (and instead let me pull mine out).

    In theory Unity is addressing this with a new shader system somewhat based on Better Shaders, but that is most likely several years out. Until then, SRPs are kind of "inherently unstable".

    URP has some advantages and disadvantages relative to the old pipeline. It's cleaner, and faster in some cases, but harder to work with and less documented, and many forum posts will be for the wrong pipeline. Things which were simple in the old pipeline might require several pages of code, and features which you expect to work based on knowledge of the old pipeline might work in some cases and not in others. I've shipped a few things in URP at this point, so it's totally doable; it just depends on your project's demands and how well it fits them.

    As for assets, it kind of goes asset by asset.

    2020.3 is the latest LTS; anything newer is a tech/beta/alpha release, and even more unstable.
     
    najaa3d likes this.