Unity SRP Surface Shaders

Discussion in 'Shader Graph' started by ChrisTchou, Mar 16, 2020.

  1. ChrisTchou

    ChrisTchou

    Unity Technologies

    Joined:
    Apr 26, 2017
    Posts:
    74
The goal of ShaderGraph is to enable building customized shaders that work with Scriptable Render Pipelines and automatically upgrade to continue working with each version of Unity. It was designed to separate and black-box the internals of the SRP implementations, making it easy to build new compatible shaders.

    It is a work in progress, though, and doesn't yet have the flexibility or power to do everything that the surface shader system in Unity did for the built-in renderer.

    But, we are working towards that!

    We are working on a public roadmap to gather better feedback from you on the directions and priorities of ShaderGraph; we should have it up by the end of the week.

    In the meantime, I have a question for everyone:

    What are the things you would build in a surface shader system that are not possible within ShaderGraph as it stands?
     
  2. valarus

    valarus

    Joined:
    Apr 7, 2019
    Posts:
    258
    Last edited: Mar 17, 2020
  3. Korindian

    Korindian

    Joined:
    Jun 25, 2013
    Posts:
    555
    Seems like a thread for @jbooth to give input.
     
    Rich_A and neoshaman like this.
  4. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    Quite frankly a graph will never get there, and forcing us to use a graph for everything to make shaders maintainable is just wrong. You're not going to force us to abandon C# and force the only text based language to be C++ when the visual scripting system ships, are you? If you want to make sure the graph can do everything, just re-write all of Unity's hand written shaders with it. If a graph is good enough for us, it's good enough for you too. Dogfood it.

    As an example of why you will never get there, take something like the basemap generation for terrain. This is a really cool feature which can be used to do more than just basemap generation- it allows you to add passes which generate a render texture for use in your terrain shader. These can be anything you want, and use the tags to determine a name, format, and relative size of the render texture. Here's an example from Unity's terrain shader in case you're not familiar:

Code (CSharp):

    Pass
    {
        Tags
        {
            "Name" = "_MetallicTex"
            "Format" = "RG16"
            "Size" = "1/4"
        }

        ZTest Always Cull Off ZWrite Off
        Blend One [_DstBlend]

        HLSLPROGRAM

        #define OVERRIDE_SPLAT_SAMPLER_NAME sampler_Mask0
        #include "Packages/com.unity.render-pipelines.high-definition/Runtime/Material/TerrainLit/TerrainLit_Splatmap.hlsl"

        float2 Frag(Varyings input) : SV_Target
        {
            TerrainLitSurfaceData surfaceData;
            InitializeTerrainLitSurfaceData(surfaceData);
            TerrainSplatBlend(input.texcoord.zw, input.texcoord.xy, surfaceData);
            return float2(surfaceData.metallic, surfaceData.ao);
        }

        ENDHLSL
    }

    Now let's say I have to use the graph for maintainability and I want to write a terrain shader, which means I'll need to write a basemap generation shader as well. But wait, this isn't a surface shader you say, so it doesn't count, right? Except that if my entire shader is written in a shader graph, I need to call that code from this shader, and the only way to do that is to support all of this in the graph as well. (Or constantly hack out the code I need every time I change the shader graph, which is a nightmare). And currently I can use this to do things Unity doesn't use it for - like baking out procedural texturing into a splat map, or any other data I want to bake every time the terrain is changed.

    This is where text representations just shine. Adding this functionality to the terrain system was likely pretty straightforward: read some tags from the shader, generate some render textures, render the passes to the buffers, set the buffers on the main terrain material, profit. Adding this same functionality to the graph would require a new master node with custom passes and settings, making the addition of custom features like these much more expensive for Unity. So if you really want to push everything through the graph, you need to dogfood it as such, stop writing hand-written shaders, and begin the process of supporting all of these edge cases, in effect bringing other areas of development to a crawl. Oh, and don't forget I could easily have written this system myself, so the shader graph system will need to support any non-surface shader system as well, since once my code is in the graph I'll need to be able to call that code from any type of shader I might need.
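    The C#-side plumbing described above could be sketched roughly like this. This is an illustrative sketch only, not Unity's actual terrain code; `generatorPasses`, `sizeDivisor`, `passIndex`, and the pre-parsed tag data are all assumed names:

    Code (CSharp):

        // Sketch of the pattern: for each generator pass, read its tags,
        // allocate a matching render texture, render into it, and bind it.
        foreach (var pass in generatorPasses)   // assumed pre-parsed pass tag data
        {
            var rt = RenderTexture.GetTemporary(
                baseSize / pass.sizeDivisor,    // "Size" tag, e.g. "1/4"
                baseSize / pass.sizeDivisor,
                0, pass.format);                // "Format" tag, e.g. RG16
            Graphics.Blit(null, rt, generatorMaterial, pass.passIndex);
            terrainMaterial.SetTexture(pass.name, rt);  // "Name" tag, e.g. "_MetallicTex"
        }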

    You don't want the graph to be everything to everybody- it's not achievable, and it will just cripple everyone in the long run. It should be focused on what graphs are good for: shaders that are closely tied to the art. And you should be writing an abstraction for hand-written shaders which allows them to excel at the things a graph just isn't good for.

    ------

    But to answer your question:

    Since I write shader generators, I can basically switch anything in a surface shader very easily by generating different code. I guess it's theoretically possible for you to write a system where I can dynamically generate a graph, but this seems pretty painful compared to just writing the code the graph would generate anyway.

    - Ability to understand the code the graph is going to write; graphs are an abstraction, and every abstraction means hiding information, which means you're further from the code. This always has cost, and it's very easy to have a graph hide this without you realizing. So much better information would be required here, like a code output window, feedback from the compiler on cost, etc.

    - Control over the V2F structure and how things move across the stages (this was limited in surface shaders in some cases)
    - Ability to perform work in the vertex function
    - Structs- wiring is just not maintainable through complex systems
    - Macros. I avoid these in shaders, but they make some things possible
    - Better handling of sampling- such that no node gets direct access to the sampler/texture, but sampling nodes can somehow be chained together. Right now if you use a triplanar node it takes a texture and sampler- but if you want to do POM, they can't be combined, because the POM node also needs the texture and a sampler.
    - Ability to have thousands of shader_feature equivalents (requires ability to dynamically emit code, the way my compiler does, and #if #elif around it)
    - Ability to support multiple lighting models within a single shader (I support specular and metallic workflows, along with multiple BRDFs, and unlit, switching between them with compile-time generation setting various pragmas and defines)
    - Tessellation
    - Pragmas, custom tags, etc..
    - Fallback and other special shader options (basemap shader, basemap shader generation passes, etc)
    - Instancing, including terrain instancing variants
    - Interfacing with Compute shaders
    - Proper branching, handling of derivatives
    - Ability to have custom editor GUIs
    - Access to the TBN matrix before lighting (I do things in a custom lighting function to blend normals)
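
    On the sampling point above, one chainable design would have nodes transform UVs and leave the actual fetch to a terminal sample node, so POM and an ordinary sample could compose. This is a hypothetical sketch in HLSL; `ApplyParallaxOffset` and the property names are assumptions, not ShaderGraph API:

    Code (CSharp):

        // Hypothetical chainable sampling: the POM stage only adjusts UVs,
        // and a single terminal node owns the texture + sampler access.
        float2 uv = input.texcoord.xy;
        uv = ApplyParallaxOffset(uv, viewDirTS, _HeightMap, sampler_HeightMap); // POM stage
        float4 albedo = _MainTex.Sample(sampler_MainTex, uv);                   // final fetch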
     
  5. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    772
    I really hope that question will be followed by a statement along the lines of “we’ve read the hundreds of posts about this, have heard you all and will be reintroducing the concept of surface shaders ASAP to lessen your pain” rather than “we’ll try to figure out some convoluted and unsatisfactory workarounds which you will see implemented sometime in the next couple of years”.
     
  6. Edy

    Edy

    Joined:
    Jun 3, 2010
    Posts:
    1,978
    I think the title of the thread is itself a bad approach: "SRP Surface Shaders".
    What we need is the ability to switch render pipelines in a project (built-in RP included) while keeping shader and material compatibility.

    What I'd really need in Unity is to choose the render pipeline in a project without actually destroying the materials, so the selection could be reverted. I need the ability to switch render pipelines between built-in, HDRP and URP in the same project at any time. This is critical for maintaining projects that require compatibility with different pipelines, with Asset Store packages being the main example.

    In my ideal world, Unity has a Surface Material with the available maps and settings for that material. The built-in pipeline would use some maps and settings in some way, HDRP would use them in some other way, and URP would use some maps and settings and discard others. Inside this Surface Material it should be possible to create "per-RP overrides" that take effect when the material is used in a specific RP. Thus, the same material could have a common set of maps and properties together with per-RP overrides. Projects could then be switched among the different RPs (built-in included) without having to maintain an entire set of separate materials and maps per RP, as happens now (a painful hell).

    The above must also be available in code, pretty much like current Surface Shaders. Define the common maps and properties, then #define the code and properties to be assigned based on the current RP.

    Otherwise, I'm afraid I'd have no choice but to stay on the built-in RP. Maybe I'd create an HDRP branch from time to time to record some nice video, but the pain of the process doesn't really make adopting SRP worth it in the mid-to-long term.
     
    Last edited: Mar 18, 2020
  7. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    Yeah, the title of the thread doesn't really match the line of questioning. The title talks about surface shaders for SRP, but the questions are all about improving the shader graph and implying that you want to get it to replace surface shaders, which it will never be able to do simply because graphs are bad for some things. So what is this thread really about? It kind of screams of "Is there a small hack we can make to shut you all up and get you to use our shader graph for everything?" to which the answer is clearly NO.
     
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,602
    We also need to get away from the PBR dogma and have control over lights. I could go on a very long rant about what I call "sitcom lighting" (which I already did in another thread). We need passes, and we need proper vertex control.

    I have seen some answers saying "it's not PBR compliant," and that's alarming from an artistic perspective (AO is not PBR compliant either; back in the day it was called the "dirt pass" because it accumulates in corners). PBR can also end up being more costly when we could get similar visuals by simply tweaking the light instead of paying for more lights or area lights (e.g. smoothing the harsh shadow transition by using a plain Lambert), like you would do in a movie light rig. We can cheat directly, and unlike movies we have many frames per second to render.

    It would also future-proof any new sort of rendering, including NPR rendering, which isn't a simpler version of PBR lighting; it's its own class.

    The shadergraph is just fundamentally flawed in its conception. It's great for non-gaming applications like architecture or visualization, which are close to simple photorealistic rendering, or for a movie workflow, onboarding lighting artists without making them learn a foreign technique. But anyone who is more of a VFX, technical, or graphic designer is limited by it.
     
  9. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    5,607
    Another thing you should consider is that a programmer is just more efficient at writing code than connecting graphs.

    Having a shader API framework / abstraction layer that ShaderGraph is built on top of is more useful than the other way around.

    I understand the need for ShaderGraph; it allows non-programmers to create stunning visuals. However, my experience with node editors is that they work well only for rather simple things.

    As soon as a graph reaches a certain complexity, these things are pushed to a programmer again, for example if there is a bug in the graph. As a programmer, I really just roll my eyes when that happens.

    Suddenly I'm forced to work in a visual graph to fix someone's "code" in a tool that isn't made for programming.
     
  10. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    So in an ideal world, the way I think this should have been designed was to have the SRP own the lighting model completely, not the shader code. Obviously the code would get compiled into the resulting shader, but from a workflow perspective the SRP already specifies most of the lighting model since it specifies what passes are used into what buffers, as well as all the constants for lighting and such.

    Each SRP would define a structure, much like the StandardSurfaceOutput struct in a surface shader, with the various inputs to the lighting equation. Preferably you could define as many lighting models as your SRP can support- so Unlit, PBR, SSS, etc. So if you wanted to add NPR rendering to your URP project, you could just create some new definitions for this stuff and it would work without modifying the underlying SRP or the shaders, assuming you don't need new passes or constants set from the SRP, and every shader which specifies that model just magically starts working with it (this is assuming the inputs match- if they don't, some modifications would be required).

    With this configuration you would provide a template shader for each pass, much like the shader graph uses internally. The shader graph can funnel its output through this system, as would a text parser. In the end, the abstraction layer doesn't know whether the data is coming from a graph or text, nor should it care. It also doesn't really care what the SRP or lighting model is; it just finds the matching template and inserts functions into it, which call the functions you've defined.

    With this, people would be able to ship new lighting models to an SRP, which could be applied to any shader you already own by selecting the new model from the drop down or changing the #pragma and input structure in your shader code. If your lighting model uses the standard inputs, then everything just works- URP->HDRP->URP w/ NPR module installed. If it uses a few extra inputs then they become available, and any inputs removed just get removed from the evaluation. SSS in HDRP can use a heavy solution, while URP uses a simpler model.
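
    The contract sketched above might look something like this. Everything here is hypothetical: neither the pragma nor the struct nor the function names exist; they just illustrate the idea of the SRP owning the lighting inputs while the shader only fills them in:

    Code (CSharp):

        // Hypothetical SRP-owned lighting contract (illustrative names only).
        #pragma lighting_model "NPRToon"   // not a real pragma; shows the idea

        struct LightingInputs              // like SurfaceOutputStandard, but defined by the SRP/module
        {
            float3 Albedo;
            float3 Normal;
            float  Smoothness;
            float  RampOffset;             // extra input added by the NPR model
        };

        void Surface(Varyings IN, inout LightingInputs o)
        {
            o.Albedo = _MainTex.Sample(sampler_MainTex, IN.uv).rgb;
            o.Normal = UnpackNormal(_BumpMap.Sample(sampler_BumpMap, IN.uv));
        }

    Swapping lighting models would then mean swapping which module consumes LightingInputs, without touching the surface code.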
     
  11. ChrisTchou

    ChrisTchou

    Unity Technologies

    Joined:
    Apr 26, 2017
    Posts:
    74
    Thanks for all the replies!

    We are actively investigating what an SRP surface shader system would look like. The question here is to explore what are the things people would want to do with surface shaders, to help us understand what we have to build support for, and the relative priorities there.

    A simple system that just lets you directly write the SurfaceDescription and VertexDescription functions and describe their input requirements, etc. could be built fairly quickly (especially if we can do a simple C# API there and sidestep the need for a parsed text file format...). But it would largely be limited in many of the same ways ShaderGraph is currently limited, so more work would have to be done to expand the possibilities on top of that.
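    Such a C# API might look roughly like this. This is purely speculative: no such attribute or binding exists publicly, and every name below (`SurfaceShader`, `ShaderProperty`, `Sample`) is invented for illustration; only SurfaceDescription/VertexDescription are real ShaderGraph concepts:

    Code (CSharp):

        // Hypothetical "write the description functions directly" API.
        [SurfaceShader("MySurface")]          // assumed attribute
        class MySurface
        {
            [ShaderProperty] Texture2D _MainTex;   // assumed property binding

            SurfaceDescription Surface(SurfaceInputs i)
            {
                var o = new SurfaceDescription();
                o.BaseColor = Sample(_MainTex, i.uv0).rgb;   // assumed sampling helper
                return o;
            }
        }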

    I agree with you that a surface shader would allow many features to be built more quickly, without having to generate the visual UI to control them; but we would still want to make sure it is done in a clean and maintainable way, with a path to expose as much of that as possible to the graph eventually.

    Hmm, this is similar to what we did for VFX support; VFX wants SG to describe the particle appearance, but it needs to feed the properties from its custom built particle state buffers, so there are two code-generation systems that need to play together. We ended up baking down the graph into an "uber-function" representation, that is able to generate the graph function HLSL (and describe the input dependencies) for any subset of the outputs. We are investigating how that system might be generalized and made public so you could grab those graph functions and use them as you see fit.

    Interestingly, if we do both this and the surface shader API, we'll have effectively split the code generation into two parts, and made both halves accessible independently.

    Absolutely. I've been doing a bit of that recently, and usually just end up converting good portions of complex graphs to custom HLSL function nodes so I can just see the code. I do wish there was a simpler way to pull a custom function node into multiple graphs, having to embed them in sub-graphs is a lot of hoop jumping. TODO...

    We're working on that! It's part of the Targets and Stacks tasks on the roadmap, to make it possible to take the same ShaderGraph and use it in multiple render pipelines; with the HDRP specific bits getting ignored in URP, for example. Defining pipeline-specific property overrides is a bigger issue though; will make sure that we put that on our list.
     
  12. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    I think doing it without a text parsing system would be difficult, and it would lock you into a small box in terms of what is possible. There's a lot more that needs to be accessed than just the SurfaceDescription/VertexDescription functions (and quite frankly I personally find those abstractions a bit annoying to work with). However, the format could be easier to parse, or different from what surface shaders were. Having it feel like writing a shader was a nice thing, though.

    That said, I know you guys were playing with a C#->HLSL thing a while back. While I'm sure there would be a lot of issues to solve, I imagine a lot of shader complexity could be managed much better in a high-level language like C#. Imagine being able to simply make functions virtual and override them, like the lighting function or, more immediately, the vertex/pixel functions. The lack of macros would be a huge loss in C#, though.

    Yes, but there comes a point of diminishing returns with graphs. For instance, that basemap generation example is likely something that maybe two people will ever use for anything more than basemap generation, so putting in a lot of time to maintain its current flexibility seems like a lot of man-hours that could be better spent elsewhere. The SG doesn't need to solve every shader issue; it needs to solve the ones its target audience has.
     
  13. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    772
    @ChrisTchou meet @jbooth. Jason is one of our designated representatives. :) We trust his knowledge. Please talk to him in a better environment than this forum thread about all this stuff - if you're not already doing so. I'd say @larsbertram1 and @tatoforever could be included in that conversation. All top notch devs who know their sh*t when it comes to highly optimised shader development, and who have shown themselves to be dedicated to user satisfaction.
     
    MagdielM, Rich_A, GliderGuy and 5 others like this.
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,602
    You forgot the amazing bgolus
     
  15. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    772
    And a shout out to @Aras who had already explored this stuff for Unity.
     
    Subliminum, Kronnect and Edy like this.
  16. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    2,985
    And don't forget the Amplify shader graph creators
     
    transat likes this.
  17. andybak

    andybak

    Joined:
    Jan 14, 2017
    Posts:
    489
    The ability to use code. Drag and drop UIs have their limits.
     
  18. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    2,275
    I'm just going to drop in my 2 cents here.

    I'm surprised you have switched to using HLSL but are not using interfaces and classes to solve this problem.

    Code (CSharp):

        //Apologies if I'm shaky on the syntax. It has been a couple years since I last did this.
        interface ILightLoop
        {
            void doPointLight(PointLight pointLight);
            void doSpotLight(SpotLight spotLight);
            void doDirectionalLight(DirectionalLight directionalLight);
        }

        class CustomLightingHandler : ILightLoop
        {
            float3 ilm;
            float specThreshold;
            float4 accumulatedColor;

            void doPointLight(PointLight pointLight)
            {
                //...
            }

            //... You get the idea
        }

        //In fragment shader
        CustomLightingHandler lightingHandler;
        //... Initialize variables

        UnityDoLightLoop(lightingHandler);
    If you declare the class implementing the interface explicitly, dynamic linkage is not required and the shader should be SM2 compatible. To continue the example, Unity could also provide base implementations of ILightLoop such as PbrLightLoop or BlinnPhongLightLoop. Then someone could write a shader that runs multiple light loops and mixes the values together (expensive, but powerful).

    The reason Unity's solution for simplifying shaders has always been codegen is that Unity writes shaders using #ifdef and macro spaghetti. If you spend some time designing and documenting a proper API, just like in any other software development context, I think you will find a lot of the problems disappear. And I understand that HLSL does not have a built-in way to denote public and internal functions; you would need to come up with your own scheme for that.

    Anyways, just my two cents.
     
  19. JoNax97

    JoNax97

    Joined:
    Feb 4, 2016
    Posts:
    479
    I don't have a lot of experience with shaders, but I just want to say that C# shaders sound really nice, especially if the new mathematics library helps bridge the gap between C# and HLSL.
     
    Last edited: Mar 19, 2020
  20. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821

    This would definitely help clean up the code and make a lot of things easier to parse and combine! (I'm often still thinking in DX9-era terms, but HLSL has progressed a lot since then.)

    It doesn't solve the forward compatibility issue though, which for me is a huge deal. For instance, Unity added several new passes for raytracing to HDRP, as well as new VR rendering code, and in a surface shader world those would have been supported automatically. It also doesn't help with cross compatibility, since URP doesn't have a light loop, for instance. For that, you still need some kind of system to parse and put code fragments into some kind of template shader. That template, however, could be a lot more modular and easier to adapt, and it also needn't be restricted in how it's authored. For instance, you could imagine having nodes in a graph spit out these functions just as easily as a code fragment, and then a shader written in the graph or a text file could use them. The template itself could be a scriptable object with various code fragments- it doesn't necessarily have to be an uber template.

    I think fundamentally it comes down to wanting to break a shader into several parts with a minimal tie between them:

    - The code which determines where a vertex goes and what the inputs to the lighting equation are
    - The code which lights a pixel
    - The code which makes everything work (VR, passes, etc)

    Ideally I never have to touch the code that makes everything work, because at no time do I really ever want a shader that only works in single pass stereo, for instance, or only works on one version of Unity. I just want all of that stuff automatically supported.

    I think the code which lights a pixel can (and should) be SRP dependent. The fundamental difference between most SRPs is how you light a pixel, so this is kinda expected. But within reason it would be nice to treat lighting models like plugins which I can mix and match with the other shader code.

    The pixel/vertex code is what we want to be sharable everywhere- with the caveat that the lighting inputs might be a little different, obviously.
     
    GliderGuy, neoshaman and OCASM like this.
  21. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,918
    I'd like geometry shaders to be skipped in shader graph / surface shaders, and mesh shaders to be given their rightful engineering time. Mostly because only Intel wanted them, there are better ways to do it, and Apple is right, tbh.

    I don't think it's a good idea to develop for what is essentially a dead technology like geometry shaders. A far better way is empowering compute and bringing it much closer. You can do better, more performant "geometry" shaders this way.

    TLDR:
    Mesh Shaders = yes
    Geometry Shaders = no
    Better compute shader support = yes
     
  22. ChrisTchou

    ChrisTchou

    Unity Technologies

    Joined:
    Apr 26, 2017
    Posts:
    74
    We've played with C# shaders a bit, and personally I like them a lot. You can have proper public API and interface declarations and visibility control to manage the complexity. With the right setup, the basic syntax is very close to HLSL. Most of the conversion work goes into replacing all of the macros with interfaces or static branches. I think the resulting code is actually easier to read, and you can use all of the fancy C# IDE features to jump around the code and refactor things as well. Major downsides are: you can only use a subset of C# (similar to Burst), and it isn't 100% compatible with standard HLSL.
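
    The macro-to-interface conversion might look something like this. This is a sketch only; the prototype isn't public, and the names (`IFog`, `ExpFog`, `NoFog`) are invented. The intrinsics are written HLSL-style on the assumption they map through:

    Code (CSharp):

        // Before: #ifdef FOG_EXP macro spaghetti.
        // After: a statically bound interface, resolved at compile time.
        interface IFog
        {
            float3 Apply(float3 color, float viewDist);
        }

        struct ExpFog : IFog
        {
            public float density;
            public float3 fogColor;
            public float3 Apply(float3 color, float viewDist)
                => lerp(fogColor, color, exp(-density * viewDist));  // HLSL-style intrinsics assumed
        }

        struct NoFog : IFog
        {
            public float3 Apply(float3 color, float viewDist) => color;  // variant with fog stripped
        }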

    Yes!

    #1 is what ShaderGraph does. And what we could easily expose in a Surface Shader system relatively quickly. Then it's a question of what functionality we could expand easily from there.

    #2 is internal and tightly coupled to the SRPs, like you say. We would have to start a larger discussion with SRP teams about making that more modular.

    #3 is basically what the SubShader backends did. The new Targets and Stacks system is designed to replace that and make it more flexible. A Surface Shader would have to provide the desired Target and settings for that Target, to generate all of the passes necessary.
     
  23. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    I think the key there is to take several people who have pushed surface shaders hard and let them port to the new system. MicroSplat is a good use case here because it covers a lot of ground, and I'm betting it would be pretty easy to get a few other people to do that as well. It's hard to know exactly what you're missing until you try to do something reasonably complex..

    It definitely feels like a future place of research such that you could do a lot more without modifying the underlying SRP, as well as allow asset store authors to provide really custom lighting systems that automatically integrate with other systems. That's potentially super cool. It's great to have source access, but it's not great to own a ton of source code, and in general SRP requires large amounts of ownership once you customize it. Thus, I don't think we'll see custom SRPs in the store- but we would see custom lighting and rendering models which can be plugged into an SRP, much like another post processing effect.

    Do you have more information on this yet? For instance, I always want to generate my shaders with support for the widest range of features available, especially since I don't necessarily have the hardware to test everything on. When a user says "I'm having this issue in single-pass VR", I can say "surface shader" and know it's most likely not in my code. I don't see a need to generate a version of the shader without VR support, for instance, except maybe so as not to have to strip its variants later. I guess there are areas people might want to disable, like shadow passes or motion buffer passes, but by default it should just work for the SRP it's installed into.

    It's funny, because my own coding style has drifted towards top-down code over the years, where there's some code that just runs everything- and DOTS and SRP are both attempts to move Unity in that direction. However, the Asset Store and, I'd argue, many Unity users want the opposite: they want systems that plug into, override, or add to existing systems in a friendly way. While I'm not going to advocate bringing something like GrabPass back, the more things can plug into an SRP or shader system to extend and modify it, the more useful they become. And I'd gladly sacrifice a little performance for more usability in this area (gains can be made elsewhere).
     
    GliderGuy, Subliminum, Jes28 and 3 others like this.
  24. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    2,275
    It's all about scaffolding layers. At the top layer you could have an API that lets you dump all the surface data into a function whose internals do all the future-proofing for you in a pipeline-agnostic manner. Then you can have lower-level APIs that expose the different modules; these would likely be pipeline-specific, but still abstract the implementation details and keep the user code short and clean. Users could also completely replace a module with their own custom setup if necessary (e.g. replace actual shadows with blob shadows).
    It actually does. They just don't call it that in the code.
    As long as it has feature parity, I think this is a fantastic solution. Mimicking the way Burst does it, with a [ShaderCompile] attribute and a Shader Inspector, would tie in perfectly with the new Unity ecosystem.
    You won't have feature parity with built-in without this, as built-in has a straightforward way to do NPR that is pretty much closed off in the SRPs.

    For what it's worth, I actually hated surface shaders. I ran into too many issues trying to find the magic macros, magic variables, and magic functions, most of which had terrible names, and some of which made incorrect assumptions about my code structure and naming conventions. By the time I got anything working, my code looked like the output of a random number generator.
     
  25. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    Ha, yeah, well some of the voodoo in there was not great. Like the whole "What space is viewDir in? Oh, did you assign a normal?", and "what do I have to name my variable to get the second UV coordinate?". I think the current output of the shader graph where it has very explicitly named TangentSpaceViewDir and WorldSpaceViewDir available makes a lot more sense- just let the compiler strip it if you're not using it.

    But the abstraction level was fantastic compared to dealing with vertex/fragment shaders.
     
    Jes28 and neoshaman like this.
  26. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,602

    Yeah, performance overrides should be possible: short-circuiting paths so we can get things done, with the warning that we are leaving home for the wild west.

    Being able to peel away layers progressively to do what we want.
     
    OCASM likes this.
  27. Le_Tai

    Le_Tai

    Joined:
    Jun 20, 2014
    Posts:
    403
    Aside from what has already been said: multiple subshaders.

    But I think the real problem with the graph is that it is not code. I tried making a relatively simple shader with it and found it really time-consuming to do simple, common things like chained arithmetic and swizzles. It's also difficult to navigate, especially when debugging: when zoomed out you can't read anything, and when zoomed in you can only see the equivalent of 4 lines of code.
     
    Zelgadis87, khalvr, babaqalex and 6 others like this.
  28. andybak

    andybak

    Joined:
    Jan 14, 2017
    Posts:
    489
    This pretty much sums up node UIs for me. It's probably solvable with some innovative design but it's a big challenge. Maybe all that effort could be spent on making code more approachable to non-coders so they won't feel the need to shelter in a GUI. Shader code is terrible for boilerplate, repetition and inscrutable jargon. We already know how to make code and APIs readable and easy to learn.
     
  29. Edy

    Edy

    Joined:
    Jun 3, 2010
    Posts:
    1,978
    Well, this is actually a real bummer:
    From the Roadmap 2020 thread:
    https://forum.unity.com/threads/202...gine-creator-tools.852253/page-2#post-5629195
     
    Last edited: Mar 26, 2020
  30. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,918
    I don't ever use the inline thing, it's no good for text, and I've lost work using it. Instead, reference a functions.hlsl file and use that.
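    A file-based function for the Custom Function node can be as small as this; a sketch, where the function name and parameters are made up and the _float suffix follows Shader Graph's precision-suffix convention for file-mode functions:

    ```hlsl
    // Functions.hlsl - referenced by a Custom Function node in File mode.
    // The node's Name field would be set to "Desaturate"; Shader Graph
    // resolves it to Desaturate_float (or _half) based on graph precision.
    #ifndef FUNCTIONS_HLSL_INCLUDED
    #define FUNCTIONS_HLSL_INCLUDED

    void Desaturate_float(float3 In, float Amount, out float3 Out)
    {
        // Rec.601 luma weights; blend toward grayscale by Amount.
        float luma = dot(In, float3(0.299, 0.587, 0.114));
        Out = lerp(In, luma.xxx, Amount);
    }

    #endif
    ```

    Keeping the logic in an .hlsl file like this also means it survives graph edits and can be shared between graphs.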
     
    alexanderameye likes this.
  31. FGPArthurVII

    FGPArthurVII

    Joined:
    Jan 5, 2015
    Posts:
    104
    Good day everyone, I have a feature request for Shader Graph.

    When working with Shader Graph, I noticed that if my nodes are connected to the Master node, every update I make (adding nodes, properties, anything) gets slower and slower, so I have to stop and wait for Unity to recover before I can get back to work. That would be understandable if I were making a monster shader with mind-blowing effects, but my shader is not PC-killer stuff; the slowdown seems related to the number of nodes itself. It also happens when creating new properties: if you have many, type the property's name, hit Enter, and prepare to wait for Unity to finish its seizures. My PC has 8 GB of RAM, a Core i5, and a GTX 1080, so it shouldn't struggle that easily.

    I imagine it may be related to the fact that Unity has many systems and sub-systems all working to provide its features, so when it tries to update the shader it ends up doing a lot of work and takes time. If that's the case, one solution would be to make Shader Graph a separate, standalone program. Let it import your mesh, maybe link it directly to a prefab in Unity, and give me a simple model viewer so I can quickly check the results. Make it so it only has to update shader-related things. In fact, what if I want to close Unity altogether and have only Shader Graph running? It would be nice to just open Shader Graph and work on a shader, instead of opening Unity, opening a project, and waiting for it to load its thousand subsystems.

    Another benefit would be universal shaders, i.e. shaders that work across projects. If a project needs a specific shader I've previously made, I could just import it from the standalone Shader Graph instead of having it tied to one project's files. That way, if I start a new project and want the same shader, I wouldn't need to open my older project and export a package, or redo the whole shader; I'd just pull it from my Shader Graph library and that's it.

    Thanks,
    Arthur
     
  32. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    This thread is not about the shader graph, but rather Surface Shaders or some replacement for them in the new pipelines. There's an entire forum for shader graph requests.
     
  33. FGPArthurVII

    FGPArthurVII

    Joined:
    Jan 5, 2015
    Posts:
    104
    Can you please share the link?
     
  34. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
  35. FGPArthurVII

    FGPArthurVII

    Joined:
    Jan 5, 2015
    Posts:
    104
  36. id0

    id0

    Joined:
    Nov 23, 2012
    Posts:
    404
    Tessellation!
     
    florianBrn and Lars-Steenhoff like this.
  37. Owers

    Owers

    Joined:
    Jul 7, 2012
    Posts:
    39
    Everyone else has already mentioned what I'd like to see for surface shader functionality, especially jbooth. To summarise what I use surface shaders for that isn't possible with shader graph:
    • Custom vertex shaders.
    • Custom light functions/models for stylised shading.
    • Additional passes, with properties (name, tags, zwrite, blend, etc).
    • Expose surface properties in the material (opaque/transparent, double-sided, etc). Currently this is locked to the master node, so you have to create multiple shader graphs for opaque and transparent variants.
    • Multiple sub-shaders for different platforms/variations, though nowadays I'm moving towards using keywords instead.
    Also, defining a CustomEditor. I find the default material GUI to be very chunky and messy, especially for complex shaders. Artists are often overwhelmed when dozens of sliders and texture slots pile up in the inspector, so it would be nice to be able to compact things into drop-downs, headers, and collapsible GUIs.
     
    florianBrn and OCASM like this.
  38. BrianCraig

    BrianCraig

    Joined:
    Apr 13, 2020
    Posts:
    6
    Hi! Thank you for creating this tool; for me it's awesome. I know I'm a beginner at this, but it's pretty cool the way you can dynamically create shaders. People experienced in writing their own shaders in code will say it's not such a big deal, but honestly, Shader Graph gives a far superior understanding of what is happening compared to just throwing code around and crossing your fingers to see if it compiles.

    Aside from that, I'm really hoping for more maturity and stability for now; the community has to understand the tool before it gets more complex. For example, I filed a bug for the Enum properties and keyword nodes. That feature is really buggy, and it's something that would make my graphs a lot more dynamic; I checked different versions and the beta just for that feature.

    Thanks! Brian
     
  39. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,821
    This is not a thread about Shader Graph...
     
  40. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    755
    Just found this thread and also have some opinions about it. I like using ShaderGraph, but the options it allows for customization are just not good enough. I'm really tired of having to "Copy Code" on a master node and then changing the bits and pieces I need, to the point of having played with automatic "shadergraph post-processing" which feels just stupid given what this system is.

    Things that could help in the short term with ShaderGraph:

    - whoever thought that all those scope checks when connecting nodes would be a good thing was wrong. They prevent so many things that would actually work. I had to go into graph files and manually connect nodes just to work around this bad UX decision. I understand this might help beginners, but please just add a "turn scope checks off" toggle; the compiler can tell me just fine what I did wrong...

    - injection points for custom code at different places in the graph. Currently custom functions are added at the top, and to make matters worse they are added in different places in URP and HDRP. Injecting code through custom functions works, but not being able to define where it is injected is bad.
    (Example: I was able to add procedural instancing support to vanilla URP through injected code in ShaderGraph.)

    - ability to change input and output types of vert/frag directly. This is necessary for e.g. geometry/hull/domain and in the future mesh shaders. (currently have to do that manually)

    - I'd like for ShaderGraph to succeed - our artists were able to use it very flexibly. However, as tech lead I need the ability to adjust the "template" they work in without having to go all-in and modify the SRPs directly. This would be my preferred way: I can properly create custom masters, and they can work on graphs for those. (Yes, this is possible with hacks, as e.g. ShaderGraph Essentials shows.)

    Here's an example of the type of workarounds we currently have to do:
    (attached screenshot: subgraph-based workaround)

    This allows our artists to build URP shaders with ShaderGraph that support procedural instancing ("GPU Mesh Particles"). Those subgraphs use a number of really hacky hacks to inject code into the output shader. This kind of stuff must get easier.
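    For context on the injection-point complaint above, the procedural-instancing setup itself is only a few lines of HLSL once you can place it in the generated shader; a sketch using Unity's documented instancing hooks, where the _Positions buffer and ConfigureProcedural name are hypothetical:

    ```hlsl
    // Per-instance data fed from C# via ComputeBuffer (hypothetical buffer).
    StructuredBuffer<float3> _Positions;

    // Registers ConfigureProcedural as the per-instance setup function.
    #pragma instancing_options procedural:ConfigureProcedural

    void ConfigureProcedural()
    {
    #if defined(UNITY_PROCEDURAL_INSTANCING_ENABLED)
        // Build a translation-only object-to-world matrix for this instance.
        float3 p = _Positions[unity_InstanceID];
        unity_ObjectToWorld = float4x4(
            1, 0, 0, p.x,
            0, 1, 0, p.y,
            0, 0, 1, p.z,
            0, 0, 0, 1);
    #endif
    }
    ```

    The pain point is not the code itself but that Shader Graph gives no sanctioned place to inject the pragma and the setup function.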

    @Owers custom editor support is in latest 8.x/9.x on Graphics repo if you want to try it out.
     
    landonth and OCASM like this.
  41. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    755
  42. transat

    transat

    Joined:
    May 5, 2018
    Posts:
    772
    There is also a conversation about it here if you want.
     
  43. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    755
    Oh @transat thanks, I guess I didn't follow along well enough. I'll cross-post over there; this thread here seems abandoned (by Unity at least).
     
  44. phil_lira

    phil_lira

    Unity Technologies

    Joined:
    Dec 17, 2014
    Posts:
    552
    We are having some discussions about this internally, involving SRP + Shader Graph teams.
    Hang on a little bit more until we have an official message about this.

    Be mindful that, like Felix said, this is not official Unity work; it's more of a personal pet project for my own learning purposes, and if it helps some people along the way, that's great. However, it has no ambition to be an official solution.
     
    noio, DrSeltsam, GliderGuy and 4 others like this.
  45. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,239
    Just the productivity hit we are taking from SG is rather significant. The whole "it makes things easier on artists" argument doesn't hold up for a lot of games. Performance concerns drive consolidation, which increases shader complexity. You can also end up with custom generation at a certain point, which is a different type of codegen than what SG does, as it has different concerns. All told, it can quickly pass the point of what artists are capable of, and at that point Unity saying SG is there to make things easier for artists will either make you laugh or cry, or both.

    You also have other disturbing trends coming out of SG, like the fact that most complex SRP shaders on the Asset Store are the generated code with patches. Some authors see it as a new way to protect their work and won't even include the graph.

    Surface shaders can't come fast enough.
     
    Rich_A, Le_Tai, transat and 3 others like this.
  46. The-Wand3rer

    The-Wand3rer

    Joined:
    May 14, 2019
    Posts:
    22
    Would it be possible to add the object's orientation to the scene/object node? Currently it only provides the object's position and scale, but not the orientation. Thanks
     
  47. oobartez

    oobartez

    Joined:
    Oct 12, 2016
    Posts:
    83
    We need a way to modify the final color. A real-world use case is when you want to highlight an object in a way that makes it stand out regardless of lighting, such as an object the player has selected in the game. In standard surface shaders, you could just add a finalcolor function that let you modify the color. I couldn't find a way to do that in Shader Graph, and it is actually the one thing stopping us from migrating a fairly large project to SRP.
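    For reference, in the built-in RP this was a few lines in a surface shader; a minimal sketch (the _Highlight properties and function name are made up, the finalcolor modifier is the documented surface shader feature):

    ```hlsl
    // Built-in RP surface shader: finalcolor runs after lighting, so the
    // highlight is applied regardless of lights/shadows.
    Shader "Custom/SelectableHighlight"
    {
        Properties
        {
            _MainTex ("Albedo", 2D) = "white" {}
            _HighlightColor ("Highlight Color", Color) = (1,1,0,1)
            _Highlight ("Highlight Amount", Range(0,1)) = 0
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            CGPROGRAM
            #pragma surface surf Standard finalcolor:ApplyHighlight
            sampler2D _MainTex;
            fixed4 _HighlightColor;
            half _Highlight;

            struct Input { float2 uv_MainTex; };

            void surf (Input IN, inout SurfaceOutputStandard o)
            {
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            }

            // Blends the lit result toward an unlit highlight color.
            void ApplyHighlight (Input IN, SurfaceOutputStandard o, inout fixed4 color)
            {
                color.rgb = lerp(color.rgb, _HighlightColor.rgb, _Highlight);
            }
            ENDCG
        }
    }
    ```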
     
  48. MattFS

    MattFS

    Joined:
    Jul 14, 2009
    Posts:
    219
    Adding attributes to parameters, just like the URP/Lit shader:

    [MainTexture] _BaseMap("Albedo", 2D) = "white" {}

    Being able to add headers to help make the UI reasonable (having to write a shader editor for this is a bit of a joke).
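    For what it's worth, several of these already exist as ShaderLab property attributes in hand-written shaders; a sketch of a Properties block using documented attribute drawers (the property names are illustrative):

    ```hlsl
    // ShaderLab property attributes: headers, toggles, and enum drawers
    // work without a custom editor in hand-written shaders.
    Properties
    {
        [Header(Surface)]
        [MainTexture] _BaseMap ("Albedo", 2D) = "white" {}
        [MainColor] _BaseColor ("Color", Color) = (1,1,1,1)

        [Header(Options)]
        [Toggle(_ALPHATEST_ON)] _AlphaClip ("Alpha Clip", Float) = 0
        [Enum(UnityEngine.Rendering.CullMode)] _Cull ("Cull", Float) = 2
    }
    ```

    The request is essentially to expose this same mechanism for Shader Graph properties.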

    Maybe take some notes from Timeline editor? We are able to easily re-arrange tracks and name them and also group them and re-arrange groups! Could be a nice way to unify the expected behavior of a built-in Unity UI.

    ......
    At the end of the day though, and this has been said already, graphs won't replace written HLSL in many cases... on a number of productions I've been on (UE4, Anvil, Unity), shader graphs are fine for TAs and artists to get a certain look quickly, but the lion's share of the shader work on any serious project is done at the text level.
     
  49. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    4,943
    heat map the graph
     
  50. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,154
    What are the things you would build in a surface shader system that are not possible within ShaderGraph as it stands?
    so what is impossible with shader graph right now?
    • Tweaking the stencil buffer,
      and doing so for each single pass. A surface shader system should support this.
    • Assigning work to the vertex and/or fragment shader, and custom v2f interpolators.
      Shader Graph only lets you tweak positionOS, normalOS and tangentOS. With that you can't even write a simple SpeedTree-like billboard shader.
      So what we need are our own vertex-to-fragment interpolators which do all the heavy work (or work impossible to do in the fragment shader) and pass the data down. Imagine a shader doing per-vertex adjustments, like tweaking the vertices to match the terrain surface: the vertex shader already has to fetch the needed information, like the height and normal of the terrain, so it could easily send a simple final "blend" value down to the pixel shader, instead of having the pixel shader fetch all this information again on its own. Right now: impossible.
    • Custom lighting functions.
      Especially as URP is still forward-only right now, custom lighting functions should be first-class citizens here. That doesn't change once deferred is officially released.
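    To illustrate the first point: per-pass stencil state is a handful of lines in hand-written ShaderLab but is out of reach from Shader Graph; a sketch (pass name and Ref value are arbitrary):

    ```hlsl
    // Plain ShaderLab: mark the pixels this pass touches so a later
    // outline/highlight pass can stencil-test against them.
    Pass
    {
        Name "MarkStencil"
        Stencil
        {
            Ref 1
            Comp Always
            Pass Replace
        }
        // ... vertex/fragment program for the pass ...
    }
    ```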
    Aside from the original question as quoted above: what could a future surface shader system do better?

    Actually, I was quite pleased with the latest surface shaders in the built-in RP,
    and later additions like VFACE and finalgbuffer made them a very powerful abstraction for writing all kinds of different shaders.
    However, there still are/were some caveats:
    • Accessing TBN and other members.
      Not an easy task, as far as I remember. A new system should make it easy to access all "IN" params. HDRP, I think, does quite a good job here with its FragInputs structure:
      if I need viewDirTS, I will find it there, as well as viewDirWS, both created on the fly if needed.
    • Small vertex footprint.
      I am not sure about the old surface shader system here, but I know HDRP uses "AttributesMesh input" as the vehicle for per-vertex changes. Bad, bad idea: I can only output UVs if UVs are declared in AttributesMesh. But what if I create UVs on the fly? What about normals created on the fly? There I don't have any per-mesh input but still want to output them to the fragment shader.
      Shader Graph, by the way, generates an enormous vertex footprint, way over the top. That's the first thing to address, as it is a bug!
    • Input params being stripped.
      Ever tried to get worldPos and pass it to a hacked include? No chance: if worldPos did not affect any of the base surface shader outputs like albedo, normal, smoothness, ..., it would get stripped, even if it was passed correctly to a later stage like a hacked "autolight" include.
     
    OCASM, Jes28 and Edy like this.