
Official What is next for us at Unity with Scriptable Render Pipelines

Discussion in 'General Graphics' started by natashat, Jul 2, 2020.

Thread Status:
Not open for further replies.
  1. They don't get rid of the packman, just the preview packages. Basically, any package which doesn't get released soon(TM) will be removed by default and will only be available if you poke around in the manifest file.
    https://forum.unity.com/threads/visibility-changes-for-preview-packages-in-2020-1.910880/

    And in this regard it's similar with the SRPs: if you want to use a newer version (for example, for the Hybrid Renderer), you also have to poke around in the manifest file and juggle the version numbers by hand.
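
    For context, the "juggling" in question is a hand edit to Packages/manifest.json, something like the following (the package names are real; the version strings here are only illustrative):

    Code (CSharp):
    {
      "dependencies": {
        "com.unity.render-pipelines.universal": "9.0.0-preview.1",
        "com.unity.rendering.hybrid": "0.5.0-preview.1"
      }
    }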
     
  2. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,697
    I’m just not a fan of where the package manager is going. I’m concerned this half and half approach will end up crippling both camps.
     
  3. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    Please take this to the thread about the package manager, as this is tangential to the actual SRP changes.
     
    JoNax97 and a436t4ataf like this.
  4. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,697
    Fair enough, will do
     
  5. Grimreaper358

    Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    789
    This isn't true. There are two ways to include packages from GitHub in your project.

    - Direct branch clone, then link to it from the editor or in the manifest if you choose.
    - Direct branch download, then add the packages from the download to your project's Packages folder (where the manifest sits).
     
  6. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    I actually think rewriting the Lit shader in a graph is a pretty bad idea. It isn't what graphs do well, and it wouldn't provide a very good example to work from. The Lit shader is essentially a mega-shader, which uses keywords to provide a lot of options for the user and generates thousands of possible shaders. However, most of the shaders it generates are actually quite simple when expressed in a graph. For instance, one might just be an albedo texture, normal, and mask map plugged into the appropriate inputs. So what you'd end up with is a very large number of nodes that exist simply to deal with compile-time variants, with very little actual code. This would make it a confusing example for new users of the shader graph, and not at all useful as reference.

    Where shader graphs really shine is in matching custom shaders to the artwork, and the examples included with Amplify Shader Editor are more fitting - they are generally small graphs showing you how to perform common techniques. Further, teams which are heavily shader graph based tend to have less use for mega-shaders, because it's simple to roll your own for the options you need and customize it to exactly what you need. This massively reduces compile-time variants of the shader vs. a mega-shader system.

    Now, rewriting the Lit shader in the new surface shader framework does make a lot of sense. Forcing everything through the same abstraction layer (including the shader graph itself) will help dogfood the system, and guarantee that there aren't inconsistencies in the output like there are between the current standard shader and surface shader frameworks. Further, it provides a reasonable shader system for people who are not familiar with graphs. That all said, it would be lovely if the default objects did not use this as their shader, and instead used a very simple shader, such that the Lit shader could be removed to prevent all the complications from compile time variants and keyword usage. Then projects which don't opt in to the Lit shader, or remove it completely, don't have to worry about stripping variants and such to get compile times down.
     
  7. Tanner555

    Tanner555

    Joined:
    May 2, 2018
    Posts:
    78
    SRP being integrated into core Unity is a reversal of the push to separate core essential features into packages that must be installed via the package manager. This could be the start of the end of requiring the package manager to install most Unity features in the future. Hopefully Unity will continue to open source core engine features.
     
  8. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    Once again, this is not the package manager thread; please take it over there if you think this is some slippery slope that leads to the removal of the package manager. This thread is supposed to be about the SRPs, not conspiracy theories about how Unity is going to move to closed source.
     
    a436t4ataf likes this.
  9. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,697
    Knowledge can give you a platform to speak, but respecting others is how you get them to follow your ideas. Maybe this is why you have been screaming for years but seeing little action.
     
  10. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    1,924
    Jason can be prickly and brusque, but ... his tech knowledge is excellent and he is clear, focussed and concise in his tech conversations. Many others have also been ignored on this topic, despite their experience and credentials, so let's focus on the topic and steer away from personal attacks. It will be particularly painful (and ironic!) for those of us invested in the SRP problems if our 2020 attempt to steer the ship away from disaster gets buried in off-topic discussions about why key voices were ignored for the past few years and descends into ad-hominem attacks.
     
  11. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,697
    If jbooth were on the receiving end of this prickliness I would, in fairness, feel the need to say something there too. But you are correct, let's focus on the problem. My apologies to you, jbooth.
     
  12. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    How I see it, there are two camps here: one taking the asset publisher's role, and the other the actual engine users. It's important to see this right away, as it clearly drives people's motives in discussions like this. Both of these groups are important for Unity's ecosystem, but you can't just neglect one to fix these issues; both need to be taken into account.

    This is going to be a lengthy post so I'll split it between some of the topics covered in the original post:


    SHADER API BREAKAGE
    This clearly comes from the asset publishers' side of things, and the complaint has been more about things changing throughout the minor versions of SRPs - like having to support 2-4 different variants of a package for a single major SRP version because of API changes. For example, what was written here
    isn't really how it went in the previous SRP release cycles: we've always gotten breaking API changes in the middle of major SRP versions, which is the main issue here. Do note that the complaints here aren't really about the Shader Graph but more about regular shader code, which is pretty wild west atm and, on top of that, mostly undocumented.

    How I see it, updating the API between, say, SRP 7 and SRP 8 is a non-issue even from the asset publishers' point of view; things breaking in the minor versions (like 7.x) is the issue.

    I'd like to note that API changes between major versions were how it used to be even on the built-in renderer, back when it was still being actively developed. I think some people have golden memories of how things used to work: the reason the built-in renderer's API is not constantly changing anymore is simply that there's really no new development on it; everything new goes to SRPs. I stress that this is not a normal situation, and you can't naively compare built-in renderer dev to what happens with SRPs today, because one has basically been the same version for years now and the other gets multiple versions each year. It's not a fair comparison and should be taken into account when discussing the API breakage / maintenance work.

    I haven't seen people complaining that much about Shader Graph documentation; how I see it, it's pretty self-explanatory for most nodes regardless of how the docs go about them. What is practically fully undocumented is the coding API for anything SRP related. Even if you don't do anything custom but just want to change, say, a post-processing effect on an HDRP volume, there are zero examples in the docs on how to do this. Writing custom shader code is very neglected in the docs (there are pointers for the entry points, but nothing that guides you on how to use the API itself once you get there; you are on your own today).

    Before anyone brings up the Scripting API docs... they're basically useless for figuring these things out. These autogenerated docs do very little for users trying to figure out how to use the thing: in 99% of cases the scripting API docs just state in one sentence what the method does - which is already apparent from the method name alone. I feel the whole scripting API reference is a lost cause as it is now, as there's no point in even looking at it (this applies to most autogenerated scripting API docs, really, as long as people use them this way). There are some exceptions even in the current scripting API, but those few exceptions don't really fix what's essentially broken for most.

    All this being said, I'm hoping that at least some of this is going to get addressed in the future by this work:


    SRP PACKAGES STRIPPED OUT OF PACKAGE MANAGER

    This I couldn't disagree more with. I feel like this is a huge regression and, frankly, a mistake. Package Manager updates for individual features are one of the best things Unity has done in recent years IMHO. Yes, there are issues with PM, but instead of going back to the old and already proven nonfunctional pattern (shipping a lot of features as part of the engine core), Unity could just fix the current issues with PM.

    I'll try to make a few points on why this is a bad call:

    - Having SRPs out of PM essentially blocks most users from getting any hotfixes.
    - There hasn't been a single verified SRP release so far that didn't have issues.
    - Now, to get a fix for your SRP without git, you have to update the engine and possibly get new unrelated bugs/issues along with it.
    - This won't fix the asset publishers' main issue, which is having to support multiple SRP versions within the same Unity major version: there will be multiple SRP versions regardless, they will just be scattered across Unity's minor versions. From the asset publishers' POV, this essentially changes nothing despite their initial impressions.
    - All this does is take away things from users; it doesn't give any gains to anybody, even those asset publishers.
    - Forcing the only update path to be git pretty much limits the users who can do this to a fraction. If you bring up the git alternative as a redeeming quality for this change, you clearly haven't used git from PM yourself. Not to mention that for git to work, you actually have to have git set up in your CLI.
    - Even with git set up, you can't just casually browse for an update. You have to navigate the monster which is the Graphics repo on GitHub (I know how to navigate it and find the relevant release tags just fine, but imagine any user new to git doing this as a first step) and then manually type a string in a certain format into PM to get each update (see the sketch below).
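
    For reference, the "string in a certain format" is a git URL plus a path query and a tag, entered in PM or in Packages/manifest.json. A sketch (the tag at the end is a placeholder; you have to dig the real one out of the repo's release tags):

    Code (CSharp):
    {
      "dependencies": {
        "com.unity.render-pipelines.universal": "https://github.com/Unity-Technologies/Graphics.git?path=/com.unity.render-pipelines.universal#some-release-tag"
      }
    }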

    How I see it, there would be multiple better alternatives than SRP as part of core:

    - Simply don't put breaking API changes into fully released Unity engine versions at all. This alone fixes most of the issues. Asset publishers don't support alphas and betas, so for them changes in pre-releases are a non-issue, and with released versions people expect stability.
    - Don't backport SRP features into the SRPs of already released engine versions. I know Unity is already doing this for most things, but it could be enforced even further. I know this is going to split opinions, as people always want their favorite thing included, but this is essentially the one thing that drives the API changes.
    - If you have to update the API, have a clear cycle for it. I feel you shouldn't change the API at all for a specific Unity engine version's SRP after the engine version is released, but I can see this being problematic for LTS versions.
    - For LTS you could promise that you only bring in new API changes, say, every 3 or 6 months, and then stage these changes to release at the same time, minimizing the times people need to upgrade their code.
    - Make PM enforce verified packages more, but don't take away the ability to manually update packages if needed.
    - Make PM resolve mismatched packages better: the current dependency-based approach always fails if the user has manually installed mismatching SG, RP core and other RP packages. It should detect the packages that are required to be the same version and enforce installing the "companion" packages unless the user truly wants to override this.


    SRP UPGRADE PATHS

    These are all nice to have, but what people really struggle with today is that these are one-way streets. You can't convert back to built-in or, more importantly, convert between URP and HDRP, which is a really common issue while people are still not sure which one to choose. I feel that if Unity gets URP and HDRP to play ball together more in the future (like the plan apparently is), this will be less of an issue, but it is an issue today.


    SRP CROSS-COMPATIBILITY

    This is something I love to see in this post. As some of you (both Unity employees and users) know, I've been doing prototypes of runtime swapping between LWRP/URP and HDRP for quite some time now. I have a proof of concept that dates a long way back, and I recently did a new prototype using the new SG stack change, which is amazing for SRP cross-compatibility: no more shader swapping on materials, since you can have both URP and HDRP shader passes right there in the same shader graph file.

    I get that the ultimate goal for SRP cross-compatibility isn't users being able to runtime-swap between SRPs, but rather being able to author the same project for different platforms using different SRP targets. Still, I welcome the opportunity to take this even further.
    This is essentially what I did in my last prototype, although I could now reuse the same materials on both SRPs as long as I used custom SGs that targeted both URP and HDRP: I had SRP-specific components in an additive subscene which I loaded based on the SRP asset set in the graphics settings.

    I don't understand why you label the first stage as 2021.1 though, as the Shader Graph stacks change already landed in the 2020.x SRPs?

    As for future work on this, right now the biggest pain really is the SRP-specific data components which currently attach automatically to camera and light components. I strongly feel it shouldn't be this way. These settings should eventually stack like shader graph targets now do after the recent change: we should be able to just open the camera component and have expandable HDRP- and URP-specific settings under it. Same with light sources. Today these components have options that remove or override other settings, so you simply can't have both URP and HDRP data components there at the same time.

    Additional things that are a bit of a pain today when having both URP and HDRP in the same place (I'm trying to be brief here, as this isn't that important in the big picture, just wanting to note them):
    - URP and HDRP sky is set up totally differently; this could be unified.
    - While URP uses a seemingly similar Volume framework, if you have HDRP effects in a Volume and then swap your project to URP, none of the URP effects you add to the same volume work. There needs to be a separate "URP Volume" in an additive scene for this to work today, which isn't obvious at first at all (probably something that HDRP overrides now).
    - HDRP exposure can currently carry over to URP upon SRP asset swap, but since URP doesn't have a concept for this, you can't fix it from the URP side.

    I'll probably post another thread on the SRP swapping topic in the future as, again, this isn't that important in the bigger picture here; I'm just hoping Unity will address the small pain points too while going further with this.
     
    Last edited: Jul 6, 2020
  13. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    As additional thoughts about the Shader API:

    - While a stable scripting API for custom shaders is preferred by many, there is already an abstraction layer, and that is SG.
    - SG has been problematic in the past for really custom stuff, as you can't hook up everything there and have your thing work with the stock SG package.
    - I assume similar issues would arise even with a text-based shader code abstraction layer, so that wouldn't be a magic bullet either (chances are such a layer would still abstract away things you needed).
    - Current SG extensibility pain points would need to be fixed regardless.
     
  14. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,324
    just adding +1 for this, because in scripting:
    - you can easily copy-paste (from docs/web)
    - comment/uncomment 1 or more lines to quickly test things
    - jump to the error line in 1 click (without zooming/panning around a canvas)
    - can open 2 views of the same shader code
    - can see which lines have recently changed in Visual Studio
    - swap swizzles, invert values etc. in literally 1-2 keystrokes (instead of trying to pull wires into the right slots, which seems to always fail when you watch people working on shader graphs in streams: they try to connect a wire, it doesn't connect, it's the wrong wire or the wrong input or some other issue)
    - and probably many more..
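
    To make the last few points concrete, here's a toy fragment snippet (the names are made up) where each variation is a few-character edit in code that would mean rewiring nodes in a graph:

    Code (CSharp):
    half4 col = tex2D(_MainTex, i.uv);
    // col.rgb = col.bgr;      // swap a swizzle: two characters
    // col.rgb = 1 - col.rgb;  // invert: a few keystrokes
    return col;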
     
    Ruslank100, OCASM, protopop and 6 others like this.
  15. eagle555

    eagle555

    Joined:
    Aug 28, 2011
    Posts:
    2,705
    Totally agree^ and also with all of Jason's points. The biggest difference is that with code you can reach orders of magnitude better performance, endless complexity and customization. E.g. the CTS terrain shader (made with a graph) vs. MicroSplat, hand-coded by Jason: MicroSplat is orders of magnitude faster than CTS while having tons more features...
     
    OCASM, knxrb and bac9-flcl like this.
  16. protopop

    protopop

    Joined:
    May 19, 2009
    Posts:
    1,557
    I have your asset and I think it's great. And this is a good example of one of my Unity fears. I love the overall Unity experience, but the asset store is the main reason I don't think about leaving Unity: there are 2 or 3 irreplaceable assets I use that don't exist in the other two engines. But now I am seeing the number of assets in the store decrease, and many assets I've purchased become obsolete, in part because of SRP. I'm also seeing many posts from asset store authors talking about the difficulty of working around SRP. I really don't know what the future holds here, but it is hard not to be worried about it.
     
    SMHall, JBR-games and Elringus like this.
  17. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,219
    Thank you for all the great feedback, I'm going through it all right now - there are a lot of things to break out and respond to, so it will take some time for me to get through all of this (maybe a few days).
     
  18. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,594
    Not really, but I'd welcome some examples.
     
  19. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    Even simple things like shader macros were constantly broken and/or changed the way they behaved back in the old Unity days. I've had to write totally different shader code for, say, Unity 5.0 than for the Unity 5.x versions after it to make it do the same thing. It was largely a process of trial and error before issues became more widely known. Stuff like this doesn't really happen anymore, as there really isn't active dev on this part (unless we go to SRP land).
     
  20. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
    You can still go with standard rendering. Standard is not bad at all. If someone asks me "what rendering do I need to use", I answer: CustomRP, Standard, HDRP, ..., ..., ..., and never, never use URP.
     
    protopop likes this.
  21. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,594
    Fair enough. I've had shaders from 3.x all the way to 2019.4 that more or less worked without major changes (I don't consider macros changing to be that major... some of them even auto-upgraded, and there weren't even that many AFAIK).

    The biggest changes I guess happened in early 5.x, where you could fine-tune your surface shaders, and then Unity would change fundamental things about how the shaders worked and you'd get very different results from version to version. But outside of that early 5.x era, I don't think built-in underwent anything compared to the changes that current URP and HDRP go through from version to version.
     
  22. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    I think you might be talking about surface shaders vs. vertex/fragment shaders. A surface shader written in 5.2 will work fine all the way up to current Unity, while a vertex/fragment shader required extensive changes on nearly every release until they stopped working on standard. Supporting vertex/fragment shaders was just as hard as supporting a single SRP is today.

    I literally stopped working on MegaSplat and created MicroSplat because of the trouble with supporting vertex/fragment shaders, and decided I wouldn't add anything that surface shaders didn't support. In the end, there isn't really anything I haven't managed to get working under surface shaders; while sometimes a bit funky, and sometimes there are bugs in the system I can't get around (Draw Instanced on terrains with tessellation not passing the instance ID), they are surprisingly complete.

    My biggest fear with Surface Shaders 2.0 is that they will be a subset of the features that Surface Shaders 1.0 supported, and that I'll never be able to move my product line to them, and remain forced in vertex/fragment land, where the upgrade path is still a major issue. That's why I have some very specific details in my pitch for them, things like making sure parameters get passed to functions as inout so anything can be modified in the surface shader functions, or that custom attributes can be passed between the shader stages (even in tessellation, which the original did not support). It's also why I think pushing as much as possible through them makes a lot of sense (re-write lit shader, shader graph outputs to surface shader, etc). The more complete this abstraction layer is, the less chance someone has to write a vertex/fragment shader directly.
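
    To illustrate the inout point, a made-up signature showing the pattern being pitched (not an existing API):

    Code (CSharp):
    // With everything passed as inout, a module can modify any of the
    // shared data, not just write to a fixed output struct.
    void SurfaceFunction(inout ShaderData d, inout SurfaceOutput o)
    {
       o.Albedo *= 0.5;               // outputs can be modified...
       d.TangentSpaceViewDir.y *= -1; // ...and so can upstream data
    }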

    I think the reality is that each layer will close off some possibilities for gains in productivity. That is usually the fundamental tradeoff, and actually ok. Each layer should try to provide the widest possibility space that it can within the constructs of what it does well. And if you can drop down to a lower layer when you need it, like using a code node in the shader graph, or asm in C++, that is an acceptable solution to widen that scope. Right now the closed API on shader graph is its biggest limiting factor.
     
    OCASM and a436t4ataf like this.
  23. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,113
    I will propose what I think is a better solution to this problem. I agree that each shipped version of Unity should have Universal Render Pipeline (URP), High Definition Render Pipeline (HDRP), Shader Graph, and VFX Graph by default as part of core Unity, but overriding the graphics packages with a custom fork or branch from the git repository is not enough.

    The better way to do it: for most users, although it's now part of core Unity, the user can still see the version of each graphics package in the Package Manager, but cannot see or install any other version of a graphics package by default.

    An advanced user who would like to change the version of a graphics package would have to go to Advanced - Unlock advanced user mode and type a special unlock code provided in the Betas & Experimental Features - Preview Packages section of the Unity forum. So most things stay just as they are now, but with a few modifications to prevent most users from simply changing the version of a graphics package and breaking something. The goal here is providing a good default for most users while still being able to unlock extra features for advanced users.

    For custom forks or branches of the graphics git repository, I would like to see an awesome visualization of all branches, and to be able to go to a specific branch and specific commit easily from the Package Manager instead of only being able to edit the manifest file with a text editor.
     
    Last edited: Jul 6, 2020
  24. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    This wouldn't work with the current Graphics repo, as it now has close to 1000 branches :D I guess it could work if you could search the branches like on GitHub.
     
    optimise likes this.
  25. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    So something I've been thinking about a lot is shader modularity.

    Right now, a common approach to this is to move code and passes into various .cginc files, and then have a top-level shader file which has property definitions, pragmas and includes, where most of the actual data/code is buried in these cginc files.

    Unity's shader graph output goes even further, having the actual functions for the shader stages in include files, which call up to the main shader file to do the processing.

    I personally find that at some complexity level, this becomes hard to work with and reason about. There are lots of packing and unpacking functions, functions in your main shader that get called from deep within an include chain, structures not defined where they are used, dependencies going every which direction, etc.

    Now, previously I described a surface shader 2.0 model that uses a Scriptable Asset Importer and parser to insert code, properties, etc., into a template. This is essentially how the shader graph works, so it's pretty likely that we will see this in SS2. I have heard from various Unity people that they want to avoid getting into a large AST-like parser, and to me that makes a lot of sense - something fairly straightforward should be fine.

    In my example, I had a block of code that might look like this in the surface shader:

    Code (CSharp):
    BEGIN_PROPERTIES
       _MyVector("MyVector", Vector) = (0,0,0,0)
    END_PROPERTIES
    Then when generating the actual shader, the parser does something like:

    Code (CSharp):
    string properties = parser.GetBlock("PROPERTIES");
    shaderOutput = shaderOutput.Replace("//INSERT_PROPERTIES", properties);
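
    As an aside, such a parser really can be stupidly easy to write. A minimal GetBlock in plain C#, assuming well-formed, non-nested blocks (a sketch, not anyone's actual implementation):

    Code (CSharp):
    using System.Text.RegularExpressions;

    static string GetBlock(string source, string blockName)
    {
        // Grab everything between BEGIN_<name> and END_<name>.
        var m = Regex.Match(source, $@"BEGIN_{blockName}\b(.*?)\bEND_{blockName}",
                            RegexOptions.Singleline);
        return m.Success ? m.Groups[1].Value.Trim() : string.Empty;
    }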
    I posit that a lot of modularity can come from having an include block as well.

    Code (CSharp):
    BEGIN_INCLUDES
       "Assets/Shaders/Foo.subshader"
       "Assets/Shaders/Bar.subshader"
    END_INCLUDES
    Now when GetBlock is called, it searches through all of these files and grabs the properties block from them as well.

    What this would mean is that you could have multiple files contributing properties, cbuffer entries, code, etc., to the final resulting shader. This gives you much better code locality, allows for easier reuse, etc.

    Because HLSL now allows for interfaces, we could implement something like the URP lighting model this way. While you could easily change your include files around to do this in the current structure, you would have to add any properties and cbuffer entries to the top-level shader as well. Now that data could be included entirely in a single file, and using it would not require any knowledge of how it works or what data it needs. For instance, if I have some texture-based ramp lighting model, I could just:

    Code (CSharp):
    BEGIN_INCLUDES
       "Assets/Shaders/RampLighting.subshader"
    END_INCLUDES
    And the property for the texture, any cbuffer entries, etc, come along with it.

    This could be easily extended so that the ScriptableAssetImporter allows you to add these includes to an existing shader, or have some dummy file type that does this. For instance, you could download a cel-shaded lighting model and apply it to your existing shaders, rather than having to modify them to use the new model.

    MicroSplat/MegaSplat are both shader generation systems; when you change an option, they rewrite the shader code by combining various .txt files with code, properties, cbuffer entries, etc to produce the resulting shader. With this simple include block/parser change, a large part of that modularity is easily exposed for everyone who writes shaders.

    Further, I would propose a define block as well, which I could use in an included feature file:

    Code (CSharp):
    BEGIN_DEFINES
       #define _SOMEFEATURE 1
    END_DEFINES
    This would get inserted at the top of the shader passes, such that your main code can now do:

    Code (CSharp):
    #if _SOMEFEATURE
       DoSomeFeature(i, o);
    #endif
    Note you'd still have to be wary of code ordering issues- unfortunately HLSL is not like C#, and requires functions to be above callers in the final code. But effectively I could rewrite the bulk of MicroSplat to just spit out an include block with the major modules you have active, and set some defines for features. The resulting shader might look like:

    Code (CSharp):
    BEGIN_INCLUDES
       "MicroSplat/DistanceResampling.subshader"
       "MicroSplat/Streams.subshader"
       "MicroSplat/Core.subshader"
    END_INCLUDES
    Note that in my proposed model (in the original post), structures are defined for all the common stuff people use, i.e. TangentSpaceViewDir, WorldSpaceViewDir, the TBN matrix, the appdata structure, the v2f structure (with custom0, custom1, etc.), and so on. The parser simply searches the file for any of these strings, and if they exist, it leaves the code to generate that data in the resulting shader. If by some chance the string is there but the user doesn't actually use it, the shader compiler will strip the code anyway.

    Due to this standardization of naming conventions, subshader code can safely call s.WorldSpaceViewDir and know that it will magically compile regardless of what is in another subshader or the top level shader file. Yes, it means you can't rename this viewDir, or name the data you stuffed into UV3 something meaningful, or name the v2f structure whatever you like, but I think that's mostly a good thing.
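
    For example, an illustrative fragment under that convention (_RimColor is a made-up property):

    Code (CSharp):
    // The parser sees "WorldSpaceViewDir" used below, so it emits the code
    // to generate it; no plumbing needed in the top-level shader file.
    half rim = pow(1 - saturate(dot(s.WorldSpaceViewDir, s.WorldSpaceVertexNormal)), 4);
    o.Emission += rim * _RimColor.rgb;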
     
  26. Bordeaux_Fox

    Bordeaux_Fox

    Joined:
    Nov 14, 2018
    Posts:
    589
    Unity should just buy MegaSplat and MicroSplat and hire Mr. Booth. They would save so much time. :)
     
    SMHall, Radivarig, Ruslank100 and 2 others like this.
  27. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,594
    I mixed the two, actually. The early 5.x era I was referring to was surface shaders, but in the case of my own shaders I was talking about vert/frag shaders, and I really didn't have a hard time maintaining them. That said, my shaders don't really interface with a lot of Unity stuff (I do my own lighting / box-projected cubemaps / mirror reflections / fog etc.) and they are all relatively simple. And with the exception of some macros and the way you call some matrices changing from time to time, I haven't really had much trouble.
     
  28. JTAU

    JTAU

    Joined:
    May 12, 2019
    Posts:
    24
    One of the biggest issues I have with the current SRP updates is how they are scheduled. If you are waiting for a feature, for example light cookies for URP, that is essentially a whole 2-year wait, because it's not going to be backported to 2020, and it's not practical to even start developing with 2021 until it is in late beta.

    With these packages being integrated directly into Unity, this is probably going to mean even less flexibility here.

    I don't think it's a good idea to tie SRP updates to a Unity version because of this; the updates on the URP board are already tied to versions, and this is already a problem.
     
  29. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Yeah, I feel I'm going to be stuck on the legacy renderer for many years yet. Until I can count on 99% feature parity (even if features don't work exactly the same) of URP/HDRP with legacy, it's just too much of a risk to even look at starting a client project with them.

    I'm really puzzled as to how this all happened.

    When I initially heard of and explored the original scriptable render loops (must have been more than 3 years back) I was super excited. The concept looked to bring full control over the render pipeline. From memory there were a few missteps and the API changed radically between updates, but I had faith in its future. Then we had the name change and the promise of an HD renderer built on it, and a little later the LWRP. It's at this point that I feel things went off track.

    My expectation at the time, before Unity even started making the HD/LW renderers, was that they would make a legacy renderer replacement. A literal drop-in replacement that replicated the old legacy renderer to 99%. Sure, some changes would be expected; maybe you couldn't literally have grab pass at the same granularity as before, but it would cover most cases. This would provide the perfect test case for the SRP API, ensure that obvious cases (stacked cameras, projectors, grabpass, image effects, etc.) would be accounted for, and importantly provide a good A/B test to ensure performance was maintained or improved over legacy. Once the replacement was at a good stage, work could then start on new renderers (ideally just one built on new technologies) like HD or Universal.

    Sadly, I think Unity just got excited and ran ahead with the possibilities. Even now I would honestly like to see a legacy replacement developed. I assume it's possible? If not, surely that means the SRP API still has fundamental issues?
     
  30. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,219
    There is A LOT of minutiae and detail here about very specific things. We've been reading through it and internalizing a lot of it. What I want to respond to are some higher-level things.

    Shader graph vs Surface shaders
    We understand there are use cases for both; I want to walk the thread away from arguing about whether we should have one or the other. We want to have both - let's go for constructive conversation instead of arguing over which is more important for individuals and why.

    Core packages vs non-core packages
    There are a lot of varying opinions on this, and we have to balance a number of things: the story for users, simplicity for asset store developers, the ability to deliver fixes in a timely way, and our internal developer processes. Our goal is to balance all these needs but still allow power users to do custom things. We're talking with people from this thread who voiced uncertainty about core packages, to better understand the worry there. Our driving force, though, is to have a great solution for the majority of users. Right now the equation feels the wrong way around.

    HDRP Feedback
    I'll let someone from the HDRP team come and weigh in on some specifics here.
     
  31. valarnur

    valarnur

    Joined:
    Apr 7, 2019
    Posts:
    436
    Could you do a clean-up and data-structure reorganization of the whole of Unity so it uses less drive space and isn't scattered around?
    2019.4 takes 5.7GB, plus around 3.5GB for SRP and 400MB for HDRP per project.
     
  32. Bordeaux_Fox

    Bordeaux_Fox

    Joined:
    Nov 14, 2018
    Posts:
    589
    I'm just wondering why it took Unity so long to understand that for URP, people actually demand the same features as built-in. Just look at the URP forums: they're scattered with feature requests and people discussing over pages how a certain missing feature affects their art style. In the first post, it is mentioned that a lot of missing features will come in 2020.1. Still, I'm missing Light Probe Proxy Volumes and Projector for URP. They're not on your list.
    I just want to ask Unity how we are supposed to properly light large dynamic objects without Light Probe Proxy Volumes? If you've ever worked with dynamic objects in a high-frequency scene, you'd know that a single blend probe is not enough to keep large objects from looking out of place regarding the GI. Again, this is something people just expect from you: URP has to have (at least) the same feature set and quality as built-in, otherwise it is not a selling point. It's also not good for your marketing. While other engines present really huge new features (just look at the tech Unreal is working on right now), Unity struggles with realtime point light shadows. This surely does not blow developers away. You just make the engine look outdated and obsolete when standard features are missing.

    Also, the timing of when the missing features arrive is very bad. You announced LWRP/URP way too soon as production-ready when in reality a lot of features were missing. And developers mostly just want to work with an LTS version of Unity. Now they are in a dilemma: they are stuck with 2019.4 LTS, but all these missing features will not be backported. They've had bad experiences with unstable and bugged versions of Unity and don't want to work with anything but LTS versions.
     
    Last edited: Jul 7, 2020
  33. Refeas

    Refeas

    Joined:
    Nov 8, 2016
    Posts:
    192
    I think there should be an exception made for LTS feature backports, due to URP missing a lot of the basic features, at least to the point of decent feature parity. I don't like the idea of being forced to upgrade to bleeding-edge tech releases of Unity just to get BASIC (not new or shiny) functionality in URP.

    What I (and I bet many others) would like to have is an option to use the latest URP with point light shadows, SSAO and a deferred renderer (which I would say are absolutely essential) on 2019.4 LTS, but in an officially supported way, not by doing some hacky backports ourselves.

    Once URP stabilizes feature-wise, you can switch to the standard LTS model (aka no new features).
    This way, users who would like to use the new features would upgrade URP to, say, 9.x.x on 2019.4 LTS, and people who don't need those features (which probably isn't many...) would stick with 7.x.x.

    Otherwise we would have to wait for what? Another year to get those basic features in and upgrade to 2021.1? That just seems insane to me. Unity should really consider putting more manpower into SRP development and speeding up the process significantly.

    We and many others have started using URP because it was announced as production-ready and looked very promising. We expected that all the missing essentials, such as deferred, point light shadows and SSAO, would be backported to the 7.x.x release cycle to make the LTS more feature-complete.
     
  34. Laex410

    Laex410

    Joined:
    Mar 18, 2015
    Posts:
    51
    First of all, thanks for this post, for being open with your plans for SRP, and for all the work you guys are putting into this.

    I am using Unity for two different things: a private game project I am working on with a few other people, and tutorial projects I am creating for my website. The recent year(s) have affected both quite a lot, and I'd like to share some of my experiences.

    The tutorial projects I've created for my website (5 at the moment) are all written for the built-in pipeline. There are a couple of reasons for that:

    - I link as many resources as possible to enable the reader to find additional information on a wide variety of topics. The lack of any kind of SRP shader documentation doesn't really allow me to do so.
    - The constantly changing pipelines made me fear that the tutorials would be outdated about 3 months after I post them.
    - To keep the tutorials somewhat reasonable in length, I tend to rely on surface shaders to skip lighting code. I know there is a way to kind of do the same in URP (https://github.com/phi-lira/Univers...Scenes/51_LitPhysicallyBased/CustomLit.shader) but it's not documented and thus not really something I can use here.

    I am hoping to transition to URP and HDRP tutorials in the future, but I do not think this is a realistic target for this year.

    As for the game project, we started developing it in 2019.1 with HDRP. We knew that it was still in preview back then, and we were fine trading some bugs and other issues in order to have the project be future-proof (we didn't know how much longer the built-in pipeline would be supported) and to be able to use the VFX graph.

    We started struggling with HDRP basically the day we started writing custom shaders. Limited injection points, no real render queue and other issues prevented us from implementing the effects we wanted. The lack of documentation made it even worse. I write the shaders for this project, and while I'm fine with searching through the git repo and looking at your implementations, it takes about 10 times as long to write shaders for HDRP compared to what it took to write them for built-in (or even to write HLSL shaders for non-Unity DirectX projects).

    Now, about a year after we started, we have decided to discard HDRP. There are still a lot of bugs even though it is supposed to be a verified, production-ready package, and we just can't expand the pipeline and the shaders the way we want to. In addition, GI is currently quite a mess and we need a custom solution for that as well, which would lead to additional painful changes to the pipeline. We thought about switching to URP, but the lack of a deferred renderer and some other features (like light cookies) doesn't really make that a great option.

    While I'm sure that the missing features, bugfixes and new GI system will come eventually, we just cannot afford to wait that long. Unity also lost a lot of our trust during this year, and to be absolutely honest, we are not really confident that URP will be ready for production even after the missing features are fully implemented. We are in the fortunate position of being able to create a custom SRP specifically for our project, and a custom GI solution to go alongside it, but not every project has the capacity or experience to do so.

    Everything related to SRP seems as if it was developed with some great ideas in mind, but the lack of overall direction, management and coordination between the two pipelines is visible at every corner. The best examples of this are the post-processing setup and the path required to switch between pipelines. You have an amazing post-processing package available (the stack V2), which is in my opinion the best package Unity ever created. Why does every pipeline come with its own post-processing? And while you write about the future option to have multiple pipelines in one project, to me that sounds like just a bandage for a feature missing from the pipelines' cores due to the lack of direction early in development. You should be able to easily switch between the pipelines at any point, but I do not see any chance of this happening with URP and HDRP in the future.

    If we were to start a new game, we would currently not choose Unity as the engine, and I would advise everyone who asks me against using it for medium to large scale projects. That being said, your post gives me some confidence that you guys have this overall direction now, and I am really looking forward to what you do with these pipelines. The things you listed might be enough to make it fun again to work on rendering in Unity and to make SRP usable for actual game projects.
     
  35. Bordeaux_Fox

    Bordeaux_Fox

    Joined:
    Nov 14, 2018
    Posts:
    589
    Thanks for your feedback; I find it interesting because I actually had the opposite experience with HDRP. But I'm only developing for PC, and HDRP does not suit the needs of cross development for low-end platforms and mobile. Switching between HDRP and URP is a pain.
    For me HDRP is more usable than URP, because HDRP has a lot more features I need, for example realtime point light shadows, light probe proxy volumes, ambient occlusion, and volumetric fog. It also features realtime GI with Enlighten. When I first went with HDRP, Unity had not yet announced they would drop Enlighten support. On the other side, Terrain is completely useless in HDRP: no grass details, layers are limited to 8, and Custom Passes don't affect it. When it comes to customization I have limited experience: I tried a Custom Pass for a fairly simple effect to render some individual layers in greyscale, and some water caustics at a certain height across the complete scene. But when it comes to Shader Graph, you are fairly limited when you want all the features of the Lit shader plus some of your own effects. Tessellation does not work, and you have to rebuild a lot of features from the Lit shader, which is annoying and time consuming.
    If we are forced to work with Shader Graph to keep the ability of automatic updates, so that custom shaders keep working through Unity versions, then I need a full Lit master node with all features - yes, tessellation too.
     
    Last edited: Jul 7, 2020
  36. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    400
    LWRP was never planned as a replacement for standard. It was just a proof of concept of what you can make with the SRP API. Then came the marketing guys.

    It's good for marketing. Look at how excited people are about URP SSAO, despite HBAO having been available for 4 months now, and other SSAO solutions maybe even longer.

    Unity already has at least 5 SRP-related teams and 30+ people. Adding more people just slows down development; read "The Mythical Man-Month".
     
    FM-Productions and LooperVFX like this.
  37. JTAU

    JTAU

    Joined:
    May 12, 2019
    Posts:
    24
    I agree, there need to be some exceptions made with backporting features for URP, at least until it has most of the features that would be considered standard - with a particular focus on things that simply don't have workarounds.
     
  38. RoughSpaghetti3211

    RoughSpaghetti3211

    Joined:
    Aug 11, 2015
    Posts:
    1,697
    To me this would kind of defeat the purpose of shipping SRP as core. But I agree you can’t leave LTS as is.
     
    JTAU likes this.
  39. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    Someone recommended I post an example of a modular shader using the system I proposed above, so here we go. The basic idea is that we have a snow system, and we might want to include it in many different shaders. Our traditional way of doing this would be to include it as a cginc file, and have the shader call the various functions in it. However, to do this we have to add the properties and cbuffer data to the main shader, which means we have to know a lot about the snow code to add it to our shader. In an ideal world, this data lives with our cginc file, and at most all we have to understand is how to call the function.

    Note that I'm not particularly tied to any part of this formatting - I'm using the BEGIN_THING END_THING blocks because they make a parser stupidly easy to write, but with a bit more work the syntax could look much better. I'm also using old surface shader style textures, etc., but that's just because it's less typing and I'm more used to it.

    The modular shader would look like this:

    Code (CSharp):
    BEGIN_INCLUDE
       "/Snow.subshader"
       "/Core.subshader"
    END_INCLUDE
    Our Snow.subshader would look like this:

    Code (CSharp):
    BEGIN_PROPERTIES
       _SnowAmount("Snow Amount", Range(0,1)) = 0
       _SnowAlbedo("Snow Albedo Map", 2D) = "white" {}
       _SnowNormal("Snow Normal Map", 2D) = "bump" {}
    END_PROPERTIES

    BEGIN_CBUFFER("UnityPerMaterial")
       half _SnowAmount;
    END_CBUFFER

    BEGIN_DEFINES
       #define _SNOW 1
    END_DEFINES

    BEGIN_HLSL

       sampler2D _SnowAlbedo;
       sampler2D _SnowNormal;

       void DoSnow(ShaderData d, inout SurfaceOutput o, float2 uv)
       {
          float falloff = saturate(dot(d.WorldSpaceVertexNormal, float3(0,1,0))) * _SnowAmount;
          half3 albedo = tex2D(_SnowAlbedo, uv).rgb;
          half3 normal = UnpackNormal(tex2D(_SnowNormal, uv));

          o.Albedo = lerp(o.Albedo, albedo, falloff);
          o.Normal = lerp(o.Normal, normal, falloff);
       }

    END_HLSL
    And finally the core code would look like so:

    Code (CSharp):
    BEGIN_PROPERTIES
       _Albedo("Albedo Map", 2D) = "white" {}
       _Tint("Tint", Color) = (1, 1, 1, 1)
       _Normal("Normal Map", 2D) = "bump" {}
    END_PROPERTIES

    BEGIN_CBUFFER("UnityPerMaterial")
       half4 _Tint;
    END_CBUFFER

    BEGIN_HLSL

       sampler2D _Albedo;
       sampler2D _Normal;

       SurfaceOutput SurfaceFunction(v2f v, ShaderData data)
       {
          SurfaceOutput o = (SurfaceOutput)0;

          o.Albedo = tex2D(_Albedo, v.uv0).rgb * _Tint.rgb;
          o.Normal = UnpackNormal(tex2D(_Normal, v.uv0));

          #if _SNOW
             DoSnow(data, o, v.uv0);
          #endif

          return o;
       }

    END_HLSL
    The ShaderData structure is just a structure to hold all the common stuff you might need - viewDir in various spaces, world positions, etc. These are effectively there when you need them, and not when you don't. The v2f/appdata structures all work the same way: if you use .uv0, then texcoord0 is in the appdata structure and copied to uv0 in the v2f structure.

    The main advantage of the above is that the cbuffer/property data lives with the code that uses it, and a #define is available if you want to know whether that code is included or not. Note you could also have a keywords block, and use keywords to turn this feature on/off instead of the define block; a sketch of what that might look like is below.
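
    Such a keywords block might look like this (syntax hypothetical, following the same pattern; shader_feature_local is the existing Unity pragma):

    Code (CSharp):
    BEGIN_KEYWORDS
       #pragma shader_feature_local _SNOW
    END_KEYWORDS

    The difference from the define block is that the feature could then be toggled per material via the keyword instead of being compiled in unconditionally.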

    You do still have to know what function to call and insert that call into your main shader code (which could have been written in the main file instead of included via Core).

    So now we decide we want to have a custom lighting model for our URP forward rendering, so we download some lighting model for a simple ramp model.

    Code (CSharp):
    BEGIN_PROPERTIES
       _RampTexture("Ramp Lighting LUT", 2D) = "white" {}
    END_PROPERTIES

    BEGIN_URP_LIGHTING

       sampler2D _RampTexture;

       half4 UniversalLighting(InputData i, ShaderData d, SurfaceOutput o)
       {
          half3 worldNormal = mul(d.TBN, o.Normal);
          half nDotL = max(0, dot(worldNormal, d.mainLightWorldDirection));
          half lit = tex2D(_RampTexture, float2(nDotL, 0.5)).r;

          return half4(o.Albedo * lit, 1);
       }

    END_URP_LIGHTING
    and then we modify the top level file like so:

    Code (CSharp):
    BEGIN_INCLUDE
       "/Snow.subshader"
       "/Core.subshader"
       "/RampLight.subshader"
    END_INCLUDE
    Note that we don't have to modify any of our shader code to change lighting models, because all of the code, properties, etc. required is in the lighting file. The compiler just finds the BEGIN_URP_LIGHTING block; if it's there, it expects functions with the correct signatures inside and doesn't include the default lighting system.

    Overall, this would make it much easier to integrate separate chunks of shader code together and to override the sections of the shader system you'd like to customize, all while keeping things relatively easy to support and reason about. There is also nothing preventing the shader graph from having a master node for any of these things, allowing you to write the Snow or Lighting subshaders in a graph instead of in code. You might have one shader with a weather (wetness, puddles) system written in a graph, a snow system from a different author, and a lighting model by yet a third person. These could be connected by writing a small core shader with conditionals as I've done here, or in a graph through the "custom code" node, or via a custom node the author provides.
     
    Last edited: Jul 8, 2020
  40. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    I don't rate my shader programming skills very highly, and the example above looks perfectly approachable and easy to learn as well as maintain. I'd be very happy if writing 2.0 surface shaders looked as simple as this! Most of the shaders we use are surface shaders, and the example above fixes some of the friction points we've hit, like the inconvenience of sharing the same feature across multiple shaders.

    I'll quote an older post from half a year ago; everything I said about the importance of surface shaders there still stands today:
     
    Last edited: Jul 8, 2020
    knxrb, OCASM, neoshaman and 2 others like this.
  41. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    What I would have assumed and would like to see is parity between the shader graph and writing custom code.

    By which I mean, what you show here as BEGIN/END blocks should be easily parsed automatically by Shader Graph into nodes, so you write code once and it can be used in both places - the best of both worlds.

    Not sure if that's truly practical, there may be edge cases that can't be supported, but it would be nice.

    One other thing would be if the code blocks were in a supported text format like C# or similar, so that we could leverage an IDE like Visual Studio to interact with them. Support for jumping to definitions (properties, methods etc.), IntelliSense, stuff like that. Having that level of code support for Unity surface shaders would have saved me days when learning the internals of Unity shaders and cginc files.
     
    Last edited: Jul 8, 2020
  42. Darkcoder

    Darkcoder

    Joined:
    Apr 13, 2011
    Posts:
    3,400
    Please Unity, just implement everything Jason suggests.

    If the core surface shader is kept really simple like the above example, then we will be happy because we can focus on making cool things, and Unity can be happy because they can focus on whatever aspect of it they want to drastically change each week. The difference being that since it's so simple, it should also be really easy to write an auto-upgrader for it so the changes don't impact us.

    I've converted most of my asset example shaders to unlit vert/frag ones because I don't want to deal with any of this mess. When I need lighting I begrudgingly use SG, but even that becomes a huge mess when you want to do anything complex, and many complex things can't be done with it at all. What do I do in this scenario? Right now I just cut features, because I don't want to deal with SRP vert/frag shaders. It must be a nightmare for assets and projects that are more shader-focused than mine...

    Imagine how far along Unity would be right now if SRP had begun as a new surface shader system that also supported the standard pipeline, and SG was built on top of it?
     
  43. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    This is a very interesting and nice idea, which is IMO the current best idea about write minimum code in high level+ maximize code reuse + survive future SRP update so far.
    The pain point I found when sharing common code (e.g. a snow.hlsl) between 2 .shader in URP is the copying of Properties and CBUFFER and multi_compile keyword sections. You have to copy these sections into every .shader, because you can't write Properties and CBUFFER and multi_compile keyword sections inside a .hlsl(you can't hide them from user), which makes it hard to update in the future.

    I would like to see people throw out some example usages that this system is not able to handle.
    Currently I expect some include-conflict problems. For example: I download Snow.subshader and Rain.subshader from the Asset Store and include both of them in the same modular shader, but the two .subshader files both define a property with the same name, which makes the modular shader fail to compile (see the sketch below).
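
    Something like this, say (the _Intensity property is invented for the example):

    Code (CSharp):
    // Snow.subshader
    BEGIN_PROPERTIES
       _Intensity("Snow Intensity", Range(0, 1)) = 1
    END_PROPERTIES

    // Rain.subshader - merged into the same final shader, so the duplicate
    // _Intensity declaration would make the combined shader fail to compile
    BEGIN_PROPERTIES
       _Intensity("Rain Intensity", Range(0, 1)) = 1
    END_PROPERTIES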

    I would also like to ask the following question from the user's perspective:
    - How can I (or can I) add an "Outline" pass with LightMode = "Outline"?

    In the past, I made a simple project to find out whether surface shaders in URP are possible without codegen:
    https://github.com/ColinLeung-NiloCat/UnityURP-SurfaceShaderSolution
    My answer is that a surface shader is not practical without codegen, because you can't hide the multi_compile/pass sections from the user. When URP updates, these shaders may break, which makes them impractical as surface shaders.

    Jason's solution is much, much better. If it can recreate the shaders from the GitHub link above, it will solve most of my needs.
     
    Last edited: Jul 8, 2020
    OCASM likes this.
  44. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    I would not expect the shader graph to be able to reconstruct nodes from text. This would be incredibly complex, and likely never work correctly. What I would think is possible is for any of these to be written in either format.

    Well, that's just it: they don't need an auto-upgrader, because the parts that change are, for the most part, entirely abstracted away.
     
    OCASM and colin299 like this.
  45. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Well, not to reconstruct them totally, but more like drop-in code sections. Maybe I'm massively off-base here, but isn't that what most SG nodes do? I mean, it has to parse the graph to generate properties and CBuffers, so here it would simply extract and add your BEGIN/END block. Then you'd have a node that does the calculations, which I would have thought would work like most nodes?
     
  46. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    Ok, one more deep dive for tonight. One issue that surface shaders had was that once you switched to tessellation, you could no longer do a lot of things - like read arbitrary vertex data and pass it through the stages to the pixel shader. I imagine this was because doing all the parsing work to figure out what needed to be passed through the stages was complex, and, well, a lot of graphics coders kinda hate tessellation for a bunch of reasons anyway.

    When the current shader graph generates a shader, it has a massive structure for everything it might need.

    Code (CSharp):
    struct SurfaceDescriptionInputs
    {
        float3 WorldSpaceNormal;            // optional
        float3 TangentSpaceNormal;          // optional
        float3 WorldSpaceTangent;           // optional
        float3 WorldSpaceBiTangent;         // optional
        float3 WorldSpaceViewDirection;     // optional
        float3 TangentSpaceViewDirection;   // optional
        float3 ObjectSpacePosition;         // optional
        float3 WorldSpacePosition;          // optional
        float3 AbsoluteWorldSpacePosition;  // optional
        float4 ScreenPosition;              // optional
        float4 uv0;                         // optional
        float4 VertexColor;                 // optional
    };
    This is generated by asking each of the nodes what they use, and then generating the appropriate structure. It also emits a function that fills in all of this data:

    Code (CSharp):
    SurfaceDescriptionInputs FragInputsToSurfaceDescriptionInputs(FragInputs input, float3 viewWS)
    {
        SurfaceDescriptionInputs output;
        ZERO_INITIALIZE(SurfaceDescriptionInputs, output);

        output.WorldSpaceNormal =            normalize(input.tangentToWorld[2].xyz);
        // output.ObjectSpaceNormal =           mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_M);   // transposed multiplication by inverse matrix to handle normal scale
        // output.ViewSpaceNormal =             mul(output.WorldSpaceNormal, (float3x3) UNITY_MATRIX_I_V); // transposed multiplication by inverse matrix to handle normal scale
        output.TangentSpaceNormal =          float3(0.0f, 0.0f, 1.0f);
        output.WorldSpaceTangent =           input.tangentToWorld[0].xyz;
        // output.ObjectSpaceTangent =          TransformWorldToObjectDir(output.WorldSpaceTangent);
        // output.ViewSpaceTangent =            TransformWorldToViewDir(output.WorldSpaceTangent);
        // output.TangentSpaceTangent =         float3(1.0f, 0.0f, 0.0f);
        output.WorldSpaceBiTangent =         input.tangentToWorld[1].xyz;
        // output.ObjectSpaceBiTangent =        TransformWorldToObjectDir(output.WorldSpaceBiTangent);
        // output.ViewSpaceBiTangent =          TransformWorldToViewDir(output.WorldSpaceBiTangent);
        // output.TangentSpaceBiTangent =       float3(0.0f, 1.0f, 0.0f);
        output.WorldSpaceViewDirection =     normalize(viewWS);
        // output.ObjectSpaceViewDirection =    TransformWorldToObjectDir(output.WorldSpaceViewDirection);
        // output.ViewSpaceViewDirection =      TransformWorldToViewDir(output.WorldSpaceViewDirection);
        float3x3 tangentSpaceTransform =     float3x3(output.WorldSpaceTangent, output.WorldSpaceBiTangent, output.WorldSpaceNormal);
        output.TangentSpaceViewDirection =   mul(tangentSpaceTransform, output.WorldSpaceViewDirection);
        output.WorldSpacePosition =          input.positionRWS;
        output.ObjectSpacePosition =         TransformWorldToObject(input.positionRWS);
        // output.ViewSpacePosition =           TransformWorldToView(input.positionRWS);
        // output.TangentSpacePosition =        float3(0.0f, 0.0f, 0.0f);
        output.AbsoluteWorldSpacePosition =  GetAbsolutePositionWS(input.positionRWS);
        output.ScreenPosition =              ComputeScreenPos(TransformWorldToHClip(input.positionRWS), _ProjectionParams.x);
        output.uv0 =                         input.texCoord0;
        // output.uv1 =                         input.texCoord1;
        // output.uv2 =                         input.texCoord2;
        // output.uv3 =                         input.texCoord3;
        output.VertexColor =                 input.color;
        // output.FaceSign =                    input.isFrontFace;
        // output.TimeParameters =              _TimeParameters.xyz; // This is mainly for LW as HD overwrite this value

        return output;
    }
    You'll note that many of these are commented out. If these values were never used, the compiler would strip this code for us anyway; however, to speed up compilation, Unity's generator comments the unused calculations out.

    With a text-based parser, we can't ask each node what it uses. But I have proposed that simply scanning the HLSL blocks for the string name is fine. If ".WorldSpaceViewDirection" is found, then we include it. If for some reason that variable name is used elsewhere, the code would be included and the compiler would still strip it, but compile times would be ever so slightly slower.

    For this type of structure, this should work wonderfully. It also standardizes naming conventions, which means I don't have to guess what ".vdir" is in your shader anymore, because it will have a better name like ".TangentSpaceViewDirection" instead.

    But there are some details to solve here. For instance, there are often dependencies on the vertex data, so variables in this structure will have to cause the appData structure to be expanded. Further, the data will have to be passed to the fragment shader in some cases. Because our variable names are fixed, it's trivial to encode these dependencies, such that having the string ".TangentSpaceViewDirection" in your file means you require a normal and tangent, construction of the TBN matrix, and so on.
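
    As a sketch of how that could look from the user's side (SurfaceFunction and the field name here just follow my earlier examples - none of this is a real API):

    Code (CSharp):
    BEGIN_CODE
       void SurfaceFunction(ShaderData d, inout SurfaceOutput o)
       {
          // the parser finds the string ".TangentSpaceViewDirection" below, so it
          // knows to add normal and tangent to appdata, build the TBN matrix,
          // and pass the tangent space view direction through the stages
          float3 tsView = d.TangentSpaceViewDirection;
          o.Albedo = tsView * 0.5 + 0.5; // just visualizing the data
       }
    END_CODE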

    But here's something I'm not sure about - I'm not sure the compiler will strip that data if you don't actually use it, for things like the v2f and appdata structures. So instead of just slowing down compile times when someone uses ".uv2" as a variable name, it might cause the shader to add texcoord2 to the appdata structure and send it through the stages to the fragment shader as uv2. Ugh. Maybe I'll poke around in the assembly and test this. If it does get stripped, then this is a pretty golden solution, since it doesn't require us to declare all this crap, or use some massive list of stuff like the SG shaders do (#define NEEDS_TANGENT_SPACE, etc). Ideally, the less crap you have to specify over and over, the better.


    Passing Custom Data

    Stuffing data in the vertices is a wonderful thing, and people will want that, along with the ability to pass that data (or process it in the vertex shader) to the fragment stages. Note that the appData structure is covered simply by declaring more entries (texcoord5 : TEXCOORD5, etc.). We could follow the same pattern in the fragment shader:

    if ".uv3" is present, then that means we need the texcoord3 passed to the pixel shader and should declare it in the structure, as well as copy it along the stages.

    Note that if we need to do custom processing, we can just modify the texcoord3 data in the vertex function. Note also that these variables are named differently so that the v2f structure (and the associated structures for tessellation) can know whether they are needed - if only texcoord3 is used, then the appdata structure needs it, but the v2f structure does not. A quick sketch of that flow follows.
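
    Here's what I mean (ModifyVertex, VertexData, and _DetailTex are placeholder names in the spirit of the earlier examples):

    Code (CSharp):
    BEGIN_CODE
       // appdata gets texcoord3 because it's referenced here...
       void ModifyVertex(inout VertexData v)
       {
          v.texcoord3.xy = v.vertex.xz * 0.1; // e.g. cheap planar UVs
       }

       // ...and the v2f structures carry it because ".uv3" appears below
       void SurfaceFunction(ShaderData d, inout SurfaceOutput o)
       {
          // _DetailTex would be declared in this module's property/code blocks
          o.Albedo = tex2D(_DetailTex, d.uv3.xy).rgb;
       }
    END_CODE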

    This part of it gets a little "magic" for my tastes. Normally I prefer very explicit code, but the amount of boilerplate we lose by trying to handle this stuff auto-magically is huge. I'd really hate to have a giant block of

    Code (CSharp):
    #define ATTRIBUTES_NEED_NORMAL
    #define ATTRIBUTES_NEED_TANGENT
    #define ATTRIBUTES_NEED_TEXCOORD0
    #define ATTRIBUTES_NEED_TEXCOORD1
    #define ATTRIBUTES_NEED_TEXCOORD2
    #define ATTRIBUTES_NEED_TEXCOORD3
    #define ATTRIBUTES_NEED_COLOR
    #define VARYINGS_NEED_POSITION_WS
    #define VARYINGS_NEED_TANGENT_TO_WORLD
    #define VARYINGS_NEED_TEXCOORD0
    #define VARYINGS_NEED_TEXCOORD1
    #define VARYINGS_NEED_TEXCOORD2
    #define VARYINGS_NEED_TEXCOORD3
    #define HAVE_MESH_MODIFICATION
    In every shader to control this stuff. It's much easier to just pretend it's always there, and know that if it's not needed, it gets stripped.

    Finally, another issue is that you might want to generate data in the vertex shader to pass to the fragment shader where there is no related data on the vertices. While I don't like the define scheme above for everyday use, here it seems more applicable. Something like:

    Code (CSharp):
    BEGIN_CUSTOM_V2F
       #define CustomSlot_0   float4 myData
    END_CUSTOM_V2F
    In most cases, name collision would be rare and easy to solve when working with modular shaders, but this is a case where it might happen a lot.

    Buuut, those issues aside, this makes it very easy to move this data across the various shader stages, with or without tessellation.
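
    For completeness, usage might look something like this - the CustomV2F structure and ModifyVertex signature are speculative, just to show the data flow:

    Code (CSharp):
    BEGIN_CUSTOM_V2F
       #define CustomSlot_0   float4 myData
    END_CUSTOM_V2F

    BEGIN_CODE
       void ModifyVertex(inout VertexData v, inout CustomV2F c)
       {
          // computed per vertex, carried through every stage automatically
          c.myData = float4(v.vertex.xyz * 2, 0);
       }

       void SurfaceFunction(ShaderData d, inout SurfaceOutput o)
       {
          o.Emission = d.myData.rgb;
       }
    END_CODE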

    Keywords
    Much like the post above, where we merged cbuffer/property chunks, multi_compile and shader_feature declarations should be able to live with their code as well. Having to include all this stuff at the top of each pass is error-prone, and requires you to understand all the code below it. Rather, the block of these statements belonging to the lighting file should be included in the lighting file, so the user never has to think about them. However, some form of opt-out might be required if you want to turn off certain features on a shader, like fog or the shadowing functions. The sketch below shows the idea.
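
    As a hypothetical (BEGIN_KEYWORDS and the ShaderData fields are invented, and the lighting signature is simplified, though _MAIN_LIGHT_SHADOWS and the shadow functions are real URP ones):

    Code (CSharp):
    // RampLighting.subshader - the pragmas travel with the code that needs
    // them, instead of being pasted at the top of every pass of every shader
    BEGIN_KEYWORDS
       #pragma multi_compile _ _MAIN_LIGHT_SHADOWS
       #pragma multi_compile_fog
    END_KEYWORDS

    BEGIN_URP_LIGHTING
    half4 UniversalLighting(ShaderData d, SurfaceOutput o)
    {
       half shadow = MainLightRealtimeShadow(TransformWorldToShadowCoord(d.WorldSpacePosition));
       return half4(o.Albedo * shadow, 1);
    }
    END_URP_LIGHTING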
     
    SMHall, bac9-flcl, knxrb and 3 others like this.
  47. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    Converting even simple HLSL to graphs would be very complex, and shader graphs often don't support all the features you need - for instance macros, dynamic branching, loops, etc. Further, some logic just doesn't map well to graphs, and laying out the resulting graph in any kind of readable way is unlikely to work.
     
  48. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    181
    (1) Would a custom SRP, URP, HDRP, and any future new RP all use the same modular shader & .subshader files?

    (2) Is the following sentence correct?
    Users will only need to write a shader once; it will then work in every RP and survive every RP update.

    (3) In your Snow.subshader example above, the signature is (ShaderData d, inout SurfaceOutput o),
    so must struct ShaderData and struct SurfaceOutput be 100% the same between all RPs?
    If yes, won't struct ShaderData and struct SurfaceOutput become huge structs that include all possible data from every RP?
     
  49. harry_js

    harry_js

    Joined:
    Jan 22, 2020
    Posts:
    139
    I'm sure something like this has been considered (and possibly discarded) already - and I don't have the long-term, battle-scarred background of wrangling Unity that others on this thread have, so I can't say whether it's truly useful and practical - but what I've seen on my travels through the HDRP + SG source has made me wish we had a higher-level DSL that drove shader composition, (automatic) shader graph integration, and UI generation, rather than a spider-web of C#, HLSL, macros, text templates, repeated code, and undocumented pragmas ("magic spells").

    Understandably, building a custom abstraction language is a seriously non-trivial undertaking, and forcing your users into a new custom syntax is a learning hurdle - so perhaps it's viable to solve some or all of this using the model Burst uses: restricting C# syntax and weaving the results from the AST / IL / metadata.

    I've seen some interesting R&D in this area elsewhere: writing modular shader code in C# gives you more powerful language features - inheritance, interfaces, reflection, declarative (attribute) expression, et al. (as well as conditional compilation if you need it!) - instead of having to manipulate raw shader text through preprocessors.

    HDRP is at least already doing shades of this with the generated shader includes layer, and somewhat with the reflected SG code function nodes.
     
    OCASM and Noisecrime like this.
  50. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,457
    In my proposed model (which is not necessarily what Unity will do):

    1. Yes, the shader would compile under any SRP.
    2. Yeah, that's the whole idea of a surface shader.
    3. There are multiple ways to approach this. If you look at the output of the shader graph, most of these structures are the same between the pipelines. Also note, the SurfaceOutput struct is effectively only as large as what you use in it, because things you don't use are stripped. It just feels like it's always there.

    This does bring up an interesting point, which is how to handle things that only exist in one pipeline. You'll notice that my lighting example is URP-specific, since HDRP doesn't support custom lighting functions. Essentially, that lighting code is ignored/stripped in an HDRP shader - its ScriptableAssetImporter simply never calls parser.GetBlock("URP_LIGHTING").

    Now, take something like SSS - this is an input that HDRP has and URP currently does not. So we have two choices here. We could have the structure be the union of all SRPs, but that would make it seem broken when a user tries to set that data in URP - and it would also make custom SRPs more difficult. The other way is to define the data only in a given SRP, and force the user to wrap the assignment of that data in a conditional, ie:

    Code (CSharp):
    struct SurfaceOutput
    {
        half3 Albedo;
        half3 Normal;
        ...
        half3 SSS;          // only defined in HDRP
        half  SSSThickness; // only defined in HDRP
    };
    Code (CSharp):
    o.Albedo = albedo;
    o.Normal = normal;
    #if _HDRP
       o.SSS = SSS;
       o.SSSThickness = thickness;
    #endif
    I personally prefer the latter. Though I'm advocating a lot of magic for 'reading from structures which the system provides', I think being explicit is more appropriate for writing to them.

    This also sparks another issue - what if I want to add my own data to a structure and pass it to the lighting equation? In surface shaders, you could do this by adding extra data to your CustomLighting structure. So perhaps the lighting function should take a structure that you define, i.e.:

    Code (CSharp):
    BEGIN_DEFINES
    struct CustomLightingData
    {
        float3 extraData;
    };
    END_DEFINES

    BEGIN_PROPERTIES
       _RampTexture("Ramp Lighting LUT", 2D) = "white" {}
    END_PROPERTIES

    BEGIN_URP_LIGHTING
    sampler2D _RampTexture;
    half4 UniversalLighting(InputData i, ShaderData d, CustomLightingData cld, SurfaceOutput o)
    {
       half3 worldNormal = mul(d.TBN, o.Normal);
       half nDotL = max(0, dot(worldNormal, d.mainLightWorldDirection));
       half3 lit = tex2D(_RampTexture, float2(nDotL, 0.5)).rgb;

       return half4(o.Albedo * lit, 1);
    }
    END_URP_LIGHTING
    Since the define block is placed at the top of the shader, we can insert any custom structures needed between multiple code blocks here. In this case you're breaking modularity, because you're expecting some other part of the shader to fill that data out - but sometimes that's exactly what you need to do for a given effect. It would also mean defining an empty structure so we don't have to change the signature, but we could just have a template for generating these, like we have a template for making a new surface shader now.
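
    So the 'new surface shader' template could just ship with a stub like this (speculative, and padded with a dummy member since truly empty structs can be problematic in HLSL):

    Code (CSharp):
    // generated by the template; replace with your own fields when you
    // need to pass extra data to the lighting function
    BEGIN_DEFINES
    struct CustomLightingData
    {
        float unused; // placeholder member
    };
    END_DEFINES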

    Anyway, a lot of this is somewhat on-the-fly thinking. But I hope it shows that we can get very far with a simple parsing system and some structural organization of our code and data, and hopefully produce a system that requires very little boilerplate. With more time spent on the parser, the syntax could be better.
     