Official Block Shaders: Surface Shaders for SRPs and more (Public Demo Now Available!)

Discussion in 'Graphics Experimental Previews' started by dnach, Oct 19, 2022.

  1. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @jbooth

    To add to what Aras said: the binding information data that we store contains two parts nowadays, the intersection of data for all variants of a given stage and the diff per variant. So, the more common the resource layout is between all the variants, the less data is stored.

    Now, this is only true for data that is stored on the disk (which is compressed on top of everything). At runtime we combine the common part and the diff back into a full per-variant representation. I'm not sure if you're talking about RAM usage or the data from the build stats.

    When we load the data at runtime, we load all variants unless the user specifies some limits (as described here - see Dynamic Shader Loading). Then when a specific variant is needed for rendering for the first time, we send the intermediate data (DXBC or SPIR-V or whatever the current graphics API needs) to the driver. At this point the memory occupied by intermediate data is freed.
    Dynamic Shader Loading caps what we keep in memory per shader to whatever is specified in the settings. So one could set it to "1 chunk, 1MB" and the intermediate data will be capped at this (soft) limit.

    To your original question: materials define the part that specifies shader feature usage. So if you have 23 materials, each with a unique combination of keywords, you get a base variant count of 23. Multiply that by the possible permutations for multi_compile directives, and you get the final variant count before built-in stripping. If XR is enabled, we automatically add variants that make sense for the current build target and XR settings. So "23 unique materials of the lit shader" may mean different things based on what the shader has.
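A toy sketch of that arithmetic (keyword names are hypothetical, not Unity's):

```hlsl
// Hypothetical keywords, for illustration only.
// shader_feature: variants are kept only for keyword combinations
// that materials in the build actually use.
#pragma shader_feature_local _ _EXAMPLE_FEATURE

// multi_compile: every listed state is always compiled, so this
// multiplies the variant count regardless of material usage.
#pragma multi_compile _ _EXAMPLE_MODE

// 23 uniquely-keyworded materials x 2 multi_compile states
// = 46 variants before built-in stripping (more if XR adds variants).
```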

    The easiest way to reduce memory usage - for now - is to enable dynamic shader loading. By default it uses chunks of 16 MB, with 0 (unlimited) as the chunk count limit.

    If the resulting shader is the same, the memory usage should be exactly the same.
     
    aras-p likes this.
  2. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Thanks for the answers - That all seems about what I would expect, but I wanted to make sure there wasn't anything unusual going on. I've been tweaking and making builds all night, and some things I've figured out:

    - A multi_compile for the LOD cross fade stuff was left in BiRP which was doubling the variant count.
    - Removing 4 lerps from the code base reduced the size of the shaders by 0.1mb in memory, which I guess speaks to just how many base variants are needed if 4 lerps takes that much memory.
    - The standard shader is about 0.7mb for the minimal one (albedo/normal). Better Lit is about 2mb for the same, which is quite a bit bigger. The 4 lerps were the ability to remap the range of smoothness, ao, and metallic, about the only major difference between the two when stripped all the way down.

    Do you mean the cbuffer layout by resource layout? Or something not under our control? (I don't #ifdef in the cbuffer because that breaks batching in SRPs).

    Both, but RAM is the major concern for most users.



    Strangely, I'm seeing a difference between the two, usually about 0.3mb. What makes this so odd is that the output of the shader is the same; the only differences are the name and that one sets a custom editor. I can spit out text files from each and do a diff to see if I can find any other differences, but there shouldn't be any. It's 5am now though, so I'm going to bed and will resume tomorrow.
     
  3. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    All resources - cbuffers, textures, other buffers.
    For example, if
    Texture2D<float4> albedo
    is present in all variants, it will end up in the common data. If there's at least one variant that doesn't have it, it will end up in each variant that does.
    Same for cbuffer contents etc.
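    As a rough HLSL illustration of the point above (declarations hypothetical):

```hlsl
// Present in every variant -> stored once in the common binding data.
Texture2D<float4> albedo;

// Only present in variants compiled with _DETAIL -> stored in the
// per-variant diff of each variant that declares it.
#if defined(_DETAIL)
Texture2D<float4> detailMap;
#endif
```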

    This is weird. Where do you see it? In the profiler?
     
  4. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Yes, when taking a memory sample.
     
  5. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    I'd love to have a repro :)
    I haven't seen this behaviour.
     
  6. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Some interesting findings:

    Since I'm using Better Shaders, each module of Better Lit Shader can be compiled as its own shader. So I made a material for every module to see which ones were largest. The average is about 370k each. The one with the multi_compile for LOD cross fading is 700kb, which makes sense since that should double the variant count. Two others are 700k though.

    The first is the Bakery integration. It has a block of pragma shader_feature_local:

    Code (CSharp):
        #pragma shader_feature _ _LIGHTMAPMODE_STANDARD _LIGHTMAPMODE_RNM _LIGHTMAPMODE_SH _LIGHTMAPMODE_VERTEX _LIGHTMAPMODE_VERTEXDIRECTIONAL _LIGHTMAPMODE_VERTEXSH
        #pragma shader_feature_local _ USEBAKERY
        #pragma shader_feature_local _ BAKERY_VERTEXLMMASK
        #pragma shader_feature_local _ BAKERY_SHNONLINEAR
        #pragma shader_feature_local _ BAKERY_LMSPEC
        #pragma shader_feature_local _ BAKERY_BICUBIC
        #pragma shader_feature_local _ BAKERY_VOLUME
        #pragma shader_feature_local _ BAKERY_VOLROTATION
        #pragma shader_feature_local _ BAKERY_COMPRESSED_VOLUME
    Note that all vertex and fragment code is wrapped in USEBAKERY, such that if that is not defined none of the code will run, and that the material has no keywords on it (I used the debug inspector to make sure).

    This compiles to 700kb, but if I remove all of the shader_feature_local blocks except USEBAKERY, it compiles down to 350kb again. Further isolation points to the first pragma, which does not have the local keyword. Does this mean it's working like a multi_compile and producing variants? Commenting out the other pragmas also saves a bit of memory (0.7mb -> 0.6mb). Do shader_features take memory even if the variants are not in use? (I'd imagine there's some LUT somewhere from keywords -> variant, but not 350kb worth per variant)
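For reference, the only syntactic difference being isolated here is the missing _local suffix on the first pragma. A local version of the same declaration would look like this (sketch - the local scope changes which keyword space the keywords live in, not how variants are stripped):

```hlsl
// Global-scope declaration (as in the Bakery block above):
#pragma shader_feature _ _LIGHTMAPMODE_STANDARD _LIGHTMAPMODE_RNM _LIGHTMAPMODE_SH _LIGHTMAPMODE_VERTEX _LIGHTMAPMODE_VERTEXDIRECTIONAL _LIGHTMAPMODE_VERTEXSH

// The same declaration scoped locally to the shader:
#pragma shader_feature_local _ _LIGHTMAPMODE_STANDARD _LIGHTMAPMODE_RNM _LIGHTMAPMODE_SH _LIGHTMAPMODE_VERTEX _LIGHTMAPMODE_VERTEXDIRECTIONAL _LIGHTMAPMODE_VERTEXSH
```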

    The second one is the vegetation studio instancing integration. This one uses a pragma:

    Code (CSharp):
    #pragma instancing_options procedural:setupVSPro forwardadd
    Which I assume is internally causing another variant for the option?
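For context, `instancing_options procedural:X` tells Unity to call `X` before fetching per-instance data, and it does add an instancing variant. A minimal sketch of such a setup function (buffer name and body are hypothetical, not Vegetation Studio's actual code):

```hlsl
#if defined(UNITY_PROCEDURAL_INSTANCING_ENABLED)
// Hypothetical per-instance transform buffer, filled from C#.
StructuredBuffer<float4x4> _InstanceMatrices;
#endif

// Referenced by: #pragma instancing_options procedural:setupVSPro
void setupVSPro()
{
#if defined(UNITY_PROCEDURAL_INSTANCING_ENABLED)
    // Rebuild the object-to-world matrix for this instance.
    unity_ObjectToWorld = _InstanceMatrices[unity_InstanceID];
#endif
}
```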

    So, with both of these disabled, BLS gets down to 1.1mb for the base level shader variant, from 3.2mb. Still a lot, but when users have 150 uniquely keyworded materials in their scene, that adds up quick. Unfortunately it doesn't seem like I can support these integrations without massively bloating the shader size.
     
  7. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Ok, I want to diff the text output to verify there are no differences (other than the custom editor), to make sure it's not something on my end.
     
  8. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Ok, so I found the culprit.

    The bakery module uses:

    [KeywordEnum(Standard, RNM, SH, Vertex, VertexDirectional, VertexSH)] _LightmapMode ("Lightmapping mode", Float) = 0

    To do the keyword switch. I don't normally write my enums this way, but rather back them with enums in code and use EditorGUILayout.EnumPopup to draw them and then disable/enable keywords on the shader. But it's a lot nicer to do the enum this way and not have to write all that editor code.
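For reference, the KeywordEnum drawer pairs a material property with a matching pragma; Unity derives the keywords as _PROPERTYNAME_OPTION in upper case, and the drawer enables exactly one of them on the material:

```hlsl
// In the Properties block - generates and toggles the keywords
// _LIGHTMAPMODE_STANDARD, _LIGHTMAPMODE_RNM, _LIGHTMAPMODE_SH, etc.
[KeywordEnum(Standard, RNM, SH, Vertex, VertexDirectional, VertexSH)]
_LightmapMode ("Lightmapping mode", Float) = 0

// Matching pragma in the shader body, one keyword per enum option:
#pragma shader_feature_local _LIGHTMAPMODE_STANDARD _LIGHTMAPMODE_RNM _LIGHTMAPMODE_SH _LIGHTMAPMODE_VERTEX _LIGHTMAPMODE_VERTEXDIRECTIONAL _LIGHTMAPMODE_VERTEXSH
```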

    So what's the difference? Using KeywordEnum leaves _LIGHTMAPMODE_STANDARD turned on in the material, and keeps adding it back even if you strip it. This seems to cause it to produce a 'dead variant'.

    All the code is wrapped in USEBAKERY, but the system doesn't know that, and generates a variant for _LIGHTMAPMODE_STANDARD.

    Now, in theory this is fine, because we would never generate one without it - but with this on, the shader grows in size even though none of its code is used. I.e.: I have one material in the scene, this keyword is set, but all the code is wrapped in a keyword that isn't set - yet the shader is still larger. This seems like it might be a bug? Shouldn't the variant without this keyword no longer be generated?

    Removing the use of KeywordEnum from the few places it was used got the shader down to 1.0mb from 3.8mb (after clearing keywords on the material).

    Also, the shaders appear to be the same size now regardless of which packing method is used, so perhaps there was a loose keyword hanging off one of the materials?
     
    BOXOPHOBIC and SonicBloomEric like this.
  9. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    Definitely not.

    No. The only thing that affects it is the number of variants that end up in the build.

    Yes, it produces a _ INSTANCING_ON PROCEDURAL_INSTANCING_ON variant set.

    For this enum it will be just a single macro picked. It shouldn't matter, as it's a single option and won't increase the variant count unless materials use different options. But it would still be a per material thing.
    As to why the shader is larger with this - I suppose this is because the material property is still there regardless of whether it's used or not. And this uniform is present in each variant at runtime.
     
  10. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Well all this digging has got me thinking about variants in the future..

    - It can be hard/time-consuming to tell what is happening inside the black box, so any visibility added there would be a win. For instance, being able to push a button and have it compute how many variants will be generated for a shader (and why) would be huge vs having to generate that during a build.
    - Unity uses a lot of multi-compile, especially in the SRPs, and this makes the base size of any lit shader quite large.
    - It would be nice to have a way to switch between favoring variants vs. dynamic branches. The new pragma makes this a really easy switch, but what I kinda want is to be able to choose between dynamic branches or variants depending on build target or user choice. From what I understand dynamic branching on something which is the same for every pixel should be really fast on most modern platforms, but if you're on the low end you might prefer to compile things out instead. I have people trying to squeeze performance out of mobile, and others using hundreds of variants on PCs and complaining about compile time and memory - so there's no right answer as to which to use for small amounts of functionality when writing the shader.
    - Some of these issues may be alleviated by blocks, if blocks make it easy for the average user not familiar with shaders to combine functionality. For instance, if Better Lit Shader could require Better Shaders to be installed, I wouldn't ship it as one giant mega shader, but rather as features you can add and remove via a simple UI to generate the shader you need. This would have fixed the issue I have with Vegetation Studio's indirect instancing, which is used by a small percentage of shaders yet whose #pragma creates several new variants regardless of whether it's used. Same for features like LOD cross fade, which uses a multi_compile but might not be used on many surfaces.
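For what it's worth, the "new pragma" mentioned above is presumably Unity's dynamic_branch directive (2022.2+), which makes the variants-vs-branching trade a one-line switch at authoring time (keyword name hypothetical):

```hlsl
// Compiled out as separate variants (bigger builds, faster shaders):
#pragma shader_feature_local _ _EXAMPLE_FEATURE

// The same keyword evaluated as a uniform branch at runtime instead
// (one variant, a dynamic branch in the compiled code):
#pragma dynamic_branch_local _ _EXAMPLE_FEATURE
```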






     
  11. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    @jbooth yes, we're thinking in the same direction :)
     
    Last edited: Jun 2, 2023
  12. merpheus

    merpheus

    Joined:
    Mar 5, 2013
    Posts:
    204
    When should we expect an update for this? It's been almost a year since the initial public demo.
     
    Edy, SoyUnBonus, Jaimi and 7 others like this.
  13. Jaimi

    Jaimi

    Joined:
    Jan 10, 2009
    Posts:
    6,245
    @aleksandrk - are there any updates you can share?
     
  14. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    3,074
    We're working on it.
    We'll give an update as soon as we're ready :)
     
  15. LooperVFX

    LooperVFX

    Joined:
    Dec 3, 2018
    Posts:
    182
    as eager as I am for updates on this, I'd rather unity gets block shaders right than rush it out like shadergraph was. in the meantime, shadergraph custom (hlsl) function nodes are much more capable than they used to be, and @jbooth's better shaders is even... better ;)
     
  16. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Better Shaders was developed and shipped in 2 weeks. SRPs are going on what, 6+ years of pain now? I'm all for getting it right, but the pace is unbelievably slow, and breaking changes still aren't being documented to make it easier for anyone dealing with the mess. So much could have been done in all these years, and now that something is being done it's still several years out.
     
  17. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    706
    LooperVFX likes this.
  18. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    874
    Good to hear! Meanwhile I'm sticking to BiRP in all projects that allow it. I've noticed this reduces my productivity-to-gray-hair ratio significantly.
     
  19. LooperVFX

    LooperVFX

    Joined:
    Dec 3, 2018
    Posts:
    182
    ;)
    only 2 weeks, wow. fair point :D. there is something to be said about the approach known as "build fast, fix later" / "ship, then fix" / "move fast and break things", but (to your 2nd point) this only flows when there is a powerful and effective process for communicating, documenting and dealing with breaking changes along the way.

    the current design of the unity documentation site and disparate package microsites certainly has room for improvement here, among other things - like the unity graphics repo mirror, where we can no longer see all the original pull requests that provide additional context for changes, and whatever goes on internally, culturally at unity, perhaps around leadership and teams feeling empowered and safe to deliver quickly and imperfectly. (pure speculation informed by consultant research and other companies)

    so it feels a bit like a ship that is moving carefully and slowly by rowing but doesn't often set its sails for one reason or another.

    sorry, getting a bit off topic and i feel for many in the forum this may be preaching to the choir, but hopefully provides insight to someone. :)
     
  20. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    I wouldn't say it's ignored, they've talked about it a lot. But we haven't really seen the impact yet. IMO largely because the previous issues (like splitting the SRP teams) created so many problems a bunch of foundational work is needed before they can move forward.

    Block shaders is likely connected to the coexistence effort, so launching it too soon might create more problems than it solves.

    At this point you might be hurting yourself in the long term :p
     
    LooperVFX likes this.
  21. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    I would argue that in some ways Unity might be making the same mistake they made last time.

    Originally, Aras designed surface shaders as an output format for a shader graph to produce - Unity never actually got to the shader graph part, but they did finish the abstraction for it. Surface shaders got us through many iterations of the lighting and shading module with minimal breakage. It was a funky system, but it worked well in most cases.

    Then with the SRPs they threw out this abstraction, and rather than replacing it, just built one into the shader graph, such that the shader graph would output different shaders from the same graph. This abstraction did not have to be tied to the shader graph at all; it could have been written to take input from a graph, a text file, or some other representation, and the Shader Graph could have targeted that format rather than writing the code directly. No doubt a bit more work, but not much in the grand scheme.

    Now, after years of demanding an abstraction that works with text-based input, the proposed solution is block shaders, which allow for composition of shaders via interface-like structures, and apparently offer some type of analysis layer (an AST, etc.) which in theory will allow them to do more with the shader code (optimize, port to different target languages more easily, add their own language constructs, etc).

    But once again, the abstraction layer we needed 6 years ago is tied to another system rather than being its own thing, and we're left manually updating/rewriting all our shaders constantly with no documentation, change lists, or help. I fear this is the same mistake, again, with the continued 'rubbing salt into the wound' of doing nothing about the issues we are dealing with every day.

    Now, I can totally understand the long view here, particularly when it comes to what they might be able to do with the backend of owning the compiler, etc- but I suspect those gains are years out beyond the initial release, or might not come to fruition at all if company direction changes (which is the one constant in large companies). But maybe this will turn out a bit like Jobs/Burst and end up really useful, to either us or them. It's just not possible to make a call on that with what little information we have right now.

    But I do know the SRPs are a nightmare for anyone who has to support hand written shaders, and Unity is not doing anything to make that easier while we wait. Further, I also know that in a few weeks of work they could have plugged a text based parser into the current shader graph system and solved most people's issues- after all, it took me 2 weeks to write my own from scratch (along with a lot of time reverse engineering the pipelines over the years, which is knowledge they intrinsically have since they wrote it and make all the changes). So an intermediate solution could have been done at any time.

    So, given that this isn't even in preview yet, it's likely to be 2 more years before we see a non-preview version of it. That will be about 7-8 years of waiting and suffering through unnecessary reverse engineering by lots of developers because someone wouldn't take a few weeks to make an intermediate format we could write shaders in during this period. And having to rewrite my shaders every 7-8 years is far more acceptable than having to reverse engineer someone else's every few months.

    I'm sure someone at Unity is pissed right now because I'm giving time estimates for how long it takes to do this work. The last time I said something should be easy to do, a Unity dev talked to me at their office in SF and said they were pissed, because how could I know, it might be more complex than I realized, etc, etc. Then they admitted the fix was actually trivial, after they figured out they had a pointer class that doesn't immediately load a resource. So I'm pretty confident in my estimate here: the shader graph is not really different from any other shader abstraction system - it builds up a function call and stuffs it into a templated shader using a bunch of string operations, just like ASE, MicroSplat, or Better Shaders does. We're just all individually maintaining our own systems to do this, along with maintaining the templates whenever something changes (which is the real work). And I have both created the equivalent myself and seen the shader graph code, so don't even pretend it couldn't have been done or would take an insane amount of time.

    To me, this is a bit like a billionaire waking up every day and going "You know, I could cure world hunger, education, or solve climate change. Fix something important. But screw it, let's just make a phallic shaped rocket instead!".
     
    thelebaron, sirleto, merpheus and 7 others like this.
  22. saskenergy

    saskenergy

    Joined:
    Nov 24, 2018
    Posts:
    35
    At this point, I don't know why Unity hasn't tried to hire @jbooth, even just for consulting work. Then we wouldn't have to be in this situation. Block Shaders or some other abstraction should have been a thing back when SRPs were first announced.
     
  23. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    The thing is, they probably could do it in 2 weeks (well, probably not, due to large corpo reasons, but bear with me) - the problem is they have internal stuff to fix first.

    Merging the foundations of the pipelines, so that switching between them is easier, is a monumental task, due to two separate teams working on them for years. If I weren't sure of that, I might have the same opinion.

    But to me, until this unification seems at least mostly complete, I can excuse the wait. Because even if they could deliver a temporary quick solution, it would likely change a ton and cause even more friction. The status quo might be preferable in the meantime, rather than even more fragmentation.

    Because I assume a bunch of aspects are on hold until coexistence and shader blocks are at least mostly complete. For example, things like the water system can't make its way to URP, if that's even in the cards - which, if easily switching pipelines is the goal, it should be, even if just at a basic level.
     
  24. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Oh that would go over like a lead zeppelin.
     
  25. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    I don't think they are planning anything close to that level of compatibility between pipelines. Coexistence, as I currently understand it, still means having multiple copies of every light, material, or other pipeline specific data. It's just being able to have the data for both in a single project. And nothing about making a shader abstraction requires or prevents co-existence from working. For instance, they could keep things incompatible at the shader level and just require two shaders, much like they are doing for other data. Or they could create an abstraction for shaders and not do coexistence, much like the shader graph does. The problems are independent of each other.
     
    LooperVFX likes this.
  26. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,560
    As a multi-billion dollar company, they can't really just throw something together in "2 weeks", even if one of their programmers could pump something like that out. There are so many layers of procedure with a goliath of a company like this; gone are the scrappy-gang-of-programmers days of Unity (which in some ways got them into this mess without enough long-term forethought, though one cannot be expected to predict the future completely). A system that could underpin the backbone of custom shaders in Unity for the next couple decades has a lot of monetary risk riding on it, risk that a solo dev does not really have to worry about.

    That said, it does feel like they aren't being as proactive and focused as they should be, this kind of thing should be a top priority task as it's kind of a core feature that a lot of future content will end up being built on. But Unity is unfortunately in a touchy position after spending the past many years working to revamp almost everything about Unity and is finally on the other side of that hump, in the difficult position of committing to finalizations of systems decisions and melding the newer ecosystem together. It's in their best interest to ensure they spend significant time considering pros and cons of various approaches and future proofing, to experiment before committing so they don't end up with another half-baked solution that increases the ecosystem fracturing that already got bad enough with URP/HDRP, Post Processing and DOTS fumbles. And them opening up for outside input on development of this system is a sign of that desire at least.

    But whoever is heading this side of the projects should maybe consider bringing on outside consultants/help to try and get this feature's development on a focused path that isn't just TBD maybe in a year or two. A company this big should be able to set some kind of soft deadline for the team to focus their efforts around and get down to committing to decisions, polishing and pushing it out. This public demo is nearly a year old now. A system like this should have had multiple rounds of feedback based changes and new test iterations in that year's time, but instead it's kinda just sat here.

    Make use of the community to harden your design choices for this since it's such an important system.
     
    sirleto and LooperVFX like this.
  27. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    I'm quite familiar with working in large companies, and yeah, they'd easily turn 2 weeks into 2 months or more, but the point is not really changed by that. It's been 6+ years already. Further, it's not the "scrappy gang of programmers" days of Unity that gave us the SRPs; it's the "we hired all the console devs we could find, didn't listen to the original developers of Unity or its users' warnings, came up with a giant waterfall plan, and have 3 teams working in 3 different countries - what could go wrong?!?" version of Unity that gave us SRP. HDRP is about 4.5% of current projects and is the only pipeline really getting new features; URP is kinda at parity with BiRP, a 12 year old renderer that wasn't state of the art at the time. Meanwhile over at Epic, Brian Karis basically researches and writes most of Nanite while being in a company nearly as large as Unity.

    Giant over-managed teams run by committees rarely make brilliant software. Individuals and small groups of engineers with a vision do. There are a ton of smart people at Unity, capable of great work, but there seem to be a lot of problems preventing them from being able to actually do anything. And anything small, or that addresses immediate issues, seems to happen ad hoc because of twitter rants, or gets swept up into some massive new initiative which will take years to complete.

    Proving my point - the community wasn't asking for a "shader system to last decades"; they were asking for something to solve the problems they face right now: the abstraction layer which should have been built in the beginning. And trying to do anything that "lasts decades" is a fool's promise - technology changes quickly, and our systems for rendering are still changing rapidly.

    Sometimes it's worth waiting longer for a more grand system, sometimes you should give people something they didn't ask for that intrinsically solves their problems and changes the paradigm, but 7-8 years of problems ignored? They could have easily fixed this issue and then spent 7 years designing the grand shader system to replace all shader systems. Upgrading to that after 7 years of peace would be way easier than having to reverse engineer everything every few months.

    Sometimes I wonder if the only way to get anything done at Unity is to start a big new initiative, like passing a bill in the US congress, filled with pork.

    They are refactoring the fundamental design of the systems they built, not putting the icing on it. They aren't even close to being over that hump.

    All that said, I really do hope blocks turns out brilliant, opens up modularization and customization of shaders in new ways, and ends up future proof for years. I'd love to get back to just writing shaders instead of porting between internally created platforms and playing the "what broke this time?" game.
     
    Last edited: Aug 19, 2023
    wilgieseler, thelebaron, Edy and 12 others like this.
  28. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,560
    I wasn't arguing against the fact this should have happened earlier, I was only remarking about the rate of progress from since this feature was announced. I agree with most of that.
     
    LooperVFX, wwWwwwW1 and jbooth like this.
  29. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    329
    Reminds me of this:

    https://blog.royalsloth.eu/posts/it-takes-a-phd-to-develop-that/

    You can read the original discussion here:
    https://github.com/microsoft/terminal/issues/10362
     
    cecarlsen likes this.
  30. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    Honestly I don't think this was the case here - the person I had this interaction with is a perfectly good programmer by all accounts. I just think that in general Unity is now a large piece of software in which it's quite possible to not know how everything works, and there's huge pressure on not breaking things or promising too much to the public before you absolutely know what the deal is. Combine that with the chaos of a company that's grown from a few hundred to 8000 people in a few years, plus focus/vision/management issues, and you easily have an environment where the assumption is that everything is hard and the general answer to touching things is no. I can move so fast because I'm not in such an environment.
     
  31. Shikoq

    Shikoq

    Joined:
    Aug 5, 2023
    Posts:
    12
    I honestly don't understand the purpose of block shaders. Unity already has a great abstraction - shadergraph. Keep developing it and give people options for more customization. Interface, stencil, etc. Why do we need another surface shader? What problems do they solve?
     
  32. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    706
    *Insert heavy face palm gif here*

    There are 4 pages of people talking about this feature and why it's needed/underwhelming right here...
     
    joshcamas likes this.
  33. Shikoq

    Shikoq

    Joined:
    Aug 5, 2023
    Posts:
    12
    Literally all I saw was endless whining about how they were used to those nasty surface shaders and how they don't want to change their habits.
    Compatibility? Ditch the dead legacy RP already. SRP is one of unity's best ideas ever, seriously.

    Stop listening to forum whiners, Unity.
    I mean, "Do what people need, not what they ask for" is the first rule of game design, right?
    Luddites will always be against everything.
     
  34. LaireonGames

    LaireonGames

    Joined:
    Nov 16, 2013
    Posts:
    706
    There is SO much I would love to comment on here, but it's obviously not going to be a constructive conversation, so I'm just going to ignore you in an attempt to keep this thread on point.
     
    ElliotB, OCASM, cecarlsen and 2 others like this.
  35. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,560
    This is like saying why do we have C# programming in Unity, why not just focus on the Visual Scripting plugin.

    A visual graph based approach has its benefits for realtime feedback and making the barrier for entry lower. But workflow is often slower compared to someone experienced in writing the code itself, and it's hard for a visual graph system to cover every aspect of a reasonably complex language/system, especially in an efficient manner.

    Block shaders would sit partway between the two: enough abstraction to keep things reasonable to implement and retain compatibility, but not so much that you lose out on capabilities. At least, that's the hope with the system.

    It's also a lot easier to version control and debug shader code than serialized graph data.
     
    Last edited: Aug 23, 2023
  36. BOXOPHOBIC

    BOXOPHOBIC

    Joined:
    Jul 17, 2015
    Posts:
    531
    The purpose of block shaders is to get rid of the idiocy below.

    The abstraction will benefit not just hand-written shaders, but also Shader Graph, Amplify Shader Editor, and whatever mega awesome shader graph/shader system might come next. Abstraction is why surface shaders work from Unity 5 to Unity 2029.7, regardless of whether they were written by hand or generated with Amplify, Shader Forge, or Strumpy Shader Editor.
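    For context, the abstraction being described looks roughly like this minimal BiRP surface shader sketch (my own illustration, not from the thread): the author writes a single surf() function, and Unity generates all the forward, deferred, shadow, and meta passes around it, which is why the same source kept compiling across engine versions.

    ```shaderlab
    // Minimal BiRP surface shader sketch. Unity's shader compiler expands
    // "#pragma surface" into full lighting/shadow passes, so this source
    // never touches pipeline internals directly.
    Shader "Custom/MinimalSurface"
    {
        Properties
        {
            _MainTex ("Albedo", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            CGPROGRAM
            // 'Standard' selects the PBR lighting model;
            // 'fullforwardshadows' asks the generator for full shadow support.
            #pragma surface surf Standard fullforwardshadows

            sampler2D _MainTex;

            struct Input
            {
                float2 uv_MainTex;
            };

            // The only code the author maintains: fill in surface properties,
            // and the generated passes handle lighting and decals.
            void surf (Input IN, inout SurfaceOutputStandard o)
            {
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }
    ```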

    Now, take the list below: HDRP shaders written against 2022.3.0 will not work with decals in 2022.3.7, because the decal code changed yet again between patch releases.

    upload_2023-8-24_22-43-12.png
     
    Last edited: Sep 4, 2023
    Reanimate_L, sirleto, OCASM and 3 others like this.
  37. SoyUnBonus

    SoyUnBonus

    Joined:
    Jan 19, 2015
    Posts:
    43
    This. We make games for PC and consoles. We've released over 30 at this point, and we need a stable engine that we can trust. That's why we've been using BiRP for 9 years, and thanks to Asset Store devs we've been able to release games with volumetric lighting (Ziggurat 2, for example), which standard BiRP doesn't support. And we shipped that game on everything from the PS5 all the way down to the Switch.
     
    Edy, SonicBloomEric and BOXOPHOBIC like this.
  38. ElliotB

    ElliotB

    Joined:
    Aug 11, 2013
    Posts:
    302
    Hi all! I'm still excited. Are there any updates on when the next version of this might be available for preview? Looking back through the thread, the last mention of an ETA was in January, when the answer was 'when it's ready'. Cheers!
     
    OCASM and JesOb like this.
  39. Kabinet13

    Kabinet13

    Joined:
    Jun 13, 2019
    Posts:
    162
    So long as you end up making block shaders support compiling to BiRP (I should hope this is incredibly easy; I'd be worried if it wasn't), I'm a happy camper. I'm more of an SRP user, but having the ability to move one shader across all three pipelines is critical to defragmenting this current mess of an ecosystem.

    +1 for BiRP support
     
  40. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,251
    Will this land in 2023.3 in time for the LTS?
     
  41. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    Considering 2023.3 probably has another year to go :p
     
  42. DevDunk

    DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,251
    It's in alpha now, right?
    So it might be feature-locked somewhere in beta
     
  43. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    Dunno, I just know they said Unity 2023.3 will take a while longer. Hopefully to polish and deliver something more complete than yearly releases allow, given how much work they have with coexistence and such.

    2023.3 won't be LTS until end of next year.

    https://blog.unity.com/engine-platform/2023-3-coming-april-2024-with-updates
     
  44. They said the complete opposite; their excuse is that they want to bring NEW features a couple of months earlier (so repackaging some 2024 LTS features into 2023 LTS and delaying 2023 LTS to the end of next year).
    So even if we believe their fairly transparent BS, they are trying to rush things by shoveling more into this release, not doing less and testing more thoroughly.
    I don't think any of this is true, BTW, just for the record; it's complete BS. I think they don't have any even remotely memorable feature for 2023 because they f***ed up the engineering in the company.
     
  45. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    They don't seem to talk much about coexistence, despite much of their behavior pointing to it.

    There's no mention of 2024 LTS in the post, and as far as I can tell all the new stuff seems geared towards coexistence features like APV in URP, Render Graph in URP, etc. Unifying stuff.

    HDRP seems to not be getting much in new features until the foundations are fixed.

    I think it's reasonable that foundational stuff like block shaders is best done without the demand for a bunch of new features on top of it. The coexistence work is a big undertaking, and hopefully it will result in fewer major API changes going forward after the 2023.3 LTS.
     
  46. "Brought forward" -> done in the same timespan, just called 2023 LTS now, because... reasons.

    There is zero technical reason for delaying 2023 LTS. They could easily ship whatever they would normally push out in Q1 as 2023 LTS, and then release 2024 LTS in 2025 Q1 as usual. They chose not to because they have nothing to release with 2023 LTS that would play along with marketing and the messed-up price change. I have yet to see any other remotely realistic reason why they introduced a 2023.3 tech stream instead of a 2023.3 LTS.
     
  47. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    I think there is if they're doing foundational work instead of feature work. It's about expectations: outside of what's already in 2023, 2024 would on the surface be a horizontal upgrade. So it stays 2023 and expectations for new features stay low.

    Especially since they've been avoiding talking about incomplete features other than forum announcements.
     
  48. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    5,461
    I would warn everyone who's thinking coexistence is going to fix things to dial their expectations way back. As I have seen it, it's mostly just the ability to stuff all the problems of URP and HDRP into the same project, not some unification of workflow so you can easily switch the rendering. It's like FBX, which is really just a wrapper around having 3 separate file formats in the same file.

    For instance, lighting, cameras, and materials are still expected to be separate, with some way to load the right ones depending on which render pipeline you're using. So you'll still need to manage all that data and the changes to it. And while block shaders will certainly help with the biggest issue, incompatible shaders, they're not going to magically solve the data management issues of having all these different versions of everything.

    I also don't expect them to ship block shaders in 2023. We'd be hearing more about it if they were. All of this suggests that fixing the SRP issue is fairly low on the priority list.
     
  49. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    1,012
    Yes, I'm well aware. I'm not at all expecting a simple switch. For me the excitement is more that, in order to do all this, Unity needs the two pipelines to be more similar internally and needs core features to work for both, which we've seen them doing lately (APV, Render Graph, etc.).

    This also means that, in the long term, shared new features between the two will be easier. It was always a problem that the pipelines were so different and made by separate teams. I'm not saying I expect every HDRP shader to work, but it is strange to have a water feature that works on only one pipeline.

    Along with things like HDRP's realistic light values (and hopefully the EV exposure too), which, according to some things they've said, seem to be coming to URP. That alone would be extremely valuable to me.

    Unity isn't really talking about any new features at the moment; I doubt we'll hear more about block shaders until it's closer to completion. They've said many times they've learned their lesson about talking about things too early. It's been a year since they released a test version of block shaders, and 2023.3 is still another 6 months away. I wouldn't be surprised if it launched around then.

    The other reason is that I presume block shaders being incomplete is a barrier to many other things moving forward.
     
  50. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    12,015
    I mean, that is not their problem though. If some of their features actually delivered, even if really late, this wouldn't be a problem.

    Instead, all features are released incomplete and then instantly abandoned <= this is the problem, not them hyping things up too early. But I don't think Unity understands that; they probably think it's a communication issue or something.
     
    SoyUnBonus and bnmguy like this.