Discussion in 'Universal Render Pipeline' started by Tim-C, Jul 9, 2019.
Will the new URP post-processing work on Oculus Go or Quest?
I suppose it's because its more modern architecture allows features and functions that weren't possible with the standard renderer?
Also it is a mistake to say that URP is not about quality. You can achieve amazing quality using URP tools while taking advantage of ECS etc. for the ultimate performance that some applications desperately need. It's all in the shaders.
Sadly, many people mix up photo-realism, which is a style, with quality. They are not the same thing. Regardless, you can achieve at least the same level of realism as with the old standard renderer while using URP and getting amazing performance.
People should stop mistaking graphics technology for creative results and styles.
The amazing artwork that someone created is not because of the tool they used but because of the way they used it. Technology is the brush. Not the result.
Mediocre artwork is still mediocre even if you use the latest DXR tools on it.
Agreed. My feeling is also that this whole pipeline thing needs to be totally rethought in how it's used, because it just doesn't make any sense currently.
VFX Graph and Shader Graph for mobile is like coming out of the stone ages.
You could barely make a good water shader running down a material without a convoluted flowmap.
With AR, it made the camera materials more convoluted, and then you have people stacking cameras on top of each other for ML processing and whatnot.
Pre-LWRP was a disaster for AR. I embrace this new render pipeline.
If you submit a bug report, QA will check if this is a known issue or not. I don't think I've seen such an issue.
In the article "Unity 2019.3 beta is now available", dated August 27, 2019, it was reported:
"We have completely revamped post-processing for Universal, it’s now integrated directly into the pipeline bringing greater performance. Features in Universal post-processing include: anti-aliasing, depth of field, camera motion blur, panini projection, bloom, lens distortion, chromatic aberration, color grading & tonemapping, vignette, film grain, 8-bit dithering."
-Great... But, where can I now (in 2019.3.0b1) find everything related to postprocessing?
That's all I found:
In the documentation said:
"For post-processing, the Universal Render Pipeline (UniversalRP) uses the Unity Post Processing Stack version 2 (PPv2). This package is included by default in any project that has UniversalRP installed
For detailed information about steps to configure the post-processing, the effects that are included, how to use them, and how to debug issues, see the PPv2 documentation."
But this package is not installed.
If I install it manually - postprocessing refuses to work while URP is turned on.
What could be wrong?
It uses the Volume framework, like HDRP. Create a volume and you'll see all the settings you need. Create a global volume to set the default settings.
This comment at the top of Volume.cs might be relevant to you:
//Volumes are documented in HDRP for now
Our updated docs need to be pushed – and should be pretty soon (if you want a glance at them you can see the GitHub PR here).
In 2019.3b1, when you open up the template you should see a "Post-process volume" in the scene, or if you have a new scene you can just right click in the scene and create a volume (so, global volume then create a new profile, then modify that profile or add overrides).
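The same setup can also be done from script. Here is a rough sketch of creating a global volume with a Bloom override, assuming the URP Volume API as of 2019.3 (`UnityEngine.Rendering` / `UnityEngine.Rendering.Universal`); exact namespaces and override types may differ by package version:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal; // URP post-processing overrides (Bloom, Vignette, ...)

public class PostSetup : MonoBehaviour
{
    void Start()
    {
        // Create a global volume that affects the whole scene.
        var go = new GameObject("Global Volume");
        var volume = go.AddComponent<Volume>();
        volume.isGlobal = true;

        // Build a profile and add an override, e.g. Bloom.
        var profile = ScriptableObject.CreateInstance<VolumeProfile>();
        var bloom = profile.Add<Bloom>();
        bloom.intensity.overrideState = true;
        bloom.intensity.value = 1.5f;

        volume.profile = profile;
    }
}
```

In the editor you'd normally create the volume and profile as assets instead; the script route is mainly useful for scenes generated at runtime.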
Manually adding doesn't work because that's the older post processing (v2), we've moved to having post processing directly integrated into the render pipelines. This lets us have an HDRP specific post processing that's all about cranking things up vs. a universal that gives more scale and better performance.
Edited to add: we have a bug with Scene View + Post Processing (so you can see your post in game view but not scene :/) this has been fixed and will land with 7.1.1.
I agree. Every time a new version comes out I end up having to rebuild all of my projects to get them to work. I just spent a ton of time getting my project to function with LWRP and now, one year later, it's redundant because it needs to be the Universal pipeline. Everything is now pink, and I cannot find any documentation on how to upgrade from LWRP to the Universal pipeline.
The upgrade was automatic for us. URP is mostly LWRP renamed. To facilitate porting, a gutted version of the LWRP package is still there and installed.
I have to agree with @richardzzzarnold. I am 99.9% on board, agree with most of the decisions UT makes, and am a champion for this engine in general, but the URP API changes are not great. I was under the assumption that URP amounted to a simple name change of LWRP. Instead, there are breaking API changes. Post-processing has been refactored entirely, such that custom image effects no longer work. There are 4 separate posts of people asking if they will be supported in URP, with not one answer from UT.
In my current project, post-processing is broken and I am told the 'fix' is only available in 2019.3 (#1178127), so I am forced to use the beta if I want image FX. Even that isn't a viable solution for me, as I will clearly have to rewrite them (for the third time) for URP (assuming of course this is even supported; the pending documents don't even mention custom post-processing). A name change is one thing, but introducing a new API or replacement for a component that doesn't have feature parity with the existing one is something else.
Admittedly, I just "dabble" in graphics, I can't imagine what asset store devs or graphics programmers have to endure. I feel like I am chasing your API changes every 6 months or so. Do I need to keep expecting this when you decide you are tired of the current render pipeline or want to move yet again in a different direction? I'm sorry for the rant post, but I think you guys handled this one rather poorly - even by your standards.
I just wish that Unity made a table showing all the features that are supported/unsupported in the Legacy pipeline, URP, and HDRP... it's so confusing and hard to make decisions without a single place to get the information, especially when it's also changing every few months. The last couple of years have been a nightmare trying to keep up with Unity changes, especially the render pipelines.
Is this really too much to ask?
Will you file a bug and get me the number? Materials being pink when going from lightweight to universal definitely shouldn't happen and is a bug – I'm sorry you ran into that.
Post-processing for Universal doesn't currently have custom effects. They are coming and I'm sorry I don't have a timeline for when. Depending on your needs the custom renderer system might be a good solution (video - additional docs coming).
I totally understand your frustration, we're doing some big pushes which creates thrash – but the goal is to get to a better place. We never want to cause pain, or make changes for the sake of changing things. Our built-in render pipeline has a foundation that's evolved over 10+ years - and what game devs want to be able to do, as well as what's possible in realtime rendering, has changed a bunch with the evolution of hardware and technology.
With the changes to post-processing we had a difficult choice to make: release new post-processing which is much more performant but doesn't have custom effects, or keep the older, less performant post-processing. More performant post-processing has been the number one post-processing request, which we've had from a large number of users. Our other data point is that HDRP has had its new higher-end post since 2019.1, and requests for custom effects have been relatively few (but that pipeline is used by a different audience). We wanted to get more performant post-processing into the hands of our users ahead of this LTS, to give them a stable base to ship on. This pain is compounded by the fact that we should have made this kind of change before leaving preview, but we weren't able to get it out as soon as we wanted. We should have taken that aspect into consideration.
This was also the rare case of needing to force a switch, as we couldn't have embedded post-processing and package-based post-processing co-existing. Forcing a switch is never how we want to get feature adoption. We likely underestimated the importance of custom effects, and I'll bring this situation up with the graphics team to keep in mind for how we evolve future features. We want to keep making Unity better, and we need to do that in a way that isn't frustrating for our users.
All of our materials broke when upgrading from 2019.1+LWRP to 2019.2+LWRP, but our setup is probably to blame.
In order for LWRP to work for us, we are having to copy it out of the package manager into the project each time, make our custom edits (some tweaks to fog, some custom renderer functionality we need in the base Lit shader, etc).
Sometimes when copying over the new version, it breaks all of our shader references in materials (I'm guessing the GUIDs are being stomped, even though we maintain the .meta files). If you select the material, it says its shader is InternalShaderCompilerError (or something like that?).
I've since written tools that will fix all materials in the project once I've identified the culprit shader GUIDs and what they should actually be now. All of the old properties of the material are retained. It looks like there already may be an internal solution for this with the material converter, but the shader types are hardcoded to the ones chosen by your team. It would be amazing if we could extend this functionality with our own shaders, or at least have a case you handle on your end for materials that are reporting an improper shader compiler error. If you inspect the properties its easy to see what shader should probably be assigned to the material.
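A tool like the one described above can be sketched as a small editor script. This is only a rough illustration, not the poster's actual tool: it assumes the broken materials report the built-in error shader (named something like "Hidden/InternalErrorShader") and remaps them all to one target shader, whereas a real fix would use a per-GUID mapping table:

```csharp
using UnityEditor;
using UnityEngine;

public static class MaterialShaderFixer
{
    [MenuItem("Tools/Fix Broken Material Shaders")]
    static void Fix()
    {
        // Assumed target shader; a real tool would map each broken GUID individually.
        var replacement = Shader.Find("Universal Render Pipeline/Lit");

        foreach (var guid in AssetDatabase.FindAssets("t:Material"))
        {
            var path = AssetDatabase.GUIDToAssetPath(guid);
            var mat = AssetDatabase.LoadAssetAtPath<Material>(path);

            // Materials whose shader failed to resolve fall back to an error shader.
            if (mat.shader == null || mat.shader.name.Contains("InternalErrorShader"))
            {
                mat.shader = replacement; // serialized properties on the material are retained
                EditorUtility.SetDirty(mat);
            }
        }
        AssetDatabase.SaveAssets();
    }
}
```

Because Unity keeps a material's serialized properties even when its shader reference breaks, reassigning the shader restores them, which matches what the poster observed.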
Is there a reason you didn't use custom renderer passes / custom renderers for that? Those were introduced in 2019.1, and are a safer forward facing solution vs. overriding. I suppose it would likely not work for the case of base Lit shader but maybe for fog (though you might have to extend it yourself).
I thought you could modify which shader types get converted? But that's a vague memory of an older conversation. I'll bring it up, as having a more user-friendly and general solution is something we should do.
We've been using LWRP since before custom renderer passes were a thing; however, we did try converting some of our custom edits to them, to not-great effect. It was difficult to find documentation on them, and they seemed to be in an incomplete working state.
The biggest issue I had was the depth data I was getting back in the custom render passes was bogus, and after bringing it up, was told that depth data is not available in the custom passes yet by one of the Unity Devs (I can find the post if that helps)...
Then don't deprecate the current render pipeline.
I just finished some new surface shaders with multi-compile keywords and lots of maths (plus some custom post processes).
I really can't see how I could do this with URP or HDRP.
I have no use for a render pipeline that's not as flexible as the current render pipeline.
This is going backwards.
I loved Unity for its flexibility.
I'm so disappointed it's depressing.
Ah, yeah, understandable. On docs, we are getting more people helping with them, as it's something we need to do better on, but we do have catching up to do. The docs for the custom renderer are in progress, hopefully out ahead of 2019.3.
For depth data, I'll ask if exposing it is on our to-do list. Likely not happening for a bit though; short term we are working on camera stacking (aiming for the 19.3 cycle) and a deferred renderer (20.1) as our next two big features.
The current render pipeline isn't slated for removal. There is naming something as deprecated, which means we are no longer doing feature development on it (and only fixing high-priority bugs). And then there's removal. The former is driven by prioritization and customer needs; in this case our built-in renderer has been stable for quite a while. The latter will be driven in part by adoption data, which is in part driven by whether or not we have all of the features we need.
It's interesting that you call out the flexibility of the built-in render pipeline, as a lot of the feedback we've gotten on it has been about its black-box nature. I would be interested in hearing more about your use case and seeing if we can figure out how it would exist in a scriptable render pipeline world; I sent you a DM with my contact info.
Thank you very much.
Yep, so that explains why I and others can't use the official "custom" features you guys work on. They aren't truly custom, and as a poster above said, the "official" way is actually a regression of functionality, as it doesn't expose any of the things I need.
Using depth data in a shader feature is pretty common/standard
For me, I was expecting you to open the black box, not move into a new box altogether. I understand there is technical debt under the hood that made things difficult, but couldn't the "frontend" have stayed similar enough, plus extensions (like Shader Graph), while cleaning up the "backend", so the transition would be less painful? It's a common enough complaint at any skill level; see Jbooth, who advises having a full-time tech artist dedicated to the new pipeline (so much for democratization), and I think he's pretty much a veteran and a credible source. I mean, Shader Graph is cool, but it being mandatory and replacing surface shaders is a whole new level of inflexibility!
I don't know how it is behind the curtain, and maybe it's obvious to you why you went in that direction; it's just that breaking continuity that harshly is kind of disorienting. I have taken a huge step back before committing now.
Indeed, right now we only have two options: go super high level with Shader Graph and lose all the power of actually programming your shader, or go super low level and write thousands of lines of HLSL just to do the most basic shader that integrates with URP/HDRP (shadows, lighting, lightmapping, light probes...).
We NEED a middle ground between these two extremes.
That's what SURFACE SHADERS are! A way to program a shader that integrates with the render pipeline without having to spend your life in the process!
Losing surface shaders would be like losing C# scripting and force everybody to use Playmaker or else write your own C++ application that interfaces with the Unity API at the lowest level.
This is the same thing but with shaders!
i really enjoy surface shaders. they allow very quick prototyping and the automatic upgrade is a wonderful feature.
however i doubt that they will come to hdrp or urp in the near future – if ever. unity has decided to provide a visual shader editor because there was a lot of demand. and for 90% of users this may just be fine.
talking about lwrp: you can create your own template, which will take some time. but once you have it you can more or less just write something like the good old surface shaders: just put your InitializeSurfaceData() function and all the structs into an include file and you should mostly be done.
so instead of riding a pretty much dead horse i would love to talk about announced features of the URP – like deferred rendering. i do not know anything about it, like: how does the gbuffer layout look? what will be the difference between hdrp and urp deferred? lighting models?
So @quixotic, to be clear, are you saying that the built-in pipeline is deprecated?
I take strong issue with saying it's stable: shadows using realtime GI and a rotating light (basically any standard day and night cycle) are currently horrendous. Please see case 1169018 or pages 1 and 2 of this thread for my posts on this.
You're not helping.
Mistakes can be fixed.
Not being an important feature for you doesn't make it less important.
thanks, i doubt you do. but it is up to you how you want to communicate.
i would suggest to start a poll.
I agree with you, having surface shaders would be swell, but it would be good to actually look at the options you have available to you at the moment. Have you actually tried to do this? Because while it's not as convenient as surface shaders it's not thousands of lines, since the new library is a hell of a lot more usable than the old one.
Unless you want to do your own lighting model, lighting and shadows are encapsulated in the LightweightFragment* functions, which is fed with data pretty familiar from surface shaders. Look at LitForwardPass.hlsl calling LightweightFragmentPBR() or SimpleLitForwardPass.hlsl calling LightweightFragmentBlinnPhong().
Sampling lightprobes and lightmaps is done in InitializeInputData and is just inputData.bakedGI = SAMPLE_GI(input.lightmapUV, input.vertexSH, inputData.normalWS); plus OUTPUT_SH(output.normalWS.xyz, output.vertexSH); in the vertex shader.
You'd probably not even edit that in the majority of cases, even if you edited the rest of InitializeInputData.
Sampling your actual textures happens in InitializeStandardLitSurfaceData in LitInput.hlsl but you could just dump that entire function straight into the fragment function, it's only 15 lines of code including the specular/metallic preprocessor if.
It gets more complicated if you want to do your own lighting model but even there the framework makes things quite alright. Look at the implementation of LightweightFragmentBlinnPhong().
Just because that was poorly encapsulated in the old library and had to be done manually does not mean that is how it works in URP. Where the real issue lies is that writing shaders that work with both pipelines is not feasible at all outside the really high-level Shader Graph. The way they function is just too fundamentally different, which was not the case in the legacy pipeline. Creating some types of shaders for sale on the Asset Store is going to be interesting, to say the least.
All I'm trying to say here is that while it would be good to get this abstraction back to whatever extent is possible, things are not as tragic as you are making them out to be.
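Putting those pieces together, a minimal custom fragment shader in LWRP/URP can stay quite short. This is a sketch using the 2019.x LWRP library naming described above (`InitializeInputData`, `LightweightFragmentPBR`), and it assumes the standard `Varyings` struct and `_BaseMap`/`_BaseColor` declarations from the Lit pass includes:

```hlsl
// Minimal custom fragment: sample the surface yourself, then let the library
// handle lights, shadows, and GI. Assumes LitForwardPass.hlsl-style includes.
half4 CustomFragment(Varyings input) : SV_Target
{
    // Fill InputData (positions, normals, baked GI); normally done by the
    // library's InitializeInputData. A flat tangent-space normal is used here.
    InputData inputData;
    InitializeInputData(input, half3(0, 0, 1), inputData);

    // Your own surface sampling replaces InitializeStandardLitSurfaceData.
    half3 albedo = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, input.uv).rgb
                   * _BaseColor.rgb;

    // One call covers the main light, additional lights, shadows and GI.
    return LightweightFragmentPBR(inputData, albedo,
        /*metallic*/   0.0h, /*specular*/   half3(0, 0, 0),
        /*smoothness*/ 0.5h, /*occlusion*/  1.0h,
        /*emission*/   half3(0, 0, 0), /*alpha*/ 1.0h);
}
```

The function names were renamed (Lightweight to Universal) in later URP versions, so treat this as the shape of the solution rather than copy-paste code.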
Frankly, no. But that's because the pipeline is not finished and I have learnt in the past to stay away from diving under the hood of new Unity implementations that haven't been finished yet. (since it will keep changing)
Thank you very much for the in-depth tips. I really appreciate it.
If all of this is so much simpler than with the current render pipeline, then why not refactor the code or at least make some macros or functions to make the life easier for shader developers?
As I said before, I'm not against change, I just want a real alternative to surface shaders.
It doesn't have to be the same syntax as before as long as it gives the same power by code.
Maybe the solution to this would be to just give us some very clearly and detailed docs about programming shaders for the new render pipelines along with user-friendly macros and functions.
Thanks for chiming in and listening to some of the frustration here. I hope you guys are seeing that many of us who are not full-time technical artists have a hard time keeping up with the changes. As a solo dev, it seems like it's becoming hard to build a game with only a few custom visual tweaks and mostly standard graphics. (As atomicjoe said: "go super high level with the shader graph and lose all the power of actually programming your shader, or go super low level and write thousands of lines of HLSL just to do the most basic shader".)
To illustrate, I'll list the (modest) features that I want to implement:
- Simple AO to give depth to a dynamic scene (player-created, so I can't use offline baking). There is no AO in the LWRP/URP post-processing stack.
- Fog and 'selection outlines' implemented as a custom PP effect. There are no custom post-processing FX in URP.
- A foliage shader that is 99% standard but makes a TINY tweak to the BRDF/lighting function. Shader Graph has no way to change only the lighting function (like you'd do in a surface shader).
- Runs on mobile. This is one of the reasons sticking with Legacy is not really an option.
I can't believe that for the above set of requirements, my only resort would be to build a custom render pipeline!?
(Edit: just to illustrate, this early footage should give you an idea of the visual style of the game, and why I think it should be doable using 'mostly standard' features. https://www.dropbox.com/s/180wfrlxf3wtitk/2019-03-21 building-tower.mov?dl=0 )
Yeah, I guess this is where my approach at my job is different, out of necessity. I always follow the internal development of the packages we use and modify them when needed. I do realise that this approach is definitely not for everyone, but it's one of the reasons I really appreciate the SRPs and packages in general, I can make changes to things that were all native code before.
That is an extremely good question. I suspect the answer is that no one on the team has done it, since the project plan currently has different priorities. It is entirely feasible to do this as an external developer, but it would need to track internal development, which is best done, well, internally.
Absolutely. Unfortunately, I have long given up on Unity documentation ever being useful to that extent. There has always been a weird split, where some documentation is in-depth and good for figuring out how to do things as a developer and some is extremely high level and doesn't expose remotely enough information for a developer. Annoyingly enough, those nearly never cover the same material at the two detail levels. Even before the reference source was released, ILSpy on the code side and the shader source on the shader side was my documentation and that very much shouldn't be the way things are.
I'm pretty sure we can change that just asking nicely. (and very insistently)
Thanks for the update. To be completely fair, I do like the direction you guys are headed in. There is some very cool tech included with the URP (VFX, Shader Graph, etc). I am also keenly aware of the issues of post processing on mobile and looking forward to the replacement. I'm sure I can find a workaround for my image effects in the meantime.
I think the biggest issue here is just communication. I believe most of us would have been ok with the announcement that 2019.3 will deprecate the pp v2 package and introduce the new forked version of the hdrp pp package... Also, custom image fx are not supported yet, but performance is much better - we are looking into ways to support it. If your software teams are anything like mine, it could just have been a simple oversight due to a team not being familiar with the intricacies of the solution.
I believe there should be some advance notice somewhere that with the adoption of URP, custom image FX will not work out of the box. I'm ok finding a solution for my needs, but when the 2019.3 RC drops, developers that must switch to URP from LWRP will not be happy finding out the hard way that FX aren't supported, at least not yet.
It also makes it much harder to share and adapt code. With node-based visual editors it's harder to swap bits of different shaders, especially when working on multiple test variants.
I'm still using old Lux code to swap Franken-shaders as a way to accelerate dev experiments. The graph has been useless so far, making this an order of magnitude more complicated. That stifles growth: there is a huge skill gap between handling the graph and then going full SRP.
Also, a lot of the stuff I do is basically custom NPR lighting combined with screen-space-like effects. I feel like I got stuck doing that stuff with the new approach.
And the argument that things are simple once you spend time reverse-engineering functions that don't match the behavior you are looking for (InitializeWhatever doesn't spell 'sampling a texture' to me) is ridiculous, and a trap I used to fall into. Especially when the promise seems to be redoing everything on a per-project basis, while Standard allowed me to Franken-shader my way out of that.
Well, I don't have to upgrade until they clear all of this up. I mean, that's the reason I still don't have a paid license: they keep pulling the carpet out from under me. Each time I learn something, they make it obsolete, so my effort goes to waste.
Er, no. They match precisely what they claim to do once you understand the terminology. The terminology should be documented and isn’t but SurfaceData refers to the input to the lighting function, so yes it makes sense that InitialiseStandardLitSurfaceData provides you with the SurfaceData for the standard Lit pass. It is basically exactly the same data that a surface shader surf function used to output to the lighting function.
As for it being a trap to rely on internals, that is unfortunately because it is an absolute necessity in this engine and always has been. It has served me just fine in 6.5 years of commercial Unity development. Granted, I would not be doing that if I were making assets for the asset store, but I’m a game developer, the engine is a tool not a temple I should tread lightly around. It is a hell of a lot less of a problem with packages than it used to be, all package updates go through our VCS, I can see precisely what has changed and where and how before merging my patches in.
This situation is not great for asset developers but it is manageable otherwise. Edit: That said, we used to use the Uber shader pack, which did precisely this for the legacy pipeline, duplicated most of the standard shader, calling into the entirely undocumented standard shader library etc. At least the URP shader library is simple.
I worded that very carefully when I wrote it. I didn't say they don't match their intended function; I said they don't match the behavior that the person looking at them, with a specific modification in mind, wants to operate on.
I can't just locate and isolate a single part; I have to reverse-engineer the whole code flow and the naming culture behind it, and know complex shader concepts in advance. I'm essentially arguing the bolded part: you can't make isolated quick changes like in Standard unless you take a course on the whole philosophy (without any docs, on an evolving convention), which means you are several steps removed from the action you want to take. I don't want to learn trigonometry when I'm trying to do simple addition, basically.
I still have no complete mastery of the Standard shader way of doing things; it's not needed. I could just stay on the small subset I needed, reference more whenever I needed it, and grow at my own pace. I'm arguing that the current way of doing things, in its present state, prevents growth: you get stuck at Shader Graph, and the gap between that and SRP mastery is huge. I went from simply swapping functions in Standard (like modifying Lux to have per-pixel tangents for a hair shader) to actively building an RTGI solution that might work on OpenGL ES 2.0 (whether that project is viable is still up in the air and not the subject).
The Standard implementation might be flawed underneath, but the idea didn't prevent graph shaders (Strumpy, Shader Forge, Amplify, etc.) nor limit us to them; the philosophy was key. For me the SRPs would have been for specific use cases, and I was fine with them being specific use cases (HD and lightweight machines), but I thought they would retain the powerful philosophy of Standard plus surface shaders.
Ok, but maybe with some refactoring of the code to make it more accessible, some user-friendly macros (or defines, call it what you want) and functions coupled with a detailed documentation with examples about the whole process of making shaders by code for URP (like the current docs about surface shaders), maybe then it would be on par with the current surface shaders?
Since the pipeline has changed, the syntax must have changed too, but that doesn't have to be a problem if there is detailed documentation with examples on how to program shaders for URP.
Can some Unity dev like @quixotic or @aleksandrk confirm if they can do that?
long story short: when the first preview came out we asked for surface shader support. and even more advanced surface shader support which would let you access all internally used variables like tangentWS or viewDirWS. if i recall properly this was more than a year ago.
unity instead has decided to solely focus on shader graph. which is far away from being ready but it is evolving. i just saw a commit adding support for tweaking the normals and tangents in the vertex shader in hdrp.
so i doubt that we can make them change their plans on this.
you and me – we both do not have to be happy about this, but somehow deal with it.
and if trying to make them change their direction fails what else can we do?
submit bug reports and feature requests?
apart from that: pointing out really essential features, such as: in most scenarios you will get a full depth prepass (if there is a directional light with shadows enabled, e.g.).
this depth prepass could save the gpu a lot of work if it was usable in the actual forward lighting pass: no overdraw at all, no need to use alpha testing, so early z-out would speed things up.
asking for more congruent macros/functions but keeping old ones still in the code and adding overloads might be another request which has a good chance to be taken into account:
half4 albedoAlpha = SampleAlbedoAlpha(uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap)); // (uv, texture, sampler)
half3 normalTS = SampleNormal(uv, TEXTURE2D_ARGS(_BumpMap, sampler_BumpMap), _BumpScale); // (uv, texture, sampler, bumpScale)
but it does not fit:
half4 maskSample = SAMPLE_TEXTURE2D(_MaskMap, sampler_MaskMap, uv); // (texture, sampler, uv)
could it not be:
half4 maskSample = SampleTexture2D(uv, TEXTURE2D_ARGS(_MaskMap, sampler_MaskMap));?
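The suggested wrapper could be added without touching the existing macro. A hypothetical sketch, assuming the SRP Core texture macros (`TEXTURE2D_PARAM` to declare the parameter pair, `TEXTURE2D_ARGS` to pass it, `SAMPLE_TEXTURE2D` to sample), matching the (uv, texture, sampler) order of `SampleAlbedoAlpha` and `SampleNormal`:

```hlsl
// Hypothetical helper: consistent (uv, texture, sampler) argument order,
// implemented on top of the existing SAMPLE_TEXTURE2D macro.
half4 SampleTexture2D(float2 uv, TEXTURE2D_PARAM(tex, samplerTex))
{
    return SAMPLE_TEXTURE2D(tex, samplerTex, uv);
}
```

Since the old macro stays in place, existing shaders keep compiling; new code can adopt the consistent call shape.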
No, it is not currently deprecated. I wanted to separate the concern of removal from that of deprecation. I looked into that bug; it's still open, and I'm trying to find an owner to see whether, when it gets fixed for the render pipelines, we can also cover built-in. We do need to solve the broader case of making sure built-in continues to receive bug fixes while many of our users are shipping on it. This is something we are actively trying to solve.
- AO is a thing we are missing that we plan to add, if that's a blocker for you built-in is a better current solution.
- Likely possible custom render passes, I linked a video above that mentions a bit of them but documentation is currently in progress for those – built-in is your easier option for the short term, until custom post-processing is supported again (that or using 2019.2 of LWRP).
- Unfortunately not functionality we currently have. We only offer going the route of fully custom lighting calculations on an unlit shader. Built-in is the easier pass for you right now with what you want to do.
- What issues on mobile are you having with built-in? While Universal is more performant in many use cases, there have been a large number of mobile games shipped on our built-in render pipeline, which might be the better choice for your project right now.
We are recreating many systems that have existed for a long time. With that comes a long list of previous features and requirements that we have to prioritize. In addition, we don't just want to recreate the old system; we've been adding new features and functionality with the goal of becoming the better solution (Shader Graph, VFX Graph, custom renderers). It is useful for us to hear what we've missed, to help inform our 2020 planning.
There were some misses in the chain of communication. We'll make sure in our 2019.3 release notes and blog post we call this out, as that's something that should have been in the 2019.2 beta blog post / notes but wasn't.
We don't currently have documentation for how to write shaders in Universal, I'll ask our technical writer to make sure that's in the backlog. We're working to make sure everything gets proper coverage for documentation but it is a work in progress.
For learning about the shaders in Universal you can take a look at how the Lit shader was written; it's in the package contents. Also, in Shader Graph you can right-click master nodes and copy out the generated code. It's pretty readable if you know vertex/fragment shaders, and we did a refactor, landing in time for 2019.3, that changes how we generate code so it's blocked out into sections and better commented.
This is a very insightful comment. Unity would see greater acceptance of their new pipelines if there was an upgrade path for legacy shaders.
Actually, I am a bit tired of filing bug reports, so I started a simple table anybody can edit or comment on:
No one gives a crap about any of this until you fix CAMERA STACKING!!!!
Custom lighting in Shader Graph and LWRP:
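For anyone landing here, the core of that approach is usually a custom function that pulls URP's light data and does its own shading. A rough sketch, assuming the URP 7.x Lighting.hlsl API (LambertCustom and the parameter names are made up for illustration):

```hlsl
// Custom Lambert shading usable from a Shader Graph custom function node
// (or an unlit shader). Assumes the URP shader library is available.
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"

half3 LambertCustom(half3 baseColor, half3 normalWS)
{
    // Grab the main directional light and apply our own diffuse term,
    // bypassing URP's PBR path entirely.
    Light mainLight = GetMainLight();
    half ndotl = saturate(dot(normalWS, mainLight.direction));
    return baseColor * mainLight.color * ndotl;
}
```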
That description is probably a bit outdated, which is usually the problem. The whole setup has changed: as someone already said, post-processing has moved into the Volume section, like in HDRP (not sure how it makes sense for post effects to live in a section called "Volume", but hey, what do I know about good UX), and that is changing yet again for 2019.3.
This is a really problematic situation. Not sure who is signing off on such a mess.
Might be a stupid question, but is there actually a side-by-side feature comparison list anywhere for built-in, URP and HDRP?
I tried to find one, but unless I'm blind there is none. It would perhaps help us understand where URP actually stands with regard to feature parity with the built-in pipeline.
Here you go:
The biggest benefit of LWRP/URP is that it renders several realtime lights at once in a single pass, whereas built-in has to do a fullscreen pass for each realtime light. (EDIT: I'm talking about the built-in FORWARD renderer here. Deferred doesn't have this issue, but it is heavier because it has to render several buffers every time no matter what.)
Each time you do a fullscreen pass, you waste a lot of fillrate/bandwidth, and this kills performance on mobile devices in particular, so this is awesome. However, LWRP/URP is more limited than built-in in other areas (or at least it was).
HDRP is the same technology but with lots of extras, like better transparency sorting, refraction and volumetric lighting:
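To make the single-pass point concrete, here's a rough sketch of how URP accumulates all lights in one fragment invocation, simplified from the shading loop in the URP package's Lighting.hlsl (ShadeAllLights is an illustrative name, not a Unity API):

```hlsl
// Simplified single-pass light loop in the style of URP's Lit shader.
// Assumes URP 7.x: GetMainLight, GetAdditionalLightsCount and
// GetAdditionalLight come from the package's Lighting.hlsl.
half3 ShadeAllLights(half3 albedo, half3 normalWS, float3 positionWS)
{
    Light mainLight = GetMainLight();
    half3 color = albedo * mainLight.color *
                  saturate(dot(normalWS, mainLight.direction));

    // Built-in forward would re-rasterize the object once per light;
    // here every additional light is accumulated in the same pass.
    uint count = GetAdditionalLightsCount();
    for (uint i = 0u; i < count; i++)
    {
        Light light = GetAdditionalLight(i, positionWS);
        half3 attenuated = light.color * light.distanceAttenuation;
        color += albedo * attenuated *
                 saturate(dot(normalWS, light.direction));
    }
    return color;
}
```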
So no custom effects? This is a confusing progression. Unity, can you clarify this MESS?
I tell you, UNITY IS ON A DOWNWARD SPIRAL with all these convoluted versions...
Hi and thanks for the feedback.
I note two issues with the above. One is that extending the renderer as suggested is not globally applicable, because you instantly become incompatible with any other asset in the store and with future pipeline versions.
The other issue is the choice to integrate the image effects in a way that is not extensible. Is there a major design concern behind this? Maybe this library could instead be offered as an asset that injects directly into the pipeline to work faster but remains extensible, rather than being provided as a closed package.
It's a mess alright and yes it is convoluted, but they are not on a downward spiral. This is a long term improvement that will make a difference. Combined with DOTS it's amazing and very promising technology.
Sadly, whoever approves the feature deployment strategy probably (for whatever reason) has no clue of the effect this has on people's projects, and they are already losing good teams because of it. They just need the right person to get their house in order and help restore trust and confidence. Whoever is in that role right now does not think of users with projects, or perhaps they believe not many are using it. In my opinion, SRP should not even be in Unity yet. It is one of those things you show at SIGGRAPH as the technology you are working on for the future. It has one more year ahead of it to mature; it was introduced to users two years too early. The standard pipeline may not have the potential SRP has, but there is nothing you can't achieve with it. SRP still leaves a lot on the "wait and pray" list, and that is not what teams need. It's fine for hobbyists who only experiment and end up doing very little or nothing, but for professional teams a game engine is not a pastime.
Thanks for this... I might be blind then
I've been in game engine development since '96 (heck, whoever uses 3D Studio Max and works with animations might be using animation SDK functionality my colleagues and I established with Autodesk in '97). So I have a quite good grasp of what is currently happening with the ongoing restructuring of the Unity Engine.
That said, as long as there is no feature parity between the legacy built-in pipeline (BIP) and the URP for low- to mid-range devices and PCs, it would be a bad idea to drop or reduce support for the BIP. In terms of shadow caster support, there is still some need for improvement (point light shadows). Even the support for at least a second directional light should be implemented (backlight anyone?).