Discussion in 'Graphics Experimental Previews' started by phil_lira, Sep 28, 2018.
It's here: https://github.com/Unity-Technologi...ity.render-pipelines.lightweight/CHANGELOG.md
Visually, it allows you to do things like planar reflection via some coding:
Programmatically, it allows you to do extra processing before or after rendering a scene. The goal is to use the current render result for tricks like the one above, so it's like post-processing, but not necessarily applied to the whole screen.
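To make the planar-reflection idea concrete, here's a minimal sketch (not code from the thread): a second, manually driven camera is mirrored below a reflection plane at y = 0 and rendered into a RenderTexture each frame, which a reflection material then samples. The field names (`reflectionCamera`, `reflectionMaterial`) and the `_ReflectionTex` property are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal planar-reflection sketch: mirror the main camera below y = 0,
// render that mirrored view into a RenderTexture, and hand the texture
// to a material that samples it.
public class SimplePlanarReflection : MonoBehaviour
{
    public Camera reflectionCamera;      // a second, disabled camera
    public Material reflectionMaterial;  // assumed to sample _ReflectionTex

    RenderTexture reflectionRT;

    void OnEnable()
    {
        reflectionRT = new RenderTexture(Screen.width, Screen.height, 16);
        reflectionCamera.targetTexture = reflectionRT;
        reflectionCamera.enabled = false; // we render it manually
    }

    void LateUpdate()
    {
        // Mirror the main camera's position/rotation across the y = 0 plane.
        var main = Camera.main;
        var pos = main.transform.position;
        reflectionCamera.transform.position = new Vector3(pos.x, -pos.y, pos.z);
        var euler = main.transform.eulerAngles;
        reflectionCamera.transform.eulerAngles = new Vector3(-euler.x, euler.y, euler.z);

        // Render the mirrored view and feed it to the reflection material.
        reflectionCamera.Render();
        reflectionMaterial.SetTexture("_ReflectionTex", reflectionRT);
    }

    void OnDisable()
    {
        reflectionCamera.targetTexture = null;
        reflectionRT.Release();
    }
}
```

A production version would also use an oblique near plane to clip geometry below the mirror, but the above shows the core "extra rendering before the main pass" pattern being discussed.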
AFAIK this entry refers to "adding a new API" that's compatible with LWRP 5.x (Unity 2019.x) releases. Not the ability to do extra rendering, which was already possible using an older API.
I said it in another thread I might as well say it here:
- Quick note: don't use a GitHub release that wasn't released officially via the Package Manager unless you know the package like the back of your hand.
- Reason: those releases sometimes haven't passed QA testing; only consider the Package Manager version to be final.
By the way, with this latest version, with unity cloud build, I receive this error for LINUX (windows and mac compiles fine):
[Unity] -----CompilerOutput:-stdout--exitcode: 1--compilationhadfailure: True--outfile: Temp/Unity.RenderPipelines.Lightweight.Runtime.dll
[Unity] Library/PackageCachefirstname.lastname@example.org/Runtime/DefaultRendererSetup.cs(2,42): error CS0234: The type or namespace name 'LWRP' does not exist in the namespace 'UnityEngine.Experimental.Rendering' (are you missing an assembly reference?)
[Unity] Library/PackageCacheemail@example.com/Runtime/DefaultRendererSetup.cs(41,22): error CS0246: The type or namespace name 'IBeforeRender' could not be found (are you missing a using directive or an assembly reference?)
Maybe I just need to reimport the project. I will and edit this if anything changes.
Hi, regarding 4.7.0-preview. It's not in our production repository yet, only in staging. Before moving from staging to production we do release QA.
What's holding it from being released is that we found some issues in mobile VR with that version.
Thanks for sharing.
That is caused by a namespace error when backporting the PRs required for AR... We will have to fix it.
Will 4.7.0 fix postprocessing in VR?
Rendering from a second camera to a render texture does not work in LWRP (2018.3.1 with 4.6.0-preview) in VR mode. The second you turn on the VR camera, the texture stops updating or turns black. It works as normal in non-VR mode, so this is a game breaker if you want, for instance, a scoped rifle in VR.
2019.1 seems to solve the above but creates new issues instead.
It would be case 1112145, unless alpha isn't meant to be a back-buffer image in premultiplied mode.
Sorry for my ignorance and my limited English, but I am interested in using it for my own VR projects. Do you know if it will be VR-ready soon? Thanks for your time.
4.7.0-preview seems to reproduce an issue: https://issuetracker.unity3d.com/is...-pipeline-gives-corrupted-image-in-ios-device
In 4.7.0, the "Depth only" clear flag doesn't work; it clears the entire previously rendered image. So I changed the flag to "Don't clear", and a glitched image was shown on iOS (it works correctly on macOS).
We are testing a new package today. 4.7.0-preview is a no-go for production.
We re-enabled post-processing and VR in 4.7.0-preview but found out that it was not properly tested on mobile. So we will disable post-processing on mobile until the VR and post-processing teams fix it. On PC it should be fine, but please let us know if you find any issues.
There are still a few issues. The major ones I'm aware of are:
1) Shader variants in VR are causing long build times.
2) PostProcessing on mobile is disabled.
3) SRP Batcher is not compatible with VR yet.
If you are experiencing more issues, please create a bug report and feel free to post the case number here as well. These issues sometimes depend on multiple teams to fix, and this way I can stay on top of them.
*Depth Only* and *Don't Clear* are not supported in LWRP because we don't support camera stacking.
In 19.1 we replaced the camera Clear Flags with Background Type to make this clear and to stop you from selecting configurations that don't work in LWRP.
To prevent that clearing issue in 18.3, we changed Don't Clear to behave the same as Solid Color. A new package with this is coming out soon.
Wow, It's a big change.
Can I see any ScriptableRenderPass injection example? (ex: Render a scene in perspective -> Render some models in orthographic as overlay -> Render uGUI)
The best thing we have in terms of examples is this atm: https://github.com/UnityTechnologies/LWRPScriptableRenderPass_ExampleLibrary
This API is still under the experimental namespace because we want to be able to simplify it and also provide some built-in scripts for things that require special tricks/projection, like guns in an FPS.
Also, Screen Space - Camera render mode is not supported at the moment, but we will add support for it.
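For the "render some models in orthographic as an overlay" part of the question, here's a heavily hedged sketch of the pass pattern. The experimental LWRP pass API changed between preview versions, so the exact base class and entry-point signatures may differ from what's shown (real code would derive from `ScriptableRenderPass` and be enqueued by the renderer); the class and method names here are illustrative only. The core idea is stable, though: at your injection point you fill a CommandBuffer, swap in an orthographic projection, draw your overlay objects, and restore the camera's matrices.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of an "ortho overlay" injected pass. In real LWRP code this
// would derive from ScriptableRenderPass and be registered with the
// renderer; the shape below just shows what Execute would do.
public class OrthoOverlayPass
{
    const string kTag = "Ortho Overlay";

    public void Execute(ScriptableRenderContext context, Camera camera)
    {
        CommandBuffer cmd = CommandBufferPool.Get(kTag);

        // Swap in an orthographic projection for the overlay draw.
        Matrix4x4 ortho = Matrix4x4.Ortho(-1f, 1f, -1f, 1f, 0.1f, 100f);
        cmd.SetViewProjectionMatrices(camera.worldToCameraMatrix, ortho);

        // ... enqueue overlay draws here (cmd.DrawRenderer / cmd.DrawMesh) ...

        // Restore the camera's own matrices so later passes (e.g. uGUI)
        // render with the expected perspective projection.
        cmd.SetViewProjectionMatrices(camera.worldToCameraMatrix,
                                      camera.projectionMatrix);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```

Sequencing perspective scene, ortho overlay, then uGUI is then a matter of where the pass is injected in the renderer's pass list; the example library linked above shows the version-specific registration boilerplate.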
Later this week I'll post a doc with some WIP public roadmap and feature comparison table between LWRP and Built-in.
And I see what you mean. Works for PBR... Not for unlit.... Bizarrely enough.
Hi all, here's a WIP document comparing the current state (2019.1) of LWRP and Built-in in terms of features, along with ETAs. Items flagged "In Research" are not in the plans for the 2019 roadmap.
We are working on getting a public roadmap with cards as well, so you can vote and help us prioritize the most important features.
Let me know what you think. We will also add this to package docs.
Regarding performance. I'm working on a project with some baseline performance tests (fillrate of different shaders, batching/cpu cost and bandwidth cost). so we can test wide on devices and share results with you. Hopefully I can share this soon.
I have recent performance tests with 5.3.0 on an iPhone 6S.
1) A fillrate test rendering 2.5x native resolution comparing LWRP Lit vs Standard Lit with 1 directional + 4 lights:
LWRP renders with about 4ms less than Builtin (13.x vs 17.x ms) - GPU time.
2) A bandwidth test rendering shadows + opaque + transparent + post + msaa + HDR.
LWRP renders about the same GPU time but is consuming more bandwidth than Built-in (161MB L/S vs 148.2MB L/S)
I'm atm looking at reducing bandwidth cost in LWRP, in fact 5.3.0 comes with a few improvements in that area.
3) I haven't yet finished a batching baseline performance test of LWRP vs Built-in, but another dev shared results with the SRP Batcher on and off in LWRP, which should be a pretty close comparison to Built-in: in a scene with 1024 renderers casting shadows using 1024 different materials, on an S8 (Adreno), LWRP with the SRP Batcher takes 38.5 ms CPU time vs 56 ms without it. For context: the SRP Batcher is coming enabled by default with 5.3.0. It currently supports only Vulkan and Metal, but work to support GLES3 is in progress.
Can you not just create a new material and assign it to the cube?
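A minimal sketch of that suggestion follows. The shader path shown is the 2019.1-era LWRP Lit name; earlier previews used different paths, so adjust for your version.

```csharp
using UnityEngine;

// "Create a new material and assign it to the cube": build a fresh
// material from the LWRP Lit shader and put it on this object's renderer.
public class AssignNewMaterial : MonoBehaviour
{
    void Start()
    {
        // Shader path assumed for 2019.1-era LWRP; older previews differ.
        var shader = Shader.Find("Lightweight Render Pipeline/Lit");
        var mat = new Material(shader);

        // Assigning via .material instantiates a per-object copy;
        // use .sharedMaterial to affect every user of the material.
        GetComponent<Renderer>().material = mat;
    }
}
```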
Is there no support for D3D for the SRP Batcher, or am I misunderstanding? I have turned it on in our branch of 4.x and it appeared to be doing *something*, though I may be imagining things.
In regards to the performance testing, I too have been seeing higher bandwidth costs than expected with LWRP. We won't be moving to 2019.x for this initial release, so 5.x is not an option for us. I'm not requesting a back port to 4.x, but do you think a back port is possible? I can merge code myself in our local branch but I know there may be some core engine dependencies.
Speaking of the shader variants in VR, this was so crippling to our iteration that we stripped the Standard shaders altogether and opted to use only Shader Graphs instead. If there is anything else you can speak to, or if you would like additional insight into what we are seeing, I'm more than happy to help.
So you have tried the mini profiler and it didn't change anything?
I can see SRP Batcher helped to reduce draw call on my end, but I am using Metal API (not enabled by default in 2018.3, but can be enabled at runtime apparently)
Results were inconclusive using the Stats window and better diagnostics tools like RenderDoc and NSight weren't revealing anything further, hence my inquiry.
Our Render loop and Material definitions are quite well optimized as it is, so I'm not ruling out that even if it does work with D3D, it may have little to no effect in our case.
On a side note, I see on a re-read that @phil_lira mentioned it doesn't work with VR yet, so it actually wouldn't matter in our end use-case.
If you've used any other material/shader editor before you should be fine. It's basically the same for all of them in any engine you use, path/raytracers included (except these have some other advanced features built into their master nodes but they share a lot)
If you're completely new, don't worry; there are lots of tutorials out there to learn from (they don't have to be Shader Graph specific). I would suggest starting off by recreating a basic version of the Standard shader: basically just adding texture inputs and tiling.
I don't actually use ShaderGraph, but the Content Creators on our team do. They are making cool things with it, but I have had to make modifications and additions, as well as custom Master Nodes to really be able to unlock them.
It is as intuitive as a node editor can be, but it is somewhat limited in its current state. It mostly serves as a surface abstraction, rather than a full-blown Shader Editor at this point.
Still, very good, because it puts the power in the Artists hands and unblocks them from waiting on someone like me who otherwise has to hand write everything.
There are performance concessions that you'll make, too, as naive implementation on top of a node abstraction will never be as efficient as hand-optimized HLSL.
C'est la vie. The first step is always making it work/look right. Making it fast comes after. And if it comes to needing advanced low level optimizations, then that's where you seek out help from the community and use it as a learning opportunity.
"Premature optimization is the root of all evil." - Knuth
The 4.8.0-preview version is available through the Package Manager. It contains a couple of bugfixes plus enabled support for non-mobile post-processing in VR. The VR team is working on adding post-processing support for mobile VR, and once that's done we can enable it in LWRP.
This will be the last version for 18.3. I know this is not what many of you want to hear, but supporting it further would consume a lot of the resources we are putting into the 2019 cycle to get LWRP out of preview, stable, and performant.
SRP batching is not supported on VR unfortunately.
I'll check on the ETA so I can share with you.
Sure. Poke me in private for backports if you are interested; however, it might be tricky given the codebase has diverged a bit. For bandwidth, the following things are in the works to shave quite a bit of bandwidth off LWRP, though they mostly benefit mobile:
1) On mobile platforms there's an option to preserve (or not) alpha in the framebuffer so the game can blend with the background on iOS/Android. This is exposed in Graphics.preserveFramebuffer. By default in Unity this is false, meaning we have to kill alpha, which might require an extra blit pass just for that, which is insane. So in SRP we are investigating an option to globally mask the alpha color write.
2) Framebuffer attachments are created in the forward renderer with stencil even though stencil is not needed in most cases. There's no way to avoid this using the RenderTextureFormat API, but it seems possible with the new GraphicsFormat API. I'll take a look at that soon and expose an option in the asset for devs to request depth + stencil if they are doing custom stencil-based shaders.
3) LWRP performs worse than Built-in in bandwidth when using MSAA. One of the issues is that when doing a screen-space shadow resolve, a "hack" blit causes an unnecessary flush of the just-cleared screen-space shadow resolve texture to main memory. The fix requires a special case for VR and some work on how we set up camera matrices in LWRP, which is why it isn't fixed yet.
We'll keep working on those to get a lean bandwidth cost on LWRP.
The situation is pretty bad in VR unfortunately, but the VR team hasn't had time to look at it yet. To be honest, it's already bad on non-stereo rendering; some work is planned in 19.1 to reduce variants in general.
I can’t be absolutely sure, but I recall ‘receive shadows’ on the LWRP Lit shader in transparent mode used to function as you’d expect. In the 5.2 package the surface just renders background shadows over the top and doesn’t receive shadows. Is this a regression or intended future behaviour?
Is VFX supported, or is it HDRP-exclusive?
As per this Doc: https://docs.google.com/spreadsheets/d/1nlS8m1OXStUK4A6D7LTOyHr6aAxIaA2r3uaNf9FZRTI/edit?usp=sharing
2019.1 is their current target.
As noted by others, this is a really big change. I feel this is the kind of information that really should have been in the changelog, so users don't have to trawl through long forum threads full of unrelated issues to figure out why everything suddenly broke. I personally found it funny (in a bad way) how there's specific mention in the log about fixing a bug for Depth Only cameras in 4.6.0-preview only for Depth Only to be deprecated in 4.8.0-preview without any note in the log.
In any case, is it safe to say users that currently make use of stacking should not update past 4.6.0-preview?
Am I in a timeloop or does the changelog have the year mixed up too?
I think this might be a bug related to how we resolve shadows in lighting pass vs screen space resolve pass. If you disable cascades does it make transparents receive shadows?
Thanks for the feedback.
Regarding the changelog: in 19.1 it's a little clearer about this clear-flags change, since we replaced Clear Flags with Background Type. For 18.3 the info is indeed missing, but that's because we needed to partially backport PRs to fix issues and support AR there. So we didn't backport the camera UI with Background Type, but we did backport the clear-flags behavior.
About camera stacking: it never worked in LWRP. Below 4.6.0-preview, selecting Depth Only or Don't Clear has undefined behavior. The bug you refer to had to be solved because it would leave tile memory uninitialized on some GPU vendors, causing the screen to display garbage when not all pixels were rendered. Since there's no stacking, we can't load contents from a previous camera, so we changed it to force a solid color clear.
The reason we are deprecating camera stacking in LWRP (and the camera clear flags that allow it) is that it's a naive option that lures developers into a solution that's bad for performance on mobile. Mobile GPU vendors like PowerVR and ARM have warned users about the pitfalls of camera clear flags.
So, the big question everyone's asking is what's the alternative and how to do it. Here's what we are doing:
1) Prototyping a solution that will not only fix this issue but also allow great flexibility to customize LWRP, including for Asset Store devs.
It will involve injecting a custom render pass to render objects with a specific camera FOV/matrices. To do that, we have to greatly simplify how custom render passes are injected in LWRP and possibly ship some built-in render passes that developers can add to the renderer.
2) Write a page to document how to upgrade from Built-in to LWRP. So all these workflow changes get addressed.
That's expected, since we are a preview package. LWRP is not finished yet, and we both get the opportunity to evolve its API together. Think of it as you having an active voice in the development of this product, which is absolutely true. In fact, we have already changed the API many times based on feedback received here and from internal/external customers.
Although API and behavior breaks are annoying, they're done in the best interest of all, so we can provide you with workflows and solutions oriented toward performance and usability.
Once we get out of preview (in the 19.1 release) these things will not happen anymore. At that point we will consider LWRP a product ready for production work.
Yeah, I saw that after another skim of this thread. It's not *that* critical, as we are almost entirely GPU bound. I suppose more optimized state changes from the new batcher could help there too, but per my last NSight profile we're mostly SM+L1$ or VRAM bound, which are things that content will hopefully be able to address.
I did a quick search and it doesn't look like the GraphicsFormat API is usable when creating RenderTextures through a CommandBuffer or RenderTextureDescriptor in 18.3. I may have missed it though.
I've noticed this for sure. I was curious to see whether or not leveraging Compute to do the resolve may help us here. Understanding that UAVs have their own implications and that this wouldn't work for mainline LWRP because of Mobile's shaky Compute support, but I'll do a test in our branch.
Yeah, it was an atrocious increase in build times for sure. Glad to know it's acknowledged and being looked into. It almost seemed as if it were compounded by how many scenes are actively being built, as if the Shader compilation was executing for every scene. We've moved entirely to ShaderGraph Shaders as they only generate systemically-driven variants and our build times are good again.
I'll shoot you a message about the backports, many thanks!
Does anybody know how to make real-time shadows on lit terrain not as black as on other objects? This is the only issue keeping me from using LWRP.
Yes that's it.
Thanks for the timely reply!
My apologies, as 19.1 is still in alpha, I haven't been using that so I completely missed the changelog for that version.
Right, I did read that from the Google spreadsheet/doc that was linked earlier in the thread, and that makes sense. However my observation thus far has been that it worked like it did in the built-in pipeline, but my usecase is probably a lot simpler than for most other people. I used it simply for displaying particle effects using a second camera, and overlaying that on top of the main camera via the depth value setting.
Ah, my apologies, I referred to the bug as a bit of a jab at how the bug was listed in the changelog, specifically mentioning the Depth Only flag, yet there was nothing in the log about how the functionality was completely changed in subsequent versions. I have not experienced the bug in question, fortunately.
I look forward to the alternative solution and documentation! I greatly respect the work that you guys do, and am always for a performance gain, was just frustrated at the lack of information about such a big breaking change. Will this alternative be exclusive to 19.1, or will that be back ported to 18.x? In any case, I suppose I will be remaining on 4.6.0-preview until that's ready!
It needs somethings like the HDRP RTHandleSystem. https://github.com/Unity-Technologi...ition/Runtime/Core/Textures/RTHandleSystem.cs
We are avoiding compute as much as possible because of instabilities on some mobile devices. That's not a strict rule, though; as we scale LWRP up we might support it depending on the platform. For instance, it would help a lot with post-processing.
For that specific screen-space setup, if you replace the blit with rendering a fullscreen quad (or, even better, a fullscreen triangle without attributes) using a custom shader, that should fix it, and you might avoid setting up and restoring camera matrices.
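The fullscreen-triangle suggestion can be sketched like this. The `fullscreenMaterial` is an assumption: its vertex shader is expected to compute clip-space positions from `SV_VertexID` (the classic oversized-triangle trick), so no vertex buffer and no camera matrix setup are needed.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Instead of cmd.Blit (which involves camera/matrix state), draw one
// procedural fullscreen triangle with a custom shader. The shader is
// assumed to derive positions from SV_VertexID, covering the screen
// with a single oversized triangle.
public static class FullscreenTriangle
{
    public static void Draw(CommandBuffer cmd, Material fullscreenMaterial,
                            RenderTargetIdentifier destination)
    {
        cmd.SetRenderTarget(destination);
        // 3 vertices, no vertex attributes; the vertex shader typically maps
        // vertex id i to clip pos ((i << 1) & 2, i & 2) * 2 - 1.
        cmd.DrawProcedural(Matrix4x4.identity, fullscreenMaterial, 0,
                           MeshTopology.Triangles, 3);
    }
}
```

Because the triangle's positions are generated in the shader, nothing depends on the current view/projection matrices, which is exactly what avoids the matrix save/restore mentioned above.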
We added skylight support to terrain recently. That will fix it; it's coming in 5.3.0 for 19.1.
Regarding docs, we are working on them, as well as a compiled FAQ. The feature table I shared earlier will be polished and added to the package docs, along with a public roadmap of upcoming LWRP features.
Phew. It's coming
Thank you Phil. One more question: it seems like camera.layerCullDistances doesn't work in render pipeline mode. Is there any alternative to it?
It should work. Can you open a bug report so we take a look at it?
I apologize, my bad. It works, but somehow the number of triangles becomes much bigger (in the Game view stats) after updating to the pipeline. I'll check; it's probably a bug on my end.
Any ETA for light cookies?
Sorry for my ignorance, but I am working on VR projects and am interested in using LWRP to optimize my 3D PC games for VR. Do you think using LWRP would let me optimize them for VR?
Thanks for your time.
Not in our roadmap for 2019. We will try to make it for 2019.3 but can't promise it as the schedule is already super tight.
You'll get a performance improvement if you are using dynamic lights. The more lights you use, the more performance you gain, as LWRP scales multiple lights better than Built-in (for the Forward Renderer, that is, as there's no Deferred Renderer yet).
Ok, Thanks for the info.
Another question, please. Since I am new to this topic: could I use the Book of the Dead environment in LWRP via a template? If so, could you teach me how to do it? Thanks.
You would need to move all of the Materials (which are HDRP Materials) to use LWRP Shaders. They will likely show the magenta error colour otherwise.
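A hedged editor-script sketch of that bulk conversion is below. It only swaps the shader; HDRP-specific texture slots and properties won't map across automatically, so the materials will still need a manual pass afterwards. The shader path is the 2019.1-era LWRP name and may differ in other versions.

```csharp
using UnityEditor;
using UnityEngine;

// Editor utility: point every material in the project at the LWRP Lit
// shader so nothing renders magenta. Property remapping is NOT handled.
public static class ConvertMaterialsToLWRP
{
    [MenuItem("Tools/Convert Materials To LWRP Lit")]
    static void Convert()
    {
        // Shader path assumed for 2019.1-era LWRP; adjust per version.
        var lit = Shader.Find("Lightweight Render Pipeline/Lit");
        if (lit == null) return;

        foreach (var guid in AssetDatabase.FindAssets("t:Material"))
        {
            var path = AssetDatabase.GUIDToAssetPath(guid);
            var mat = AssetDatabase.LoadAssetAtPath<Material>(path);
            if (mat != null && mat.shader != lit)
                mat.shader = lit; // base color/main texture usually survive
        }
        AssetDatabase.SaveAssets();
    }
}
```

Run it from the Tools menu on a copy of the project first; shader swaps are easy to do in bulk but tedious to undo.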