
What is next for us at Unity with Scriptable Render Pipelines

Discussion in 'General Graphics' started by mirrormask, Jul 2, 2020.

  1. mirrormask


    Unity Technologies

    Joined:
    Jun 20, 2019
    Posts:
    1
    From the beginning, we developed the SRP architecture to solve several goals:
    • Provide the most optimal pathways to take advantage of the fragmented hardware landscape and achieve high performance on those platforms
    • Enable deep customizability of rendering paths in user land
    • Enable content scalability across the entire platform reach of Unity

    We view the Universal Render Pipeline as the successor to the default rendering pipeline in Unity (aka the ‘built-in pipeline’). Universal RP is designed to be the default: a great place for authoring graphics and deploying everywhere, allowing you to achieve beautiful visuals and great performance with maximal platform reach.

    At the same time, we positioned the High Definition Render Pipeline as the primary rendering pipeline for bleeding-edge graphics functionality on GPU-compute-capable, high-end platforms.

    We’ve been working on getting Scriptable Render Pipelines ready for production over the past few years. As part of that effort, we’ve been eagerly listening to your feedback. There has been a lot our community has shared with us - and we’re grateful for that - thank you! Our major goal is to incorporate the feedback we got from you to improve the quality of the graphics packages and SRPs.

    Some of the feedback we’ve received is already being addressed through the work we have in the pipeline - we will share more on that below. Some of the feedback, however, will take longer for us to address as we need to define a solution that addresses the concerns presented, yet has longevity through the evolution of our product and the graphics API ecosystem.

    We want to do this right - and the right solutions require deeper thinking and time. We also wish to balance the needs of all of our user groups (Asset Store publishers, developers, artists, etc.) with the changes we are making.

    With that, we wanted to share some of our latest plans on what we are doing _right now_, and where we will be going next. While this is a somewhat lengthy post, we hope that it helps build clarity and transparency so that you know how we’re approaching this architecture for our product.

    What are we going to cover in this post?
    • Addressing your questions and feedback about SRP quality and stability improvements, and shader API stabilization
    • Sharing our plans to distribute verified Graphics and SRP functionality with core Unity
    • Evolution of the Universal render pipeline to be the default render pipeline for Unity
    • Ensuring reliable and predictable upgradability of projects from built-in to URP and HDRP
    • HDRP architecture, design considerations, and extensibility approach
    • How we are approaching Scriptable Render Pipeline Cross-Compatibility, including Cross-Pipeline Shader Authoring
    • Helping the Community to Be Successful with SRP through best practices guides and tutorials for both pipelines
    Quality And Stability

    Stopping breakage between versions

    One of the biggest pain points that you have shared with us is that SRP API often breaks between versions - this is a cause of upgrade anxiety and frustration for you.

    The APIs we have in our systems are a contract with you as consumers of these systems: they should be stable, clearly documented (as internal or external), and, if changed, come with auto-upgraders or clearly marked changes in the upgrade notes.

    The most vocal feedback has pointed to shader API changes as the major culprit of upgrade pain. To address this, we are improving the automation and tooling for shader API validation for SRP, to ensure that functionality does not regress between releases. This will ensure a robust and reliable upgrade path for shaders between versions of the SRP.

    We already run extensive automated testing in our build system for supported platforms (Windows, OSX, Android, iOS, consoles) with a focus on functional testing for features for SRP.

    The next step in getting SRP robust for production is to extend automated validation and upgradability testing to projects and Asset Store packages.

    We are adding nightly tests that validate a number of user projects and Asset Store packages for both HDRP and URP in our automation process. These tests are intended to validate the API stability and the data, along with the visual results. They are designed to catch shader API breakages as well as core API breakages, and to prevent unintentional changes from landing.

    Note that the shader APIs may still change as we add new features, improve iteration speed, or optimize performance. We strive to make those changes rare and as minimal as possible. We will also make sure that they land as early as possible in the product cycle, by the end of alpha. Additionally, we guarantee that they will be clearly documented, and that the standard nodes in Shader Graph will keep working across all shader API changes. Still, there may be a need to adjust custom shaders or custom Shader Graph nodes.

    To improve our product testing coverage, we have also extended testing across package combinations (for example, URP + Shader Graph + VR) for a cohesive product perspective. This significantly increased testing coverage of the Graphics/SRP codebase. We have also added a large number of additional tests for platform coverage, especially VR. In addition, the team has recently spent many engineer-months addressing large swaths of bugs, buckets of smaller user pain points, and stability and workflow improvements for both render pipelines to increase the product quality, starting with 2019.3 and continuing throughout 2020.2.

    Migrating Graphics Packages to Core Unity: One version for each version of the Editor

    For production, you want to know that you’re always working with a recommended (verified) coherent set of Graphics functionality, including SRPs - that new versions are synced to Unity releases, with package and Editor versions matching. You have shared with us the frustration of trying to configure your projects’ packages: it is difficult to get a correct combination of Graphics packages and Unity versions.

    Our intent is to change that - to make it easy and seamless for you. With that, we want to share our current thinking - we would love to know what you think about our plans.

    Our goal is to ship verified graphics packages as part of core Unity; there will no longer be versions distributed via the Package Manager. This means that each shipped version of Unity will have a Universal Render Pipeline (URP), High Definition Render Pipeline (HDRP), Shader Graph, and VFX Graph that have been validated to work well with that release. This removes the headache of figuring out matching versions - it will just work out of the box.

    Now, you may ask - how will this affect where we develop SRPs and graphics functionality? After all, you’ve shared with us how much you enjoy the full visibility of us developing it in the Graphics repo on GitHub.

    We will continue to develop in the open on GitHub under the new release plan. The only thing changing is that we will distribute verified Graphics and SRP functionality with core Unity.

    The pipelines and tools will become a core part of Unity. A big benefit of this approach is that it makes it easier to test the entire Unity product with SRPs holistically: one version of SRP for one version of Unity. It also makes it easier for you to know what to support - a better integrated experience, with no need to deal with package confusion, just like with core Unity.

    To answer some specific questions that will come up from this:
    • You will still be able to override the graphics packages with a custom fork or branch from the git repository - just like you do now
    • You will be able to configure your manifest to track a branch from GitHub - allowing you to get the latest bug fixes before we ship a version of Unity
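
    As a rough sketch of what such an override might look like in a project's Packages/manifest.json (the package name and branch shown here are illustrative - check the Graphics repository's README for the exact package paths and supported syntax):

    ```json
    {
      "dependencies": {
        "com.unity.render-pipelines.universal":
          "https://github.com/Unity-Technologies/Graphics.git?path=/com.unity.render-pipelines.universal#master"
      }
    }
    ```

    Replacing `#master` with a commit SHA would pin the package to an exact revision instead of tracking the branch's latest fixes.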

    We want to hear from you - what are your thoughts on this plan?

    Universal Render Pipeline Plans

    We view the Universal render pipeline as the successor to the default rendering pipeline in Unity (aka the ‘built-in pipeline’).

    One pain point we have heard loud and clear is that URP needs functionality parity with built-in before you will transition across. We agree - in fact, that is also how we are approaching URP evolution.

    The goal is to get Universal to full parity with the built-in render pipeline, and offer functionality beyond - improved visual quality, enhanced performance, more features, and rich artist tooling.

    That said, the journey of getting there is in progress but hasn’t been completed. We aim for URP to reach functionality parity with built-in (i.e. the things you could do in built-in, you can do in URP) for Unity 2021. To clarify, we will ensure that we provide the same capabilities (for example, the ability to support camera stacking in URP), but they may have different characteristics in URP versus built-in (due to design considerations such as performance or usability).

    Our prioritization is driven by your feedback. We would love to better understand your priorities and feature needs, to help with our scheduling. So please let us know directly by voting for your most-needed features, or by submitting requests for ones that are not on our public roadmap yet, here: https://portal.productboard.com/unity/1-unity-graphics. We are currently working with Ali Mohebali, our new SRP product manager, to revamp the public-facing roadmap to reflect our goals in full detail by the end of summer 2020.

    Here are some of the features we are shipping for URP in 2020.2:
    • SSAO support
    • Mixed lighting modes including shadow mask, distance shadow mask and more
    • Deferred renderer support
    • Improved material inputs (detail normal maps, height maps, parallax mapping, other)
    • VR-specific post-processing functionality for HoloLens, Magic Leap, and Windows MR (stability, quality, optimizations)
    • Across-the-board major swaths of bug fixes for stability, quality, and performance, addressing key areas for URP

    Additionally, here are some of the features that we are currently working on for URP in 2021.1:
    • Light cookies support
    • Deferred renderer optimization for mobile platforms
    • Point light shadows and other shadow improvements for URP, such as shadow distance fade, shadow cascade blending, and more
    • Shader tier system for URP platform and tier scalability configurability
    • Additional post processing, including per-object motion blur, auto-exposure, and motion vector support
    • Virtual texturing URP support

    Reaching parity for URP with built-in is one of the top priorities for the Unity graphics team and we’re fully committed to this goal.

    Reliable upgrading from built-in to URP

    Many of you are interested in learning more about our plans for improving the upgrade experience from the built-in render pipeline to URP. This is something we’ve been focusing on deeply and will continue to resource heavily through the 2020.2 and 2021 releases. The goal is to make upgrading from built-in as smooth as possible for standard supported features.

    We are aware that we are not yet at this goal - currently, the upgrade story is lacking, which is not a happy place for us.

    How are we planning to improve this?
    1. Reaching functionality parity with built-in, to ensure that all standard supported features upgrade smoothly to URP. See the list above for our parity feature schedule. We aim to ensure that all features are tested for upgrade as we develop and land them.

    2. Upgrader feature coverage: we have just completed an audit of the Asset Store by running a wide set of products through the upgrader. This highlighted a number of holes in our upgrader coverage that we are now addressing. For example, we found a lack of support for upgrading materials in nested prefabs, as well as issues with upgrading default materials, which we’re working to address.

    3. Upgrader testing automation: We are adding automation to ensure we test upgradability for a set of representative projects nightly in our pipeline.

    Please help us identify any other holes we need to address. Note that we will do everything we can to ensure that standard features are upgradable. Still, there is a set of custom features for which we currently have no plan for automated upgrades; for example, upgrading existing custom shaders to URP will not be automatable.

    High Definition Render Pipeline Plans

    Universal RP is designed to be the default: a great place for authoring graphics and deploying everywhere. While you may have felt gaps in URP due to missing functionality, we are closing those gaps, and the intent is for you to use the Universal pipeline to achieve beautiful visuals and great performance with maximal platform reach.

    At the same time, we positioned the High Definition Render Pipeline as the primary rendering pipeline for bleeding-edge graphics functionality on GPU-compute-capable, high-end platforms. HDRP has been designed to take advantage of more advanced GPU features in order to deliver maximal GPU performance on these platforms (for the curious, a deep dive on the HDRP graphics architecture can be found in the SIGGRAPH 2018 Advances in Real-Time Rendering presentations).

    Reliable upgrading from built-in to HDRP

    We want to ensure you have solid pathways for upgrading your built-in projects to HDRP. As with URP, we offer upgradability from built-in for standard supported features, with the goal of making it as smooth as possible. At the same time, due to fundamental design differences between the built-in pipeline and HDRP, there are elements that are not suitable for automated upgrades, such as post-processing and lighting. To help with those, we provide “best practices” guides (for example, this guide).

    We’re aware that there are gaps in the path to a smooth upgrade from built-in to HDRP, and we’re committed to improving that experience. Please help us identify any other holes we need to address. Note that we will do everything we can to ensure that standard features are upgradable. But there is a set of custom features for which we currently have no plan for automated upgrades; for example, upgrading existing custom shaders to HDRP will not be automatable, just as for URP.

    We are working to ensure that material conversion from standard shaders will work reliably. We are also going to provide clear guidelines on how to author custom shaders or custom nodes for HDRP, to help you understand the important considerations (such as supporting software dynamic resolution, etc.).

    A few known gaps are terrain functionality and Shuriken support for HDRP. For terrain, we are committed to providing HDRP support in the 2021 product cycle. Regarding Shuriken, the intended solution for HDRP is the VFX Graph, which has been designed from the get-go to fit the architecture optimally. We are committed to adding CPU particle support to the VFX Graph to ensure comparable functionality.


    HDRP Architecture Design and Extensibility Goals

    You’ve shared some frustrations with us about experiencing constraints for extending HDRP to meet project-specific needs. The main known constraints are limited injection points, lack of ability to customize lighting, and rigid encapsulation of API behind access modifiers.

    We want to share some of our thinking to help you understand how we approached the architecture design for HDRP and connect it to the choices for extensibility.

    As we mentioned, HDRP targets maximal GPU performance on current and next-generation platforms. This is still a work in progress, as we’re maturing the architecture to full production readiness. During this maturation phase, we are constraining the set of available injection points to minimize synchronization stalls and performance issues. Another important consideration we are committed to is ensuring that any functionality injected into HDRP through these custom injection points supports both the forward and deferred paths.

    We intend to expand this set once the architecture has reached production-readiness levels.

    HDRP was designed to provide physically based, coherent lighting, where each lighting feature supports all material permutations. This is a critical element of the HDRP design, necessary to achieve the highest visual quality and predictable content behavior. Examples include area lights, screen-space lighting, and environment lighting, all of which work with every material permutation. For this reason, HDRP does not allow modification of lighting inside existing materials, to avoid creating lighting response inconsistencies and breaking this core design principle.

    What we have designed, though, is the ability to extend HDRP with custom lighting models by creating new forward materials, effectively allowing you to author entirely new BRDFs. This method ensures that these BRDFs support all lighting types in a coherent manner. This functionality will be provided in the 2021 product cycle.

    Ultimately, we are still maturing HDRP for production readiness. We would love more direct feedback from you to clarify which APIs we need to expose for HDRP customization so that you can achieve your project goals.


    Scriptable Render Pipeline Cross-Compatibility

    Build Once, Scale to Any Supported Unity Platform

    As we mentioned earlier, the Universal Render Pipeline is designed to be the default rendering path for Unity - a replacement for the built-in render pipeline, targeting the widest reach of Unity platforms. The High Definition Render Pipeline is intended for projects that wish to take advantage of bleeding-edge graphics functionality, squeezing out every capability possible. Yet we aim to create a good flow between these pipelines. This work has started, but we still have a lot of ground to cover. Content scaling and cross-pipeline workflows are an area where we are focusing significant development effort going forward. We acknowledge that, at this point in time, this is not yet a great place in Unity, and we strive to do better in this domain.

    What do we want to achieve for cross-pipeline compatibility and content scalability for SRP graphics?

    We want to allow the URP and HDRP pipelines to live in the same project. You will be able to have pipeline-specific assets such as Materials, Cameras, or Lights in different scenes while successfully building and running the project for a single pipeline. Shader Graph shaders, for example, will guarantee that common cross-pipeline features work in both pipelines, while still allowing you to benefit from per-pipeline functionality if desired. The first stage of this will roll out in 2021.1, where you will be able to target both render pipelines from a single Shader Graph.

    A well-documented public shader API: during the 2021 product lifecycle, we will separate the SRP/platform implementations into public and private header files so that it is clear what is safe to use and on the golden path. This will include documentation of the public API points so that it is clear how to use each function and what limitations it has.

    Improvements to asset cross-compatibility: we want to solve “I change pipelines and things still work.” You will be able to author assets that rely on common cross-pipeline features - Materials, Lights, Cameras, etc. - and have them work in multiple pipelines, while still being able to target pipeline-specific functionality where needed. We will provide a “best practices” guide for the set of common cross-pipeline properties that are guaranteed to work, along with clearly documented pipeline-specific functionality. This will be a combination of code and asset improvements, as well as a “best practices” guide in the manual for developing cross-pipeline content. This is also planned for the 2021 product cycle.

    Shared asset and API interface for both render pipelines: API and asset improvements to give features and assets that serve similar purposes in both pipelines a common interface, so that, as a user, you can interact with systems the same way in either pipeline.

    We want to be transparent - bringing the two render pipelines into harmony will not be an overnight effort for us and it will take time for us to do this right. We are planning to roll this out in a few phases so that we can start to bring improvements to you as soon as possible, then start iterating on further improvements.

    Cross-Pipeline Shader Authoring

    With Scriptable Render Pipelines, our goal is to create a cohesive product where content will flow smoothly from one render pipeline to another, in addition to smooth upgradability from built-in, as we mentioned. One large area needing attention for this goal is how we handle shaders and materials. What we want to provide are these elements:

    • We want to author shaders once and target both pipelines.
    • We want materials to work consistently across both pipelines.
    • We want standard shaders that have been written for SRPs to continue to upgrade smoothly.
    • We want all standard built-in shaders to upgrade smoothly to either render pipeline.
    • We want asset store products to be able to target both SRPs with the same set of assets.
    • We want SRP shaders and shader graph assets to be viewed as reliable and have clear documentation.
    • We want a great authoring experience for non-technical artists through materials, and for technical artists through Shader Graph.

    We acknowledge that not all of the above goals are yet reached. That said, we are focusing strongly on achieving them starting with 2021, with the specific goals of creating a great experience in Unity for materials and shaders.

    Note that there are some goals we currently believe are not easily achievable, namely the automated upgrade of custom shaders from built-in to either SRP. The reason is that the built-in renderer shader library lacks a level of abstraction that would allow us to rewrite the backend: it is not possible to automatically capture the intent and correctly translate components and inputs to the new SRP shader APIs. Even existing surface shaders contain extensive assumptions that they will execute on the built-in pipeline - they are designed specifically for built-in.

    You have noted that, until recently, SRP shaders evolved far too much between SRP releases, and upgradability suffered. We have stabilized the shader API and are adding thorough testing - see the note above on automated upgrade testing for shaders. Starting with 2020.2, the introduction of shader stacks will help stabilize Shader Graph versions and allow easier configurability across multiple pipelines, letting you target both SRPs in one Shader Graph, with the goal of full stabilization in the 2021 product cycle. In 2021, we are also providing functionality parity for standard shaders upgraded to SRP.

    Shader Abstraction and Materials

    You have also let us know that rewriting shaders from the ground up for each pipeline is not an acceptable approach. You’ve mentioned that there is no upgrade path from Standard to an SRP, and that shaders written for one SRP (URP) won’t work for another (HDRP). You are right that this is a gap. We agree - and, in fact, we want you to know that we’ve dedicated a significant number of hours to discussing possible solutions to this problem. We want to share some of our thinking to get us on the same page.

    For the last few years, our big push in Graphics has been improving artist tooling and workflows. This has come in the form of Shader Graph, VFX Graph, improved editor tooling for graphics, and better workflows focused on content creators. The reason we put so much effort into this space is that it was a known gap for Unity, and we listened to the feedback of many developers who were frustrated with the lack of this functionality in the product.

    While a lot of work has been done, we are still in the middle of this journey to deliver great-feeling artist tooling for shader and VFX authoring. There is still a fair amount of work remaining to ensure that we have a great UX and feature set for both tools, as well as for material handling - the latter being necessary for non-technical artists.

    On the other hand, this has meant that shader programming workflows have not been the main focus of our work. That doesn’t mean we think these workflows are unimportant, but we had to make choices, and these were the choices we made.

    Right now we are in the planning phase of addressing the following problems:
    • Modernization of materials in Unity, including updating the material abstraction in Unity
    • Cross-pipeline shader abstraction for programmer workflows (in the spirit of surface shaders)

    We believe that providing the functionality above is crucial to making our ecosystem of Asset Store and custom render path development successful. A solid and flexible shader abstraction is key to enabling this ecosystem. We are incorporating the feedback we gathered from you directly and from previous forum threads (like this one), and we intend to share our design plans by the end of summer 2020. We are committed to solving this by the 2021 product cycle.

    In order to design a robust solution with longevity, we need to keep a number of important considerations in mind, such as ensuring that it is extensible to advanced shader types (such as ray tracing, compute, and mesh shaders) and new shader functionality (such as VRS), and well suited to future evolutions of shader and graphics APIs.

    Helping the Community to Be Successful with SRP

    Many of you have shared the frustration of not having good guidance on how to get great results with URP or HDRP, due to the lack of best practices guides or coherent tutorials. Without a doubt, this is needed to make you successful with SRP.

    We believe this is important, and we will be providing far more material to help guide you to success in the form of blogs, samples, and tutorials. There are resources available already - for example, the Achieving High Fidelity Graphics with HDRP tutorial, the Evolving Game Graphics with URP tutorial, and more in the Unite Now playlist. We are also working on new content to share with you in the near future. One of the things we’re going to address is providing more real-world example tutorials with complex content and best practices guides, rather than just basic functionality overviews. We are also committed to improving the quality, coverage, and usability of our documentation.

    In closing

    We hope this gives you a better understanding of how we are approaching developing SRPs and getting them production-ready. Our journey is very much still in progress, and we want you to know we truly value your feedback and are carefully listening. Our success is in your success - only when you are happy to adopt it will we know that we have achieved a great result. And we won’t rest until we do!

    Yours truly,
    Natasha Tatarchuk
    Tim Cooper
    Sebastien Lagarde
    Felipe Lira
    Ali Mohebali
     
  2. Voronoi


    Joined:
    Jul 2, 2012
    Posts:
    315
    The idea of a 'core' Unity that just works with the specific URP/HDRP seems like a basically good plan. However, thinking of how I tend to work in projects, there is a potential problem. Will it be possible to change a project from one SRP to another without breaking everything?

    For example, developing an AR project with AR Foundation, the latest versions of AR Foundation usually include desirable new features, like cross-platform image tracking or face tracking. Many times, when I upgrade to a newer version of AR Foundation, something breaks if I'm trying to use URP. I would imagine that happening with a 'core' Unity as well, and thus I'm likely to override the core version using the manifest.

    In that scenario, it's pretty much the same as it is now: trying to match various packages and versions to get the 'right' combination. Right now, once I've upgraded to a new version of Unity or a new URP, many things get broken, so it's a scary thing to make switches. If we could reliably switch between versions of render pipelines, that might make it a little less scary to try out a new feature.
     
    Tanner555 likes this.
  3. Elringus


    Joined:
    Oct 3, 2012
    Posts:
    557
    Thank you for sharing the plans, it cleared up a lot of confusion. But I still have one particular issue that bugged me the most and wasn't mentioned in the post at all: GrabPass.

    You've mentioned multiple times that you're aiming for functionality parity with built-in for URP, but will it have anything to replace GrabPass? I mean not the current solution, where we can inject a custom pass via a static config asset, but a dynamic way to store the frame content in a texture and use it in the next shader pass.
     
  4. francois85


    Joined:
    Aug 11, 2015
    Posts:
    1,289
    I feel like shipping SRP with core will lead to me switching Unity versions just to pick up bug fixes. I think shipping SRP as core is more frightening, for me at least. How will this work with everything else being a package? We have more than just SRP to balance when it comes to packages and Unity versions.
     
    phobos2077 and rz_0lento like this.
  5. Coroknight


    Joined:
    Jul 10, 2012
    Posts:
    21
    They said in the post that you'll still be able to override SRP with different versions.
     
  6. francois85


    Joined:
    Aug 11, 2015
    Posts:
    1,289
    I guess I assumed they will roll bug fixes into core and validate them. If I'm required to override SRP constantly, then I'm back to where I started, if not worse off, depending on the complexity of overriding the SRP.
     
    phobos2077 likes this.
  7. Coroknight


    Joined:
    Jul 10, 2012
    Posts:
    21
    Overall this is a great post. However, it remains to be seen how Unity will execute on these promises. I think a lot of people have lost faith in Unity delivering, because it always seems like things will be fixed next year.

    With that said, it seems like Unity is trying to deliver performance while trying to prevent users from shooting themselves in the foot. That sounds good on paper but it seems like their strategy to achieve this is to limit customizability (see the part about custom lighting in HDRP). I think this will just lead to more complaints and another post like this one next year with fixes promised in 2022.

    Just let us mess things up if we want. Sometimes people need to take shortcuts and pick the easier but less performant option in order to ship their games. Or they just aren't as experienced and just want to get something working. Or their use-case is unique and they need to make a tradeoff to achieve their vision.
     
  8. JoNax97

    JoNax97

    Joined:
    Feb 4, 2016
    Posts:
    323
    I have to side with Unity on this one. They're trying to provide a starting point for many different use cases. HDRP is the laser-focused approach for those looking for a very concrete set of features and aesthetic.

    URP is the one made of plasticine, meant to be tweaked and adapted to a variety of scenarios without breaking.

    And yet we constantly see people asking each one to be the other. We want HDRP to be as flexible as URP and URP to provide high end features out of the box.

    If HDRP is founded on the premise of physically correct lighting, I think it's fair not to ask for something that goes against this.
     
  9. jamespaterson

    jamespaterson

    Joined:
    Jun 19, 2018
    Posts:
    242
    Thanks for the long and detailed post and best wishes for the task ahead.
     
  10. Coroknight

    Coroknight

    Joined:
    Jul 10, 2012
    Posts:
    21
    In that case there seems to be a major gap. What if I want a system built on GPU compute, but I also want a custom stylized look?
     
  11. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,158
    For general use: if you update Unity you will get patches and fixes, with no need to update packages manually, just like any core Unity feature. But if you are an advanced user or have an advanced use case, there are pathways that allow that which are no worse than now.

    The flow here will be from the repository into core, so bug fixes will exist publicly before they are available in core. If you want to lock in to a specific SRP version you can force your manifest to use a specific SHA from git (same as now). And if you want to lock into a specific Unity version you can have your manifest track a specific branch from the repository but keep your Unity version the same (also same as now).
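    To make the two workflows concrete, here is a hedged sketch of what the manifest entry might look like (the SHA is a placeholder; the package name and `?path=` query match the public Graphics repository layout at the time of writing):

    ```json
    {
      "dependencies": {
        "com.unity.render-pipelines.universal": "https://github.com/Unity-Technologies/Graphics.git?path=/com.unity.render-pipelines.universal#0123456789abcdef0123456789abcdef01234567"
      }
    }
    ```

    Swapping the `#<sha>` fragment for a branch name gives the second workflow: the package tracks that branch while the editor version stays pinned.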


    For URP and HDRP we have ways to sample either a created color texture, or to inject an additional pass for capturing more textures at specific points. What specifically do you think is lacking with this approach, so I can better understand your worry?
     
    R0man, Tanner555 and phobos2077 like this.
  12. ali_mohebali

    ali_mohebali

    Unity Technologies

    Joined:
    Apr 8, 2020
    Posts:
    2
    Hey everyone, I just wanted to drop a line and introduce myself. As was mentioned earlier in the thread, I am the Technical Product Manager focusing on SRP. I collaborate with Arisa (@quixotic), the other Product Manager on the graphics team, and we are more than happy to collect your feedback here.

    Just wanted to reiterate that we, the Graphics team at Unity, really do appreciate your feedback. Your feedback has been and will be a great resource for us on this journey, and it helps us shape the SRP experience in a way that addresses your needs in the best way possible.

    Just as a side note, the best way to vote for your priorities and submit feature requests is through our public roadmap here: https://portal.productboard.com/unity/1-unity-graphics.
    We actively look at and organise the feedback coming through the board. We are working on adding more detail to our boards to provide more clarity and visibility into our roadmaps. We will also be adding a dedicated board for HDRP, so watch this space.
     
    Last edited: Jul 3, 2020
    Mauri, RomBinDaHouse, dzamani and 8 others like this.
  13. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    It's a huge post, which is great - it must have taken a lot of work to put together, and I'm hugely relieved to read most of it. It says many great things - and it was urgently, desperately needed.

    (my only complaint is: why wasn't this post written 2 or 3 years ago [rhetorical question :)]? Many of the items in there should have been requirements from the start, including the stable APIs, the inclusion in nightly testing, URP-at-parity-with-legacy, etc).

    As someone who cancelled a render-heavy project specifically because of the nightmare hellscape that is SRP today, this post answers almost all the issues I had (and certainly: answers enough of them that in future I'd consider it possible/feasible to start developing graphics-heavy content in Unity again. There are some things I wish were better, but I can suck them down - what's here is more than enough to make it viable).

    As feedback, highlighting the major items I'm personally affected by:

    • "SRP API often breaks between versions" this has been a deal-breaker for doing any serious work with SRPs.
    • "...VR..." -- I do a lot of VR work, and spend a lot of time on the XR forums here. I don't use SRP on any VR projects, it's too much extra pain to maintain, but almost everyone is desperate to do it, to squeeze out even small performance wins - most games don't care about 10% frame-rate, but on VR that can be life-changing.
    • "ship verified packages as part of Unity" -- it's unfortunate but an absolute necessity as far as I can see: the current situation has been killing Asset authors, and without an asset ecosystem then most of Unity dies as a pro platform for game development.
    • "URP needs functionality parity" -- if you had done this one thing (made it your v1.0 goal from the start) then I think 90%+ of the major problems with SRP would never have happened. Without functional parity the recurring negative experience of devs is "upgrade to URP ... now your entire project is broken and you have no idea why. Do you have the time and manpower to re-verify EVERY line of graphics code everywhere (in both shaders and C#)? ... is it even possible, or will you discover that you just wasted months, only to find that some of the features you needed no longer exist? ... what happens with all the Assets you bought - are you going to demand their authors do the impossible? Who's going to pay for that?"
    Something I think you've not achieved enough of in the post (which isn't a criticism, it's merely to say: I think this is proving very difficult):

    • "HDRP Architecture Design" -- I get it, and this confirms what I've always understood/believed. But most people do not understand what you've written, as you even point out. E.g. in the VR forums it's common for people to ask why HDRP doesn't scale from nothing up to full graphics. Most developers just don't understand the point of HDRP and its (deliberate) limitations. I get it, but I think you've got some tough challenges ahead, even here in this thread, to find ways of better explaining it to those who don't - and making sure that Unity staff stop writing things on the blog, saying things in public, etc. that undermine that understanding.

    And the one strange bit that seems to have been glossed over (going to re-read the post now just to check), which leaves me worried we'll stop using Unity beyond 2020:

    • Shaders. What about shaders? The most important thing to happen to the games industry in the last 30 years: hardware-independent programming languages for programmable pipelines. And they seem to have been glossed over. It is insane to imagine a future where studios aren't writing shader code. It's the opposite of what we need. Graph programming languages have never, ever worked at scale (I've been following it since 1998, when we were promised that graph programming was the future, and we had very expensive IDEs that did it). ShaderGraph is an awesome toy, a beautiful thing for prototyping ideas, and fantastic for reducing manpower costs at studios by a percentage. It's in no way a replacement for being able to write shader code as code.
     
  14. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    1,564
    The main aspect that still concerns me is the push to having two separate pipelines. I can kind of understand where it came from back when SRP were conceived years ago, but looking towards the future I'm just not sure how well it is going to pan out.

    Mobile devices are constantly improving and to me it looks like Apple are leading the way towards moving away from traditional pc/laptops and embracing mobile tech for everything, pushing the performance they can achieve. What couldn't be achieved in the mobile space last year, will become a reality a year or two down the line.

    With this in mind, having two pipelines instead of an extendable single pipeline seems like a folly. I can't say for certain, as I've not dived into the internals of URP/HDRP to see just how big a difference the architecture needs to be to make use of cutting-edge technology. I just find it strange to throw away what was one of the greatest strengths of the legacy renderer, where you could so easily mold it (even at runtime) to fit the performance of the target platform.

    One might argue that the legacy renderer's ability to do this was hindered by being non-optimal, but when I look at my client projects and the majority of products using Unity, I rarely find cutting-edge performance to outweigh ease and scalability of development.

    Yet it is exactly this ability that has been lost with the two pipeline approach. Being able to switch pipelines does not replace it as it would mean a developer having to likely maintain two different versions of a single project, each using a different RP, unless switching is so automatic and efficient that it can be done at build time between different platforms?

    Perhaps if URP and HDRP had feature parity I wouldn't be so bothered as then the cause of the different architectures is tied with the main difference, that of performance. If that were the case then I'd be happy to start all my projects in URP if that was extensible like the old legacy one was and as feature rich as HDRP, whilst knowing that I could switch to HDRP for maximum performance and power, but less extendable.
     
  15. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    Ah, yes. Shaders are glossed over. What you've offered is:

    "intend to share our design plans by the end of summer 2020"

    Based on the lack of positive confirmation, a cynical/fearful reader might assume that shaders (not pretty graphs, I mean real shaders) are being killed. I can't believe that's the intent, I can't imagine how anything would continue to exist without real shaders, so I'd love something more concrete signalling what the range of your expected changes is here?
     
    Ruslank100, R0man, phobos2077 and 3 others like this.
  16. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    1,564
    I've had a tab with this page open in my browser ever since I first came across a link to it. It's really useful for tracking what progress Unity is making and getting an understanding of the state of URP. Looking at it now I thought maybe I should vote for some features, only to remember that I'm pretty sure I already had.

    So I tried to find a way to look at the votes and comments (e.g. on Blob Shadows), as I was sure I had written up quite a bit on why that (or, more precisely, projectors) was important to me. Alas, there appears to be no way to view your own or anyone else's comments on a feature. So now I have no idea if I voted for a feature or commented on it.

    Is this something that can be addressed?
     
    R0man likes this.
  17. Elringus

    Elringus

    Joined:
    Oct 3, 2012
    Posts:
    557
    GrabPass allowed sampling not only at a specific point, but at any time via a shader pass.

    Imagine a case where we need to implement something like a blend mode effect for a layer in Photoshop. The layer in our case is a game object in Unity. If it's only one such object, it's fine: we can store screen content at the after-transparent point with URP/HDRP in a texture and draw our object using that texture to blend the colors. Now imagine we have multiple such objects stacked on each other, which should blend not only with the after-transparent grab texture, but with each other as well. We need to grab screen content after each such object is rendered. That was very simple with GrabPass, but it is impossible with URP or HDRP, as we can't dynamically insert custom passes at runtime.
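    For readers who haven't used the legacy feature under discussion: an unnamed GrabPass in the built-in pipeline re-captures the screen for every object that uses it, which is exactly what stacked blending layers rely on. A minimal sketch (the shader name is made up, and the "multiply" blend stands in for whatever Photoshop-style blend mode is needed):

    ```shaderlab
    Shader "Example/LayerBlend"
    {
        Properties
        {
            _MainTex ("Layer Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" }

            // An unnamed GrabPass captures the screen right before this
            // object draws, so each stacked object sees the previously
            // blended result of everything beneath it.
            GrabPass { }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _GrabTexture; // filled by the GrabPass above
                sampler2D _MainTex;

                struct v2f
                {
                    float4 pos  : SV_POSITION;
                    float4 grab : TEXCOORD0;
                    float2 uv   : TEXCOORD1;
                };

                v2f vert(appdata_full v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.grab = ComputeGrabScreenPos(o.pos);
                    o.uv = v.texcoord.xy;
                    return o;
                }

                fixed4 frag(v2f i) : SV_Target
                {
                    fixed4 dst = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grab));
                    fixed4 src = tex2D(_MainTex, i.uv);
                    return dst * src; // "multiply" blend as a placeholder
                }
                ENDCG
            }
        }
    }
    ```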

    There is a plugin I'm distributing on the Asset Store which allows that kind of layer blending effect to be used in Unity, and it can't be implemented with URP or HDRP at the moment.
     
  18. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    1,564
    Concerning Shader Graph and upgrades

    I've been immensely disappointed and puzzled that Shader Graph cannot be used with the legacy renderer. While I can understand to a degree that it might be complex to automatically upgrade an existing legacy or standard shader to an SRP, I do not understand the difficulty of, or resistance to, having Shader Graph produce legacy-version shaders.

    From my point of view, if Shader Graph could output shaders for the legacy renderer then I'd be using, experimenting with and learning Shader Graph now, today. The obvious benefit of this is that, come the time when I need to switch to URP or HDRP, I'll already have built a collection of useful graphs that can immediately be used. This would greatly ease the transition to the new SRPs and I'd be up and running much quicker.

    As it stands, every time I think of switching I'm faced with the fact that I've got a library of several dozen tweaked legacy shaders for projects that I'd have to port first before doing anything else. This increases my resistance to moving over to the new pipelines.
     
  19. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    +100 this. It would be a very good way to increase adoption of shadergraph - and to reveal the problems with it.
     
  20. Onigiri

    Onigiri

    Joined:
    Aug 10, 2014
    Posts:
    124
    R0man likes this.
  21. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    Yeah, statements like:

    "What has gotten harder is writing portable/upgradable shaders. The only option for that is Shader Graph"

    ...are why I even felt the need to ask for a clarification that you're not removing shaders. Because your own staff don't seem to get it. (that quote is from Unity "Tech and Rendering Lead", who doesn't seem to realise that saying the "only option" is ShaderGraph, for a problem where that obviously is not true, has a huge negative impact and leads people to - quite logically - interpret: "this person is actively trying to get rid of shaders", even though he said in the same tweet that he intends to actively prevent shaders being got rid of. If you say "I don't want to change things. But the only way forwards is to remove things", then people will assume you intend to remove them, even though you said you don't want to).
     
  22. De-Panther

    De-Panther

    Joined:
    Dec 27, 2009
    Posts:
    361
    Thanks for all the info.
    Getting URP to full parity with the Legacy RP (or built-in render pipeline) is important.

    Please give some love to WebGL. Using URP in WebGL projects is a mystery. You can't be sure what will work and what won't. And in some cases the performance is 30% worse in URP than in the legacy pipeline.
     
    Ruslank100 and GuitarBro like this.
  23. Rich_A

    Rich_A

    Joined:
    Nov 22, 2016
    Posts:
    281
    You need to restore the lighting falloff in URP to the default legacy values, for realtime and baked lighting. It doesn't make sense to have the same lighting values for HDRP and URP, or to break so abruptly with built-in. At the very least, expose the functionality in the GUI instead of forcing us to hack the shader and place override gameobject scripts in every scene.

    I've summarised the post and posted it to Reddit: https://www.reddit.com/r/Unity3D/comments/hkdygb/unity_formally_responds_to_srplife_movement/

    I'm guessing that some features like Camera Stacking and Point Light Shadows were removed from URP because they are fundamentally non-performant. If that's the case, it might make sense to require them to be manually enabled somewhere in the project GUI, or at least explain the fact that they are non-performant. It's not too difficult, for example, to use spotlights instead if you integrate them into your design from the beginning.
     
    Last edited: Jul 3, 2020
    Ruslank100 likes this.
  24. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    799
    I can't overstate how happy the mention of surface shader equivalent programmer workflow makes me. Lack of this has been our #1 gripe with URP/HDRP and knowing this gap might be filled in 2021 makes us seriously consider these pipelines for our upcoming project.
     
  25. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    215
    What about the SRP API? Or is this thread only for URP/HDRP? Please stop mixing up the SRP API and URP/HDRP.
     
    R0man, CoastKid and liam_unity628 like this.
  26. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    215
    @Elringus
    100% no grab pass in any Unity in-house or third-party render pipeline. It's just impossible with a modern graphics engine architecture - only with an ancient OpenGL 2-era architecture, where you draw objects one by one.
     
  27. CoastKid

    CoastKid

    Joined:
    Jan 8, 2013
    Posts:
    35
    Cynical/fearful reader here. You are right, I assume exactly that!
    I rely entirely on the HLSL and ShaderLab standard in my development. I do not want to see it sacrificed to please artists' needs. It already happened to UE4: setting up "real shaders" there is a nightmare, and that is the main reason I keep using Unity.
    I would like to keep full CODE control over all the shader stages (vertex, hull, domain, geometry, pixel) along with custom structs, buffers, functions and so on...

    So I agree, we need more concrete signalling here.
     
    Last edited: Jul 3, 2020
  28. Tanner555

    Tanner555

    Joined:
    May 2, 2018
    Posts:
    60
    This is a good step in the right direction. All essential graphics features that aren't in preview should be integrated into Unity Core. I'm glad these packages are still open source and developers can use their own custom forks instead of the default one. I also think adding cross compatibility is a really good start for integrating HDRP and URP into one project. I do understand why two render pipelines are great for communicating compatibility and performance expectations, although I think it would be best if URP and HDRP were integrated into one package. Maybe label the demanding features of URP as HDRP Graphics, like in shader graph, vfx graph, or in the post processing settings. These HDRP features could be color coded, and developers could themselves check the platforms they want to use HDRP for (like HDRP for PS5 and URP for PS4 and Mobile). There should be an alternative URP feature for each HDRP feature used in all assets, and a warning should be thrown if the developer doesn't implement an alternative.

    Also, I believe there are way too many HLSL shaders in the Unity ecosystem to ignore. Ambitious legacy projects can't just be upgraded to URP in a timely manner because most projects use a lot of HLSL shaders. There are thousands of assets, many created by Unity themselves, that depend on legacy shaders to work. I propose that Unity work on an official HLSL-to-URP converter. This would help bring compatibility to older projects and tech demos that were released only a few years ago. Or maybe HLSL shaders could work naturally with the new URP.

    When all these goals have been met, I'd like to see Unity upgrade many of their own tech demos. The Blacksmith, the Adam tech demos, the 2018 Book of the Dead demo, and the recent Heretic demo should all be updated to work seamlessly with Unity 2021 and beyond.
     
  29. GoGoGadget

    GoGoGadget

    Joined:
    Sep 23, 2013
    Posts:
    716
    First off: thank you for this post. This is the first step I have seen Unity take to acknowledge what has been a critical misdirection of development within the engine over the last few years.

    As an asset store author, this is absolutely the best decision you have made in a long time regarding the SRPs, and this alone will make authoring and supporting users much easier. Thank you.

    Just jumping in to say that this should include basic Post-Processing. A blit is a blit, a rendertexture is a rendertexture and that's all that base post-processing should be. When you broke Post-Pro between pipelines, that was the hardest hit for many asset store Post-Pro authors. When I discovered this, I almost gave up on my asset, which has helped hundreds of Unity users push the boundaries of the engine in terms of post-processing performance and visuals.

    I look forward to it!
     
  30. Devil_Inside

    Devil_Inside

    Joined:
    Nov 19, 2012
    Posts:
    1,038
    Last edited: Jul 3, 2020
    R0man, Ruslank100 and Peter77 like this.
  31. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,158
    There are a lot of comments here about the lack of clarity in our planning around shaders. I just want to call out this one point from the original post:

    We need a programmatic way of expressing shaders (in text format) that can work across pipelines, have include files, and use existing libraries. Surface shaders are a good abstraction; maybe there are better ones, and we want to be sure we take the correct approach. When we have specifics we'll share them so that we can get early feedback from asset store publishers and other users.
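    For readers who haven't written one, the abstraction being referenced looks like this in the built-in pipeline: the author fills in a single surface function and Unity generates the per-path lighting passes around it. A minimal legacy example (the shader name is illustrative):

    ```shaderlab
    Shader "Example/SimpleSurface"
    {
        Properties
        {
            _MainTex ("Albedo", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType" = "Opaque" }

            CGPROGRAM
            // Unity expands this into full vertex/fragment passes for
            // forward, deferred, shadow casting, and so on.
            #pragma surface surf Standard fullforwardshadows

            sampler2D _MainTex;

            struct Input
            {
                float2 uv_MainTex;
            };

            void surf(Input IN, inout SurfaceOutputStandard o)
            {
                fixed4 c = tex2D(_MainTex, IN.uv_MainTex);
                o.Albedo = c.rgb;
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }
    ```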
     
  32. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,158
    Would a player setting here work for you? If you upgrade a project you get the old behaviour - if you create a new project you get the new behaviour. You can opt in to the new behaviour at any stage, but it might change how your content looks?

    These features are coming to URP (camera stacking already exists in 20.1 :) ). It is possible to build them to be performant and robust - it just takes a lot more effort than copying the built-in implementation.

    For example, in the new camera stacking we did extensive design around user workflows to make sure it was clear what is and isn't supported. In built-in there are things that just don't work (try stacking a deferred camera after a forward camera), but Unity lets you do this and there are no warnings or similar. In URP we are really trying to make sure that it's not possible to configure things in a bad way, so there are fewer surprises when you are developing content.
     
    GuitarBro, FROS7, Rich_A and 3 others like this.
  33. Acegikmo

    Acegikmo

    Joined:
    Jun 23, 2011
    Posts:
    1,235
    My main issue has been writing shaders for the new render pipelines
    1. Writing shaders by hand for URP/HDRP is completely undocumented
    2. Shader Graph is far from having feature parity with writing shaders by hand
    3. The code generated by Shader Graph is really difficult to read/use/parse as a reference for writing your own shaders
    These three things combined make for a massive barrier to entry for writing more technical shaders - and that's coming from me, who's pretty hecking used to writing shaders in Unity at this point. For someone less used to writing shaders, whatever shader graph doesn't natively support is effectively not a feature that exists in Unity.

    The way everything worked in the built-in pipeline was of course not perfect, but it was at least always something you could find documentation on, or use others' shaders, or Unity's built-in shaders, as a reference. Even though some solutions were a bit wonky, there was a solution to find and a way to do the thing. (Of course, some of this is a factor of time and it having existed for years.)

    Anyway, thanks a ton for the writeup! Looks like things are moving in the right direction <3
     
    R0man, NateReese77, m3rt32 and 22 others like this.
  34. StaggartCreations

    StaggartCreations

    Joined:
    Feb 18, 2015
    Posts:
    1,091
    First of all, the incentive for open communication is much appreciated! It's a lot to take in!

    I feel that for the past ~2 years users have been on the receiving end of a graphics team's passion project: ever evolving, ever refactored, and largely undocumented. This in itself is not an issue, since I understand there is a need to battle-test software in the hands of users. But the "production ready" label communicated something different entirely, though I see this as more of a structural flaw between engineering and marketing. Granted, some mistakes were made, but we're all understandably human. In the end, issues wouldn't have come to light if Unity had sat on the SRP for a few years before releasing it to the public.

    The rise of the SRPs did cause quite a bit of friction for me, since all my assets up to that point (largely graphical) would be rendered obsolete as URP became the new standard. Seeing how many implementations differ between the built-in RP and URP, this meant reworking them from the ground up, which proved impossible without breaking compatibility with the built-in RP. This meant having to build a separate version of an asset, or build an abstraction layer for C# and shader rendering. I eventually ended up doing both, but it took up the majority of my time, yet resulted in nearly identical assets.

    I've grown more hesitant to release new assets, due to all the additional work involved.

    It's worth noting that the asset store backend poorly supports version-specific assets, and having to backport new features or fixes to X number of projects, test them, and upload them erodes the motivation to work on something. Generally, publishers prefer to stick to one package for the minimum supported version and implement version-specific code through define symbols. Fortunately, since 2019.1 it's possible to declare define symbols per package version - whoever thought of this is a hero!
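    The feature being praised here is the `versionDefines` section of an assembly definition file. As a hedged sketch (the asmdef name and define symbol are made up; check Unity's Assembly Definition documentation for the exact version-expression semantics), an asmdef can emit a symbol only when a given package version is present:

    ```json
    {
        "name": "MyAsset.Runtime",
        "versionDefines": [
            {
                "name": "com.unity.render-pipelines.universal",
                "expression": "7.2.0",
                "define": "MYASSET_URP_7_2_OR_NEWER"
            }
        ]
    }
    ```

    C# code can then branch on `#if MYASSET_URP_7_2_OR_NEWER` without the publisher shipping a separate package per Unity version.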

    Also, for this, the Version.hlsl is a godsend, as it allows implementing code for a specific SRP version. The need for this can largely be mitigated by shader code abstraction, which becomes more important with this kind of setup. Fortunately, I see that has not gone unnoticed!

    This is still a pain point, since any asset that does anything remotely graphical (e.g. rendering a height map from the scene) requires a wildly different approach for things that amount to the same result. A more concrete example would be Camera.SetReplacementShader. In the built-in RP, this works as you would expect. In URP, however, this specifically requires a ScriptableRendererFeature, which can only be set up through a UI, and thus requires C# reflection to set it up automatically without exposing the end user to a set of instructions they first have to look for. My point is that (temporary) rendering in the editor has been inadvertently convoluted, on top of automatic setup (UX <3) requiring hacks.

    So any effort towards harmonizing this will go a long way!

    Shaders
    The differences between UnityCG and the URP shader libraries are understandably large. I've seen many people complain about the lack of documentation regarding this. Personally, I didn't mind: wading through all the shader code and figuring out what's what did take time, but I also found it a great way to learn how everything was put together. I found the URP library to be a great improvement, as implementing lighting was more straightforward. I don't, however, dismiss the need for documentation; I still often use the Unity manual for this.

    There's a definite push towards Shader Graph, but I feel this falls on deaf ears for shader programmers that prefer to write by hand. This probably affects assets store publishers more than anyone, since in some cases a shader needs:
    • Unity version specific code (not so much nowadays)
    • Platform specific code
    • Third party integrations (includes/pragma's)
    • Complete control over what's done on a per-vertex base
    • Tight control over keywords (using keywords in Amplify Shader Editor for example still adds redundant variable declarations or calculations)
    This is all in line with the principles of wanting/needing to support the widest range of use cases.

    Shader abstraction
    The obvious issue is that there is none. Again, this largely applies to asset store creators, looking to support multiple render pipelines with minimal maintenance overhead/file separation.

    I'm looking forward to seeing how Surface Shaders 2.0 shape up! Though it's a hard pill to swallow knowing that this will exclude the built-in RP and will require the minimum supported version for assets to be raised to 2020.x. In the interest of moving forward, I suppose there is no other way around it than a different approach.

    Right now, there is a definite lack of macros being used, leaving functionality like space transformations explicit. For example, the built-in RP's UnityObjectToClipPos amounts to the same thing as SRP's TransformObjectToHClip. Yet, because the names and HLSL source files are different, these require two separate shader files.

    Abstraction through macros would, I think, also take some pressure off breaking changes, since the "front end" would remain unchanged.
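    A sketch of the kind of author-side macro layer being suggested (everything here is hypothetical: `MY_ASSET_SRP` is a keyword the asset author would define themselves in their SRP shader variants, and `MY_TRANSFORM_TO_HCLIP` is a made-up name; only the two transform functions being wrapped are real):

    ```hlsl
    // my_asset_transforms.hlsl - hypothetical cross-pipeline shim.
    // MY_ASSET_SRP is defined by the author's own SRP shader variants.
    #if defined(MY_ASSET_SRP)
        // SRP shader library name (Core RP's SpaceTransforms.hlsl)
        #define MY_TRANSFORM_TO_HCLIP(positionOS) TransformObjectToHClip(positionOS)
    #else
        // Built-in RP name (UnityCG.cginc)
        #define MY_TRANSFORM_TO_HCLIP(positionOS) UnityObjectToClipPos(positionOS)
    #endif
    ```

    Shader bodies written against `MY_TRANSFORM_TO_HCLIP` then compile unchanged under either pipeline, and a rename upstream only touches the shim.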

    Update guides
    For URP 7.2.0, an update guide was posted outlining the changes made. This was a great initiative! As someone who has a hand-written shader on the store, I'm otherwise required to diff-check all changes to figure out why and where my shader broke.

    I want to encourage this more! From what I gathered, the upcoming SSAO implementation brings several changes to how shadows are sampled, which should not go unmentioned. It took well over a month for many shaders to be updated to take the shadow changes from 7.2.0 into account (including Unity shaders such as terrain). Those changes were not included in the update guide, so I wanted to bring this, and its consequences, to attention.
     
  35. Elringus

    Elringus

    Joined:
    Oct 3, 2012
    Posts:
    557
    I'm not asking for grab pass to return. What I care about is something that will allow re-implementing what was possible with built-in. It's all about functionality parity, nothing more.
     
    Ruslank100 likes this.
  36. ali_mohebali

    ali_mohebali

    Unity Technologies

    Joined:
    Apr 8, 2020
    Posts:
    2
    Thanks for being active and providing feedback there. I can definitely see your comments on Blob Shadows, so we do have them. We are using an external tool for the public board, and unfortunately it doesn't track returning users on the page. But you have a very valid point, and I agree that users should at least be able to see their votes and comments on a card. I'll forward your feedback to the ProductBoard team.
     
  37. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    1,564
    Cool, thanks for checking. As a stop-gap solution, perhaps emailing users their comments would be a good option. At least that way I'd have a local record that I'd commented.
     
    ali_mohebali likes this.
  38. Goatogrammetry

    Goatogrammetry

    Joined:
    Apr 27, 2017
    Posts:
    171
    An artist's experience with HDRP:

    As an artist and a creator for the asset store, I had to release my massive photogrammetry pack in 2 versions, Standard and HDRP. It was a pain.

    Why? HDRP stores its textures in different ways. The auto-upgrade wouldn't work because the detail texture is completely different in HDRP, and it's broken too (roughness overwrites instead of multiplies). Then I realized, after I had already published two versions, that I could have made some Amplify shaders that grabbed the bitmaps in the old renderer's format, used them in the HDRP pipeline, and grabbed the correct detail map. DOH!

    So why not, instead of making an auto-upgrader alone, provide a "replace this with that" list the upgrader can use to follow the asset creator's instructions? Or, for simple HDRP shaders, provide a method in the material menu that lets the creator specify which channels the shader reads which data from.

    It was a nightmare for me to get people to 'register' my package's translucency index with the HDRP package. Why is that my business? You should do it. You have access to the meta files, so just do it yourselves! Computers are supposed to do things like that. Half the screenshots I see of my assets have neon-green plants because people didn't follow the instructions I provided and probably just thought my art sucked. And speaking of registering stuff, I never did figure out how to get terrain to use a shader with depth testing turned on. I'd drag it in there and it would be rejected, so I rage quit. (The default shader is outside the asset directory and can't be changed.)

    Having 'default HDRP settings' in the project settings is confusing. People won't figure that out without an angry day of wondering why effects they never put in their sky-and-fog volume are happening.

    I never have any luck with your 100,000-lux lights. I try, it looks bad, I give up. I try again another time, it sucks, I give up. I'm not the only one who has said this. In fact, nobody I know has gone with the massive-intensity-plus-exposure-crank-down method Unity seems to be pushing, and I don't even have a clue what the benefit is.

    How about providing 10 foliage and grass shaders, 10 water shaders, 10 hair/fur shaders, 10 pre-adjusted glass shaders, and so on, so creators don't have to learn Shader Graph or Amplify just to release some assets? I released a couple of bushes and one tree and lost two weeks trying to get shaders that would wiggle the leaves, and I never did figure out how to make it work with a wind zone. You won't have good assets on the store if I have to reinvent the wheel for the simple stuff. Artists already carry the burden of learning the insanely complex details of modern game art, and, damn it, I just don't have the time to learn to be a good coder along with that. Not for the money I'm seeing.

    Oh, and your anisotropic shader is broken: if you adjust the (undocumented) tangent map, it changes the normals the shadows use, which is like rotating the normal map in UV space. It's only supposed to affect the gloss!

    And would someone please write a detail-mesh shader at long last? I see the Heretic team write insane shaders, yet I can't place little rocks in my map. It took me four hours to figure out why they were white: no shader!

    I guess in the end my view of HDRP is that even after a year of developing for it, I have no idea how to use it and no idea why you arrange things the way you do. It's a spiderweb of things pointing to things that will scare off newbies. I have no idea what best practices are. You need to watch a decent artist try to get an HDRP scene working and create some content for it, without you interfering, to see what I'm talking about. Nine out of ten will rage quit and knock over your donut table on the way out. HDRP Unity is no longer the 'easy to get started' engine.
     
    SMHall, R0man, chingwa and 11 others like this.
  39. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,342
    First I'd like to thank you all for writing this. I feel like I've been screaming into the grand canyon for two years with the graphics team on the other side going "What? Did you say everything is great? Yeah, we think it's great too!". I've been maintaining compatibility with some version of HDRP and LWRP/URP for almost two years now, and it has basically prevented me from developing new features, and stopped me from making any new products for the UAS because of the support requirements. I've also removed a product because you closed off the API to the shader graph after I released a node for it, and essentially stopped working on two assets because maintaining compatibility with more than one SRP asset is impossible.

    This doesn't personally affect me much, but I'm curious why this is the case. As long as the update loop gets called and the shader is HDRP-compatible, what's the issue? Conceptual purity? Being able to have particles that just work across all three render pipelines seems important for the next few years. So either have the old system work in the new pipelines, or backport the VFX Graph to the current one. Doing neither means every particle system breaks when you upgrade to HDRP, which is exactly what you're trying to get away from.

    So while I totally understand this, there's a difference between conceptual purity and practical usefulness. As an example, my users often fight with Unity's lighting model on terrain, particularly a sheen that appears when viewing terrain at a glancing angle. Two things contribute to this: one is the minimum specular value (0.04 or something) in the metallic workflow, and the other is fresnel. To solve the first issue, I have an option to run the shader in specular-workflow mode, where internally everything is still computed in the metallic workflow, then I do my own metallic-to-specular conversion and remove the 0.04 minimum value. However, this still does not solve the fresnel issue.
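    The 0.04 floor being described can be sketched as plain math. This is a minimal illustration (in Python rather than shader code), and the function names and the exact `dielectric_f0` constant are my own assumptions, not Unity's implementation:

```python
def metallic_to_specular(albedo, metallic, dielectric_f0=0.04):
    """Standard metallic-workflow split: even at metallic = 0 the
    specular color is floored at dielectric_f0 (~0.04), which is the
    source of the glancing-angle sheen described above."""
    specular = [dielectric_f0 * (1.0 - metallic) + a * metallic for a in albedo]
    diffuse = [a * (1.0 - metallic) for a in albedo]
    return diffuse, specular

def metallic_to_specular_no_floor(albedo, metallic):
    """A specular-workflow override can simply drop the floor."""
    return metallic_to_specular(albedo, metallic, dielectric_f0=0.0)
```

    With `metallic = 0`, the first version still yields a specular color of 0.04 per channel; the second yields zero, removing the specular-floor contribution to the sheen (the fresnel term is a separate issue, as noted).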

    In my conceptually pure world, my terrain shader would not support UV scale (texel consistency), normal strength (blown out normals make the above worse), etc - but in practice artists want these controls. So anyway, I'm not saying exposing full blown per-material controls for the light loop make sense, but if there are details which allow for practical control of things like the above to be better controlled that would be useful.

    My first asset store product was done as a vertex/fragment shader generation system, and the experience of upgrading that through the 5.x cycle was so painful that I stopped developing that product, made MicroSplat, and swore to only use surface shaders from then on. So SRPs were a massive slap in the face, returning me to vertex/fragment upgrade hell, but on two pipelines instead of just one. For me, this is by far my largest issue.

    I think most stuff can be achieved using a ScriptableAssetImporter for any SRP, and a common library of functions to help parse files and stuff the resulting code/properties/cbuffer data into the correct places. Basically wrap blocks of stuff in BEGIN_STUFF END_STUFF blocks, and use some kind of parser.GetBlock("STUFF") to grab it and put it into the resulting shader file. (Ideally this allows multiple stuff blocks, so you can grab CBuffer properties from multiple files, etc).
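    The BEGIN/END block idea above can be sketched in a few lines. This is a hypothetical illustration (names like `get_block` are mine, not any actual Unity API), assuming blocks are delimited by matching BEGIN_NAME/END_NAME markers:

```python
import re

def get_block(source, name):
    """Extract text between BEGIN_<name> and END_<name> markers.
    Returns None if the block is absent, so the caller can fall back
    to a default implementation. Multiple blocks with the same name
    are concatenated, e.g. cbuffer properties spread across files."""
    pattern = re.compile(
        r"BEGIN_" + re.escape(name) + r"\b(.*?)\bEND_" + re.escape(name),
        re.DOTALL,
    )
    blocks = pattern.findall(source)
    return "\n".join(b.strip() for b in blocks) if blocks else None

shader_text = """
BEGIN_PROPERTIES
_Tint ("Tint", Color) = (1,1,1,1)
END_PROPERTIES
"""
```

    Concatenating all matching blocks covers the "grab CBuffer properties from multiple files" case mentioned above; a missing block returns `None`, signalling the generator to emit its default.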

    I do believe that the shader graph and the lit shader should all go through this same abstraction layer. This is the only way to ensure consistency while enforcing maintenance and feature parity. Surface Shaders and the Standard shader do not do this, and because of that there are lighting inconsistencies between them. I know this would be a massive refactor, but it would pay huge dividends in the end. I cannot stress this enough: if these operate as independent systems, there will be divergence and regressions, and none of the systems will reach their full potential.

    Surface Shaders had a lot of funky stuff and strange bugs, but over time I found ways to implement almost everything in them, and the feature set of MicroSplat is far greater than my previous product at this point. Here are some things to consider:

    Terrain/Object blending
    To blend objects with the terrain, I need to adjust the lighting such that we interpolate from the terrain normal to the object normal over some blend area. This effectively means modifying the resulting world space normal from the shader. The way I do this is certainly funky- but it works. Basically, I run the shader in the custom lighting overrides and blend the TBN matrix there and convert back to a final tangent space normal which the lighting system takes as input.

    This is obviously not ideal, but it leads me to believe that all parameters should be available as inout, so that users who want to modify them can declare their parameter as inout and do so. You wouldn't think the user would want to change the TBN matrix, for instance, and would likely pass it as read-only by default. The current structure of the shader code makes this very hard in SRPs, since much of the raw data is computed via include files and passed read-only through many different functions, or copied into structs and passed along. As such, I currently don't blend the lighting in SRPs, because unrolling all of that code to add the modifiers would make updating to new versions harder.
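    The interpolate-and-renormalize step at the heart of that terrain/object blend is just vector math; here is a minimal sketch (plain Python rather than HLSL, and the function name is mine):

```python
import math

def blend_normals(terrain_n, object_n, t):
    """Lerp from the terrain normal (t = 0) to the object normal
    (t = 1), then renormalize so the result is a unit vector."""
    v = [a * (1.0 - t) + b * t for a, b in zip(terrain_n, object_n)]
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]
```

    In a real shader the same idea is applied to the TBN basis over the blend area before converting back to a tangent-space normal, as described above.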

    Tessellation
    In practice, tessellation causes a ton of issues, but people love it. Tessellation has been a problem with surface shaders in that it forces you to give up access to your fragment Input struct in the vertex stages, so any system that needs to compute data in the vertex stage and pass it to the fragment stage won't work with tessellation. (I think the reasons are obvious: not having to generate code for that data to pass between the domain/hull/tessellation stages, etc.) Also, when Draw Instancing was written, no one updated the tessellation stages to pass the instance ID, so now I have to disable tessellation if Draw Instanced is enabled. So when considering things like how users will access custom vertex data, or compute data to pass to the fragment stage, please make sure this remains viable with tessellation on as well.

    AppData, FragInputs, ViewDir, etc
    Surface shaders allow you to customize these structures in a lot of ways, and I'm betting that makes the parser a lot harder. They also have magic keywords in these structures, like viewDir, which changes what space it's in depending on whether you write to o.Normal or not. That's all very funky and a cause of a lot of confusion, and this is a place where I feel the code output from the shader graph does better.

    For instance, the shader graph fills out a struct with all kinds of common things, like TangentSpaceViewDir, WorldSpaceViewDir, etc. I suggest that the ScriptableAssetImporter that loads a .surfaceShader file and converts it into the actual shader scan for these names in the user's code and include them if they are referenced. If users happen to use those names for variables in their own structures, the code would get stripped by the compiler if unused anyway. This means a user can just write i.WorldSpaceViewDir when they need it, and it's already computed for them when used, and omitted when not.

    The same system can be used for AppData and other structures. I do not think users need to explicitly name these variables; a fixed set of names will be fine. For arbitrary data passed between the stages, some kind of Custom0, Custom1 convention seems fine to me. This should keep the parsing code simple while making shader code more consistent across everyone's shaders. Something like this also makes it easier to support tessellation, since all of those structures have fixed naming conventions, etc.
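    The scan-and-include idea could work roughly like this; a hypothetical sketch in Python (the field catalog and function name are invented for illustration):

```python
import re

# Hypothetical catalog of precomputed values a shader generator could offer.
AVAILABLE_FIELDS = {
    "WorldSpaceViewDir": "float3 WorldSpaceViewDir;",
    "TangentSpaceViewDir": "float3 TangentSpaceViewDir;",
    "Custom0": "float4 Custom0;",
}

def build_input_struct(user_code):
    """Emit an input struct containing only the fields the user's
    shader code actually references by name."""
    used = [n for n in AVAILABLE_FIELDS
            if re.search(r"\b" + n + r"\b", user_code)]
    body = "\n    ".join(AVAILABLE_FIELDS[n] for n in used)
    return "struct Input {\n    " + body + "\n};"
```

    Because the names are fixed, the generator never needs the user to declare the struct; it just emits the fields the code touches.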

    Modern HLSL

    I'm a bit old school in my HLSL usage, but I believe it now supports interfaces and other modern constructs. These could be wonderful for enforcing specific contracts between SRPs and in custom shaders. For instance, a common interface shared between SRPs would ensure that both pipelines have a common set of functions for things like space conversion, getting the camera position, etc. Then if users want to write their own SRP, they know exactly which functions they need to provide to make shaders compatible.
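    The kind of shared contract described could be expressed with an interface. Here is the shape of the idea sketched in Python (an abstract base class standing in for an HLSL interface; all names and the trivial space conversion are illustrative only):

```python
from abc import ABC, abstractmethod

class PipelineShaderAPI(ABC):
    """A common contract every SRP would implement, so shader code can
    rely on the same set of helpers regardless of pipeline."""

    @abstractmethod
    def world_to_camera_relative(self, position):
        ...

    @abstractmethod
    def camera_position(self):
        ...

class ExamplePipelineAPI(PipelineShaderAPI):
    def __init__(self, cam_pos):
        self._cam = cam_pos

    def world_to_camera_relative(self, position):
        # Illustrative conversion: translate into camera-relative space.
        return [p - c for p, c in zip(position, self._cam)]

    def camera_position(self):
        return self._cam
```

    A custom pipeline that fails to implement one of the required functions fails immediately at instantiation, which is exactly the "know what you need to account for" property the post asks for.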

    Customization

    Surface Shaders contained a lot of funky ways to override things, via pragmas and magic functions. A more formal contract here would be nice. Virtual/override would be amazing, but I don't think that's available in HLSL. Instead, something like:

    Code (CSharp):
    BEGIN_URP_LIGHTING

    half4 LightPBR(inout SurfaceData s, inout LightData light)
    {
    }

    END_URP_LIGHTING
    The parser can do a GetBlock("URP_LIGHTING"), and if it exists, we use our custom lighting. If not, we include the default model.

    Shader Graph vs. Text Shaders

    You will never reach parity with text shaders in the shader graph. For instance, if the user wants to get data from a compute buffer, you could add a bunch of nodes to do that. Or a new master node for terrain shaders. But with each of these, you add a mountain of new nodes and code to support the feature, while adding any of them to a text-based system requires no new code on your end. Shader Graph should not try to reach parity with what text-based shaders can do; it's an abstraction to make writing shaders easier for a specific domain and audience. Rather than spending thousands of man-hours on parity, focus on opening the API for users to expand, and on making shader graphs work easily with text-based code chunks.

    A large part of Unity's value is in your ability to extend it, and closing off the node API puts all the reliance on Unity to provide every feature a person could want, while stifling innovation. A large part of graph workflows is having coders and artists work together on custom nodes, turning complex shader code into simple nodes. Neither of these is possible right now. (And don't say you can select a bunch of nodes and collapse them into one; that doesn't provide options, a clean UI, or access to things like dynamic branching, compute buffers, etc.)

    Grab Pass, RenderWithShader, Custom Passes

    My Trax asset (for MicroSplat) relies on RenderWithShader. I have been unable to get this working in URP/HDRP using CustomPasses and the other potential replacements. Conceptually it makes sense (insert some custom code into the pipeline at a given moment), but in practice I cannot get it to work correctly. Unifying this kind of thing, and making sure you don't have to use reflection to add data to the user's configuration, will go a long way. But ideally, this gets dogfooded with practical examples a lot more. CustomPass and its URP equivalent are not really designed to render from different cameras; you can hack the camera matrix to do it, but that's not very friendly for the average user. And I can't even get the right results out of it, and I'm at least decent at this kind of stuff.

    Grab Pass is an interesting issue. Having a user arbitrarily stall the pipeline when some object gets drawn is something I can totally see you wanting to avoid. But the basic tenet of "Hey, I want to put a sync point here where we read this stuff back" is a valid use case, and whatever hooks you add, such as "after opaque objects", are not going to cover every case. So my one thought here would be to have some mechanism to insert a custom sync point into the rendering in a way that is predictable, perhaps by allowing users to bucket objects into before/after stages. On the other hand, as ugly and slow as GrabPass is, it hasn't stopped people from shipping products with it in all the time it's existed. Not every product needs to run as fast as it possibly can.

    Misc

    As an example, the camera has a background color property. You edit it, it works. But in HDRP, if you set it via script, nothing happens. This is because the editor script makes it seem like that field is still being used, but when you edit it in the editor, it actually edits a color field on the HDRPCamera component, which shows no fields in its own editor. This is extremely confusing. If the HDRPCamera component is going to own this data, then it should be the one showing it in the editor. Stuff like this makes it extremely confusing to work in the SRPs, and when each SRP does different things like this, it's maddening trying to work across them.

    Honestly, I think having your teams split between Shader Graph, SRP, URP, and HDRP has done you massive harm. These teams not only need to work together, they need to be using each other's stuff and agreeing on how these things are implemented, as one team. How many people at Unity have implemented something that has to work across all three current pipelines? And kept it working through changes? Your own assets on the store have 2-star ratings because you can't keep up with the changes we are expected to keep up with, and we don't even have documentation. Your teams should be taking demos like the URP boat demo, porting them to both SRPs, and keeping them as live working examples through changes. Your team needs to feel the same pain we do.
     
    Last edited: Jul 3, 2020
    R0man, Rowlan, Emiles and 40 others like this.
  40. Flow-Fire-Games

    Flow-Fire-Games

    Joined:
    Jun 11, 2015
    Posts:
    232
    This is great news, but please adopt Shader Graph as the main official way of using and creating shaders, just as Unreal has done since Unreal 2.

    This should especially include one for the Terrain system. Just give beginners default shaders made with the graph that they can learn from and change; the custom Unity shader approach is just one more layer of complexity, confusion, and incompatibility that needs to go. The time spent making a custom shader nobody can inspect or later use, because it fills no specific requirements, could instead be spent on making a new Shader Graph node, and then you don't have people asking "when is this coming to the graph as well?". We have been looking for a tech artist for almost a year and are willing to pay good money, but it's just not realistic to find one. Rendering engineers can still write their own shaders, but they're unicorns too. I'm not saying kill writing custom shaders; no, improve the foundation. I'm saying kill Unity-made custom shaders entirely for everything but very specific cases like text rendering.

    Let's be real: nobody is using the standard/terrain shaders in a professional capacity aside from background decoration pieces, because the simplest custom requirement already makes them non-viable, and beginners definitely don't want to deal with channel packing just to test something. Stay on one shader pipeline, do everything with the graph, and cut the extra work and the absolutely unnecessary complexity. I could make terrain materials 15 years ago in Unreal, and it's still not viable in 2020 in Unity. I know your team is hard at work on a new, great terrain shader, which again will not be compatible or extendable, won't fill any specific requirements, and will be outdated the day it releases, no matter how great it is; that's just reality. You can't make a shader for us any more than you can make a game for us. You can, however, make great, helpful examples in the graph that double as standard shaders, and great feature compilations in nodes.

    HDRP lighting is also an absolute pain and an insane mess of confusion. Exposure works inversely. Default values differ from the sample values. There is no link between physical camera exposure/light and the "other" values. Auto exposure is on by default. It took our environment artists two weeks, and we are still in utter confusion about whether our setup is acceptable or correct, even after looking at the example scenes and the new tutorial multiple times. Something needs to happen with that exposure UX.
     
    Last edited: Jul 3, 2020
  41. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,342
    So by your argument, should they ditch C# and go Visual Scripting and C++ Plugins as well?

    Unreal is heavily hampered by its reliance on shader graphs. The shaders over there are often much slower than they could be if they were written in code. As such, their terrain system does a ton more work, generating hundreds of variants of every terrain shader so it can be performant, and artists have to be careful with how many textures they paint in a given area.

    The correct answer is to open up the Shader Graph API so that programmers can write their own nodes, making whatever task you need easier, instead of relying on Unity to provide all of that functionality, and to create an ecosystem where code and graphs work well together and either is viable. You will never write a terrain shader in current shader graphs (UE4's, ASE, etc.) that is even a tenth the speed of MicroSplat or has its complexity of features, and you will always be hampered by whatever feature Unity hasn't added yet while the API stays closed.
     
  42. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    Great description here:

    I've been using Unity for almost 10 years. In that time, the recurring reason for studios I've worked in (and former colleagues in other studios) to use it has been the massive easy + cheap extensibility. Whenever something in Unity sucked, or was missing, or broken at a fundamental level ... if you're a studio (i.e. have skilled programmers + people + money to spare): no problem, you write a replacement system/sub-system, and all is great. I think a lot of people don't realise quite how much this keeps commercial teams using Unity. It's no use for Indie teams and freebie developers, but they usually benefit from it anyway via trickle-down from the AssetStore.

    It's a surprisingly good feature that was a bit insane as an idea originally, but history (your users!) has shown that it's actually highly valuable. At first I feared it, then I got used to it, now I love it. But it has long felt like the SRP teams removed GrabPass for nothing more than ideological reasons: "what is this ugly thing doing in our codebase? Ugh! Kill it with fire!".

    ...and then all we got was back-splanations for why all the people (legitimately) using it were "wrong", and didn't understand (their own games/code/etc, apparently. Hmm. Dubious).
     
    Cynicat, R0man, forestrf and 9 others like this.
  43. a436t4ataf

    a436t4ataf

    Joined:
    May 19, 2013
    Posts:
    989
    At times like this the forums beg for an "I don't like" button.

    (I'm sorry to hear you're struggling with shaders, and hiring, but that puts you in a tiny minority in the professional games industry. Even 1-person indie developers have been fine with shader writing for more than 10 years now. It's really not hard to learn, and modern DX11 ShaderModel semantics make it pretty easy to write maintainable, clear, code that's also performant - it's got easier and easier over time)

    PS: yes, there are some very badly written shaders out there. Even some of them from Unity teams. But - for at least the last 5 years - that's a problem with the authors, not the language.
     
    R0man, Ruslank100, JBR-games and 11 others like this.
  44. Crayz

    Crayz

    Joined:
    Mar 17, 2014
    Posts:
    145
    This is much needed. Work on integrating all core features into the core editor: consider TextMeshPro replacing the default Text, and resolve other similar fragmentation.

    Complaint: the only reason I have TMPro in my library is that one asset calls for it. To change an element's text I call either GetComponent<Text> or GetComponent<TMP_Text> to achieve the exact same result. It's not a fluent or predictable workflow, and project bloat is becoming a problem.
     
    R0man likes this.
  45. Flow-Fire-Games

    Flow-Fire-Games

    Joined:
    Jun 11, 2015
    Posts:
    232
    I'm not saying kill off custom shaders; the foundation surely needs to be improved, and this is a key feature of Unity.
    I'm saying kill off Unity-made custom shaders almost entirely: no time should be spent on making the next huge, instantly outdated Unity terrain shader, or the next Unity standard shader that nobody will be able to use properly because it fills zero custom requirements and isn't user-friendly either. That time should be spent upgrading the foundation, adding new graph nodes, and providing example Shader Graph shaders that double as standard shaders. Shader Graph should be extended, and yes, opened up for writing custom nodes.

    Unity has been extremely crippled and got a bad reputation as a bad-looking engine simply because of the old Standard shader and the lack of a graph until Shader Forge appeared; please learn from that huge mistake. No more time and focus wasted on standard shaders nobody actually uses.
     
    R0man, MothDoctor, IrrSoft and 3 others like this.
  46. Sam-Schmidt

    Sam-Schmidt

    Joined:
    Jul 1, 2015
    Posts:
    6
    Yeah, this is pretty delusional.
    I'm as close to a one-man dev team as you can get. I can make AAA baked assets, I can build optimized workflows for the team, I can program, I can edit sounds, everything, and I launched to critical success.

    There's just no reason to write shaders yourself for 95% of all things.
    Seeing results instantly gives you far better results and incomparable iteration times.
    Having a programmer and an artist collaborate on a custom-coded shader is many times slower than the artist doing it himself. The difference is monumental. Not to forget the modular approach, the compatibility, the portability, and so on.
    For most cases performance is not an issue either way, and you can still go through the compiled code.
    We have 4 programmers now, and why should they ever be bothered with shaders?
    They have no time for that; you just end up cutting the feature.

    Yes, the 20 rendering engineers in this echo chamber will surely tell you otherwise, but the other 99% will and do want to rely on Shader Graph. Artists should make shaders, programmers should write nodes. (Tech artists can do both, plus custom shaders, sure.) In Unreal I can look up Unreal 2 documentation and get a working effect. It works and is consistent. They have one focus: just add more nodes and polish. No distraction, no diversion. Sure, the foundation is probably flawed, but that's not the point. You just need a good foundation, then a good high-level editor on top, and then just polish and expand both.

    And no, people have absolutely not been fine writing custom shaders for 10 years; this is terribly wrong.
    Unity has the stigma of a bad-looking engine precisely because this has not been the case, and the Standard shader was what most people used. A few companies probably hoard 95% of all rendering engineers, and tech artists are very rare, so let's not be disillusioned here. Virtually every normal team of fewer than 15 people will have a regular 3D artist set up the shaders. Shader Graph needs to be the core. Upgrade the foundation so it's better and easier to make extensions for Shader Graph (and custom shaders), but don't ever try to make a new standard shader; this approach is terrible, and nobody should know that better than Unity, given the stigma they received.
     
    Last edited: Jul 3, 2020
    kkrg001 and MothDoctor like this.
  47. gecko

    gecko

    Joined:
    Aug 10, 2006
    Posts:
    2,093
    Well, plenty of us rely on people like Jason Booth to make amazing shaders, so I absolutely support whatever Jason says he needs in order to continue making them. And why the heck wouldn't Unity feel the same way? (e.g. Ballmer: "Developers! Developers! Developers! Developers!"...)
     
  48. BattleAngelAlita

    BattleAngelAlita

    Joined:
    Nov 20, 2016
    Posts:
    215
    lol
     
    CoastKid, phobos2077 and IrrSoft like this.
  49. jbooth

    jbooth

    Joined:
    Jan 6, 2014
    Posts:
    4,342
    Which gives me a lot of job security when your game runs like poop and you hire me or someone else like me to fix it.

    Shader graphs are cool, but they only cover some use cases well. I'm all for Unity having a great one, but it won't replace hand-written shaders any time soon. The same goes for visual coding, AI graphs, etc.
     
    R0man, Dawie3565, Ruslank100 and 16 others like this.
  50. Sam-Schmidt

    Sam-Schmidt

    Joined:
    Jul 1, 2015
    Posts:
    6
    Yes, nobody is arguing to remove hand-written shaders in any way; update the core so this is even easier. Being able to write C# and custom shaders is key to Unity's appeal for many. They just shouldn't in any way or form be the face of Unity's own shaders; standard surface/lit, particle, and terrain shaders need to die and never be resurrected outside of the graph.

    I bet $100 that there is a new big hard-coded terrain shader in the works, which, at the current rate, I will be complaining in 2030 about not being editable or viable. Even Amplify Shader supports terrain.


    Stop wasting resources on custom stuff that is incompatible and uncustomizable.
    Why is a new hard-coded terrain shader even being considered? Why is the demo team making entirely custom shaders that are then thrown away, instead of adding nodes or graphs to learn from? Why are the sample teams making projects so fundamentally incompatible and custom that everything is thrown away in the end and virtually impossible to open only months later? Stop hacking things together and invest those resources in a common structure. Yes, maybe the demo looks a bit worse, but then all that effort isn't thrown away.
     
    Last edited: Jul 3, 2020
    R0man, landonth, Tanner555 and 3 others like this.