
Feedback Wanted: Scriptable Render Pipelines

Discussion in 'Graphics Experimental Previews' started by Tim-C, May 9, 2017.

  1. fortgreeneVR

    Joined:
    Sep 5, 2018
    Posts:
    50
    Hi Tim,
    Will this work with -force-d3d12, or with Project Settings -> Player -> Windows Graphics APIs -> D3D12 (Experimental), on Unity 2018.3.0b7?
    Thanks.
     
  2. andybak

    Joined:
    Jan 14, 2017
    Posts:
    569
    Did the lack of syntax checking in your IDE not prove problematic? Or did you configure it to somehow syntax check embedded code?
     
  3. YJack

    Joined:
    Apr 7, 2010
    Posts:
    44
    Hello guys,
    I'm facing some difficulties with HDRP.
    1 - I just can't figure out how to properly set up Exposure and Multiplier on the Procedural Sky or HDRI Sky. It's not clear what each parameter does or how to keep it physically accurate. The examples in the docs usually just say exposure = 0 and a multiplier at some wild number, from 0.02 (night) to 20000 (day).

    2 - Without camera aperture, the only tools to compensate light intensity are in post-processing. But how do I calculate the physical value of this compensation?

    3 - If you calibrate directional light intensity for something like the sun (about 100000), you lose most debug tools for lighting since everything goes all white.

    4 - I can't figure out a way to hide the Colliders used by volumes without disabling them (and losing the volume effects I want to observe).

    5 - The Post Processing V2 debug (a Post Processing issue, not HDRP) is messing up the UI layer and getting positioned at an arbitrary position and size. I want to use most debug tools (such as histograms) alongside a UI image as reference, and that issue prevents it.

    6 - The AO debug in HDRP looks like it has several bugs related to updating the result.
     
  4. laurent-h

    Joined:
    Sep 29, 2016
    Posts:
    78
    Hi, I personally tend to use a multiplier of 1 and just tweak the exposure. In the end it's just a matter of finding the right value.
    If you use the Render Pipeline Debug window (Window / Analysis / Render Pipeline Debug) you can go to the Lighting tab and set the "Lighting Debug Mode" to "Lux Meter". The lux meter turns the image into the lux values received on each pixel, which means it's white if you use realistic light intensities. To read the values you have two options.
    Then go to the Rendering tab and either:
    - enable the false color mode; the color thresholds are tweakable and let you read the approximate lux values for each pixel,
    - or, under Color Picker, set the debug mode to Float4; hovering in your Scene view will then give you the received lux values for the pixel at the tip of your mouse cursor.

    You can then monitor the amount of light received by a surface and adjust the sky exposure (or multiplier) until you get the amount you want.

    The camera and exposure settings will change in the future so that they can be correlated with real-life camera settings. In the meantime, I recommend using the lighting range you want and then tweaking the camera exposure to get a correct-looking image. Don't be bothered for now by the fact that the exposure values won't look like anything you would see on a real camera.

    We're working on fixing this one place at a time, but it's taking time as there are many places to change. Right now the reflection probe preview has an exposure slider, and you can use the Render Pipeline Debug window's lighting overrides to get a "lighting only" view with post-processing enabled (in the Lighting tab, check "Override Albedo").

    Above the Scene view there is a "Gizmos" button (next to a search field); clicking it expands a list where you can disable gizmos per entity type. You can go there and disable post-process volumes and volumes.

    For the post-processing debug: since the code is open source, I imagine you could customize it to render the histogram where you want it. If you don't want to change the code, I'd recommend arranging your UI around it; that's probably the fastest thing you can do.

    For the AO debug, I am not sure it's working well; using the lighting overrides in the Render Pipeline Debug window to get a "lighting only" view, as explained above, might be the most efficient way to judge what the AO is doing to your image.
     
    YJack likes this.
  5. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    I am not a programmer or a dev, more the artist. I am using HDRP and recently switched because of the shaders. I have a question you all may be able to help with, actually two.

    1. I downloaded and am trying to get FXAA Post Effects Base to work and run. I add the component to the camera, but there are no options to enable or edit settings. I know HDRP has no AA in it currently, correct? I am rendering out camera paths as an initial scene passthrough and the aliasing is bad. If there is AA in HDRP, how do I enable it, or where can I get something to do it? I am not a scripter, so writing my own is out of the question; copy/paste I may be able to do.

    2. Emissive unlit materials do not create GI. WHY? I made it pink and got it to work ONCE, and only ONCE. How can I get emissives to cast and create GI, realtime or baked?
     
  6. Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    789
    HDRP has always had AA:
    FXAA
    SMAA
    TXAA
    MSAA - Now in the latest version

    Post-processing and anti-aliasing are enabled the same way they've always been with Post Processing V2 (which works with HDRP as well as LWRP), assuming you've used it before.

    This is the best video I can find going over the setup and options for it:
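
    For reference, a minimal script version of the same setup. This is only a sketch and assumes the Post Processing V2 package is installed with a PostProcessLayer component on the camera; the names below are PPv2's, so double-check them against your package version.

    Code (CSharp):
    // Sketch: switch a camera's anti-aliasing mode from script.
    // Assumes the Post Processing V2 package (com.unity.postprocessing).
    using UnityEngine;
    using UnityEngine.Rendering.PostProcessing;

    public class EnableFxaa : MonoBehaviour
    {
        void Start()
        {
            // The PostProcessLayer component on the camera owns the AA mode.
            var layer = GetComponent<PostProcessLayer>();
            if (layer != null)
                layer.antialiasingMode = PostProcessLayer.Antialiasing.FastApproximateAntialiasing;
        }
    }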

     
    Brikeck likes this.
  7. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    Totally missed this; I thought it would be a component. Thank you so very much.
     
  8. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    You said HDRP has MSAA in the new version. When I search for HDRP in the Asset Store to update to the new version with MSAA, it is not showing. How do I update the version of HDRP?
     
  9. Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    789
    Guessing you're new to using any version of Unity 2018. From Unity 2018 onward these features are updated through the Package Manager.

    Window -> Package Manager, find HD Render Pipeline, then update to the newest version available (this differs for each Unity version: 2018.1, 2018.2, 2018.3, etc.). Currently, with the 2018.3 beta, the latest is 4.1.0, and MSAA is only supported in forward rendering for opaque objects.
     
  10. hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    2019.1 alpha + HDRP: the SRP Batcher option causes a crash in play mode.
     
    MadeFromPolygons likes this.
  11. bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    I asked this in the LWRP thread and I want to ask it here too:

    - Given that Core SRP / Shader Graph will leave preview only in 2019.1, where does that leave those of us who decide to stick with 2018 LTS?

    - We have been using LWRP and SG since 2018.1, and are committed to using them given the performance boost and the decoupled pipeline architecture.

    - Watching the Unite LA talks, we are not sure how Unity will roll out bugfixes and whether there can be two supported releases (one for LTS and one for the latest stable).

    Felipe Lira mentioned he would bring it up in future discussions; I hope the Unity team can establish a roadmap on this :)

    Once again, thx for your good work on SRP!
     
    Shorely likes this.
  12. arnaud-carre

    Unity Technologies

    Joined:
    Jun 23, 2016
    Posts:
    97
    Could you report the bug using the Unity Bug Reporter (attaching your sample project)?
    Thanks
     
  13. Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,225

    As it stands this leaves you in not the best position. To bring SRP out of experimental we made a (large) number of core API changes (converting things to NativeArray, making things alloc-free, requiring the new Mono). What this means is that in a lot of situations it's very difficult for us to backport things, as it means redoing large parts of the code in our 18.3 branch to match the old API. This is one of the specific reasons that we are not leaving preview in 18.3: we want to maintain velocity and not have to fully support two release streams. When 19.4 rolls around we will be in a much better position for this. We are putting a stake in the ground and saying that SRP is ready for prime time in the 19.x release stream.

    This isn't to say we won't backport _some_ things (probably for the next two-ish months, generally critical bug fixes), but preview features are not fully supported in LTS releases. One other thing to note: porting an SRP project from 18.3 -> 19.4 LTS will likely involve a big delta in code changes, as we will continue to develop during 19.x.
     
    hippocoder likes this.
  14. bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    Thx for your reply. I've always known there are risks in relying on the SRP preview, particularly where core engine changes inevitably make backporting difficult. But allow me to put my concerns and suggestions here:

    - Unity 2019 has an ambitious roadmap, and it's very exciting for sure (ECS, SRP, etc.), but we can't commit to using it for production: many features simply take much longer than expected to mature, and unlike with an in-house engine, we don't get to say "well, let's ship this for now".

    - Our decision is based on working with Unity 2018 during this TECH release cycle, where we went through many 2018.x beta releases, and SRP was never quite there.

    - Perhaps 2019 will be different, but given the roadmap, I am skeptical.

    - This is not to say Unity should slow things down, not at all.

    - I am saying, perhaps you can survey your users on this: do they want to keep using Unity 2018 LTS, or are they all happy to jump onto 2019?

    - My belief is you will find many that want to use SRP in 2018: we certainly do.

    - While we are happy to have adopted LWRP early, given feedback, and submitted a handful of bug reports, we simply can't go through this again in 2019.

    - It sucks that SRP will never be "supported" for us, but I hope some compromise can be reached:

    - For example, could Unity offer a "lite LWRP" that is feature-limited compared to 2019.x LWRP, but for which you can at least commit to fixing some reported bugs?

    - Otherwise we will end up with an all-or-nothing LWRP, where 2018 LTS + LWRP preview isn't supported at all.

    - And SRP was the key feature of the 2018.x TECH releases (see the 2018.1 and 2018.2 release blog posts).

    - If we end 2018 LTS without at least offering some SRP / LWRP option, how are people going to feel about 2019?

    I really think this decision is important, not just to us, but to Unity too.

    Please consider a survey or more internal discussion.

    Many thx!

    David
     
    Shorely, Astarorr and sand_lantern like this.
  15. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    Thank you all for your replies. I am not sure where I should be posting my questions pertaining to the use of HDRP, so if this is the incorrect place please let me know.

    We are a small team and none of us are programmers, just artists, and we are moving to Unity as a render agent to achieve the highest quality renders with the least amount of time spent rendering. My question is: in HDRP, the rectangle and line lights give very strange results, e.g. rectangle lights shine through static opaque objects. In mirror materials, it is as if there are no objects in front of said rectangle lights. With line lights, the results are the same.

    Is there a script that I can add that will forcibly add shadows to rectangle/line lights?

    Emissive materials in HDRP do not light the scene or affect GI. I got it to work once, and have never been able to get it to work again. How do I get emissive materials to generate and contribute to GI?

    AA: MSAA has no settings and thus is not reducing the aliasing to an acceptable state. Is there an AA script I can add to my camera with refining options? Can I use the Unity Post Processing Stack along with HDRP to get more control over my AA issues?
     
    Shorely likes this.
  16. MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,982
    Based on that, would you recommend moving to 2019.1 once it's stable, or staying on the LTS release, for commercial projects that are performance hungry?

    It seems like no matter which way we go, there is great risk :/
     
    Shorely and sand_lantern like this.
  17. allenwp

    Joined:
    Sep 4, 2013
    Posts:
    46
    (EDIT: I've reposted this on the Template Feedback thread, since this post is all about my confusion on templates, which I previously had not separated in my mind from the new render pipelines.)

    Hey there, I just have some general accessibility feedback surrounding the new render pipelines and project templates:

    Background:

    I've been working in Unity for over 5 years. I've shipped and maintained a game on 8 different platforms, including all the big VR platforms. I've written code that saves out multiple lightmaps to a single prefab to allow switching between them at runtime, at a performance level that runs well on a Note 4 in Gear VR mode.

    With this background, I don't consider myself an expert -- but I also have a bit of an understanding of how things work in Unity.

    Feedback:

    I find the new project templates difficult to approach and learn. My biggest problem is that I don't deeply understand how the resulting project will differ if I choose to start with one template over another. I think the documentation for this is actually fine and as clear as it could be. But documentation isn't good enough on its own. As a developer, I need to be able to try things and see what the difference is, at a very deep and technical level. Diff tools are always helpful for this. And being able to switch a project back and forth between configurations is also imperative to having confidence in my understanding.

    I'm not sure what the best way to approach this accessibility would be, but I think the end-goal for these new features should be the following:

    It should be intuitive to a developer to configure an existing empty (basic 3D) project to be equivalent to a project template without needing to start the project with that project template.

    If I had a way to get to this point, I feel I would be able to deeply understand these new templates. But without any direction on how I could do this, I feel lost when trying to learn the impact of these new templates on a project. No documentation describing the different templates will solve this problem -- I need to be able to go through the steps of configuring the project myself and see how things change (using diff tools, etc.). Right now, it is definitely not clear how I could do this.

    Thanks for your work,
    Allen

    Update:

    I figure it could be helpful to Unity developers to share my experience as I try and figure this out myself to highlight where the pain points are:

    Recreating the "3D with Extras" template:
    • Pain point 1: After reading the documentation on Project Templates, I have no idea how easy it is to change a "3D" project into one that uses a template. I also have no idea how easy it might be to change a template project back into a bare-bones "3D" project if I choose I want to strip out the extra stuff later in a project.
    • Started with two projects, one using "3D with Extras" template, one with no template ("3D")
    • I noticed there was a Post Process Volume script, attached to the object in the Extras scene, so I thought I could start with trying to replicate this.
    • Editing the post-processing script revealed that it existed outside the Assets folder in a package cache. This led me to believe this post-processing script was likely in a package I could install.
    • Pain point 2: I brought up the Package Manager and couldn't find the Post Process package. This was because, in the Unity 2019 alpha with the "3D" template, "Show preview packages" was turned off in the Package Manager. This made it appear as if I would need to go somewhere else to get this magical package that appeared only when I used the "3D with Extras" template.
    • Now that I had found the preview packages and matched up my project to have the same packages as the template project, I copied over all the assets from one project into the other.
    • At this point, I could tell I was still missing something, though no errors were given and all the project assets and packages matched:
    upload_2018-11-15_13-13-16.png
    • It seemed my post process effect wasn't working correctly, so I started to look at that, and found this error immediately:
    upload_2018-11-15_13-15-39.png
    • This led me to review all my project settings. In doing so, I think I found the differences between starting with or without the template.
    • After changing the project settings, I think I was able to get my 3D project to be equivalent to the "3D with Extras" template:
    upload_2018-11-15_13-28-42.png

    Recreating the HD RP template:
    • After my experience with recreating the "3D with Extras" pipeline, I started with the packages. This time I knew about the "Show preview packages" option.
    • Pain point 3: For some reason, the HD Render Pipeline package is named differently when you use the template vs. when you use the Package Manager on a blank 3D project. Edit: turns out this is just before you import it.
    upload_2018-11-15_15-25-2.png
    • Next I copied all the assets over and replicated the project settings.
    • I think that seemed to be it:
    upload_2018-11-15_15-40-3.png

    Summary:

    To be honest, it wasn't as bad as I thought to recreate one of the templates from the basic "3D" template. There were a few pain points, but the only real thing that stuck out was my own lack of understanding on what a template really was. Somehow it wasn't entirely clear to me that a template was nothing more than:
    • An additional one or two packages that could easily be added or removed through the package manager
    • A set of starter assets, some of which all developers will immediately delete
    • A few small tweaks to a project's settings
    Now that I've understood this, it has become obvious to me that which template I start with really doesn't matter and I don't need to predict the future before starting work on a prototype.

    The one big issue:

    The major issue I want to end on was a fuzziness in my understanding of the relationship between the render pipeline packages, the post-processing package, and the project templates. Could you use a render pipeline without starting from the template for that render pipeline? Are a render pipeline and a template the same thing? If it's a project template, then why is there one for a specific package? If a render pipeline is a package, then why is there a whole project template for it? These questions made this all very confusing and unapproachable to me.
     
    Last edited: Nov 15, 2018
    Shorely and sand_lantern like this.
  18. sand_lantern

    Joined:
    Sep 15, 2017
    Posts:
    210
    I agree with your experience. When I first started trying to migrate my project over, it took a lot of time to figure out what I actually needed to change. I ultimately had two projects open and had to compare them object by object to figure out what was needed. It would be nice if there were a better migration guide or some tools to help people make the switch.
     
  19. allenwp

    Joined:
    Sep 4, 2013
    Posts:
    46
    Hey again. I've edited my original post with a collection of all my thoughts. I think I've addressed where I was confused and why. I hope it's helpful in improving the accessibility and usability of Unity. Recently I've been very unhappy with the "jump in and prototype something" experience because I get hung up on not knowing which template to start with. I hope this can be improved to keep Unity a simple, easy-to-use prototyping tool.

    Cheers,
    Allen
     
  20. hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Great feedback; it would be awesome if more people did the same, especially before HDRP is released.
     
    Elecman likes this.
  21. Grimreaper358

    Joined:
    Apr 8, 2013
    Posts:
    789
    Yes, just create a regular Unity project with the 3D template or 3D With Extras, then download your desired render pipeline, set up the pipeline asset, assign the pipeline asset (a script version of that step is sketched below), change your project settings (linear color space) and you're good to go.

    It's specific because you have to have a renderer for your project, and SRP will offer three, two of which are available now,
    each with their differences, so you choose the one that fits your project depending on its needs and platform.
    HDRP - High Definition Render Pipeline
    LWRP - Lightweight Render Pipeline
    2DRP? - (2D Render Pipeline, no actual name yet.) There will be a 2D render pipeline at some point.

    So Project Templates are just there to get a project set up with your desired render pipeline quickly. (The only problem I have with this is the example assets, and going in and deleting them afterwards.) If you set up a render pipeline from scratch in an empty project you have to do a few steps. After a while, it's not something you want to be doing every time you create a new project, so the templates help with this. In the future, we will be able to make custom templates. This is more on the template side of things than HDRP.
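
    For the assignment step above, a minimal sketch (in 2018.x, RenderPipelineAsset lives under UnityEngine.Experimental.Rendering; adjust the namespaces for your version):

    Code (CSharp):
    // Sketch: assign a pipeline asset from script. This is equivalent to
    // dropping the asset into Edit -> Project Settings -> Graphics ->
    // Scriptable Render Pipeline Settings.
    using UnityEngine.Rendering;              // GraphicsSettings
    using UnityEngine.Experimental.Rendering; // RenderPipelineAsset (2018.x)

    public static class PipelineSwitcher
    {
        public static void Apply(RenderPipelineAsset asset)
        {
            GraphicsSettings.renderPipelineAsset = asset;
        }
    }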
     
  22. SebLagarde

    Unity Technologies

    Joined:
    Dec 30, 2015
    Posts:
    934
    Hi, we have tested emissive materials on 4.2.0 and they work: enable Emissive on the material and provide a non-black emissive input.
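
    If you want to drive this from script, a rough sketch (the "_EmissiveColor" property name is an assumption that varies by HDRP version; check the exact name in the shader inspector):

    Code (CSharp):
    // Sketch: set up an emissive material so it contributes to GI.
    using UnityEngine;

    public class EmissiveSetup : MonoBehaviour
    {
        public Color emission = Color.white;
        public float intensity = 10f;

        void Start()
        {
            var r = GetComponent<Renderer>();
            // "_EmissiveColor" is the HDRP Lit convention; treat it as an assumption.
            r.material.SetColor("_EmissiveColor", emission * intensity);
            // Flag the material as emissive for the GI systems:
            r.material.globalIlluminationFlags = MaterialGlobalIlluminationFlags.RealtimeEmissive;
            // For realtime GI, the value can also be pushed directly:
            DynamicGI.SetEmissive(r, emission * intensity);
        }
    }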


    With MSAA you can only set up the multiplier (2x, 4x, 8x); that's all, and it is exposed on the HDRP asset.

    MSAA only addresses geometric aliasing, not shading aliasing.

    You can enable FXAA on top of that with post-processing.
     


  23. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    4.2.0?
    I am using HDRP 3.0.0-preview in Unity 2018.2.1.... and it is not working, baked or realtime...
     
  24. SebLagarde

    Unity Technologies

    Joined:
    Dec 30, 2015
    Posts:
    934
    4.2.0 is the next version to come, but I would guess it has worked since 3.3.0 and 2018.3b1.
    The package is still in preview, so we don't provide updates for older versions like 2018.2. It is possible that at the time it was not working.
     
  25. Brikeck

    Joined:
    Oct 15, 2018
    Posts:
    8
    Ok, I wasn't aware of the 2018.3 beta program and am installing it now. How do I get HDRP 4.2.0?
     
  26. hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
  27. ChannexDK

    Joined:
    Apr 14, 2015
    Posts:
    15
    Some feedback on the current SRP design: it would be awesome to have a bit more rendering and culling control. We have a scene with quite a bit of foliage, which is sort of streamed into the scene. Naturally we have opted for a solution using Graphics.DrawMeshInstanced (the core pattern is sketched below), since using GameObjects would kill performance (quickly). To support culling and LOD, I feel like I have basically re-written a lot of functionality that already exists behind the scenes in Unity, as well as creating my own rendering thread.
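
    For context, the core of that pattern looks roughly like this (a sketch only; matrix generation, streaming, culling and LOD selection are omitted):

    Code (CSharp):
    // Sketch: GameObject-free foliage drawing via GPU instancing.
    // The material must have "Enable GPU Instancing" ticked.
    using UnityEngine;

    public class FoliageBatch : MonoBehaviour
    {
        public Mesh mesh;
        public Material material;
        // 1023 is the per-call instance cap for DrawMeshInstanced.
        readonly Matrix4x4[] _transforms = new Matrix4x4[1023];
        int _count;

        void Update()
        {
            // Resubmitted every frame; no renderer components involved,
            // which also means no engine-side culling or LOD for these instances.
            Graphics.DrawMeshInstanced(mesh, 0, material, _transforms, _count);
        }
    }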

    It seems like it would be fairly straightforward to add an API of sorts that allows injecting custom rendering into a scriptable render pipeline, where we could leverage the existing culling/LOD infrastructure that must clearly already exist in the native side of Unity. I remember seeing a thread about persistent draw calls before SRP was born; that could be one way of going about it: submitting a list of renderables (meshes/materials) that then get culled/drawn?

    It would also be nice to have some more control over UI rendering (like I have mentioned here: https://forum.unity.com/threads/lwr...5-1-0-19-1-are-out.562291/page-4#post-3924151).

    (Also BatchRenderer.Flush seems a bit on the slow side, right now?)

    Apart from that, our studio is looking forward to releasing our VR game that uses the new SRP system in early 2019 (we're keeping our fingers crossed for a non-preview version before launch). It has really enabled us to push performance and pull off some neat rendering tricks compared to our earlier releases.
     
  28. tibi_fake

    Joined:
    Jul 31, 2018
    Posts:
    7
    2018.3.0b12, with PerObjectMotionVectors enabled:
    unity_MatrixPreviousM is updated only once, when entering or exiting play mode.
     
  29. tibi_fake

    Joined:
    Jul 31, 2018
    Posts:
    7
    I solved it using MaterialPropertyBlocks, but it would be nice to use the engine-provided matrices if they are calculated already. Plus, I now have to set that for all my renderers, as FilterResults doesn't actually expose which ones are visible. Any ideas @Tim-C ?
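
    For anyone hitting the same thing, the workaround looks roughly like this ("_PreviousM" is a made-up property name; the shader has to read it explicitly instead of unity_MatrixPreviousM):

    Code (CSharp):
    // Sketch: feed last frame's localToWorld matrix to the shader
    // through a MaterialPropertyBlock.
    using UnityEngine;

    public class PreviousMatrixFeeder : MonoBehaviour
    {
        Renderer _renderer;
        MaterialPropertyBlock _mpb;
        Matrix4x4 _previous;

        void Start()
        {
            _renderer = GetComponent<Renderer>();
            _mpb = new MaterialPropertyBlock();
            _previous = transform.localToWorldMatrix;
        }

        void LateUpdate()
        {
            _mpb.SetMatrix("_PreviousM", _previous); // hypothetical property name
            _renderer.SetPropertyBlock(_mpb);
            _previous = transform.localToWorldMatrix; // becomes "previous" next frame
        }
    }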
     
  30. Kolyasisan

    Joined:
    Feb 2, 2015
    Posts:
    397
    I've been wondering: will it be possible to use custom lighting models (like Blinn-Phong) in HDRP's shaders while still keeping all the shading/lighting calculations deferred?
     
    P_Jong likes this.
  31. pastaluego

    Joined:
    Mar 30, 2017
    Posts:
    196
    Is there any way to use SRP for layer culling with something other than Unity's Layers?
     
  32. Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,225
    Can you log a bug on this? We should support it.
     
  33. XRA

    Joined:
    Aug 26, 2010
    Posts:
    265
    Any word on whether ScriptableCullingParameters.cullingPlaneCount is supposed to allow 0 to 10 planes? I'm trying to do some clipped portal-frustum culling.

    It looks like there's an engine-side constant that the number of culling planes must match; trying to change cullingPlaneCount produces an error (at least it looks to be named as a constant, kPlaneFrustumNum):

    Assertion failed on expression: 'params.cullingPlaneCount == kPlaneFrustumNum'
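
    What I'm attempting, roughly (a sketch against the 2018.3 experimental API; the exact signatures may differ between versions):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Rendering;

    public static class PortalCulling
    {
        public static void Setup(Camera camera, Plane portalPlane)
        {
            ScriptableCullingParameters cullingParams;
            if (!CullResults.GetCullingParameters(camera, out cullingParams))
                return;

            // Replacing one of the existing planes works:
            cullingParams.SetCullingPlane(0, portalPlane);

            // ...but changing the count trips the engine-side assert:
            cullingParams.cullingPlaneCount = 4;
        }
    }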
     
    equalsequals likes this.
  34. equalsequals

    Joined:
    Sep 27, 2010
    Posts:
    154
    Not sure if this is specific to SRP or the Package Manager, but it seems odd that one can elect to pull in an SRP that depends on SRP Core and Shader Graph, and neither of those packages is automatically pulled in, resulting in a significant amount of error spam.
     
  35. watsonsong

    Joined:
    May 13, 2015
    Posts:
    555
    Dynamic batching in the shadow caster pass seems to have no effect. And the Frame Debugger doesn't seem to show any dynamically batched draw calls.
     
  36. bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    Hi all, I've realized that searching the documentation across LWRP, HDRP, and Shader Graph hasn't been easy lately, because the docs are hidden inside each package's folder.

    So I made this site: https://bitinn.github.io/ScriptableRenderPipeline/

    It's based on the latest documentation from the SRP GitHub repo and is easier to search and browse; hope this helps someone.

    I also made a PR to fix some broken documents; let me know what you think:

    https://github.com/Unity-Technologies/ScriptableRenderPipeline/pull/2609
     
  37. MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,982
    You, sir, are a scholar and a saint.
     
  38. Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Huh? What happened to the wiki?
     
  39. equalsequals

    Joined:
    Sep 27, 2010
    Posts:
    154
    Question: is there a reason why Camera.cameraType can be treated as System.Flags? I can see no conceivable situation in which a Camera could be, for instance, both CameraType.Game and CameraType.Reflection. It seems like an oversight, or perhaps a dirty hack to skirt around some API issue. It may seem a bit nit-picky for sure, but I could see some problems coming from this in a naive implementation.
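
    To make that concrete, a small sketch of the two checks that diverge as soon as a combined value shows up:

    Code (CSharp):
    using UnityEngine;

    public static class CameraChecks
    {
        // Strict: the camera is exactly a Game camera and nothing else.
        public static bool IsGameCameraStrict(Camera cam)
        {
            return cam.cameraType == CameraType.Game;
        }

        // Flags-style: also true for a (hypothetical) combined value
        // such as CameraType.Game | CameraType.Reflection.
        public static bool IsGameCameraMasked(Camera cam)
        {
            return (cam.cameraType & CameraType.Game) != 0;
        }
    }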

    EDIT: In fact, I already see such problems in LWRP. I guess I'll open a bug...
     
    Last edited: Dec 12, 2018
    Shorely likes this.
  40. bitinn

    Joined:
    Aug 20, 2016
    Posts:
    961
    The only "officially maintained" docs are the ones in the SRP repo (within each package's folder, not the GitHub wiki), and they can be accessed via the Package Manager UI -> select a package -> View Documentation.

    The thing is, each package comes with its own version of the docs, and the current releases (4.6.0-preview, 5.3.0-preview) contain quite a few broken links and images, as well as missing docs, which is why I made that PR.

    In the meantime, I need a way to quickly reference things in LWRP and Shader Graph, hence a quick site until the Unity team gets around to releasing a fixed version...

    https://bitinn.github.io/ScriptableRenderPipeline/

    (Shader Graph nodes should really stop linking to the GitHub wiki; those pages are outdated. I guess the Unity team is just too busy to update them.)
     
  41. Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    Ah, I see, good point.
    Great work on the site, btw.
     
    MadeFromPolygons likes this.
  42. kelloh

    Joined:
    Mar 2, 2015
    Posts:
    29
    I've got a use case for which the Scriptable Render Pipeline isn't quite "scriptable" enough: multi-view rendering for a holographic display called the Looking Glass.

    599660-looking-glass-display-2.png

    Right now, rendering with the Looking Glass SDK involves a Camera.Render() call for each view (which can mean something like 32 calls).

    A couple problems with this approach:
    • Similar to the way VR rendering used to be inefficient until single-pass rendering was implemented, using a Camera to render each view individually results in tons of state switching on the GPU (compared to the ideal setup).
    • Shadow maps are re-rendered for each view, when theoretically they could be shared among all the views.
    What would be ideal is if SRP provided a way to multiplex draw calls exactly like this image from this page:

    But ScriptableRenderContext.DrawRenderers doesn't allow for that kind of draw-call multiplexing.

    Right now, in my 32-Camera.Render() code, I have:

    Code (CSharp):
    foreach camera view
        setup culling for camera view
        foreach shadowing light
            render light shadowmap
        foreach batchable material
            render mesh batch into view
    When what I want instead is something like:

    Code (CSharp):
    foreach shadowing light
        render light shadowmap
    setup culling for all camera views together
    foreach batchable material
        foreach camera view
            render mesh batch into view


    Looking at the Frame Debugger on a complex project I have for the 32-view Looking Glass, it's obvious that things could be *so much more performant*.

    Are there any planned changes to the SRP API that might help me out?
     
  43. XRA

    Joined:
    Aug 26, 2010
    Posts:
    265
    @kelloh one way to do this, similar to single-pass stereo: you'd need a RenderTexture with volumeDepth set to 32 and dimension set to Texture2DArray, then you would target it with:

    Code (CSharp):
    // NOTE: CubemapFace.Unknown and depthSlice -1 mean "target all slices"
    _cmd.SetRenderTarget(_multiViewID, 0, CubemapFace.Unknown, -1);
    If each slice needed a different orientation, you'd provide a global ComputeBuffer with the required per-slice matrices (though in this case it seems the camera is the same throughout, which makes it simpler).

    At the very least, a global ComputeBuffer of Vector4 planes would be set, describing where to clip the geometry for each slice. This would drive SV_ClipDistance0 and 1 in the vertex shader (defined in the vertex-to-fragment attributes):
    Code (CSharp):
    struct VaryingsMeshToPS
    {
        float4 positionCS   : SV_Position;
        uint instanceID     : SV_InstanceID;
        // ETC
        float clipDistance0 : SV_ClipDistance0;
        float clipDistance1 : SV_ClipDistance1;
        uint slice          : SV_RenderTargetArrayIndex;
    };
    The shader used for this pipeline would need to be set up with instancing in mind. The general idea is to use something like Graphics.DrawMeshInstancedIndirect, with the IndirectArgs instance count set to the real object count multiplied by the volumeDepth slice count. Going through Graphics should push it through so that you can use ScriptableRenderContext.DrawRenderers later in the pipeline (if called in Update, etc.). Not sure if CommandBuffer.DrawMeshInstancedIndirect behaves the same.

    In the vertex shader, modulo the instanceID by the slice count to get the slice index, used for assigning SV_RenderTargetArrayIndex and setting the clip distances.
    The geometry should then render into its slice and be clipped within it. I think the shadow pass would take place before this, without any slices; I imagine a directional light would be fine since it renders from the light's POV, and objects then sample the shadow map when they render.

    Code (CSharp):
    uint sliceIndex = input.instanceID % VIEW_COUNT;
    input.instanceID /= VIEW_COUNT;

    output.slice = sliceIndex;
    output.clipDistance0 = dot(positionWS, SlicePlanes[sliceIndex * 2 + 0]); // near
    output.clipDistance1 = dot(positionWS, SlicePlanes[sliceIndex * 2 + 1]); // far
     
    Last edited: Dec 21, 2018
    kelloh likes this.
  44. kelloh

    Joined:
    Mar 2, 2015
    Posts:
    29
    Yes! Thanks for writing this up! I got super excited when a colleague showed me SV_RenderTargetArrayIndex. You can even emit extra geometry for each slice in a geometry shader, if your platform supports them.

    But I'm also very interested to find out whether it's possible to solve the problem in a more general, "Unity-esque" way: one that fits naturally into Unity's MeshFilter -> MeshRenderer pipeline, and ideally doesn't involve rewriting your shaders (beyond adding support for instancing, maybe, which is a well-documented thing at this point). I'm comfortable doing an optimization pass using CommandBuffer.DrawMeshInstancedIndirect, but I was hoping there'd be a way to leverage the work done for single-pass stereo into a more generalized multi-view solution. Or at least to let Unity folks know some of us are pining for it.
     
  45. XRA

    Joined:
    Aug 26, 2010
    Posts:
    265
    @kelloh yeah, definitely. I think that must be what's happening in context.StartMultiEye(camera): somewhere internally, Unity must be doubling the instance count (at least with stereo instancing). It would be great if they could just expose that, so we could say context.StartMultiEye(camera, 32) etc. without having to manually issue the draw-mesh calls.

    If per-camera view and projection matrices are needed, maybe it would be similar to how Matrix4x4s can be provided to some of the draw-instanced calls, except in this case it would be camera matrices in an array matching the multi-eye count.
     
    kelloh likes this.
  46. michael_unity988

    Joined:
    Aug 30, 2018
    Posts:
    9
    Is it possible to render the scene ourselves instead of Unity doing it? Or to override the mesh renderer?
     
  47. huwb

    Joined:
    Oct 19, 2013
    Posts:
    24
    Hi all, first off: SRP is far and away the best graphics coding environment I've worked in. Really intuitive, clean, and fast. Super nice work!

    I make middleware, and rather than modifying or writing new pipelines, I aim to plug into the existing ones. I would like to ship middleware that supports LWRP and HDRP out of the box. And if the user has their own pipeline, ideally I still have a pretty good chance of working by default; for example, I might insert something after the pre-depth pass, which is likely to work in most renderers.

    On 2018.3 I'm experimenting locally with having an event after the render setup is complete, and adding a function InsertPassBefore<BeforePassType>() to ScriptableRender, so that I can insert my own passes as required. So what used to be:

    _mainLight.AddCommandBuffer(LightEvent.BeforeScreenspaceMask, _bufCopyShadowMap);

    Is now inserted retrospectively:

    renderer.InsertPassBefore<ScreenSpaceShadowResolvePass>(mySampleShadowsPass);

    A variant of this worked well for me locally, and it seems like it would work well in general. Is this something that could be added? I'm happy to work up a PR against 2018.3 if it would be useful.

    If supporting this kind of functionality is not planned, then it would be good to understand how middleware (like Asset Store content) is meant to ship against SRP in the future.

    Oh, and please forgive me if I've missed something obvious; I did search around but didn't find anything. Please point me to it if there is already something about this.

    EDIT: Turns out I did miss it. I've just seen IAfterDepthPrePass and similar, which can be attached to the camera and gives me the hook I need in this case. Sorry for the noise! I'll think through how best to use this; I'm not sure right now how to make sure this script is always assigned to the relevant camera to ensure I get the callback (or maybe I should assign it to all cameras?). The one aspect I liked about the above suggestion is that I got an event for each render setup and could then decide which to use. But in any case, it's good there's an option for this stuff, and I can make something work.

    I didn't see the same hooks in HDRP; is this kind of functionality planned for that pipeline as well?

    Thanks,
    Huw
     
    Last edited: Jan 1, 2019
  48. Danua

    Joined:
    Feb 20, 2015
    Posts:
    197
    In Unity 2019.1a013, after installing the VFX Graph package I get this error:

    Library\PackageCache\com.unity.visualeffectgraph@5.2.3-preview\Editor\Expressions\VFXExpressionTransform.cs(64,47): error CS0117: 'VFXExpressionOperation' does not contain a definition for 'InverseTRS'
     
  49. rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    It's possible they've changed the API for that. Right now the bleeding-edge 2019.1 alphas work with the latest GitHub master and hdrp-master. HDRP is in the middle of changes right now, so getting a new version into the Package Manager could take time (unless they do a minor fix release for the API changes).
     
  50. jjxtra

    Joined:
    Aug 30, 2013
    Posts:
    1,464
    Two feature requests:

    1] Move RenderPipeline.cs out of the Unity engine and into the render pipelines core module. Does the core Unity engine need this class?
    2] Add two new events to RenderPipeline:
    public static event Action<Camera[]> endFrameRendering;
    public static event Action<Camera> endCameraRendering;

    There are many use cases for pushing/popping or saving/restoring state, but without knowledge of the end-of-rendering events this is tricky (see the sketch below).
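
    For illustration, here's how the proposed events would pair with the existing begin events (a sketch: beginCameraRendering exists in 2018.x under UnityEngine.Experimental.Rendering; the end event is the request, not an existing API):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Experimental.Rendering;

    public static class RenderStateGuard
    {
        public static void Install()
        {
            // This half exists today:
            RenderPipeline.beginCameraRendering += PushState;
            // This half is the request; without it the matching pop has no reliable hook:
            // RenderPipeline.endCameraRendering += PopState;
        }

        static void PushState(Camera cam) { /* save render state */ }
        static void PopState(Camera cam)  { /* restore render state */ }
    }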

    Thanks for taking the time to read.

    - Jeff Johnson