Official Lightweight Render Pipeline is Evolving!

Discussion in 'Universal Render Pipeline' started by Tim-C, Jul 9, 2019.

  1. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    I think the Mali 400 should support the "GL_ARM_shader_framebuffer_fetch_depth_stencil" extension, so just use that. Unity supports framebuffer fetch for color automatically when you declare the fragment color output as "inout", but maybe not for depth/stencil values. In that case, you may have to write a GLSL shader and enable that extension manually. This has nothing to do with whether you're using URP or not.
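
    For reference, a minimal sketch (my own, not from Unity docs or this thread) of the color case in a Unity CG/HLSL fragment function; the depth/stencil case would need hand-written GLSL with the ARM extension enabled:

    // Hedged sketch: on GPUs with framebuffer fetch, an inout color output arrives
    // holding the value already in the framebuffer. _MainTex and v2f are assumed
    // to be declared elsewhere in the pass.
    void frag (v2f i, inout half4 col : SV_Target)
    {
        half4 src = tex2D(_MainTex, i.uv);
        col = lerp(col, src, 0.5h);   // blend the new sample over what is already there
    }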
     
  2. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I'm not used to writing low-level shaders (relative to the surface and standard shader interfaces we have), and they brag that it compiles and adapts to each platform for people like me. Since we are supposed to go through Shader Graph, I was wondering if they do that optimization for me when compiling for Midgard. I had no idea about OpenGL extensions until you brought that up and I looked it up. I have no idea how to use that either, especially in the context of Unity; nothing in the Unity code I've read would help me find the context.

    For me, when I say OpenGL ES 2.0, it's about the inherent limitations that spill into the sheltered garden Unity builds for us (like the limited number of texture samplers, or no texture reads in the vertex pass), not full knowledge of the feature set. Also, I haven't checked (yet) whether they have a soft-particle shader in LWRP, because it would need that extension to be performant. It's a global issue on all tile-based architectures, so probably not just Midgard.

    I try to read the documentation of the target GPU when possible, but it's more for education, since I mostly get the high-level idea. I probably can't implement the low-level code, as I don't even know the proper process and all the initialization; years of going through the standard shader didn't teach me about the boilerplate I see and don't understand when I look at non-Unity shaders.

    I just started downloading the URP source code, and I had a panic attack when I couldn't find anywhere on the internet what the frak that "real" type is, until I found the define hidden somewhere. I kind of understand it at a high level, but I wouldn't be able to use it on my own (I have so many questions). I don't know much about HLSL except what I can infer from knowing programming in general.

    Everything is so scattered. I only understand the core parts that look like stuff I already did or know, and I ignore most of the boilerplate that is certainly important for crossing the threshold to actually use the source code. I don't understand the flow of the file organization either.

    I'm just worried that I'm stuck in a place where I'm not competent enough to do the stuff I can conceptualize, with no clear boundary of what's possible inside Unity. The question is kind of strategic, in that it will clue me in on how much I need to know and how much Unity handles for me as promised. If it's too low level, why bother? Why not start using a custom engine? What does Unity do for me then?

    I mean, I learnt shaders through old Unity; it allowed me to get stuff done without having to consider everything. I was sheltered until I could handle more complex things, so I'm making my way down through more opaque things. But I would like to know exactly what I need to learn, what the boundary of this new way of doing shaders is. My experience is that past a certain proficiency with shaders, people get less patient when you start trying to do "wacky" things, so you hit a wall of progression.

    This is anxiety-inducing stuff, because they basically reset the whole knowledge base. My project isn't ready yet, and I'm not sure I can commit to staying on the old code by the time I sort out the artistic problems.
     
    sniffle63 likes this.
  3. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    You're not the only one.
    My advice is: stay away from URP or HDRP until they release fully detailed and comprehensive documentation (this could mean YEARS).
    And if they don't ever release it, then just stick to the built-in renderer.
    They will eventually have to find a solution once they see their users are not using their shiny new renderers and are going back to built-in.
     
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Speaking of which, instead of sleeping that night (it's 6:17 as I write this), I tried to find whatever information I could, even using the "hidden" Unity 2020.1 manual as reference.

    I want to use the project I'm preparing as the test: it's a hypothetical crude GI solution, compatible with OpenGL ES 2.0, basically a crude runtime baking solution ... as a test bed for how "flexible" URP is as a built-in replacement.

    Here are the steps I plan to do with shaders:

    1 - Inspired by the overdraw visualization shader, I devised a cheap voxelization that simply blends fragments' hashed positions into a voxel grid as bit-depth numbers, which allows getting texture size² × 32 voxels in a single render. Decoding the bits is done with a LUT of all 4-bit numbers (there are no bit operations in OpenGL ES 2.0), unpacking the final number into two base-16 numbers to query the bits in the LUT. Similarly, this would allow ray marching, using octets as Morton cubes, to get an 8-samples-deep octree stored in the mipmaps.

    2 - The goal is to have a space partition where each empty voxel can spawn and query a cubemap probe that captures the scene as UV and depth textures of the close scene chunk (the depth texture still captures the whole far field).

    3 - The lightmap UV layout is used to generate and bake data into many textures that act as a lightmap G-buffer of the environment, storing normal, albedo, world position, shadow-masking data, and a hashed index from that fragment to the relevant cubemap, with fragments hashed to non-empty voxels assigned to the closest neighbor. Since we compute light on the lightmap, it can be done asynchronously, and objects just sample the resulting texture instead of computing light every frame.

    4 - A shader pass that accumulates the result of sampling the lightmap G-buffer by using the cubemap UV as an address, sampling back each point's lighting over the hemisphere from the accumulation, using the normal as a reference direction to sample that hemisphere, in a box-projected fashion based on the size of the voxel. That is, it's a lighting graph that maps each point of the lightmap to another point of that same lightmap, such that it updates recursively, using the probe's UV address as a visibility structure.

    5 - Hashing dynamic object positions to the voxel grid and using the box-projected sampling of the cubemap UV to sample the light in the accumulation texture. Each area has its own lightmap GI setup and is surrounded by a close skybox that captures lighting from other chunks, for rays that don't fall inside the cubemap when querying it.

    Observations:
    a. It's not clear if URP is compatible with custom render textures; most of the passes above are done through them.

    b. It's not clear if I can do a camera render to texture; it's needed for the cubemaps. Ad hoc Unity cubemaps don't have an option in built-in either to update the render with an arbitrary shader.

    c. We cannot do camera shader replacement, which is necessary to capture the UV and depth into the cubemap and to get the cheap voxelization (as the feature comparison with built-in tells us). It's unclear if there is an alternative; combined with b., we can't make custom probe updates.

    So since it's not just basic blending of textures, I can't do much of anything, while I can with other engines and with built-in to some degree. URP limits innovation if "no" is the answer to a, b and c.


    I noticed Unity released a blog post and a video that address some of my early complaints too: they have HLSL includes with custom nodes, and they do a demonstration of custom lighting (why they don't have lighting nodes of their own implementation, I don't know). I find it quite messy though.

    https://www.youtube.com/watch?v=_jTXd3x6gOY

    I'm a huge proponent of some visual scripting, though I never said typical node-based graphs were the solution, and Unity won't wait to find the right paradigm anyway.

    EDIT:
    Personal note about URP:
    https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Simple-Noise-Node.html
    Don't use fracSin noise :eek: that's the kind of hidden stuff I noticed URP won't optimize
    https://www.shadertoy.com/view/4djSRW
    https://www.shadertoy.com/view/XdGfRR
    These seem so much better than an ugly sin-based hash that is expensive because of the trig.
    Thank god they share the implementation in the node library documentation. URP is too high level sometimes.
    More noise warning goodness:
    http://byteblacksmith.com/improvements-to-the-canonical-one-liner-glsl-rand-for-opengl-es-2-0/
    https://www.shadertoy.com/view/ltB3zD

    Also I need camera stacking, since I want to make a space game that renders fast ... cries in shader
     
    Last edited: Sep 21, 2019
    AcidArrow likes this.
  5. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    970
    I can answer one of those, yes, render textures work the same as before. We use them heavily.
     
    AcidArrow and neoshaman like this.
  6. sniffle63

    sniffle63

    Joined:
    Aug 31, 2013
    Posts:
    365

    This is my main problem: the normal built-in pipeline can achieve way more than LWRUPLMNOP is going to be able to do for probably the next 5 years.

    Actually no, my biggest problem is a random pointless name change that updates a system to have fewer features and not work.
     
    superjayman and atomicjoe like this.
  7. sniffle63

    sniffle63

    Joined:
    Aug 31, 2013
    Posts:
    365
    I say this as respectfully as I can: are we meant to actually be using this, or is it like a super early alpha? I don't see it being possible to use this in a game that's more than a prototype (I actually don't even see it working in a prototype). My conclusion from reading multiple forum posts, the official post and doing some quick tests is that this isn't usable and won't be for at least 2-5 years :/

    Is this just a marketing move for people who are NOT actually using the engine?

    Agreed, I don't see how we are supposed to use this even if we wanted to...
     
    Last edited: Sep 22, 2019
    atomicjoe likes this.
  8. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    Exactly: the marketing team is 4 years ahead of the actual development team...
    I really think marketing and corporate politics are eroding Unity's image.
     
    noio, Hypertectonic and sniffle63 like this.
  9. EthanHunt

    EthanHunt

    Joined:
    Oct 9, 2012
    Posts:
    14
    Just checking, when will Shadowmask mode be available for this URP? Without Shadowmask mode, I cannot take URP seriously.
     
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Still haven't slept, lol, it's 3:37 this morning. I binged everything I could to understand what's going on with URP. I have been anxious because it's sold as the future replacement for built-in, but it's full of red flags ...

    Now I know why, hear me out:

    They are solving the wrong problem, but worse, it's a culture problem, not a competency problem. It's actually a variant of Conway's law ( https://en.wikipedia.org/wiki/Conway's_law )

    Basically this is designed by tech heads whose perspective on art comes through academic technical papers about lighting.

    exhibit 1
    That's the definition of an SVBRDF (Spatially Varying Bidirectional Reflectance Distribution Function). They are not trying to give you control over visuals; they have made the perfect instantiation of an academic paper. That's why the term "exact" comes up so often, but also why they don't want to mess with their lighting model: from their perspective they already gave you everything you need, the most optimized academic lighting possible. Which is also why you can't have vertex control; it doesn't fit their model of what shading is. That's the wrong problem they try to solve with Shader Graph ...

    They don't understand art,
    so they don't get why you would want to mess with their beautiful algorithm to make a wrap light that softens the mood of a scene ... Good art to them is exact light; there is so much wording that alludes to that. It's also why they can't stop talking about photorealism, as if it were the pinnacle of visuals. They have a shallow and narrow understanding of visuals.

    Since they only know academic papers, they are oblivious to how much light is faked, even when captured by camera in film production; there is a huge lighting rig designed to do things we could do by manipulating the lighting model directly, like said wrap lighting.
    These quotes aren't bad in themselves; it's the subtle gradation of all the lighting presented (whose output you can't access and modify), from PBR at the top to simple at the bottom. It permeates not only the documentation but all their talks.

    There is a subtle class hierarchy based on lighting model (below is played up for fun and effect):
    - HDRP, with Hollywood envy, copies the tools without truly understanding their purpose (the vectorscope doesn't even have a flesh line, and you can only apply a color LUT globally, unlike a real movie production where you track some areas to apply a different one). That's the high class: they want to court upper-class people like Disney and co. to show they have value. They try really hard to shelter this class from the dirtiness of the back kitchen, hiding terms like PDF (probability distribution function), by making the tools as tactile and metaphorically close to what this class is used to, hence accurate cameras and light transport, so they can say "see, just like your expensive movie! You don't have to 'get' what's different about real time, it's just fast and cool!".
    - Then there is URP, the proletarian offering. It's a display of how smart they are: "we know better than them what a good shader is, it's the best light on the market period, it's performant, buy!"
    - Then there is the thing for hobos: it's the scraps, it's not even photorealistic, the hardware is weak, be glad we have something for you!
    - Then there are the hippies and kids who like cel-shading stuff: "these guys are funny, it's not even realistic shading, it's not serious stuff like we do here, hey let's show them they can do their kiddie stuff with a super ugly presentation of a dumb posterize pass ... we are so in! Yo!". They simply don't get what the deal is here.

    The last one about cel shading is especially telling:
    https://www.youtube.com/watch?v=joG_tmXUX4M
    Contrast it with this:

    This last one, despite being in Japanese, still comes across as packed with great insight that respects the people, the art and the techniques. It goes through the technical aspects, contrasts many practices, uncovers limitations due to human perception, and derives advice you can use.



    They may not realize it, but the way they talk and the way they present things carries a huge undertone that for them there are two types of people: smart senior tech artists who know math, and dumb artists who can only handle a toy interface they need to helicopter-mom over.

    The problem is that they think we can solve all visual and art problems with render passes and SVBRDFs. Despite the fact that render passes don't allow you to pass per-object material parameters in the override...

    It's pretty obvious they had to course correct with the very last cel-shading video with the minion art. The unlit documentation stresses how much it is not for lighting, and now they use it as the custom-light go-to, with a very messy way to edit custom light nodes.

    It hints at something all Unity initiatives have, the silver-bullet syndrome: the thing ends up not being the announced panacea, and turns into a bloated Franken-monster because it was forced to stray from the narrow focus it started on, cohabiting with the solution it was supposed to replace (when it's not outright canned like UNet). That's a corporate-wide culture problem. Surface shaders are going to stay, with all their faults.

    Basically URP is the equivalent of sitcom lighting, a standardization of looks such that everything that uses it will look the same. A tender-moment scene will have the same light as a war scene (we can't move the light terminator to soften it). And they probably think that if you use URP and design for low-end hardware, you don't have any artistic ambition to begin with; you are not HD (Hollywood Desirable).

    Now, the source code folder structure is a huge organizational mess. After diving into it, if you want to do the same, I think <Name>RenderPipeline.cs is the starting point. The two keywords to look for generally seem to be RenderPipeline and ScriptableRenderer; everything else is support definitions of elements used in the pipeline, and those are the things to look for. I encourage you to look at Keijiro's retro pipeline https://github.com/keijiro/Retro3DPipeline
    You will be surprised! Apparently the shader is actually doing all the real work.
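
    To make that concrete, here is a minimal sketch of what such a <Name>RenderPipeline.cs pair can look like: just cull, clear, draw opaques, skybox, like Keijiro's. This is my own illustration (class names are made up), assuming the 2019.3-era SRP API, so take it as a sketch rather than the official way:

    using UnityEngine;
    using UnityEngine.Rendering;

    [CreateAssetMenu(menuName = "Rendering/MinimalPipelineAsset")]
    public class MinimalPipelineAsset : RenderPipelineAsset
    {
        protected override RenderPipeline CreatePipeline() => new MinimalPipeline();
    }

    public class MinimalPipeline : RenderPipeline
    {
        protected override void Render(ScriptableRenderContext context, Camera[] cameras)
        {
            foreach (var camera in cameras)
            {
                // Culling: ask the camera for parameters, then let the context cull.
                if (!camera.TryGetCullingParameters(out var cullingParams))
                    continue;
                var cullResults = context.Cull(ref cullingParams);

                context.SetupCameraProperties(camera);

                // Clear the target.
                var cmd = new CommandBuffer { name = "Clear" };
                cmd.ClearRenderTarget(true, true, camera.backgroundColor);
                context.ExecuteCommandBuffer(cmd);
                cmd.Release();

                // Draw opaque renderers whose shaders have an "SRPDefaultUnlit" pass tag.
                var sorting = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
                var drawing = new DrawingSettings(new ShaderTagId("SRPDefaultUnlit"), sorting);
                var filtering = new FilteringSettings(RenderQueueRange.opaque);
                context.DrawRenderers(cullResults, ref drawing, ref filtering);

                context.DrawSkybox(camera);
                context.Submit();
            }
        }
    }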

    Also, apparently they use .shader files, so it's not all .hlsl, so standard Unity-style coding will probably stay, but I can't say for sure. I should probably learn more about how to use these GL extensions too.

    The render pass thing is great for what it does; the SVBRDF graph, however, is a huge regression from everything that predates it, from Strumpy to Amplify with a stop at Forge:


    In fact, the proper thing to do is to make an equivalent of the render pass for managing everything about the light. Ideally we wouldn't need URP or HDRP and wouldn't have to go deep into the source code mess all the time ... We would just plug in the lighting library and manage light types with a nice plug-and-play interface that exposes their variables, while still being able to create our own lighting model as a master node (JUST LIKE STRUMPY).

    I mean, we don't seem able to do something like the video below (light with no NdotL) with URP (and it doesn't seem planned at all), WHICH IS MORE PERFORMANT than their lighting if used in the same single pass...


    It's just that lighting control is HUGE in making visuals, not just managing the SVBRDF ... Hiding light behind a stupid wall of very opaque source code doesn't help.

    In fact, I would suggest they release a minimal SRP, that is, a template to extend that presents the most important aspects, without having to wade through all the permutations of a complete SRP like Universal and HD.

    Also, they need to fix shader replacement ASAP. Can we do a render pass on a render texture? (apparently not: https://www.reddit.com/r/Unity3D/comments/c64mir/camera_replacement_shader_and_lwrp/)
     
  11. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    970
    YMMV, depends on your needs, expectations, and how much you are willing to modify things yourself. We use it in production but we are willing to fix stuff ourselves.
     
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    URP is production ready.
    It's just not "universal" and has a very narrow focus on "sitcom" lighting. Don't expect to do anything that doesn't follow their vision of shading. Within those constraints it's great; the problem is just the constraints.

    There is also documentation:
    start here: https://blogs.unity3d.com/2018/01/31/srp-overview/
    then here: https://docs.unity3d.com/Packages/c...e.Rendering.Universal.ScriptableRenderer.html

    A minimal SRP example is also available:
    https://github.com/stramit/SRPBlog/tree/master/SRP-Demo
    Compare with Keijiro's Retro3D SRP too, it's also minimal; link again: https://github.com/keijiro/Retro3DPipeline
     
    Last edited: Sep 22, 2019
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    EDIT:
    While it's about URP, I have a partial answer to my question about tile rendering optimization. It's been a while since I last looked at the documentation, but there is a memoryless flag:
    https://docs.unity3d.com/ScriptReference/RenderTextureMemoryless.html
    I just need to find a way to see if it's usable in URP, and if the Mali 400 qualifies (is there Vulkan on that?). edit: (I guess not? https://developer.arm.com/solutions/graphics/apis/vulkan ) but then is it still possible to hack that with the extension? :oops:
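
    For what it's worth, a minimal sketch of requesting that flag from C# (my own example, not from the thread); it only takes effect on tile-based GPUs under APIs that support it (Metal/Vulkan) and is silently ignored elsewhere:

    using UnityEngine;

    public static class MemorylessRTExample
    {
        public static RenderTexture Create()
        {
            // Color target with a 24-bit depth buffer that is asked to stay in tile memory.
            var desc = new RenderTextureDescriptor(1024, 1024, RenderTextureFormat.ARGB32, 24);
            desc.memoryless = RenderTextureMemoryless.Depth;
            var rt = new RenderTexture(desc);
            rt.Create();
            return rt;
        }
    }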

    But anyway, it's clear to me now that URP isn't a replacement and isn't what I need; its philosophy is not compatible with control. I'm starting to look at custom SRPs, and maybe at salvaging the huge work done on the principal Unity SRPs to jumpstart a proper pipeline with everything I need. Ideally a plug-and-play SRP that leeches off the library of the official ones.

    I need to find and understand access to and control of render textures, cubemaps and probes. Anyone have a pointer to kickstart that?

    Also I need confirmation that .shader files aren't going anywhere :sweat: !

    They definitely need to separate 3D URP from 2D URP; they are doing a big generic black-box monolith again, and that's bad, it's built-in 2.0 (the bad part) if they continue that way. URP should have stayed a small, philosophically "sitcom lighting" pipeline (I say sitcom lighting because of the history of its creation, it's not that derisive), i.e. a way to get fast pseudo-realistic lighting (no custom control, not a replacement for everything and anything), and be advertised that way. Expanding that philosophy into a "universal" philosophy is bad; the two are at odds, and we see that in the reactions, because it's a bad place to expand from toward supporting a broad range of creativity and artistic endeavour. It's a flawed view of custom rendering. Shader Graph, currently, is also fundamentally flawed in its approach.

    edit
    This is how I would brand the offering:
    Canvas SRP: for 2D
    Creative SRP: for people who want creative control
    LWRP: for efficient, scalable, realistic lighting
    HDRP: the Hollywood-class renderer.

    The render pass thing is great; they need something similar for:
    - custom texture buffer management
    - custom light management (i.e. fine control of the lighting affecting groups of objects)
    - Shader Graph inputs that expose the light parameters to modify per light type, and how to blend them with the material (basically surface shaders++)
    - a vertex manipulation graph
     
    Last edited: Sep 22, 2019
    noio and Immu like this.
  14. Vallar

    Vallar

    Joined:
    Oct 18, 2012
    Posts:
    177
    So I recently read the blog post announcing the whole "the standard pipeline is going away, it is now URP not LWRP". I have two questions:

    1- What are the differences in features between URP and the standard one (for example the lack of AO in LWRP)?

    2- When are the features missing from URP going to be implemented to match the standard one? If that isn't planned, then what is the alternative?
     
  15. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    It's here
    https://docs.unity3d.com/Packages/c...l/universalrp-builtin-feature-comparison.html

    My opinion is that URP is an incredibly narrow pipeline (and Shader Graph is narrow too). If you want to be creative, you might hit some roadblocks; if you just need efficient realistic light with sensible constraints, it's the best you can get.

    If you want to get creative, you'll need to do a custom SRP. It will be less scary once they figure out how to do proper tutorials, but there are a few blogs around. They recently added a hack extension node to inject proper code, so that mitigates some of the complaints about lack of shading control.


    tangent:
    URP is optimized to tile rendering through renderpasses
    https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.BeginRenderPass.html
    So that answers my question neatly: if the Mali 400 supports it, it probably will do.

    Still looking for a way to emulate replacement shaders though ...
     
    Last edited: Sep 22, 2019
  16. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    Here is the updated comparison chart:
    https://docs.unity3d.com/Packages/c...l/universalrp-builtin-feature-comparison.html

    edit: whoops, didn't see your last post neoshaman
     
  17. Vallar

    Vallar

    Joined:
    Oct 18, 2012
    Posts:
    177
    @neoshaman and @atomicjoe thank you very much for the link. That helps a ton.

    Am I correct in feeling that the standard pipeline had lots of stuff "out of the box", like AO, shadows from multiple sources, etc., and URP is basically "you gotta roll your own thing, or wait for the eternal WIP to finish and hope it is not half baked like most other stuff" for many of these things?

    This just feels like a really quick marketing "let's advertise cool stuff" move, when actually applying these things demands FAR more than what the current tech is capable of (or a graphics programmer taking the time to wade through the not-so-great documentation to figure things out)?
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Pretty much. If you want to make simple artistic lighting, you will have to roll an SRP, and that means figuring out the entirety of the light architecture and the optimization per platform.

    Well, that's not entirely true anymore, since they made custom nodes and an example of how to inject custom HLSL and how to pick up the light parameters given by their shader library. Which is like an odd last-minute patch that is tonally different from the cleanliness they were going for.

    Built-in was great, despite the limitations, because basically:
    1. It handled the complex problem of fragmentation and light architecture up front.
    2. It handed you only the common data necessary. You had to learn a bit about the different implementation limitations (deferred, forward) and that was it. Then you would do a custom process on top of that to get specific lighting and materials.
    3. It picked that up and parsed it to make it work with the fragmentation.

    SRP is an attempt to give you control over 1, but in doing so, it means every architecture will be unique and incompatible. So if you want a different chair in the house, you have to burn the house down and rebuild it.

    URP doesn't give you control over 2, up until the "patch". So you were limited to whatever light they gave you and couldn't modify it. I think they expected you to go through their neat render pass stuff, but that solves an entirely different problem.

    What they really need is a nice interface for 2; that is exactly what they show in their custom toon shading video with the minions, but properly implemented. It's the best of both worlds: you don't have to rebuild the house to get a specific light effect, and you get to be efficient because you didn't break their smart lighting architecture.

    And that was the surface shader philosophy. The current built-in surface shaders do dark-magic stuff in the background, hence why they needed to be deprecated; they tanked flexibility. But they over-corrected, IMHO. Most of the out-of-the-box stuff in standard was built over the course of long years. The problem really is that they were too protective of their efficient™ implementation in the official SRPs, so they sheltered us from adding what's missing; as a result they don't have the options (yet) and it's difficult to extend them easily.

    To be frank, it was designed as LWRP, i.e. a focused alternative that did one thing very well. It's only the Universal rebranding (with the specter of built-in deprecation) that completely messed up their message and created anxiety. That's most surely a management and marketing thing.

    In order to fix that oopsie, they need to create a strong interface to manipulate the lighting parameters, geometry, post-processing and custom texture rendering features, i.e. be truly universal. And the next step is to figure out at least a way to make SRP building less of a pain, especially managing a custom light architecture that works with device fragmentation.
     
    noio and Immu like this.
  19. liiir1985

    liiir1985

    Joined:
    Jul 30, 2014
    Posts:
    147
    The problem you described, I think it's also the result of the lacking documentation.
    I don't see any problem if you want to make a different lighting model, and it's not difficult either. In fact, you can do this with both Shader Graph and shader code.
    For surface shaders: with the structure of URP's shader library, it's totally fine to only implement the fragment shader and do what a surface shader does, or change the vertex shader a bit to fit your requirements, with much less effort than was necessary in the built-in pipeline. I wouldn't say it's less work than a surface shader, but it's not more; it just looks a little different than a surface shader.
    You can also make your own master node in Shader Graph to control every detail of the code it generates, and then provide it to your TAs, so they can make more effects based on it.
    Finally, you'll have the option to make your own renderer, to fit your really special needs, by reusing the common modules that are needed in most render pipelines and only implementing what is different.
    These things are all streamlined and relatively easy to achieve via URP. The only problem currently is that you have to find out how to do it by yourself (by reading the source code). Once you figure out how to do it, it's really not that difficult.

    I think the main problem with URP currently is really the documentation and feature parity; there are still some crucial features missing, like camera stacking, which is vital for UI (you can activate it yourself, but that's not something every end user should do).
     
    Last edited: Sep 23, 2019
  20. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Since you seem to know the lay of the land, do you have any pointers on how to add a replacement shader that renders into a custom texture target?
     
  21. liiir1985

    liiir1985

    Joined:
    Jul 30, 2014
    Posts:
    147
    You can do this with a custom renderer feature: inherit from ScriptableRendererFeature, make a public material field, make a new render pass by inheriting from ScriptableRenderPass, and add this pass to the render pipeline inside the render feature.

    Inside the render pass you can then get a temporary RT to store your result, set up a FilteringSettings (cull layer, etc.), and then render the objects using context.DrawRenderers inside the Execute method of the render pass (you can take a look at the implementation of RenderObjects; it has everything you'll need to implement what I described).

    After you have implemented these two small classes, you can add the feature to your SRP settings file and drag in the material you want to override for every object. The rendered result will be stored in the RT you specified and is available for use in the later passes of the rendering (including Shader Graph).
    You can control the whole process so it is rendered at the exact point inside the render pipeline you want, and in exactly the order you want, to ensure everything is rendered properly. It's not like in built-in, where it's rendered at some random point in the update loop.

    And you won't need to write more than 50 lines of code to do this; most of them are just boilerplate for the render feature, which could be generated automatically if Unity provided a code template for it later.

    If you only want a replacement shader, you can simply add a RenderObjects feature and specify the material you want to use, for which layer and at which render stage. A rough sketch of the two classes is below.
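
    As an illustration of the two classes described above, a hedged sketch (my own; class and texture names like DrawToTextureFeature and _ReplacementResult are made up, and it assumes the URP 7.x API — on older LWRP versions the namespace is UnityEngine.Rendering.LWRP):

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class DrawToTextureFeature : ScriptableRendererFeature
    {
        public Material overrideMaterial;   // material used to re-render the filtered objects
        public LayerMask layerMask = ~0;    // which layers to pick up

        DrawToTexturePass pass;

        public override void Create()
        {
            pass = new DrawToTexturePass(overrideMaterial, layerMask)
            {
                renderPassEvent = RenderPassEvent.AfterRenderingOpaques
            };
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            renderer.EnqueuePass(pass);
        }
    }

    class DrawToTexturePass : ScriptableRenderPass
    {
        static readonly int targetId = Shader.PropertyToID("_ReplacementResult");
        readonly Material overrideMaterial;
        FilteringSettings filtering;

        public DrawToTexturePass(Material material, LayerMask layerMask)
        {
            overrideMaterial = material;
            filtering = new FilteringSettings(RenderQueueRange.opaque, layerMask);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("DrawToTexture");

            // Temporary RT to hold the replaced rendering, exposed as a global texture
            // so later passes (and Shader Graph) can sample it.
            cmd.GetTemporaryRT(targetId, renderingData.cameraData.cameraTargetDescriptor);
            cmd.SetRenderTarget(new RenderTargetIdentifier(targetId));
            cmd.ClearRenderTarget(true, true, Color.clear);
            cmd.SetGlobalTexture("_ReplacementResult", targetId);
            context.ExecuteCommandBuffer(cmd);
            cmd.Clear();

            // Re-draw the filtered opaque objects with the override material.
            var drawing = CreateDrawingSettings(new ShaderTagId("UniversalForward"),
                                                ref renderingData, SortingCriteria.CommonOpaque);
            drawing.overrideMaterial = overrideMaterial;
            context.DrawRenderers(renderingData.cullResults, ref drawing, ref filtering);

            CommandBufferPool.Release(cmd);
        }
    }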
     
    Last edited: Sep 23, 2019
  22. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I really just need it on demand, not every frame. I'm basically trying to make a lightmap-based lighting solution: you render/update when needed, then only sample it per object.

    The real trick is that I want to add a global illumination approximation, which needs two distinct shader replacements:
    - one to render a voxelization of the scene at initialization time to do a quick space partition (I use a trick to fake a bitfield render using transparency);
    - then another one to render the scene with the UV colors of the lightmap and depth data, but into cubemaps in the empty voxel cells.

    Once this initialization/update pass is done, I do multiple async renders (framerate independent) that essentially take, for each point of the lightmap, after direct light computation using G-buffer-like data, a box-projected hemisphere sample of the cubemap closest to that point. Then we use the sampled data to fetch back into the lightmap and accumulate light at that original lightmap point.

    The accumulation part seems feasible with just CustomRenderTexture, but I couldn't figure out how to do the initialization. I mean, can we have multiple SRPs and rotate between them at will?
     
  23. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    It's clear to me that URP is not actually "universal" but a very specific renderer for specific projects. It's still the "lightweight render pipeline" with a fancy new name from marketing.
    And you know what? It's FINE for me.

    If they want to fracture their user base into different renderers, that's their problem...
    But what we REALLY REALLY NEED is a DETAILED DOCUMENTATION of the whole Scriptable Render Pipeline.
    Not the HDRP or the URP. The actual Scriptable Render Pipeline framework.

    If they are forcing me to switch from built-in renderer to an inferior solution like URP and break compatibility with all the custom shaders I have made over the years and all the shaders I bought on the asset store, I might as well make my own Render Pipeline.

    But I need an actual, plain-English and detailed documentation of the whole process of making my own render pipeline. Forcing me to find it by myself by diving into the code of URP is NOT the solution.

    We need full docs of the entire Scriptable Render Pipeline, in plain English, explaining ALL the internal steps and concepts, with code snippets and examples.
     
    Last edited: Sep 23, 2019
  24. liiir1985

    liiir1985

    Joined:
    Jul 30, 2014
    Posts:
    147
    Of course you can do it on demand: it has a culling mask you can use for filtering, or you can just add a switch to turn that pass on or off. It's all under your control.
     
    neoshaman likes this.
  25. liiir1985

    liiir1985

    Joined:
    Jul 30, 2014
    Posts:
    147
    Although I don't share your opinion that URP is a very specific renderer for specific projects, more documentation on SRP itself would indeed help a lot.

    But even if you want to implement your very own pipeline using SRP, URP is still a very good starting point for customization. Its framework will help you a lot.
     
  26. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    With a bit of HLSL code, even when using Shader Graph, you have full access to all the lighting inputs, which are:
    struct Light
    {
        half3 direction;
        half3 color; // actually color is stored as half4 in _AdditionalLightsColor[] – so we still have access to the alpha
        half  distanceAttenuation;
        half  shadowAttenuation;
    };
    If you want, you can go even further and calculate your own distanceAttenuation.
    Equipped with this information you can most likely implement any lighting model you want.
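
    For example, a hedged sketch of the wrap-lighting idea mentioned earlier in the thread, built only from those fields (the function name and the wrap parameter are my own, not part of the URP library):

    half3 WrapLighting(Light light, half3 normalWS, half wrap)
    {
        // Half-Lambert style wrap: pushes the light terminator past 90 degrees to soften the falloff.
        half NdotL = dot(normalWS, light.direction);
        half wrapped = saturate((NdotL + wrap) / (1.0h + wrap));
        return light.color * (light.distanceAttenuation * light.shadowAttenuation * wrapped);
    }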
     
    neoshaman likes this.
  27. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    No light position :( edit: I guess that's the struct of the main directional light; for other types of light I need the position, because the lightmap G-buffer doesn't have any spatial reference. I store the xyz of each pixel in a texture in order to (potentially, I haven't done it yet) compute light and shadow. Edit 2: I guess I could just capture the relative direction at data-baking time instead of the position? But then that means more stuff to update by rebaking all the pixel data, instead of just the light passed to the shader.



    Then they were super bad at communicating their own point, because almost all of that is buried in strata of complex code, and the talking points seem to avoid it (see when a question is asked in one talk and the presenter is kind of like, there will be no surface-shader-like stuff).

    I'm reading all the SRPs I have shared now. I get some local stuff, I get the bigger picture (cull, draw, render), but I can't get close to any practical level of doing it myself. And the fact that it's left out of their easy front-facing tools that are supposed to liberate us is concerning ...

    edit: They should probably further "extract" the cull, render, draw into their own files for accessibility; mashing huge boilerplate code together with the actual running code is hard to read. BUT then it's already sufficiently fragmented as is; it's a no-win situation, and I bet we aren't supposed to be looking at it this early, only hardcore pros, but the news of deprecation has kind of created a stressful situation.

    edit:
    Is the culling mechanism exposed, or is it on the C++ side and accessed through function calls?
     
    Last edited: Sep 23, 2019
  28. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    well, of course:
    float3 lightPositionWS = _AdditionalLightsPosition[perObjectLightIndex].xyz;
     
  29. liiir1985

    liiir1985

    Joined:
    Jul 30, 2014
    Posts:
    147
    You have 2 functions inside "Lighting.hlsl"
    int perObjectLightIndex = GetPerObjectLightIndex(i);
    float3 lightPositionWS = _AdditionalLightsPosition[perObjectLightIndex].xyz;

    Then you have the light position you need
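
    A small hedged follow-up (my own sketch, not from the URP library) of how that position might be used once you have it, e.g. for lighting a lightmap texel whose world position you stored in a texture:

    float3 toLight  = lightPositionWS - positionWS;   // positionWS read back from your baked texture
    float  distSqr  = max(dot(toLight, toLight), 0.0001);
    half3  lightDir = half3(toLight * rsqrt(distSqr));
    half   atten    = half(1.0 / distSqr);            // URP additionally applies a smooth range falloff on top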

    Well, like I said, you need full documentation. Otherwise the code and library are all well structured; you just need to know where to find things and what they are for.
     
  30. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I was still looking at the .cs files.
    By the way, what the frak is an occlusion probe? I can't find any reference by Unity about it; it's just mentioned in scripts, not explained.
     
  31. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    Occlusion probes are not exposed in the API.
    They are only generated by the built-in lightmappers and are used to fake shadows from lightmapped static geometry over dynamic objects. Without them, dynamic objects illuminated from a mixed light would not receive shadows from fully baked objects (lightmapped).
     
    neoshaman likes this.
  32. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Last edited: Sep 23, 2019
  33. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
  34. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    2,978
    @neoshaman
    Regarding Mali-400 and render passes:
    * There's no Vulkan on Mali-400. The Vulkan feature set has parity with OpenGL ES 3.1; Mali-400 only has ES2.
    * It is, in general, possible to read the pixel contents that are in the framebuffer at the moment. Unity only supports GPUs that have the GL_EXT_shader_framebuffer_fetch extension, here's a list: http://opengles.gpuinfo.org/listreports.php?extension=GL_EXT_shader_framebuffer_fetch (that's for OpenGL, if you run Vulkan it should just work).
    * We are considering adding support for other similar GL extensions for reading the current framebuffer contents.
    * RenderPass API allows you to stay in this fast local memory you mentioned, if our implementation supports it (currently, it's Metal and Vulkan only, we're considering adding OpenGL ES support). If it doesn't, it will automatically fall back to render textures.
    * Even if we add support for the extension that is available on Mali-400, you're limited (a lot!) in what you can do with it - only a single attachment (so no MRT), only a handful of formats supported, and, afaik, no MSAA in this case.
     
    neoshaman likes this.
  35. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    Neither Tegra nor Nvidia Shield is present in this list.
    Does this mean framebuffer fetch is not available on Nintendo Switch? (it would have been REALLY handy :p )
     
  36. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    2,978
    @atomicjoe This is OpenGL ES, not OpenGL :)
    No, I don't think Switch has something like that.
    And it seems that Tegras don't have this extension available (or any other similar counterpart).
     
  37. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    Actually, I'm compiling for Nvidia Shield TV on Android Vulkan, since it's the closest to Switch I can get without a developer kit.
    It's a pity Tegras don't support this, it would have sped things up for refractions :(
     
  38. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I'm very well aware of that; it's like getting blood out of a stone!

    But it made me do some serious thinking outside the box, which is why I focus on it. I'm just salty I won't need the bit comparison without bit operators beyond this, though, lol. All the examples I have seen use an expensive LUT.
    Code (CSharp):
    bitTest = (bitmask / powerOfTwoPosition) % 2
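
    In shader terms on OpenGL ES 2.0, where everything is a float and there is no integer modulo, the same test might look like this (my own hedged sketch):

    // Extract bit `b` (0-based) from a value packed as an integer-valued float.
    float GetBit(float packedValue, float b)
    {
        return floor(fmod(floor(packedValue / exp2(b)), 2.0));
    }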
     
  39. Shane_Michael

    Shane_Michael

    Joined:
    Jul 8, 2013
    Posts:
    158
    Lots of old Tegras supported the Nvidia flavour of the extension:
    http://opengles.gpuinfo.org/listreports.php?extension=GL_NV_shader_framebuffer_fetch

    It has since been deprecated on the X1 when blending was moved to a separate hardware unit. You have access to "NV_blend_equation_advanced" but the framebuffer fetch is gone, unfortunately.
     
    atomicjoe likes this.
  40. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    @aleksandrk Thanks again for the info. I have finally narrowed down the information I needed. Welp, at least I'll profile it: Mali just doesn't like reading the depth. It's already a problem with soft particles, ARM warns in its Unity guide v4, and that's for something applied every frame that can potentially cover a significant part of the screen. :oops:
    https://static.docs.arm.com/100140/0400/arm_guide_for_unity_developers_optimizing_mobile_gaming_graphics_100140_0400_00_en.pdf
    Given that's the latest update, I guess there is no memoryless access ... Or maybe they haven't checked with Unity yet, lol.

    Speaking of extensions, I looked at the list and I'm particularly interested in these, though I'm not sure what technical impact they have yet.
    https://developer.arm.com/ip-produc...mali-gpus/mali-utgard-architecture-extensions
    GL_OES_depth_texture
    GL_OES_depth_texture_cube_map // I probably need that
    GL_OES_depth24 // any 24-bit texture is rumored by ARM to be bandwidth inefficient. IDK
    These last ones gave me pause:
    GL_EXT_multisampled_render_to_texture // seems to hint at memoryless-type behaviour; I don't fully understand its scope yet
    GL_ARM_shader_framebuffer_fetch
    GL_ARM_shader_framebuffer_fetch_depth_stencil
    Looks like they don't have the generic GL extension and have a custom one instead?
    I'm confused lol. That's memoryless! :confused:

    I also found ARM's white paper on shadow maps; maybe it can help, I don't know yet.
    https://community.arm.com/developer...-realtime-shadow-rendering-with-opengl-es-2-0
    It seems to deal with the best usage of depth textures and buffers ...

    I'll probably have to try to implement it myself, if it's possible with the level of access Unity provides. Though I've never made anything in GLSL, I'm trying to catch up now. I'm also trying to get comfortable with SRP ...

    The other uses of depth are less critical, because I want them in an async texture update in a custom render, where the update can be spread over as many frames as needed. I haven't tested on device yet.

    I don't really need MRT as such; I just need some baked depth data, not a direct read of the depth buffer. I can probably compute it with the fragment position baked into the color channels.

    If I can have multiple render textures, not at the same time, I can rotate them to spread the rendering; that's fine. In fact I only need render-to-texture from a camera very infrequently:
    - one to bake the vertex data to UV (position, world normal, albedo, etc ...),
    - one to voxelize, and
    - one to initialize a cubemap.
    The cubemap is the most expensive, as I use it to initialize a 2D octahedral cubemap array by transferring each render to the atlas, but still not every frame, and prioritized relative to where the player is looking. Potentially, in a fixed environment, this can be baked at build time with better data.

    Then I need to chain multiple CustomRenderTextures that don't use camera rendering:
    1. Using the baked LightMapGBuffer (LMGB) data, I compute the direct lighting into another lightmap accumulation texture. Only when light changes.
    2a. Using the normal from the LMGB, I hash a voxel index to find the right octahedral map to sample (the equivalent of a single raycast on the hemisphere above the normal), then I use that sample to find the right pixel to sample in the light accumulation texture (an advanced implementation should probably sample some LMGB data of that point too, to compute a better BRDF response).
    The depth is used to compute the attenuation of each hit, or in an advanced implementation, the mipmap level to simulate a cone (carefully, because of bleeding at UV edges). Then I accumulate into the lightmap texture. I repeat at a low update frequency, spread over many frames, until the entire hemisphere is done, then I start again.
    2b. If we are on a fancy machine, I can pass a list of primitives (how, I don't know, but it was in HDRP lol); since I have the stored positions of the point and the hit, I have a line I can test against, and can therefore compute the contribution and occlusion from dynamic objects. It can probably be done on weak hardware too, with a single sphere.
    3. Then static objects just sample the light accumulation, and dynamic objects box-sample the cubemap to query the light accumulation.

    There are many permutations of that idea by moving some parts around, but the general idea is that it's a lightmap that samples itself using another visibility-address texture of some form. The voxels and cubemaps just allow for runtime updates and are compatible with weak hardware. It's only two samples (fetches) per update in the simpler case (omitting sampling the albedo and getting only intensity, with no BRDF computation of the hit); on GLES 2 that's 4 rays per update per lightmap pixel. It's mostly texture expensive, as adding more precision requires even more and bigger textures (for example using 3D textures instead of a lightmap to decouple from geometry, using the cubemap depth to avoid bleeding by taking the 3D position as an address and sampling on a sphere instead of a hemisphere, adding SH data per pixel to get directionality, or more cubemap captures of only key dynamic objects, depth-tested against the static ones). But then many games and GI solutions already use very low-res lightmaps; Enlighten is at 1 pixel per meter or less. So it might not be a big deal?

    I just haven't seen the final visual result yet (noisy? bleedy? patchy? Especially with the very rough geometric approximation of using box projection and fat voxels in this particular pipeline), as I was waiting to see how Unity changes things around before testing the idea.
     
    JoNax97 likes this.
  41. denwik

    denwik

    Joined:
    Sep 22, 2017
    Posts:
    27
    Can I use Bloom from post processing in a mobile game without eating up the battery?? :)
     
  42. aleksandrk

    aleksandrk

    Unity Technologies

    Joined:
    Jul 3, 2017
    Posts:
    2,978
    @neoshaman
    GL_ARM_shader_framebuffer_fetch and GL_ARM_shader_framebuffer_fetch_depth_stencil can give you access to what was previously written to the framebuffer. You would need to write GLSL code to use that, as Unity doesn't use those extensions at the moment (only the EXT one).
    This is correct, ARM made their own extension and didn't implement the common one (EXT).
    Also, keep in mind that if you write shaders that require those extensions, they obviously won't run anywhere else, just on ARM chips :)
     
    neoshaman likes this.
  43. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    Looking into spot light cookies:

     
  44. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    Is there a way to access the "cullResults" as used by LWRP or URP?
    I need the number of "visible additional lights which cast real-time shadows".

    I can do it in the pixel shader like this:
    float numOfShadowcastingLights = 0;
    for (int i = 0; i < _AdditionalLightsCount.x; i++) {
        #if USE_STRUCTURED_BUFFER_FOR_LIGHT_DATA
            if (_AdditionalShadowsBuffer[i].shadowParams.x > 0.0h) {
        #else
            if (_AdditionalShadowParams[i].x > 0.0h) {
        #endif
                numOfShadowcastingLights++;
            }
    }

    but I would love to just push it from C#.

    @aleksandrk any thoughts about this would be be appreciated.
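
    One hedged possibility (my own sketch, not an official API): a custom ScriptableRenderPass receives the RenderingData, which includes cullResults, so you could count the visible shadow-casting additional lights there and push the number to the shaders from C#:

    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;   // UnityEngine.Rendering.LWRP on older package versions

    class CountShadowLightsPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            int count = 0;
            var visibleLights = renderingData.cullResults.visibleLights;
            for (int i = 0; i < visibleLights.Length; i++)
            {
                if (i == renderingData.lightData.mainLightIndex)
                    continue;                                   // skip the main directional light
                var light = visibleLights[i].light;
                if (light != null && light.shadows != LightShadows.None)
                    count++;
            }
            // Hypothetical global name; read this in the shader instead of looping there.
            Shader.SetGlobalFloat("_NumShadowCastingAdditionalLights", count);
        }
    }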
     
    Last edited: Sep 29, 2019
  45. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    I don't know if that helps, but I read how catlikecoding did it, and he closely mimics LWRP in his custom render pipeline example, with a few extras (all lights cast shadows).

    https://catlikecoding.com/unity/tutorials/scriptable-render-pipeline/spotlight-shadows/
    Part 5 more precisely, and the next tutorials in the series.

    use the code tag!
    Code (CSharp):
    if (light.lightType == LightType.Spot) {
        …
        Light shadowLight = light.light;
        if (shadowLight.shadows != LightShadows.None) {
            shadow.x = shadowLight.shadowStrength;
        }
    }
     
  46. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    thanks for the tip: it is always worth having a look into catlikecoding's stuff.
    but actually i do not want to write a custom rp but use lwrp or urp.
    so an api or public variable would be handy.
     
  47. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Well, using the info from catlikecoding I have parsed the code, and they only ever seem to have a bool to set shadows on additional lights; they don't seem to have a count, and the light setup only activates that bool based on the detected type (no point light shadows, no additional directional since there is the main light).

    In Universal's RenderPipelineCore.cs you can see all the data laid out simply, but I looked in the forward renderer and UniversalRenderPipeline, and I haven't seen anything that discriminates, so the number seems to be constant and the same as the defines: 4 for GLES 2.0 due to the lack of bitmasks and int arrays (I guess to have a bit LUT, YAGNI!!!), and 8 for everything else per object, and then 32 to 256 for the entire renderer (the UBO vs SSBO stuff). All the documentation references 1 directional and 4 spot shadows; it doesn't mention anything about a "per light shadow" setting. Since they use a shared texture and a loop, it's probably to avoid more branches, also because they shade in screen space (potentially)?

    In UniversalPBRSubShader.cs, they define the keywords and interface for the shader; _ADDITIONAL_LIGHT_SHADOWS is just a bool keyword, so that's probably what you need to check? Shadows.hlsl obviously manages shadows, so maybe it's there? Every additional shadow is either in a structured buffer or in an array, like _AdditionalShadowParams, and it seems to only encode shadow strength and softness. Looking at the shaders, most of them deal with the shadowcaster pass, so nothing important. Within Lighting.hlsl, the segment triggered by _ADDITIONAL_LIGHTS just does a loop based on the total light count, and what's passed is basically just the position, direction and attenuation.

    I would say that, if you want to detect which spotlights (aka additional lights, since that's the difference with the main light) cast shadows, it's a global bool. I don't see anything else, short of coding a custom SRP.

    Maybe I'm wrong and the devs will answer properly.
     
  48. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,893
    Somewhere the number has to be taken into account, because if you only have one shadow-casting light, its depth texture will fill the entire shadow map, whereas if you have multiple, their depth textures will be atlased.
     
  49. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    It's PROBABLY the number of CULLED lights, not which ones cast shadows; once they are culled, the texture adjusts to what remains. I was basically saying there is a fixed limit on the number of lights in "a scene" (32 or 256) and "per object" (4 or 8). I see no other reference in the code.

    Catlike does the same thing.
     
  50. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,492
    Okay, I have a question too: how much control do we have over culling? It seems we only get the culling results.

    Let's say I want to render a dungeon; I want to specifically:
    - group elements per room/tile,
    - sort room/tile groups front to back.

    When rendering a room/tile group:
    - render dynamic objects front to back (rooms being generally convex, the dynamic objects most likely stand in the middle, covering everything at the edges),
    - render decorations (they tend to accumulate on edges, like walls, etc ...),
    - render the room geometry (covers most space, most likely to be covered by everything),
    - render the next room.

    I'm not sure how to do that currently with any SRP.