
Any Runtime/Dynamic Baking Options - or are any coming?

Discussion in 'Global Illumination' started by Arthur-LVGameDev, Feb 4, 2020.

  1. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    Hi,

    I apologize in advance for starting a thread with such a seemingly broad question, but I've searched high & low trying to find good (even just workable) solutions and I'm at the point where I feel like I might just be missing something.

    Some Context
    We're building a fully-procedural management/tycoon style game in 3D. Players can freely build and destroy structures, change flooring and wall paint colors, build multiple stories, place objects, and the like -- and AI agents/characters then interact with the built structure. Our camera is relatively "freely controlled" by the player and can tilt / orbit / translate. During gameplay, the player will typically be viewing around and into their structure from an angle; the structures have roofs, but we hide them for the "current floor" so that the player can see inside to build and tweak the layout to be more efficient and better suited to the AI, etc.

    The problem(s):
    Almost everything is good -- except for lighting. We can render thousands of high-variation agents and objects via DrawMeshInstancedIndirect and ComputeBuffers, we can allow the player to build massive structures, etc... But the lighting continues to be a struggle!

    Our current setup is described below, but I can't help feeling there's got to be a better way to do this and to achieve a better end result as far as lighting/GI goes. I'm at a loss, though, because of our inability to "bake the scene" -- everything is done procedurally (structures are runtime-created meshes/quads, objects are placed by the user, etc.).

    Our Lighting Setup
    • Unity "Built-in Renderer" (legacy) via 2019.2.x latest.
    • Deferred rendering path.
    • 1 Directional Light shadow caster; provides time-of-day & exterior sun shadows.
    • Ambient mode set to 'Color' & medium gray; avoids over-exposing the outdoors at night & avoids always over-exposing the indoors.
    • 1 Directional Light, shadow caster, cull-masked to ONLY impact the indoors [objects, floors], aimed directly "down". Provides shadows for indoor objects and basic light (does not hit roof, does hit floor).
    • Structure [wall] meshes receive only exterior light (interiors, including walls, are blocked/shadowed by the roof); since the interior walls receive no light from the "directly-down" facing interior Directional light, we have added a custom shader that takes an HDR "_InteriorAmbientColor" (essentially an intensity multiplier) so that we can make the interior wall brightness roughly match the floor/object/agent light intensity.
    • Post-effects: AO and slight tweaks to Color Grade.
    This all works. And it works OK. But it somehow feels somewhere between deficient and suboptimal. We have user-placeable lights for indoors as well (point lights placed just under the roof with relatively low range & intensity), though they're pretty much exclusively for mood/aesthetic/effect; that works OK on mid-tier hardware and fine on high-end hardware -- players can place quite a few of them thanks to Deferred rendering.
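
    For reference, here's a minimal sketch of roughly what the setup above looks like in code -- the layer names, values, and the "_InteriorAmbientColor" usage here are illustrative assumptions, not our actual code:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Rough sketch of the setup described above: flat gray ambient, an exterior
    // sun, a straight-down indoor fill light cull-masked to indoor geometry, and
    // the custom shader's extra HDR "_InteriorAmbientColor" term on the walls.
    public class ProceduralLightingSetup : MonoBehaviour
    {
        public Light sun;            // time-of-day + exterior shadows
        public Light indoorFill;     // shadow caster for indoor objects/floors
        public Material wallMaterial;

        void Start()
        {
            RenderSettings.ambientMode = AmbientMode.Flat;                 // 'Color' ambient mode
            RenderSettings.ambientLight = new Color(0.5f, 0.5f, 0.5f);     // medium gray

            indoorFill.transform.rotation = Quaternion.Euler(90f, 0f, 0f); // aimed straight down
            indoorFill.cullingMask = LayerMask.GetMask("IndoorObjects", "IndoorFloors"); // placeholder layer names

            // HDR intensity multiplier so interior walls roughly match floor/object brightness.
            wallMaterial.SetColor("_InteriorAmbientColor", Color.gray * 1.2f);
        }
    }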

    Must be a Better Way?!
    Still, I can't help but feel that there must be a better way! Our geometry is nearly static, at least for long stretches of time; the player may build 20 new things, and then nothing again for 10 minutes or more. Surely there's some way for us to effectively bake/'cache' shadowmap/lightmap data at runtime.

    I've looked into this quite a bit, have checked out and played with SEGI and a few others, but really haven't come across anything that seems geared toward solving this problem. To be clear, there are two overall classes of problem that I feel we face:
    1. Seemingly no support/solutions for runtime lighting in fully procedurally-generated games -- at least beyond simply using purely dynamic lights or hacking in extra "ambient" colors via custom shaders. It seems that all of the "lighting goodies" (the 'goodness' of which seems debatable at times) are limited to "in the scene" projects, or at least to projects where multiple prefabs are assembled to create the "procedural geometry" (e.g. via lightmap stitching).

    2. Inability to have "multiple different" lighting setups simultaneously -- for instance, with the camera positioned above and angled towards an open-roof structure while the outdoor areas remain visible as well. We're able to make it work, but man would it save a ton of time if we could have two distinct lighting setups (via two scenes, for instance) and then use LoadAdditive() to bring them in. Then, instead of using a custom surface shader to handle a "second ambient" setup, we'd natively have each scene's ambient configured appropriately ('indoor' scene and 'outdoor' scene) and use the scenes 'separately'. This may not be the optimal solution and definitely isn't the only one -- it's just an idea (a rough sketch of it follows just after this list).

      Bonus points: Within a single DrawMeshInstancedIndirect() call [that we might use to render 1k agents, for instance] -- is there a way to change (via compute or surface shader) which of the 4 culling bits are flipped, or is it too late at that point? I've read through the Deferred shading code, though it is admittedly difficult to follow; if we could dynamically change agents between "indoor" vs "outdoor" lighting without a second DrawMeshInstancedIndirect() call, it'd save us a lot of CPU time and halve our draw calls. Worst case we'll likely hack around it with a custom shader and a slightly brighter actual ambient color so they don't stand out too much.
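
    Here's the minimal sketch of the two-scene idea from point 2, using the current SceneManager API in place of the old LoadAdditive(); the scene names are placeholders, and note that only the active scene's lighting/ambient settings apply at any one time, so this would effectively switch between the setups rather than run both at once:

    Code (CSharp):
    using System.Collections;
    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Hypothetical 'lighting scene' switcher: load both lighting scenes
    // additively, then change which one is active to swap its lighting settings in.
    public class LightingSceneSwitcher : MonoBehaviour
    {
        IEnumerator Start()
        {
            yield return SceneManager.LoadSceneAsync("IndoorLighting", LoadSceneMode.Additive);
            yield return SceneManager.LoadSceneAsync("OutdoorLighting", LoadSceneMode.Additive);
        }

        public void UseIndoorLighting(bool indoor)
        {
            string sceneName = indoor ? "IndoorLighting" : "OutdoorLighting";
            SceneManager.SetActiveScene(SceneManager.GetSceneByName(sceneName));
        }
    }
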
    All that novel written -- am I missing something? Better ideas or solutions? I would be incredibly appreciative of any ideas / thoughts / guidance on either of these, and really even just overall on lighting these types of fully procedural / runtime-generated games.

    Thank you!!
     
  2. rasmusn


    Unity Technologies

    Joined:
    Nov 23, 2017
    Posts:
    50
    Thanks for your comprehensive description. The fully procedural use-case is something that has been on our radar for some time, but currently we have no good solution for it. The reason for this is 1) that we have to balance our resources across many other tasks and 2) that this is a difficult problem.

    This is not to say that it cannot be solved - partially or completely. As you say, there "must be a better way". Theoretically, it is of course possible to bake progressively at runtime and fade in the results. Originally, we wanted the progressive lightmapper to be able to do just that. But as is often the case, the devil is in the details, and during development we had to make the difficult call to limit the progressive lightmapper to be editor-only (i.e. not available at runtime). We still want to enable runtime bakes eventually, but this will not be possible anytime soon I'm afraid.

    We are currently reworking our light probe solution. One thing we are considering in this regard is to enable a volume of probes to have several "lighting treatments" or "lighting setups". Then at runtime, the user can programmatically interpolate (think day/night cycle) or switch between these treatments. This sounds a bit similar to your use-case, but this would probably only be one piece of your puzzle.

    We are also working on a replacement for Enlighten, which has traditionally been our realtime GI offering. This new solution will probably (I cannot promise anything currently) support some limited form of dynamic geometry, but it probably won't scale to your case of a completely procedural level. (It will of course support dynamic lights, just like Enlighten.) But it is at least something to be aware of.

    So summing up, I am afraid I cannot give you the answer you hoped for. Realtime GI in a completely dynamic scene is a difficult problem. There are solutions (such as SEGI) that are acceptable for some use-cases, but usually these solutions come with trade-offs. Maybe they only support dynamic lights and no dynamic geometry, or maybe they do not scale beyond small scenes. For Unity we need something that is general because of the wide variety of use-cases that we must support. This requirement for generality makes the problem particularly difficult (compared to solving a more specific use-case).

    By the way, you should check out the game Claybook if you haven't already. The people behind it have published some technical videos about their solution.
     
    Last edited: Feb 5, 2020
  3. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    Thank you very much -- really appreciate the super prompt and thorough response. While it's not the answer I perhaps want to hear, it at least confirms that I'm not blatantly missing anything.

    I definitely understand, especially regarding the need to develop with a "generalist" mindset / audience in mind. That said, I'd far prefer to handle "higher-level" stuff like NavMesh ourselves (just an example, and not even a great one, though it is something we inevitably end up handling ourselves regardless) and be able to lean on the engine more for some of this "lower-level" (read: hard) stuff. I definitely understand though, and I'm sure the overall split of user/customer profiles quite easily justifies the approaches being taken on the whole. :)

    Regarding Enlighten:
    I've read up on it a decent bit, with the understanding that it's being deprecated in favor of the in-house progressive stuff you guys are working on. That said, my understanding is that even if it weren't being deprecated, it still only bakes indirect light/GI, and all [possibly only non-Directional] direct lighting continues to be fully dynamic -- is that correct?

    It'd be understandable if it only baked the indirect, and that's probably the majority use-case, though (as you probably fully grok) what I'm really after is more along the lines of [low-ish quality] runtime baking of all lighting, even direct light, to basically open the flood-gates on user/player "built" (placed) lighting. The theory being we'd render a new light dynamically/"additively" and then, as you said, fade it in once the baking was finished -- and for our game we could probably even get away with lights simply not "flicking on" for 10+ seconds of real time, etc... :)

    To summarize:
    I'm not missing anything, and it sounds like our best bet is basically to use >=1 "workarounds" to pseudo-fake it (e.g. the 1-2 directional lights we're using now, and/or potentially going all out with a secondary "_IndoorAmbienceColor" and looking towards fake/blob-style shadows even), and beyond that just allowing players to place/build low-range, non-shadow-casting dynamic lights.

    I watched the video BTW, and while I'm intrigued I also think it's probably at least a bit beyond our time/budget constraints (not to be confused with performance/frame-time budgets, which it appears to do exceptionally well on). Especially since, in our situation, it primarily amounts to environmental/aesthetic value, at least once we've achieved a minimal level of "light" capability. Not to discount aesthetics too much, obviously! ;) I've also seen the lightmap stitching stuff that some larger studios have used for their procedurally-generated games, though it doesn't really suit our needs all that well from what I can tell / seems better suited to automated procedural generation vs player/user-controlled building.

    Anyways, thank you again for taking the time to respond -- it's actually quite helpful just to know we should continue down our current path without continually looking over our shoulder wondering if we're missing something blatantly obvious. =D

    PS -- Bonus Question
    I probably should start a new topic for this, as it's more technical/specific, but if you happen to know: is there a way for me to "switch layers" [flip the bits of a DrawMeshInstancedIndirect() instance] within the shader [compute or surface or vert/frag], for the purpose of switching which "lighting scheme" it receives? Or is it "too late" in the rendering pipeline by the time I'd be able to flip those 4 bits as needed, based on data from a ComputeBuffer?

    Situation Context: Our characters use ~12 models (so 12 DrawMeshInstancedIndirect calls) and we use buffers to animate them, swap textures, and colorize them; if an agent walks from "indoors" to "outdoors", I need the agent to begin getting outdoor light/shadows. InstancedIndirect takes the layer as an arg for all instances drawn IIRC -- so it means I have to double my calls if I can't change it at "render-time". Doubling the calls isn't the worst part really; it's updating / "moving" the agent's C#-side render data from one of our InstancedIndirect "structures" over to another, and doing so in a way that isn't poorly-performing. If I could flip the bit in a shader, it'd be a relatively massive win, all things considered. =D
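
    Rough illustration of the constraint (not our actual code): the layer used for light culling is a single per-call argument to Graphics.DrawMeshInstancedIndirect, so "indoor" vs "outdoor" agents currently mean two calls with two args/instance buffers -- the layer numbers and buffer setup below are placeholders:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // One layer per call -- per-instance layer switching isn't exposed here,
    // hence the doubled calls (and the doubled C#-side bookkeeping).
    public class AgentDrawSplit : MonoBehaviour
    {
        public Mesh agentMesh;
        public Material agentMaterial;
        public ComputeBuffer indoorArgs;    // filled elsewhere with instance counts
        public ComputeBuffer outdoorArgs;
        const int IndoorLayer = 8, OutdoorLayer = 9;   // placeholder layer indices

        void Update()
        {
            var bounds = new Bounds(Vector3.zero, Vector3.one * 2000f);

            Graphics.DrawMeshInstancedIndirect(agentMesh, 0, agentMaterial, bounds,
                indoorArgs, 0, null, ShadowCastingMode.On, true, IndoorLayer);
            Graphics.DrawMeshInstancedIndirect(agentMesh, 0, agentMaterial, bounds,
                outdoorArgs, 0, null, ShadowCastingMode.On, true, OutdoorLayer);
        }
    }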
     
  4. rasmusn


    Unity Technologies

    Joined:
    Nov 23, 2017
    Posts:
    50
    I think there may be a few misunderstandings here:
    1. In Unity you can mark a light as Realtime, Mixed, or Baked. "Realtime" means that the light is not included in a bake. "Baked" means that all light, i.e. direct and indirect, is baked. "Mixed" means that only indirect light is baked and that direct light is added in the shader at runtime. This concept of Lighting Modes is orthogonal to whether you are using Enlighten or the Progressive Lightmapper.
    2. In Unity you can use Enlighten either for baking light or for realtime GI. The new Progressive Lightmapper is only for baking lights. In the Lighting window you can choose either Progressive Lightmapper or Enlighten as your "Bake Backend".
    In light of these clarifications I hope you see that whether or not a bake is limited to indirect light only does not depend on whether or not you use Enlighten. It only depends on which light modes you have set on your lights.

    Yes, of course. I think the video gives you an idea of how difficult this is to realize.

    Hmm, I don't know about that. To me it doesn't sound that expensive to move an agent from one struct to another. This of course assumes that you only have a few (say, fewer than ~100) agents switching structs each frame. But maybe that's because I don't understand the full picture.

    For a more in-depth answer I recommend you check out the General Graphics forum.
     
  5. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    @rasmusn -- Thank you again.

    I think that I mostly understood this, but honestly that is far-and-away the most succinct way of putting it, and I now fully understand that it is indeed orthogonal. The overall terminology, which I'm admittedly fairly new to after working exclusively in 2D for my first ~5 years with Unity, can be difficult to decipher at times -- even just deciding whether to post this question in the "Global Illumination" forum (vs Graphics) was a question mark in my mind. ;)

    I fully understand Mixed vs Baked now (thank you), though it's really somewhat tangential to our "issue" in that what we're really seeking is the ability to use "Baked" lights -- albeit at very low quality, on technically non-static geometry, and controlled/updated via API at runtime. Probably quite the ask, likely worthy of a chuckle somewhere, I'm sure! =D

    To clarify one point, even if it's tangential to our project, for the sake of my understanding/for the future: the 'Realtime GI' (which is powered by the Enlighten backend) -- is it still only taking into consideration lights that are set to mode 'Mixed', and does the 'realtime' aspect of it presumably only evaluate against (i.e. catch light bounces off of) geometry that is marked as "Lightmap Static" -- is that correct?

    It's not that expensive -- but it isn't free, either. It's not a ton of data to "move", but what ends up happening is that it not only doubles the count of InstancedIndirect calls (and some of the buffers, etc.) but also means each InstancedIndirect call then handles ~half of the instances it did previously / each call becomes sparser. Our "split" between indoor & outdoor agents is not actually half-and-half, so in practice we end up with 10+ extra InstancedIndirect calls that are only shipping 10% of the instance count of the "indoor" call.

    It's just quite a bit of extra overhead (ComputeShader driven animations, etc.) given that the sole purpose is to flip a bit, and really it probably makes more sense for us to just ship an extra float in and "darken/lighten" the agents if they walk outdoors & not worry about the structure's shadow, even if it looks a little bit off.

    I really appreciate your taking the time to provide super knowledgeable & insightful answers -- it seems like more time/attention is being paid to user questions on the forums in general, and I sincerely appreciate that. I do try pretty hard to avoid [ab]using it by asking dumb/obvious questions, so I will do a bit more research before posting over there, but will do so if I can't find a better route in the meantime. :)

    Thank you again!! :)
     
  6. rasmusn


    Unity Technologies

    Joined:
    Nov 23, 2017
    Posts:
    50
    You are almost correct. When you set a light's mode to "Mixed", Unity will assume that the light cannot move or change. Therefore, there's no need to do any realtime GI (thus no need to use realtime Enlighten). Instead, Unity will bake the indirect portion of the light (into lightmaps and/or light probes). It may perform this baking using Enlighten or the Progressive Lightmapper depending on your project's settings.

    Remember that Enlighten can be used for realtime GI and for baking (depending on your project's settings). Only for lights with mode "Realtime" will Enlighten be used for realtime GI. I understand why this can be confusing. I personally think all these settings are part of the downside of trying to support many different use-cases. We could provide simpler settings, but then we would have to give up supporting a wide variety of use-cases. It's a trade-off.

    Thanks for mentioning this. I will forward your message to our team :)
     
  7. uy3d


    Unity Technologies

    Joined:
    Aug 16, 2016
    Posts:
    129
    There could be something worth considering, depending on the type of game you're building. What Unity allows you to do is update the SH coefficients of light probes at run time. However, you cannot update the positions of probes. What this means is that if you know beforehand the size of a scene, you could flood fill it with probes in the editor and bake them, so the positions and tetrahedralization are precalculated and saved with the scene. By default you could just bake the ambient probe into all probes.

    Now, while the game is running, you could for example pick a probe position, render a low-res cubemap from that probe, convert it into an SH representation, and then update the probe with the new values. The logic for which probes to pick, drawing the cubemap, time slicing, etc. would all have to be handled by you. But once the probe is updated, all dynamic objects would pick up indirect lighting from the updated probes. You'd still potentially have to deal with the usual issues like light leakage, which you wouldn't be able to fix so easily, as the probe positions can no longer be modified at runtime.
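
    A minimal sketch of that flow, assuming probes were already baked in the editor at fixed positions -- the cubemap-to-SH conversion is left as a placeholder (a flat ambient term stands in for the real projection), and the capture camera / probe index are illustrative:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class RuntimeProbeRelight : MonoBehaviour
    {
        public Camera captureCamera;   // disabled camera used only for captures
        public int probeIndex;         // which baked probe to overwrite (placeholder)
        Cubemap capture;

        void Start()
        {
            capture = new Cubemap(16, TextureFormat.RGBA32, false);
        }

        public void RelightProbe(Vector3 probePosition, Color approximateBounce)
        {
            captureCamera.transform.position = probePosition;
            captureCamera.RenderToCubemap(capture);   // low-res capture from the probe position

            // Real code would project 'capture' onto the SH basis here; as a
            // stand-in we just write a flat ambient term into the probe.
            var sh = new SphericalHarmonicsL2();
            sh.AddAmbientLight(approximateBounce);

            // bakedProbes must be read, modified, and assigned back as a whole array.
            SphericalHarmonicsL2[] probes = LightmapSettings.lightProbes.bakedProbes;
            probes[probeIndex] = sh;
            LightmapSettings.lightProbes.bakedProbes = probes;
        }
    }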
     
    rasmusn likes this.
  8. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    Thank you @uy3d!

    I put together a super-quick test scene for this, obviously skipping over some of (read: the entirety of) the SH math, my goal primarily being to quickly see code have an immediate impact on indirect light -- and I was able to get something going that is pretty intriguing and could potentially be the long-term solution.

    I've got to brush up on light probes, but it does sound like this fits our use-case -- we do indeed know what "area" the player will have in front of them / be building upon. It looks like probes can be re-tetrahedralized at runtime, which would account for walls coming/going and similar occasional changes to light-blocking without needing to move probes at all. Going to look closer at this route for sure; it really does seem to fit our use-case incredibly well given that we know precisely when/where changes occur.

    Out of curiosity, why isn't this the "default" (or even standard/built-in) approach? Is there any issue or scalability concern, or some similar gotcha perhaps? Beyond the math, it almost seems... "too easy" heh. I may quickly know the answer to that when I try to throw 30-60k of them down though, hah. =D

    Again, not very familiar with probes so maybe extracting the output for use by DrawMeshInstancedIndirect is more difficult than it appears, though presumably it is (or can be) wired up the same as the standard shader is.

    Thank you again -- while our "extra ambients" is an okay solution in appearance, it's not awesome or overly impressive as far as modern expectations are concerned. This could very well change that, or at least make a dent in it! :)

    Edit: The "re-tetrahedralization" I mentioned above isn't actually needed, I think; while it does appear to be available at runtime, I doubt it would acknowledge updated connective geometry. We'd just need to "cut off the light flow" between two walls by setting the colors appropriately on either side of a newly-placed wall, as best I can tell. Still actively exploring this option, but it appears to be spot-on -- it's essentially "fully procedural localized indirect light" as far as I can tell so far. Still learning & testing, though! Ty!!!
     
    Last edited: Feb 13, 2020 at 6:41 PM
  9. uy3d


    Unity Technologies

    Joined:
    Aug 16, 2016
    Posts:
    129
    Be aware that re-tetrahedralization is used to merge probesets from additively loaded scenes. It does not re-evaluate the position of objects around the probes. If your probe density is too low and someone puts a wall between two probes, you'll get light leaking in from the outside. The current implementation of probes doesn't have any mechanism to handle proper occlusion of probes that would address this issue.
     
  10. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    Yeah, I've realized this is the main obstacle -- that, and figuring out whether it can scale well enough to cope with the higher probe density that will be required as a result.

    From your mention, it sounded like probe positions could previously be altered at runtime? Now that would truly have made it too easy! ;)

    I haven't had a chance to tackle this directly yet; we had to make some changes to be able to bake probe positions and not be moving things around -- essentially inverting which parts we "do vs. don't move" during scene setup. A commit just went up that addresses that, so we'll be diving more directly into this over the next few days -- will report back and update you on how it goes (and hopefully not ask dumb questions). Fingers crossed!

    Tyvm -- am fairly optimistic!!
     
  11. Arthur-LVGameDev


    Joined:
    Mar 14, 2016
    Posts:
    85
    Alright, quick update and a couple of quick questions if I may. :)

    The Good News: I've got a working and "full" proof-of-concept going for this! There may be some quirks with the SH calculations, but it is fully functional and works pretty much exactly as you said. It's pretty clear that it would allow us to really control the lighting, and it likely has the highest "ceiling" as far as how great & "alive" we could make things look.

    The Bad News: It'll need to get a bit faster to truly be viable. We'll need to shed about 80ms from each cubemap capture, haha (I laughed, though the humor went over my wife's head). ;)

    Brief Additional Context: I haven't optimized the cubemap capture at all beyond dropping the resolution super low. There is probably substantial gain to be had via some very basic culling for the cubemap render and similar "easy" wins. I'll have to look at the Frame Debugger to see what other "big" wins may exist, though.

    Also, quick/rough diagram of my general plan WRT probe placement (note: outside corners are suspect/may not be able to share). I think that'll work and reduce worst-case time complexity substantially; less clear are the internal implications of "4X" probes, though. ;)

    Couple Qs [towards gaining ~80ms :D]:
    Note: Questions updated/edited on Feb 16th:

    1) When setting directly into SphericalHarmonicsL2 with the [,] setter -- what's happening after that / what's picking the data up? Is it being pushed to the GPU in a buffer or similar, or is it generally being used/eval'd on the CPU?

    I'm wondering if I can keep the entirety of the operation on the GPU: CubeMap render => compute shader to sample & calc SH => data "set". (I did check the C# source and couldn't find an answer, though didn't dig through the light shaders.)

    Worst case I can probably calc the SH coefficients in a ComputeShader & readback from a buffer; but if it's getting shipped to the GPU immediately anyways then it'd be great to not handle/readback the data on the CPU.

    2) If the data is needed back on the CPU side, then what's my best bet for the cubemap render/sync? I'm kind of guessing I should look towards AsyncGPUReadback, which I have some limited experience with (for a prior 'timelapse' feature). Is there a similar existing API specifically for cubemaps, or perhaps a better solution altogether? I'm sure I can speed it up quite a bit as-is, though I suspect there are far faster ways to capture cubemaps than the standard 'RenderCubemap' methods, and async readback likely makes a lot of sense if the data does indeed have to end up on the CPU side.

    Simple cull-layer selection and switching to the RT-argument variant of cam.RenderToCubemap(), alongside a Graphics.Copy, largely mitigates my issue, and I can additionally amortize the faces if needed. Async readback would probably still make sense, though in an ideal world we'd compute the SH coefficients entirely on the GPU (a rough sketch of the readback route follows after these questions).

    3) More generally, I'm wondering if your original intent/recommendation was to use the existing "AddLight" API vs calculating the SH coefficients ourselves. I didn't profile the AddLight API, just assumed I'd need to calculate the coefficients though it now looks like I probably could just use existing API.
    I think this was a dumb question. ;)

    4) Bonus: Is a runtime probe API on the roadmap at all for the future? Especially (or even only!) a runtime-friendly API for managing probe adjacencies -- that'd be pure gold! =D
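
    (For reference, here's the rough sketch of the readback route from questions 1/2, assuming the data does have to come back to the CPU: render into a cube RenderTexture, read it back asynchronously, and write coefficients through the SphericalHarmonicsL2 [,] indexer before assigning the whole bakedProbes array back. ProjectFaceOntoSH is a placeholder for the actual SH projection; sizes and names are assumptions.)

    Code (CSharp):
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;

    public class AsyncProbeCapture : MonoBehaviour
    {
        public Camera captureCamera;
        public int probeIndex;      // placeholder
        RenderTexture cubeRT;

        void Start()
        {
            cubeRT = new RenderTexture(16, 16, 16, RenderTextureFormat.ARGB32);
            cubeRT.dimension = TextureDimension.Cube;
            cubeRT.Create();
        }

        public void Capture(Vector3 probePosition)
        {
            captureCamera.transform.position = probePosition;
            captureCamera.RenderToCubemap(cubeRT, 63);   // 63 = all six faces; could be amortized

            AsyncGPUReadback.Request(cubeRT, 0, request =>
            {
                if (request.hasError) return;

                var sh = new SphericalHarmonicsL2();
                for (int face = 0; face < 6; face++)
                    ProjectFaceOntoSH(request.GetData<Color32>(face), ref sh);

                SphericalHarmonicsL2[] probes = LightmapSettings.lightProbes.bakedProbes;
                probes[probeIndex] = sh;
                LightmapSettings.lightProbes.bakedProbes = probes;
            });
        }

        // Placeholder projection: only accumulates the L0 (ambient) term per channel.
        static void ProjectFaceOntoSH(NativeArray<Color32> pixels, ref SphericalHarmonicsL2 sh)
        {
            Color sum = Color.black;
            for (int i = 0; i < pixels.Length; i++) sum += (Color)pixels[i];
            sum /= pixels.Length * 6f;
            for (int rgb = 0; rgb < 3; rgb++)
                sh[rgb, 0] += sum[rgb];   // [channel, coefficient]; coefficient 0 is the DC term
        }
    }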

    Unsolicited 2-cents:
    This is actually a pretty awesome system overall, assuming the scalability unknowns are surmountable, as they appear to be so far. Part of me doesn't want to say this, as it's a competitive field where you need to "earn" it -- but I'm also always happy when I can find answers online, so, IMHO... This capability would warrant documenting in a "how-to"/use-case style, and in an ideal world Unity might consider adding one method to really improve developer usability/UX -- LightProbe.CalculateCoefficients(RT cubemap, v3[] results) -- I sincerely think doing so would mean a very usable "fully dynamic runtime GI" solution would become immediately accessible to substantially more developers/teams.

    Thank you! :)

    Seriously, really appreciate the idea and the guidance, will continue to report back (and will try to sleep on potentially-dumb Qs before posting them) -- but I think it has the potential to turn out really great, far exceeding our original [and fairly-low] expectations for our game's lighting! Thank you!!
     
    Last edited: Feb 16, 2020 at 11:49 PM