Hi, I apologize in advance for starting a thread with such a seemingly broad question, but I've searched high and low trying to find good (or even just workable) solutions, and I'm at the point where I feel like I might just be missing something.

Some Context

We're building a fully procedural management/tycoon-style game in 3D. Players are able to freely build and destroy structures, change flooring and wall paint colors, build multiple stories, place objects, and the like -- and AI agents/characters then interact with the built structure. Our camera is relatively "freely controlled" by the player: it can tilt, orbit, and translate. During gameplay, the player will typically be viewing around and into their structure from an angle. The structures have roofs, but we hide them for the "current floor" so that the player may see into the structure to build and tweak it to be more efficient and better suited to the AI, etc.

The problem(s)

Most everything is good -- except for lighting. We can render thousands of high-variation agents and objects via DrawMeshInstancedIndirect and ComputeBuffers, can allow the player to build massive structures, etc. But the lighting -- it continues to be a struggle! Our setup as of right now is described below, but I can't help but feel like there's got to be some better way to do this, and to achieve a better end result as far as lighting/GI goes. I'm at a loss because of our inability to "bake the scene" -- everything is done procedurally (structures are runtime-created meshes/quads, objects are placed by the user, etc.).

Our Lighting Setup

- Unity "Built-in Renderer" (legacy) via 2019.2.x latest.
- Deferred rendering path.
- 1 Directional Light shadow caster; provides time-of-day and exterior sun shadows.
- Ambient mode set to 'Color', medium gray; avoids over-exposing the outdoors at night and avoids always over-exposing the indoors.
- 1 Directional Light, shadow caster, cull-masked to ONLY affect the indoors (objects, floors), aimed directly "down". Provides shadows for indoor objects and basic light (does not hit the roof, does hit the floor).
- Structure (wall) meshes receive only exterior light (interiors, including walls, are blocked/shadowed by the roof). Since the interior walls receive no light from the directly-down-facing interior Directional Light, we have added a custom shader that takes an HDR "_InteriorAmbientColor" -- essentially an intensity multiplier -- so that we can make the interior wall brightness roughly match the floor/object/agent light intensity.
- Post-effects: AO and slight tweaks to color grading.

This all works. And it works OK. But it somehow feels somewhere between deficient and suboptimal.

We have user-placeable lights for indoors as well (point lights placed just under the roof with relatively low range and intensity), though they're pretty much exclusively for mood/aesthetic/effect. That works OK too on mid-tier hardware, and fine on high-end hardware -- players can place quite a few of them thanks to deferred rendering.

Must be a Better Way?!

Still, I can't help but feel that there must be a better way! Our geometry is nearly static, at least for long stretches of time; the player may build 20 new things, and then nothing again for 10 minutes or more. Surely there's some way for us to effectively bake/cache shadowmap/lightmap data at runtime. I've looked into this quite a bit -- have checked out and played with SEGI and a few others -- but really haven't come across anything that seems geared toward solving this problem.

To be clear, there are two overall classes of problem that I feel we face:

Seemingly no support/solutions for fully procedurally-generated games / runtime lighting support -- at least beyond simply using purely dynamic lights or hacking in extra "ambient" colors via custom shaders.
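For concreteness, the "extra ambient via custom shader" hack boils down to something like the surface shader below. This is stripped way down and the body is illustrative rather than our exact shader -- the real one has more going on -- but the "_InteriorAmbientColor" property and the emission-based fake ambient are the core of it:

```shaderlab
// Stripped-down sketch of the interior-wall ambient hack (Built-in RP).
// _InteriorAmbientColor is the HDR tint/intensity we expose; the rest is
// a plain surface shader.
Shader "Custom/InteriorWallSketch"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
        [HDR] _InteriorAmbientColor ("Interior Ambient", Color) = (0.5, 0.5, 0.5, 1)
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Standard fullforwardshadows

        sampler2D _MainTex;
        fixed4 _InteriorAmbientColor;

        struct Input { float2 uv_MainTex; };

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            fixed4 c = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            // Fake ambient: add the interior ambient term via emission so
            // roof-shadowed interior walls aren't pitch black.
            o.Emission = c.rgb * _InteriorAmbientColor.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```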
It seems that all of the "lighting goodies" (the 'goodness' of which seems debatable at times) are limited to "in the scene" projects, or at least projects where multiple prefabs are assembled to create the "procedural" geometry (e.g., via lightmap stitching).

Inability to have multiple different lighting setups simultaneously -- for instance, with a camera positioned above and angled toward an open-roof structure, while the outdoor areas remain visible as well. We're able to make it work, but man, would it save a ton of time if we could do something like have two distinct lighting setups (via two scenes, for instance), and then use LoadAdditive() to bring them in. Then, instead of using a custom surface shader to handle a "second ambient" setup, we'd be able to natively have each scene's ambient configured appropriately ('indoor' scene and 'outdoor' scene) and use the scenes separately. This may not be the optimal solution and definitely isn't the only one -- it's just an idea.

Bonus points

Within a single DrawMeshInstancedIndirect() call (which we might use to render 1k agents, for instance): is there a way to change, via compute or surface shader, which of the 4 culling bits are flipped, or is it too late at that point? I've read through the deferred shading code, though it is admittedly difficult to follow. If we could dynamically change agents between "indoor" and "outdoor" lighting without a second DrawMeshInstancedIndirect() call, it'd save us a lot of CPU time and halve our draw calls. Worst case, we'll likely hack around it with a custom shader and a slightly-brighter actual ambient color so they don't stand out too much.

All that novel written -- am I missing something? Better ideas or solutions? I'd be incredibly appreciative of any ideas/thoughts/guidance on either of these, and really even just overall on lighting these types of fully runtime-generated, procedural games. Thank you!!
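Edit: to clarify the "worst case" workaround I mentioned for the bonus question -- since (as far as I can tell) the layer/culling mask is fixed for the whole draw, the idea would be to pack an indoor/outdoor flag into the per-instance ComputeBuffer and blend between two "ambient" colors in the surface shader. Sketch below; all the names (AgentData, _AgentBuffer, _IndoorAmbient, etc.) are just placeholders, not real Unity API:

```hlsl
// CGPROGRAM portion of a procedural-instancing surface shader used with
// DrawMeshInstancedIndirect. A per-instance flag selects the ambient term.
#pragma surface surf Standard addshadow
#pragma multi_compile_instancing
#pragma instancing_options procedural:setup

struct AgentData
{
    float4x4 localToWorld;
    float    indoorFlag;   // 0 = outdoor, 1 = indoor; set from the CPU/compute side
};

#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
StructuredBuffer<AgentData> _AgentBuffer;
#endif

sampler2D _MainTex;
float4 _OutdoorAmbient;
float4 _IndoorAmbient;   // slightly brighter, to roughly match the interior walls

void setup()
{
#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
    unity_ObjectToWorld = _AgentBuffer[unity_InstanceID].localToWorld;
#endif
}

struct Input { float2 uv_MainTex; };

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex);
    o.Albedo = c.rgb;

    float indoor = 0;
#ifdef UNITY_PROCEDURAL_INSTANCING_ENABLED
    indoor = _AgentBuffer[unity_InstanceID].indoorFlag;
#endif
    // Blend the two "ambient" setups per instance via emission, so one
    // draw call covers both indoor and outdoor agents.
    o.Emission = c.rgb * lerp(_OutdoorAmbient.rgb, _IndoorAmbient.rgb, indoor);
}
```

It doesn't flip the actual culling bits, but it would at least keep everything in a single draw.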