Change Light Probe Group Position At Runtime

Discussion in 'Global Illumination' started by l33t_P4j33t, May 1, 2020.

  1. l33t_P4j33t

    My game involves loading random scenes additively and then arranging them randomly at runtime. Lightmaps work perfectly, but the light probes don't move with their parent.

    Any solution?
     
    Last edited: May 1, 2020
  2. Arthur-LVGameDev

    In 2019.3+ I think you may be able to do this by calling the static LightProbes.Tetrahedralize() method. I'm not positive whether it will honor updated world positions, but in theory it may very well work. It's probably going to be relatively slow, though, and I don't recall if there's an async variant or not.

    LightProbes.Tetrahedralize() Docs:
    https://docs.unity3d.com/ScriptReference/LightProbes.Tetrahedralize.html

    Edit -- There is indeed an async variant:
    https://docs.unity3d.com/ScriptReference/LightProbes.TetrahedralizeAsync.html
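
    Roughly, the call would look something like this -- a minimal sketch, and the OnScenesRepositioned hook is hypothetical (call it from wherever your arrangement code finishes):

    Code (CSharp):

    using UnityEngine;

    public class ProbeRebuilder : MonoBehaviour
    {
        // Hypothetical hook -- invoke after all additive scenes are loaded
        // and their roots have been moved to their final positions.
        public void OnScenesRepositioned()
        {
            // Synchronous: rebuilds the tetrahedral tessellation of all
            // loaded probes on the main thread (can stall on big probe sets).
            LightProbes.Tetrahedralize();

            // Or on 2019.3+, kick it off asynchronously instead:
            // LightProbes.TetrahedralizeAsync();
        }
    }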
     
  3. l33t_P4j33t

    That function is for merging light probes from different scenes when you know in advance what the level is supposed to look like, so you set up the scenes such that everything loads in different places to form one giant level. The way I have it set up, everything gets loaded at (0, 0, 0), and a new position for each room is picked at random.

    When I run Tetrahedralize(), it just merges all the probes at (0, 0, 0) into one dense ball of probes, without updating the probe positions to their new room offsets.

    So no, it doesn't honor the updated positions.

    Currently, my only hope seems to be to wait for the DOTS implementation of GI and hope that I'll be able to offset light probe positions there.
     
    Last edited: May 1, 2020
  4. Arthur-LVGameDev

    Gotcha. In that case, I don't believe it's doable. We needed to do something very, very similar for our project -- we tried baking essentially "lots and lots" of positions, to cover all the places we might "additively" put stuff, but it was way too clunky and didn't work overly well.

    We ended up writing what amounts to a custom implementation of LightProbes (i.e., SH), and really we use it primarily via our 'custom' LightProbeProxyVolume. We determine position within our custom probes/LPPV ourselves and ship the data via a Texture, the same as Unity's LPPVs do, but we're able to do it on a larger scale and control it at runtime with our system (and no pre-baking of positions [tetrahedralization] is required). It wasn't a massive project, but it wasn't a small undertaking either...

    Unfortunately, when it comes to procedural generation -- or really any kind of moderate-scale changes happening at runtime -- there just aren't a whole lot of "tools" in the Unity kit that are of use.
     
  5. l33t_P4j33t

    Ah, that's unfortunate.
    It seems like a pretty major oversight, considering how many people have tried making a procedural game, judging by my vain Google search attempts.
    Somewhere in the lighting asset file, there is an unexposed vector just waiting to be changed.
     
  6. Arthur-LVGameDev

    If you look at my post history, you'll find the thread where some Unity devs helped me while building the aforementioned system [they helped my efforts tremendously, btw]. There's also some code included, and links to some documents & presentations about how the systems work, and while it's not a super brief read it likely can provide you some context as far as how/why the SH system works the way it does.

    My best guess is that the runtime Tetrahedralize() method is taking a "shortcut" -- only finding the "delta" between the two pre-baked spatial indices. That would explain why you can't edit positions & wholly re-index at runtime, because it's really just merging two pre-baked "indices". I may be completely wrong, that's just my guess; if it is indeed running the "whole shebang" then I'm not sure why you wouldn't be able to add/edit/remove LPs/positions freely during runtime.

    That said, it's pretty game-dependent; if your game follows something of a grid, you may be able to get decent results via "prebaking" moderate numbers of probe positions (our failed efforts were >= 256x256x10 ish). Worst case, you may want to explore going down the path that we did -- essentially re-writing LPs and LPPVs yourself, at which point you get the ultimate control over them, though you also become responsible for "spatial assignment" of LP<=>object. With LPPV-style approach, that's easy though, and you can get per-pixel lighting (and even do more levels of SH/slightly higher quality 'resolution' as a tradeoff with VRAM/GPU performance). :)

    I too wish there was a better or easier answer for runtime/procedurally-generated stuff; alas, we can only play the hands that we're dealt! ;)
     
  7. Stroustrup

    I found a good, simple solution: each MeshRenderer has an "Anchor Override" transform for its light probes (Renderer.probeAnchor), if you look closely. This means you can create a new light-probe-sampler GameObject, and the probe contribution sampled at its position will get applied to the player, wherever he is.
    Have the sampler follow the player, but at the offset where the light probes actually are.
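
    A minimal sketch of that idea -- the bakedOffset field is an assumption, standing in for whatever maps your runtime room position back to where its probes were baked:

    Code (CSharp):

    using UnityEngine;

    public class ProbeAnchorFollower : MonoBehaviour
    {
        public Renderer playerRenderer; // the renderer to light
        public Vector3 bakedOffset;     // assumed: runtime position -> baked probe location

        Transform sampler;

        void Start()
        {
            // Hidden GameObject whose position is where SH gets sampled.
            sampler = new GameObject("LightProbeSampler").transform;
            playerRenderer.probeAnchor = sampler; // the "Anchor Override" property
        }

        void LateUpdate()
        {
            sampler.position = playerRenderer.transform.position + bakedOffset;
        }
    }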
     
  8. Arthur-LVGameDev

    Yeah, that's one way to handle it, though you lose per-pixel SH when you do that -- depending on what you're lighting, that may or may not be an issue. You can also directly pass in custom SH data to any given MeshRenderer, which is very similar to overriding the position, just that you'd need to find/calculate the SH data yourself.

    LPPVs are the best of both worlds, but they have some limitations & can be problematic performance-wise -- they do give you per-pixel shading, though Unity's implementation of LPPVs is slightly lower-"resolution" SH, as it doesn't use one of the bands.

    Honestly, if you've found something that suits your needs and didn't require tons of work -- go with it. It's tough enough to "hack" decent looking stuff together, and if you have 'custom' needs at all then you can quickly find yourself in a spot where you have to go "full custom". Any time you can solve it & get a decent looking result quickly, with the built-in stuff, then I'd encourage sticking to that path if at all possible! :)
     
  9. joshuacwilde

    What do you mean, you lose per-pixel SH? Doesn't Unity just find the position at runtime, then upload that to the GPU? So choosing a different position would just change what Unity uploads for the shader to use?
     
  10. Arthur-LVGameDev

    Yes -- but that's *per object* SH -- the entire object is getting the SH for 1 position [the SH is uploaded, NOT the position]. If your object is large, it could be spanning multiple LightProbe areas that have different light -- resulting in strange looking, 'un-smooth' light / no transitions.

    With LPPVs, you're getting true per-pixel interpolation of SH, at a resolution determined by the LPPV resolution settings (which can be configured higher via code than the Inspector allows). Because SH can be linearly interpolated, standard texture-filtering interpolation means you get "perfect blends" between the SH.

    This works because LPPVs are really just creating a Texture3D with float values for the SH bands, and each band is linearly interpolated; the GPU then samples that [filtered] texture, so even a sample "between pixels" will be interpolated = perfectly 'smoothed' transition.
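
    For reference, a rough sketch of setting an LPPV up from code, including pushing the grid resolution past what the Inspector's dropdown offers (the exact values here are just examples; each axis must be a power of two, and cost scales with resolution):

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    public class LppvSetup : MonoBehaviour
    {
        public Renderer target;

        void Start()
        {
            var lppv = gameObject.AddComponent<LightProbeProxyVolume>();
            lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Custom;
            lppv.gridResolutionX = 64; // beyond the Inspector's dropdown
            lppv.gridResolutionY = 4;
            lppv.gridResolutionZ = 64;

            target.lightProbeUsage = LightProbeUsage.UseProxyVolume;
            target.lightProbeProxyVolumeOverride = lppv.gameObject;
        }
    }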
     
  11. Arthur-LVGameDev

    BTW, you can see this because you can actually upload/override the SH for a Renderer -- I believe via MaterialPropertyBlock -- which is essentially all you're doing when you set the override position: you're telling Unity to upload the SH values from a specific position [instead of the transform.position].

    If your objects are relatively small & you don't need the resolution, then this works just fine -- it's only an issue if you want blending between light/SH from multiple probes *on a single object*.

    Basically it's:
    1). LPPV -- Each pixel is sampled [on the GPU via Texture3D] and each pixel is then interpolated individually.

    2). Position [or SH] override -- 1 SH value is interpolated [on the CPU] and uploaded to the renderer; the entire object will use that SH value.
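
    A hedged sketch of that custom-SH path. LightProbes.GetInterpolatedProbe() and LightProbeUsage.CustomProvided are real APIs; the unity_SHAr...unity_SHC uniform mapping below is the commonly circulated one for the built-in pipeline, so verify it against your Unity version before trusting it:

    Code (CSharp):

    using UnityEngine;
    using UnityEngine.Rendering;

    public class CustomShOverride : MonoBehaviour
    {
        public Renderer target;
        public Vector3 sampleAt; // arbitrary world position to sample SH from

        void LateUpdate()
        {
            // CPU-side interpolation of the baked probes at any position.
            LightProbes.GetInterpolatedProbe(sampleAt, target, out SphericalHarmonicsL2 sh);

            // Tell Unity we provide SH ourselves, then upload it via an MPB.
            target.lightProbeUsage = LightProbeUsage.CustomProvided;
            var mpb = new MaterialPropertyBlock();
            SetShUniforms(mpb, sh);
            target.SetPropertyBlock(mpb);
        }

        // Maps SphericalHarmonicsL2 coefficients onto the unity_SH* shader
        // uniforms (assumed built-in pipeline layout, not an official API).
        static void SetShUniforms(MaterialPropertyBlock mpb, SphericalHarmonicsL2 sh)
        {
            string[] shA = { "unity_SHAr", "unity_SHAg", "unity_SHAb" };
            string[] shB = { "unity_SHBr", "unity_SHBg", "unity_SHBb" };
            for (int c = 0; c < 3; c++) // r, g, b channels
            {
                mpb.SetVector(shA[c], new Vector4(sh[c, 3], sh[c, 1], sh[c, 2], sh[c, 0] - sh[c, 6]));
                mpb.SetVector(shB[c], new Vector4(sh[c, 4], sh[c, 5], sh[c, 6] * 3f, sh[c, 7]));
            }
            mpb.SetVector("unity_SHC", new Vector4(sh[0, 8], sh[1, 8], sh[2, 8], 1f));
        }
    }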
     
  12. joshuacwilde

    Ohh, right -- because you're talking about LPPVs. Yeah, that makes sense; I thought you were just talking about plain light probes. Thanks for the response!