HXGI Realtime Dynamic GI

Discussion in 'Works In Progress' started by Lexie, May 24, 2017.

  1. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    The stuff I have now would be suited for console-level performance. It's hard to find time to finish it, as it doesn't have the range we would like for our game. Whenever I have spare time I try to get it closer to a production-ready asset, but it's taking a while.
     
    blackbird likes this.
  2. Tasmeem

    Tasmeem

    Joined:
    Jan 14, 2010
    Posts:
    111
    Cool, can't wait!

    Did you ever consider a screen space GI approach? Or are you aware of any existing ones in Unity?

    It might not be accurate but it might be a good compromise until the real thing comes along.
     
  3. ekergraphics

    ekergraphics

    Joined:
    Feb 22, 2017
    Posts:
    255
    UniEngine is impressive, but $10,000 per year is a bit much even for us as a company... so I guess if Lexie is still wondering how much he can charge for this on the Asset Store, that would be the ceiling. ;)
     
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    Screen space GI would be immensely limited: GI relies on the surrounding geometry, which gets clipped by the screen. Screen space reflection can be seen as a kind of very close-range screen space GI, but since distant influences (in screen space) are super costly, it's not used as a global solution.

    SEGI started as a crude screen space GI approximation; apparently it used to work like AO, sampling color along with the depth, plus a similar trick with shadow mapping (sampling color).
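    To make the "AO that also samples colour" idea concrete, here is a rough CPU-style sketch; this is not SEGI's actual code, and the buffer layout, kernel and falloff are illustrative assumptions only.

```cpp
// Crude screen-space bounce: accumulate the colour of nearby samples that sit
// in front of the shaded pixel, the same way an AO kernel accumulates occlusion.
#include <vector>

struct RGB { float r = 0, g = 0, b = 0; };

struct ScreenBuffers {
    int width = 0, height = 0;
    std::vector<float> depth;  // linear depth per pixel
    std::vector<RGB>   color;  // lit colour per pixel (e.g. previous frame)
};

RGB ScreenSpaceBounce(const ScreenBuffers& g, int x, int y, int radius, float maxDepthDelta) {
    RGB bounce;
    int taps = 0;
    const float centerDepth = g.depth[y * g.width + x];
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            const int sx = x + dx, sy = y + dy;
            if (sx < 0 || sy < 0 || sx >= g.width || sy >= g.height || (dx == 0 && dy == 0))
                continue;
            const float delta = centerDepth - g.depth[sy * g.width + sx];
            if (delta > 0.0f && delta < maxDepthDelta) {
                // The neighbour is closer to the camera: treat its colour as light
                // bleeding onto this pixel, fading out with the depth difference.
                const float falloff = 1.0f - delta / maxDepthDelta;
                const RGB& c = g.color[sy * g.width + sx];
                bounce.r += c.r * falloff;
                bounce.g += c.g * falloff;
                bounce.b += c.b * falloff;
            }
            ++taps;
        }
    }
    if (taps > 0) { bounce.r /= taps; bounce.g /= taps; bounce.b /= taps; }
    return bounce;
}
```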

    The closest simple, cheap, crude approximation would be cubemap reflections, IMHO. I'm investigating this.

    Basically, real-time GI is about figuring out visibility inside a lit space and updating rays through that space. Solutions like SEGI and HXGI reconstruct the visibility structure in real time using voxelization.

    Other solutions bake the visibility in different ways. Enlighten maintains, for each surface, a list of the most influential visible surfaces; when light hits a surface, the surface updates its direct lighting data, then queries the lighting data of the surfaces in its visibility list to update its indirect light, so the update happens over multiple frames (a rough sketch of that gather follows below). In The Division, Ubisoft's engineers used a similar solution, but stored the visibility in light probes and queried surface data to update the probes instead.
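    For illustration, a minimal sketch of that per-surface visibility-list gather, assuming a baked list of (surface index, form factor) pairs per surface and an update amortised over frames; the names and layout are hypothetical, not Enlighten's actual data.

```cpp
#include <cstddef>
#include <vector>

struct RGB { float r = 0, g = 0, b = 0; };

struct VisibleEntry {
    std::size_t surfaceIndex; // a surface that can be seen from this one
    float formFactor;         // baked weight: how much of its light arrives here
};

struct Surface {
    RGB albedo;
    RGB direct;                        // refreshed whenever a light hits the surface
    RGB indirect;                      // accumulated bounce light
    std::vector<VisibleEntry> visible; // baked visibility list
};

// Update only a slice of the surfaces each frame so the cost is spread over time.
void UpdateIndirect(std::vector<Surface>& surfaces, std::size_t first, std::size_t count) {
    for (std::size_t i = first; i < first + count && i < surfaces.size(); ++i) {
        RGB sum;
        for (const VisibleEntry& e : surfaces[i].visible) {
            const Surface& src = surfaces[e.surfaceIndex];
            // Outgoing light of the visible surface = (direct + indirect) * albedo.
            sum.r += e.formFactor * (src.direct.r + src.indirect.r) * src.albedo.r;
            sum.g += e.formFactor * (src.direct.g + src.indirect.g) * src.albedo.g;
            sum.b += e.formFactor * (src.direct.b + src.indirect.b) * src.albedo.b;
        }
        surfaces[i].indirect = sum;
    }
}
```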

    If you can solve the occlusion/visibility query at runtime, you can basically solve GI cheaply, and the crudeness depends on the resolution of that visibility structure. There are many solutions to the occlusion and visibility problem for rendering.

    My reasoning with cubemaps is that rendering to a screen (or a cubemap) is a specific case of the visibility/occlusion problem. Box projection lets you project the lighting accumulated at a cubemap's position back onto the environment, at the cubemap's resolution. So if I feed the result of the lit environment back into the cubemap, would I have close-range GI for a convex space?

    I don't believe in a single solution; even if mine works great at close range around the relighting point, it would need to be supplemented by other solutions too: for distant rendering, for contact rendering (screen space reflections), for concave spaces (maybe a simplification of the SEGI/HXGI model), etc.
     
  5. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    Screen space GI is only worth using on top of either baked lightmaps or a low-res GI solution. By itself it causes too many issues.
     
  6. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    You will need a lot of cubemaps for anything more complex than a cube-shaped room. Rendering a cubemap (6 renders) for every probe can be pretty costly in any reasonably sized scene, even if it's only done once when the level loads. You would most likely have to bake them offline.

    You will also need a lot of space to store all those G-buffers; even at low res it adds up pretty fast. The last Call of Duty did a similar system. Check out their videos.
     
    neoshaman likes this.
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    I didn't intend it to be a full GI solution with many cubemaps, just one located around the player; a small experiment to get me started and try stuff before ramping up to solutions I don't understand yet. I'm well aware of the limitations and don't expect really good results, but maybe it could be usable for more illustrative rendering rather than realistic rendering?

    That said, I was also wondering: when making a PCG game, can we find rules to generate the occlusion/visibility queries directly in the generation pass? Or can we design spaces that make them easier to compute/generate?

    edit:
    Limitations are: flickering, and a sudden influx of previously occluded light when the probe moves (partial solution: a reprojection trick like temporal anti-aliasing's?).
     
    Last edited: Jun 8, 2018
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
  9. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,004
    Yes, personally I follow this thread because I am a fan of Lexie's work, experiments, etc., whether they end up as released products or not. There are plenty of other places I can look if I just want the usual chatter about how this unfair world doesn't hand us the perfect GI solution for all purposes on a plate.
     
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    Point being, this technique is still rather slow and needs baking prior to gameplay :p
     
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    23,561
    Mod note: If you do not intend to purchase the asset, do not post here. If you own the asset, only post relevant questions about using it that need answering. All others are free :)
     
    MarkusGod and chiapet1021 like this.
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    There is still no asset to buy though...
     
    arnoob, iamthwee and Tzan like this.
  13. Yuki-Taiyo

    Yuki-Taiyo

    Joined:
    Jun 7, 2016
    Posts:
    68
    @Lexie

    Have you considered financially supporting yourself and your asset through Patreon, giving access to a beta version of HXGI to people interested in it? It could be an interesting solution financially, and you wouldn't have to provide full support to people until you're ready to launch it on the Asset Store.
     
    MarkusGod, neoshaman and jefferytitan like this.
  14. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    1,238
  15. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    64
  16. Demhaa

    Demhaa

    Joined:
    Jun 23, 2018
    Posts:
    14
    Even though you're not currently working on it, how much less intensive is this asset compared to SEGI?
     
  17. Bud-Leiser

    Bud-Leiser

    Joined:
    Aug 21, 2013
    Posts:
    86
    Looking good!
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    I have been thinking some more about RTGI, even though I'm not implementing anything yet (assume diffuse light, not specular):

    The takeaway is that RTGI is local and "view dependent"; it also has a gradient of quality from close/high frequency to far/low frequency. That insight is important because, from a game perspective, it made me realize that we don't need to attempt a global approach. Most attempts "try" to be as global as possible and are only local as an approximation, but there are basically two types of GI in game lighting:
    a. Diffuse light from scene lights is bound by attenuation, that is, diffuse GI cannot go further than its light's radius. These lights are generally close to geometry; they have high locality.
    b. Environment light from the sky (sun and skydome), which is distant and affects the whole scene; it's global lighting.
    Most attempts consider both kinds of light as "roughly" similar and put them in the same structure, which generally encompasses as much of the scene as possible.

    The most used data structure for RTGI seems to be the irradiance volume (and variants), that is, a 3D texture with light information inside. It's very cheap to evaluate, basically a lookup based on world position (see the sketch below). The difficulty comes from injecting and updating the light, which is where most techniques differ (cone tracing, reflective shadow maps, voxelization, etc.).
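    As an illustration of that lookup, a minimal sketch assuming a dense grid with trilinear filtering; the grid layout and names are placeholders, not any particular engine's implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct RGB { float r = 0, g = 0, b = 0; };

struct IrradianceVolume {
    int nx = 0, ny = 0, nz = 0;          // grid resolution
    float minX = 0, minY = 0, minZ = 0;  // world-space bounds of the volume
    float maxX = 1, maxY = 1, maxZ = 1;
    std::vector<RGB> cells;              // nx*ny*nz samples, filled by the injection pass

    RGB At(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1); y = std::clamp(y, 0, ny - 1); z = std::clamp(z, 0, nz - 1);
        return cells[(z * ny + y) * nx + x];
    }

    // The cheap per-pixel part: world position -> volume coordinate -> trilinear fetch.
    RGB Sample(float wx, float wy, float wz) const {
        const float u = (wx - minX) / (maxX - minX) * nx - 0.5f;
        const float v = (wy - minY) / (maxY - minY) * ny - 0.5f;
        const float w = (wz - minZ) / (maxZ - minZ) * nz - 0.5f;
        const int x0 = (int)std::floor(u), y0 = (int)std::floor(v), z0 = (int)std::floor(w);
        const float fx = u - x0, fy = v - y0, fz = w - z0;
        RGB out;
        for (int dz = 0; dz <= 1; ++dz)
            for (int dy = 0; dy <= 1; ++dy)
                for (int dx = 0; dx <= 1; ++dx) {
                    const float wt = (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) * (dz ? fz : 1 - fz);
                    const RGB c = At(x0 + dx, y0 + dy, z0 + dz);
                    out.r += wt * c.r; out.g += wt * c.g; out.b += wt * c.b;
                }
        return out;
    }
};
```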

    But do we need an all-encompassing data structure when lights have high locality? Can we decouple global sky GI from scene light GI? The great thing about that type of data, however, is that the source of the lighting doesn't matter as long as you can inject it, which makes textured/emissive lights automatic. Global light seems like it can be handled through "mostly" AO.

    I was initially focusing on convex shapes for local lights, which allow decomposing space so that we can cover all light paths inside a shape, but light is "radial". Convex shapes guarantee that any two points are linked by a single line, which satisfies sampling. But if we want to place the minimum number of probes such that they cover all light paths, radial shapes allow for better heuristics. That is, for any centroid in an empty space, assuming that centroid is like a light, all empty spaces which are occluded from, i.e. in the "shadow" of, that centroid are good candidates for new sampling points. There must be a way to find the minimum number of useful sampling points to cover all light paths.

    Does it turn the occlusion problem into a pathfinding/propagation problem? Essentially we are finding only the occluded space that can potentially receive some bounce, using "shadow" (aka occlusion) as a heuristic. "View" is itself an occlusion/pathfinding and radial problem; if we can find the path from light to view (that is, the space overlap), does that give a solution for RTGI that only solves for the view?

    For example, let's say we use voxels as a space partition: we can aggregate empty voxels into bigger boxes to simplify the structure, and the centroids of the boxes would be the optimal placements for capturing all light paths. We can then select the biggest box and expand to radially reachable neighbors from its centroid, forming a radial unit. What remains are the boxes occluded from that centroid's point of view, which is where we would optimally place new sampling centroids. Using the local lights as starting centroids might also help; we basically terminate when the light path inside each radial unit exceeds the original light radius.

    Which led me to discover this mathematical problem, which (or a variant of it) may or may not be useful:
    https://curiosity.com/topics/the-il...-a-well-known-mathematical-problem-curiosity/

    1. I was a bit uncomfortable with the dismissal of long baking times, for some reason, but I couldn't fully see the rationale until now.
    a. Baking is usually applied to the whole level (global) at a "relatively" high resolution and precision.
    b. Runtime is generally focused on "seen" geometry, with resolution falling off from close to far. That is, it's local; we don't care about the whole level. It also uses a lot of approximations and shortcuts.
    c. This doesn't make baking methodology usable here, because if the baking time is 10h, even 1% of it, applied locally, isn't fast enough. What it does is recast the problem slightly differently.
     
  19. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    480
    Those are basically the same. By far the largest issue to solve for realtime-ish global illumination is light transport. Light transport basically means "how much light moves from position x to position y (indirectly)". If you're not solving light transport every frame, you've already baked all lights into your geometry and nothing is dynamic (until rebaking).

    >""View" is itself an occlusion/pathfinding and radial problem; if we can find the path from light to view (that is, the space overlap), does that give a solution for RTGI that only solves for the view?"

    That seems correct. However, solving "light to rendered pixel" is what shadow maps already do. You need to solve "light to random wall to rendered pixel", preferably using multiple random wall steps. This makes the path search space massive.

    >" Global light seems like it can be handled through "mostly" AO."

    That is only a good approximation if your "indoor" environments are small with lots of openings (small-ish houses). If you want a (large) cave or hangar that becomes dark inside, this does not work.


    "Local light realtime global illumination" seems semi-solved. If you can live with some constraints (~10 meter radius max, light transport is delayed/slow, it takes a whole lot of memory <or> a limited view range), that seems to describe HXGI as it is/was working. It is, however, very hard to scale up (since doubling the radius means 8x the volume).

    Giving the volumes multiple levels of detail (either sparse voxels or overlapping larger grids) probably helps a bit performance-wise, but the complexity, and therefore the glitches, go up a lot.

    "Skylight realtime global illumination" seems a lot harder still. The average distance of light travelling goes up 50x+. The average "amount" of light sources goes up 50x+ (the entire sky). So as you said, you'll likely need special handling for this compared to "local" lighting.

    And another fun challenge: if you spread anything temporally, ensure you can still sample reasonably in geometry-sparse regions for (quickly) moving objects.

    (I really should make more coherent posts)
     
  20. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    Yeah, another assumption was static geometry only (for now).

    The thing is, I think we can just sample the skydome in the surface's direction (assuming it's encoded in something like a cubemap) and use AO to mask out where it does not contribute (which is how we do it now). Except in this case there is no bounce; but then we move from global to local, and even then we can use cubemaps/probes with extra data (like depth) to get a local environment (basically what we already do).
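    A minimal sketch of that "sky masked by AO" term, assuming a pre-convolved skydome fetched along the surface normal; the placeholder sky function and names are purely illustrative.

```cpp
struct RGB { float r, g, b; };

// Placeholder skydome: brighter and bluer toward straight up, darker toward the ground.
RGB SampleSkyIrradiance(float nx, float ny, float nz) {
    (void)nx; (void)nz;
    const float up = 0.5f * (ny + 1.0f); // 0 = straight down, 1 = straight up
    return { 0.20f + 0.30f * up, 0.25f + 0.40f * up, 0.35f + 0.60f * up };
}

// Ambient contribution = sky irradiance in the normal direction, masked by AO.
// No bounce is modelled; AO only removes sky light where it is blocked.
RGB AmbientSkyTerm(float nx, float ny, float nz, float ambientOcclusion, const RGB& albedo) {
    const RGB sky = SampleSkyIrradiance(nx, ny, nz);
    return { sky.r * albedo.r * ambientOcclusion,
             sky.g * albedo.g * ambientOcclusion,
             sky.b * albedo.b * ambientOcclusion };
}
```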

    I think it can be "solved", at a cost, with refreshed local probes (basically we reproject the probe back onto the environment, then use the result as a new input to accumulate into the probe). And since we know which "spaces" are occluded (rather than which geometry), we can start to propagate sampling volumes and find new centroids based on estimated energy spent (that is, there is a certain penetration possible into the occluded region, based on how light loses power per bounce).

    Essentially I'm moving the focus from the geometry (aka surfaces) to the space between it (aka a set of potential paths). Generally GI is thought of as surfaces emitting and receiving, which is a hard problem because rays are basically random from the geometry's point of view. However, space is basically a bundle of potential rays, so by manipulating space you are manipulating the set of all paths as a single bundled primitive (a radial space); it simplifies things because the surfaces are implicit in the boundary of that space.
     
  21. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    64
    I'm not ruling out a potentially new paradigm, however I suspect that what you're gaining in a simple volume description, you're losing in the complexity of what it contains. A large sphere of empty space could contain a complex set of light rays with varying direction, colour, intensity, etc.

    I do find the general concept of global vs local lighting interesting. You could have lights with a large sphere of influence represented as a very coarse grid, as their effect will be very diffuse at great distances, and smaller lights represented as fine but sparse grids that would also give higher quality reflections.
     
  22. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    480
    This really sounds like what Lexie has done here with HXGI: use a volume with light sources injected into it and propagate that lighting frame by frame, instead of calculating the entire light state in a pixel shader during the final render. It limits light complexity, since you only store a fixed amount of lighting per voxel, and thus improves performance. But it probably has problems with overlapping multi-colored/directional lights.
     
  23. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    I'm not sure it's a new paradigm though; I think the term is "light field" and we have been doing it in many forms for some time:
    1. Filtered cubemaps or SH probes effectively gather the "light field" between geometry, as seen from a single "view" point.
    2. The G-buffer translates geometry data into light-friendly data, the depth buffer being the representation of the geometry in that data structure; it's precise enough that you could run physics on that representation.
    3. You can combine 1 and 2 to get a complete scene representation from a single point.
    4. By multiplying the view samples you can get the entire scene represented with all its data. It's basically an alternative, complete representation from the light data perspective.

    It does have limitations though:
    1. 3D geometry is potentially "infinitely" precise, being vector data. View-based light fields are rasterizations and are resolution dependent, and the resolution of details falls off with distance, which means you need multiple samples at multiple places to resolve details (hence arrays of probes are necessary).
    2. Due to being view based, unless the space is a convex shape you don't have parallax information. With a convex shape and distance information, you can actually move the view away from its origin and reconstruct a new view (this observation led to the box-projected cubemap trick; see the sketch after this list). Doing so with a concave shape introduces missing data at parallax; that is the visibility/occlusion problem.
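    For reference, a small sketch of the box-projection correction mentioned in point 2, assuming the shaded point lies inside the proxy box; in Unity this would normally live in a shader, and the names here are illustrative.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Re-aim a sample direction from the cubemap's capture position toward the point
// where the ray from the shaded pixel hits the proxy box.
Vec3 BoxProjectDirection(Vec3 worldPos, Vec3 dir,   // shaded point and original direction
                         Vec3 boxMin, Vec3 boxMax,  // proxy volume of the cubemap
                         Vec3 probePos) {           // where the cubemap was captured
    // Distance along the ray to the box face it is heading toward, per axis.
    auto axisT = [](float p, float d, float lo, float hi) {
        if (d > 0.0f) return (hi - p) / d;
        if (d < 0.0f) return (lo - p) / d;
        return 1e30f; // parallel to this slab
    };
    const float t = std::min({ axisT(worldPos.x, dir.x, boxMin.x, boxMax.x),
                               axisT(worldPos.y, dir.y, boxMin.y, boxMax.y),
                               axisT(worldPos.z, dir.z, boxMin.z, boxMax.z) });
    const Vec3 hit { worldPos.x + dir.x * t, worldPos.y + dir.y * t, worldPos.z + dir.z * t };
    return { hit.x - probePos.x, hit.y - probePos.y, hit.z - probePos.z };
}
```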

    My idea comes from the observation in point 2: the missing parallax information is akin to the shadow you would get if a light were emitted from the view. So in order to reconstruct that data I need to sample in that "view shadow", and we can find out where that data is using shadow map techniques; instead of computing the shadow map at the fine detail of textures, we do it on a volume decomposition of space, which is potentially coarser and therefore faster. Since lights are akin to views, we can find the potential set of influence of a light by chaining the occlusion of each sampling point until the ray potential is exhausted. And we can also limit computation to the joint set of view and light sampling occlusion, to avoid contributions that simply aren't seen. I'm assuming that computing occlusion is faster than GI bounces, so we are potentially dramatically reducing the data needed to calculate contributions toward the camera.

    But that's just a hypothesis for now. Thanks for the discussion, it made things clearer for me too lol. And potentially it can evolve into a new way to represent geometry without keeping the "geometry data", if we can bake it into the "light field". lol
     
  24. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    221
    Lexie, you do know that I will buy this the moment it is released, right?

    btw, you should really add HDRP support (if it is not planned)
     
  25. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    I'm not touching HDRP until it's out of preview/beta; there have already been major overhauls of it over the last few updates. It is not stable enough to make anything for, IMHO.
     
  26. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    221
    Maybe the camera-related stuff will change a bit more, but the GPU compute shader related things won't change that much. In general, though, I agree with you :D
     
  27. MariskaA

    MariskaA

    Joined:
    Feb 17, 2017
    Posts:
    11
    Hello Lexie. I've been following this thread from the beginning and I was wondering if you had more information about a release date? I work for a company and we have been waiting for your asset for a year now, and we'd like to know if we should spend money on another development instead.

    Thank you.
     
  28. OnlyVR

    OnlyVR

    Joined:
    Oct 5, 2015
    Posts:
    43
    Could we test something?
     
    kornel-l-varga and moxi299 like this.
  29. Tasmeem

    Tasmeem

    Joined:
    Jan 14, 2010
    Posts:
    111
    Help Lexie,

    Any updates on the effort to release the current version on the asset store? Really looking forward to it.
     
  30. kornel-l-varga

    kornel-l-varga

    Joined:
    Jan 18, 2013
    Posts:
    25
    I'm checking this thread twice every day :D just in case
     
    N00MKRAD likes this.
  31. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    it's dead
     
  32. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    1,238
    Yeah, sadly... :( Though, SEGI is kind of alive again and gets proper treatment, thanks to @Ninlilizi :)
     
  33. Mordus

    Mordus

    Joined:
    Jun 18, 2015
    Posts:
    115
    People are expecting too much here. I think he was pretty clear he was working on it as something he needed for his own project, that he was making it suitable for his own purpose not as a general solution, and that releasing anything as an asset was only a maybe from the beginning.
     
    hopeful and zenGarden like this.
  34. kornel-l-varga

    kornel-l-varga

    Joined:
    Jan 18, 2013
    Posts:
    25
    Honestly, at this point I am not expecting a fully supported asset or any kind of official release anymore, exactly for the same reasons!
    I am only hoping that one day we could test it out ourselves, and if it works well perhaps use it in our projects... one can hope. I'm in desperate need of a real realtime GI solution.
    Thanks @Mauri, I will take a look at @Ninlilizi's work. I have been using SEGI for a while now, but it does need some improvements for sure.
     
  35. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    If he could release one or all of the simple versions he worked on as community projects, that would be cool. The more heads on a problem, the faster it gets resolved.
     
  36. kornel-l-varga

    kornel-l-varga

    Joined:
    Jan 18, 2013
    Posts:
    25
    Exactly, others might even contribute to it
     
  37. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    Currently our studio is working on too many projects for me to dedicate time to work on this.

    One of the projects should be finished up by the end of the year, so I'll have some spare time to come back to this.
     
    Last edited: Oct 8, 2018
    ftejada, arzezniczak, ANFADEV and 9 others like this.
  38. Onevisiongames

    Onevisiongames

    Joined:
    Aug 3, 2016
    Posts:
    6
    Thanks for the info, good luck and success for the Project! ;)
     
    neoshaman and hopeful like this.
  39. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    232
    Good luck, I will hope for your success and the continuation of hxgi.
     
  40. kornel-l-varga

    kornel-l-varga

    Joined:
    Jan 18, 2013
    Posts:
    25
    Good news everyone! :)
    Thanks for letting us know. By the way great stuff out there from the HitBox team, wish you guys success!
     
  41. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    I had a few spare days to finish up something that has been on my back burner for some time.

    It's the final version of the sparse data structure I'll be using for the GI.
    The world can be stored in either an octree (2x2x2) or a 64-tree (4x4x4); the octree takes up less VRAM while the 64-tree is faster to trace through. It's a chunk-based system, so you can schedule a bunch of chunks to be revoxelized and the workload is automatically spread over multiple frames.

    The voxel data is stored anisotropically: basically, voxels are stored as 6 faces rather than a single cell. This greatly reduces light bleeding while also allowing directional emissive surfaces. It took me a while to come up with a good system to track/merge all this data while still remaining fast enough to use at runtime. The normal of each face is used to gauge the average color/emission for each voxel face, which creates a much better representation of the world than simpler voxelization methods that sum all the data into a single cell.

    The data for each face is also propagated back down the tree so the data structure can be sampled at different resolutions. This allows me to cone trace the sparse data structure, and because the voxels are stored anisotropically, the mipmapped data is a lot more accurate than what you would see from a mipmapped 3D texture.

    Even though the amount of VRAM per voxel has increased 6 times, I expect the voxel resolution can be halved for the same results, saving 8 times the memory.

    I'll make a shader that cone traces the sparse data so I can post some screenshots to better illustrate what the data looks like now.

    This is a huge step toward finalizing the GI system. I'm trying to put aside 1-2 days a week to work on this, as the procedural generation for our main project is nearing a stage where the GI system will need to be finalized.
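    As a rough guess at how such a chunked 64-tree with per-face voxel data could be laid out (this is only my reading of the description above, not HXGI's actual structures, and all names and packing choices are assumptions):

```cpp
#include <cstdint>
#include <vector>

struct FaceData {          // one of the 6 axis-aligned faces of a voxel
    uint32_t diffuse;      // packed colour averaged from triangles facing this axis
    uint32_t emission;     // packed emissive colour, allowing directional emissive surfaces
};

struct Voxel {
    FaceData face[6];      // +X, -X, +Y, -Y, +Z, -Z
};

struct Node64 {                    // 4x4x4 branching: shallower than an octree, faster to trace
    uint64_t childMask = 0;        // bit i set -> child cell i is occupied
    uint32_t firstChild = 0;       // index of the first child in the chunk's node pool
    Voxel filtered;                // per-face data averaged from the children, so the tree
                                   // can also be sampled at coarser levels (cone tracing)
};

struct Chunk {
    std::vector<Node64> nodes;     // sparse node pool for this chunk
    bool dirty = false;            // queued for revoxelization
};

// Revoxelization work spread over frames: pop a few dirty chunks per frame.
void ProcessRevoxelQueue(std::vector<Chunk*>& queue, int budgetPerFrame) {
    for (int i = 0; i < budgetPerFrame && !queue.empty(); ++i) {
        Chunk* c = queue.back();
        queue.pop_back();
        // ... rebuild c->nodes from the chunk's geometry, then re-filter the parents ...
        c->dirty = false;
    }
}
```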
     
    Last edited: Oct 24, 2018
  42. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    Are the faces single colored, or is it still based on SH?
     
  43. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    Each face has a single diffuse color (8888), emission (1010102) and irradiance (1010102) value. You can blend the results if you want to sample a cube off-axis. Look into how Half-Life 2 sampled their ambient cubes if you want to see how to sample 6 faces with blending.

    It takes up less space than SH and is faster to sample. I'm not planning on using light propagation volumes, so SH sampling isn't necessary.

    I could merge some of that data, but I want to be able to relight a chunk without having to also voxelize it.
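    For anyone curious, a minimal sketch of Half-Life 2 style ambient-cube blending over the 6 per-face values, assuming the packed formats above have already been unpacked to floats; this is the general technique, not HXGI's exact shader code.

```cpp
struct RGB { float r, g, b; };

// faces[0..5] = +X, -X, +Y, -Y, +Z, -Z (e.g. the per-face irradiance values).
// Weights are the squared normal components, so they sum to 1 for a unit normal.
RGB SampleAmbientCube(const RGB faces[6], float nx, float ny, float nz) {
    const float w[3] = { nx * nx, ny * ny, nz * nz };
    const RGB* pick[3] = {
        &faces[nx >= 0.0f ? 0 : 1],
        &faces[ny >= 0.0f ? 2 : 3],
        &faces[nz >= 0.0f ? 4 : 5],
    };
    RGB out { 0, 0, 0 };
    for (int i = 0; i < 3; ++i) {
        out.r += w[i] * pick[i]->r;
        out.g += w[i] * pick[i]->g;
        out.b += w[i] * pick[i]->b;
    }
    return out;
}
```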
     
    Last edited: Oct 24, 2018
    Shinyclef likes this.
  44. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    What do you mean? Isn't that implicitly a voxel structure, just not on a regular grid?
     
  45. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    232
    Sounds to me like it means voxelization steps aren't required every frame, but you can still sample the stored lighting every frame. Correct me if I'm wrong Lexie.

    And my God, it is such a great thing to see you talking about HXGI again Lexie, welcome back haha.
     
  46. Demhaa

    Demhaa

    Joined:
    Jun 23, 2018
    Posts:
    14
    How will this affect reflections?
     
  47. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    If the lighting conditions change (sun moves, etc.), I don't want to have to revoxelize an area just to update the lighting in it. By keeping the diffuse and emission separate from the irradiance, I'm able to update just the irradiance sparse volume without having to also revoxelize that whole area.

    Edit: The irradiance is calculated one bounce at a time; each update adds another bounce. It's a feedback loop: irradiance = radiance * diffuse + emission. I don't want to revoxelize the area again just to find the voxels' diffuse and emission to calculate the outgoing irradiance. That's why I store it as 3 values for each face.

    Reflections should be able to be cone traced at the very least, so they should look fairly accurate.
    I'm actually toying with a system for generating the irradiance volume that would also let me calculate some reflection probes per chunk. I will have to try out a few methods to see what works best.

    The plan is to make a hybrid screen space + voxel space lighting system.
    By tracing as far as I can in screen space for reflections and diffuse light before switching over to voxel tracing, I should be able to get the best of both worlds. Kind of like how mixing screen space contact shadows with standard shadow mapping techniques works so well.
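    A tiny sketch of that per-face feedback loop (irradiance = radiance * diffuse + emission, one extra bounce per pass); the gather function here is just a placeholder for however the incoming radiance is actually collected (cone tracing the sparse volume, shadow-mapped direct light, etc.).

```cpp
#include <vector>

struct RGB { float r = 0, g = 0, b = 0; };

struct VoxelFace {
    RGB diffuse;     // from voxelization; unchanged when only the lighting changes
    RGB emission;    // from voxelization
    RGB irradiance;  // rewritten every pass, read back as incoming radiance next pass
};

// Placeholder gather: returns the face's previous value, i.e. light bouncing straight
// back. A real gather would trace the sparse volume and add direct light.
RGB GatherIncomingRadiance(const VoxelFace& f) { return f.irradiance; }

void UpdateIrradiancePass(std::vector<VoxelFace>& faces) {
    for (VoxelFace& f : faces) {
        const RGB radiance = GatherIncomingRadiance(f);
        // irradiance = radiance * diffuse + emission (one more bounce per pass)
        f.irradiance = { radiance.r * f.diffuse.r + f.emission.r,
                         radiance.g * f.diffuse.g + f.emission.g,
                         radiance.b * f.diffuse.b + f.emission.b };
    }
}
```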
     
    Last edited: Oct 24, 2018
    neoshaman, Tzan, knup_ and 6 others like this.
  48. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    Oh yeah, I figured out that you used the light structure as the scene structure too, and they weren't separate, which meant updating lost the original data, hence revoxelizing. I guess the new structure keeps the light and albedo separate, which allows for more dynamic updates.

    Inspired by the talk in this thread, I tried to come up with an even cheaper version of GI for even cheaper hardware. The scene representation is always the tricky thing to figure out; I had been wary of testing tree structures, as they seem to layer on complexity, but I found some new ideas (a sketch of the first one follows this list):
    - A first insight was to realize that a lightmap UV layout is basically a (static) scene representation, so for any point evaluated, a sample ray will land on another lightmap position. So the idea is to store, per texel, a list of UV addresses (and contribution weights) in a tile of an indirection texture (basically the rays that contribute the most to that texel's lighting, with address 0 meaning "sample the skybox"). At runtime I just need to sum the tile's samples to get updated light, which means I can use a "lightmap G-buffer rendering pass" to update the lighting, and I only need to update the lightmap when the light changes. I get bounce light simply by iterating from texel to sample point, so the number of bounces is tied to the update loop. The UV indirection map is essentially the visibility rays (bilinear filtering at texel corners and mipmaps can gather more per ray, basically like cone tracing). Depending on what data I store I can also get "potentially" dynamic object updates: since I have the ray (and assuming the lightmap stores the position of each texel), I can test per texel whether the ray is occluded by an object. However, since that means looping over every dynamic object, per texel, per ray, it should only be used on strong hardware, with primitive casting at best? The main weakness of this method is that it relies on an indirection map baked offline. What if there was a way to get the rays per texel at runtime? The other weakness is that it's mostly diffuse-only GI.
    - The second insight comes from my idea about cubemaps. I was first blocked by the problem of passing the relevant cubemap per pixel, but I realized that cubemaps don't have to be cubemaps ... they can be any representation, like a lat-long map or an octahedron map, which are more flexible, and I can have an atlas of them. I have talked about how cubemaps are essentially a scene ray representation (they are also like an empty volume texture with voxels only at the boundary, so you skip empty space faster) and are cheap to evaluate for per-pixel coverage. Using the same idea as the lightmap, I just need to associate every point of the lightmap with a single cubemap probe in the atlas using the indirection texture. We can then use the normal to sample the cubemap's mipmaps to gather light (or send light back). Knowing which pixel is influenced by which probe is easy to generate at runtime by any method, so we could use voxelization (to find the relevant empty space) on the CPU to bake an indirection map and only update it partially when needed. To project the lightmap onto dynamic objects we use separate light probes with the scene's UVs projected onto them, and use those to sample the lightmap, so updates are reflected in real time. The cubemap atlas doesn't have to be generated or updated in one go; we can asynchronously update any face depending on whatever logic we need.
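    A minimal sketch of the first idea (a baked per-texel list of lightmap addresses summed at runtime, with repeated passes adding bounces); all sizes, names and the tile size are assumptions, not a worked-out implementation.

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct RGB { float r = 0, g = 0, b = 0; };

struct IndirectSample {
    float u = 0, v = 0;   // lightmap address this texel gathers from
    float weight = 0;     // baked contribution of that ray/cone
    bool  sky = false;    // the "address 0" case: sample the skybox instead
};

constexpr int kSamplesPerTexel = 8; // e.g. an 8-entry tile per lightmap texel

struct Lightmap {
    int width = 0, height = 0;
    std::vector<RGB> texels;                                        // current lighting
    std::vector<std::array<IndirectSample, kSamplesPerTexel>> rays; // baked indirection tiles

    RGB Fetch(float u, float v) const {
        const int x = (int)(u * (width - 1)), y = (int)(v * (height - 1));
        return texels[y * width + x];
    }
};

RGB SampleSky() { return { 0.4f, 0.5f, 0.7f }; } // placeholder skybox value

// One gather pass: every texel sums direct light plus its baked ray list.
// Running the pass again propagates one further bounce each time.
void GatherPass(Lightmap& lm, const std::vector<RGB>& directLight) {
    std::vector<RGB> next(lm.texels.size());
    for (std::size_t i = 0; i < lm.texels.size(); ++i) {
        RGB sum = directLight[i];
        for (const IndirectSample& s : lm.rays[i]) {
            const RGB c = s.sky ? SampleSky() : lm.Fetch(s.u, s.v);
            sum.r += s.weight * c.r; sum.g += s.weight * c.g; sum.b += s.weight * c.b;
        }
        next[i] = sum;
    }
    lm.texels = next;
}
```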

    But until I start implementing it, it's just theory. I need to finish the infinite parallax hair shader first :/
     
    Shinyclef likes this.
  49. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    575
    1- You quickly run into storage issues, but this system is exactly how Enlighten works, and it's why it stores the visibility data in RAM rather than VRAM. The time it takes to find the relevant points to track per lightmap texel, and to group points together into larger groups, is why it takes so long to bake. I honestly don't think you're going to beat Enlighten's performance here.

    2- Sounds pretty similar to an octree version, except the step to figure out possible combinations of cubemaps will probably slow this down into the realm of needing a baking step. I can't see the benefit over storing the data in an octree unless your game takes place in completely rectangular spaces; otherwise you will probably use more cubemaps to represent the space than an octree would cost.
     
    neoshaman likes this.
  50. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,637
    It's not as generic, efficient, or precise as your solution; it's not in the same bracket of use cases. I have a lot more (accepted) limitations, heavy approximations, a few unavoidable artefacts, and definitely low resolution (lightmaps are 256² max on mobile due to UV precision AND the 2048 size limitation, though project-specific hacks can juggle around that).

    There is also a lot of small stuff and hacky optimization I left out for the sake of brevity (and the G-buffer lightmap is just one implementation idea for injecting an approximation of the direct lighting). But I think Enlighten is a CPU solution (the runtime gather), while mine is a GPU solution in 3 passes (lightmap direct + bounce, lightmap GI gather, view rendering). Yeah, baking is an issue, but I contacted the Bakery guy to sort it out. The main benefit for me is the lightmap as a cache, and it's mostly designed for low-frequency time-of-day lighting updates, in "small" local scenes with low-density geometry (think PS2 levels). It's not generic, as the specific implementation goes hand in hand with the level design, i.e. levels are made with the limitations in mind, and the solution can be mixed and matched around the same underlying idea to fit different level ideas.

    I should probably have left out the tree of cubemaps; it was one example of a structure I could use, though I would probably use even cheaper approximations depending on the project. The basics are using box/primitive projections and assigning a cubemap per pixel by hashing the world position of the lightmap texel (the structure it is hashed into depends on the resources available in the project, so it's case by case), so there is a lot of inaccuracy. Also, the cubemap assignment can be local; they don't need to be created for the whole level immediately, only to encompass what is seen first. But cubemaps are not just convex, they are radial, so some points have occlusion I just don't try to remove.

    It won't compete with HXGI in any way! HXGI/SEGI can deal with many dynamic objects and complex, dense scenes for high-quality believable rendering; I don't.

    So yeah, you are right, the limitation is by design; we get some plausible diffuse GI but it's not accurate. Anyway, thanks for inspiring me to find my own solution :)

    EDIT: When I said cheaper GI for cheaper hardware, I was talking about the quality being cheaper too :p The main idea is really just the indirection map for the gather.

    It's limited on mobile (at least the hardware I'm targeting) to 64 samples (8² tiles * 256² map = 2048²), but I'm likely over budget since it's OpenGL 2.0 (8 texture fetches?), hence the many alluded-to or untold hacks.
    The design of the cubemap placement is part of the level design and PCG design, and needs to be carefully considered case by case, offline or online; it's not part of the solution. But basically we can match any structure on the CPU by using the spatial hash of the cubemap/area and creating a very simple address structure/texture that matches the runtime hash of a pixel in the lightmap. It can be progressive during play, using extra logic to update asynchronously depending on the needs of the project. In the case I envision, the cubemaps are rather sparse too.
    edit2:
    Also, it's not HDR, though it's scalable in principle.
     
    Last edited: Oct 25, 2018