HXGI Realtime Dynamic GI

Discussion in 'Works In Progress' started by Lexie, May 24, 2017.

  1. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    533
    The stuff I have now would be suited to console-level performance. It's hard to find time to finish it, as it doesn't have the range we would like for our game. Whenever I have spare time, I try to get it closer to a production-ready asset, but it's taking a while.
     
    blackbird likes this.
  2. Tasmeem

    Tasmeem

    Joined:
    Jan 14, 2010
    Posts:
    108
    Cool, can't wait!

    Did you ever consider a screen space GI approach? Or are you aware of any existing ones in Unity?

    It might not be accurate but it might be a good compromise until the real thing comes along.
     
  3. ekergraphics

    ekergraphics

    Joined:
    Feb 22, 2017
    Posts:
    243
    UniEngine is impressive, but $10,000 per year is a bit much even for us as a company... so I guess if Lexie is still wondering how much he can charge for this on the Asset Store, that would be the ceiling. ;)
     
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    Screen-space GI would be immensely limited; GI relies on surrounding geometry, which gets clipped away by screen space. Screen-space reflection can be seen as a type of very close-range screen-space GI, but since distant influences (in screen space) are super costly, it's not used as a global solution.

    SEGI started as a crude screen-space GI approximation; apparently it used to work like AO, sampling color along with the depth, plus a similar trick with shadow mapping (sampling color).

    The closest simple, cheap, crude approximation would be cubemap reflection, IMHO. I'm investigating this.

    Basically, real-time GI is figuring out visibility inside a lit space and updating the rays alongside that space. Solutions like SEGI and HXGI reconstruct the visibility structure in real time using voxelization.

    Other solutions bake the visibility in different ways. Enlighten maintains, for each surface, a list of the most influential visible surfaces; when light hits a surface, the surface updates its direct lighting data, then queries the lighting data of the surfaces in its visibility list to update its indirect light, so the update happens over multiple frames. In The Division, Ubisoft's engineers used a similar solution, but instead stored the visibility in light probes and queried surface data to update the probes.

    If you can solve the occlusion/visibility query at runtime, you can basically solve GI cheaply, and the crudeness comes down to the resolution of that visibility structure. There are many solutions to the occlusion and visibility problem in rendering.

    My reasoning with cubemaps is that rendering to a screen (and to a cubemap) is a specific case of the visibility/occlusion problem. Box projection lets you project the lighting accumulated at a cubemap's position back onto the environment, at the resolution of the cubemap. So if I feed the result of the lit environment back into the cubemap, would I get something close to convex-space GI?
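
    To make the box-projection step concrete, here is a minimal sketch in plain C# (made-up names, untested; the axis-aligned proxy box and capture position are assumptions of this setup):

    Code (CSharp):
    using UnityEngine;

    public static class BoxProjection
    {
        // Returns the direction to sample the cubemap with, so the
        // lookup lands on the proxy box instead of at infinity.
        public static Vector3 Project(Vector3 worldPos, Vector3 dir,
                                      Vector3 boxMin, Vector3 boxMax,
                                      Vector3 probePos)
        {
            // Distance along dir to each slab of the box (dir assumed
            // non-zero per component for this sketch).
            Vector3 toMax = Div(boxMax - worldPos, dir);
            Vector3 toMin = Div(boxMin - worldPos, dir);
            Vector3 exit = Vector3.Max(toMax, toMin); // exit planes
            float t = Mathf.Min(exit.x, Mathf.Min(exit.y, exit.z));

            // Hit point on the box, re-expressed relative to the probe.
            return worldPos + dir * t - probePos;
        }

        static Vector3 Div(Vector3 a, Vector3 b)
            => new Vector3(a.x / b.x, a.y / b.y, a.z / b.z);
    }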

    I don't believe in a single solution; even if mine works great at close range around the relighting point, it would need to be supplemented by other solutions: for distant rendering, for contact rendering (screen-space reflection), for concave spaces (maybe a simplification of the SEGI/HXGI model), etc...
     
  5. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    533
    Screen-space GI is only worth using on top of either baked lightmaps or a low-res GI solution. By itself it causes too many issues.
     
  6. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    533
    You will need a lot of cubemaps for anything more complex than a cubed room. Rendering a cubemap (6 renders) for every probe can be pretty costly in any reasonably sized scene, even if it's just once when the level loads. You would most likely have to bake them offline.

    You will also need a lot of space to store all those G-buffers. Even at low res it adds up pretty fast. The last Call of Duty did a similar system; check out their videos.
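
    (Back-of-envelope with made-up numbers: a 128×128×6 probe G-buffer storing albedo RGBA8 + normal RGBA8 + depth R16 is 128 × 128 × 6 × (4 + 4 + 2) bytes ≈ 1 MB per probe, so a few hundred probes already cost hundreds of MB before you store any actual lighting.)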
     
    neoshaman likes this.
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    I didn't intend it to be a full GI solution with many cubemaps, just one located around the player; a small experiment to get me started and to try stuff before ramping up to solutions I don't understand yet. I'm well aware of the limitations and don't expect really good results, but maybe it could be usable for illustrative rendering rather than realistic rendering?

    That said, I was also wondering: when making a PCG game, can we find rules that generate the occlusion/visibility queries directly in the generation pass? Or can we design spaces that make them easier to compute/generate?

    edit:
    Limitations are: flickering, and a sudden influx of occluded light when the probe moves (partial solution: a temporal anti-aliasing-style reprojection trick?)
     
    Last edited: Jun 8, 2018
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
  9. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    1,747
    Yes, personally I follow this thread because I am a fan of Lexie's work, experiments, etc., whether they end up as released products or not. There are plenty of other places I can look if I just want the usual chatter about how this unfair world doesn't hand us the perfect GI solution for all purposes on a plate.
     
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    Point being, this technique is still rather slow and needs baking prior to gameplay :p
     
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    22,216
    Mod note: If you do not intend to purchase the asset, do not post here. If you own the asset, only post relevant questions that need answering about how to use it. All others are free :)
     
    MarkusGod and chiapet1021 like this.
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    There is still no asset to buy though...
     
    arnoob, iamthwee and Tzan like this.
  13. Yuki-Taiyo

    Yuki-Taiyo

    Joined:
    Jun 7, 2016
    Posts:
    62
    @Lexie

    Have you considered financially supporting yourself and your asset through Patreon, giving access to a beta version of HXGI to people interested in it? It could be an interesting solution financially, and you wouldn't have to provide full support to people until you're ready to launch it on the Asset Store.
     
    MarkusGod, neoshaman and jefferytitan like this.
  14. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    1,171
  15. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    41
  16. Demhaa

    Demhaa

    Joined:
    Jun 23, 2018
    Posts:
    1
    Even though you're not currently working on it: how much less intensive is this asset compared to SEGI?
     
  17. Bud-Leiser

    Bud-Leiser

    Joined:
    Aug 21, 2013
    Posts:
    85
    Looking good!
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    I have been thinking some more about RTGI, even though I'm not implementing anything yet (assume diffuse light, not specular):

    The takeaway is that RTGI is local and "view dependent"; it also has a gradient of quality, from close/high frequency to far/low frequency. That insight is important because, from a game perspective, it made me realize we don't need to attempt a global approach. Most attempts "try" to be as global as possible and are only local as an approximation, but there are basically two types of GI in game lighting:
    a. Diffuse light from scene lights, which is bounded by attenuation; diffuse GI cannot go further than the lights' radius. These lights are generally close to geometry, so they have high locality.
    b. Environment light from the sky (sun and skydome), which is distant and affects the whole scene; it's global lighting.
    Most attempts treat both kinds of light as "roughly" similar and put them in the same structure, which generally encompasses as much of the scene as possible.

    The most used data structure for RTGI seems to be the irradiance volume (and variants), that is, a 3D texture with light information inside. It's very cheap to evaluate; it's basically a lookup based on world position. The difficulty comes from injecting and updating the light, which is where most techniques differ (cone tracing, reflective shadow maps, voxelization, etc.).
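
    A rough sketch of the cheap lookup side only (the hard injection/update pass is exactly what those techniques differ on). It assumes a Texture3D of irradiance covering an axis-aligned world-space box; all names are made up:

    Code (CSharp):
    using UnityEngine;

    public class IrradianceVolume : MonoBehaviour
    {
        public Texture3D irradiance; // baked or injected elsewhere
        public Vector3 volumeMin;
        public Vector3 volumeMax;

        void OnEnable()
        {
            // The whole "lookup" is normalizing world position into the
            // volume's 0..1 range; shaders then do one tex3D fetch:
            //   float3 uvw = (worldPos - _VolumeMin) * _VolumeInvSize;
            //   float3 gi  = tex3D(_Irradiance, uvw).rgb;
            Vector3 size = volumeMax - volumeMin;
            Shader.SetGlobalTexture("_Irradiance", irradiance);
            Shader.SetGlobalVector("_VolumeMin", volumeMin);
            Shader.SetGlobalVector("_VolumeInvSize",
                new Vector3(1f / size.x, 1f / size.y, 1f / size.z));
        }
    }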

    But do we need an all-encompassing data structure when lights have high locality? Can we decouple global sky GI from scene-light GI? The great thing about that type of data, however, is that the source of the lighting doesn't matter as long as you can inject it, which makes emissive-texture lights automatic. Global light seems like it can be handled "mostly" through AO.

    I was initially focusing on convex shapes for local light, which let us decompose space such that we can cover all light paths inside each shape, but light is "radial". Convex shapes guarantee that any two points are linked by a single straight line, which satisfies sampling. But if we want to place a minimum number of probes such that they cover all light paths, radial reasoning allows better heuristics. That is, for any centroid in an empty space, treating that centroid like a light, all empty spaces occluded from it, in its "shadow", are good candidates for new sampling points. There must be a way to find the minimum number of useful sampling points that cover all light paths.

    Does it turn the occlusion problem into a pathfinding/propagation problem? Essentially, we are finding only the occluded space that can potentially receive some bounce, using "shadow" (aka occlusion) as a heuristic. "View" is itself an occlusion/pathfinding and radial problem; if we can find the path from light to view (that is, the space overlap), does that give an RTGI solution that only solves for the view?

    For example, let's say we use voxels as a space partition. We can aggregate empty voxels into bigger boxes to simplify the structure; the centroids of the boxes would be the optimal placements for capturing all light paths. We can then select the biggest box and expand over the neighbors radially reachable from its centroid, forming a radial unit. What remains are the boxes occluded from that centroid's point of view, which is where we would optimally place new sampling centroids. Using a local light as the starting centroid might also help; we basically terminate when the light path inside each radial unit exceeds the original light's radius. (A toy sketch of the aggregation step is below.)
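
    A toy sketch of just that first aggregation step: greedily merge empty voxels into axis-aligned boxes and emit each box's centroid as a candidate sampling point. The radial-unit/occlusion pass is not shown, and all names are made up:

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class EmptySpaceBoxes
    {
        // solid[x, y, z] == true means the voxel is occupied by geometry.
        public static List<Vector3> Centroids(bool[,,] solid)
        {
            int nx = solid.GetLength(0), ny = solid.GetLength(1), nz = solid.GetLength(2);
            bool[,,] used = new bool[nx, ny, nz];
            var centroids = new List<Vector3>();

            for (int x = 0; x < nx; x++)
            for (int y = 0; y < ny; y++)
            for (int z = 0; z < nz; z++)
            {
                if (solid[x, y, z] || used[x, y, z]) continue;

                // Grow the box greedily along +x, then +y, then +z while
                // every voxel in the next slab is empty and unclaimed.
                int ex = x; while (ex + 1 < nx && Free(solid, used, ex + 1, ex + 1, y, y, z, z)) ex++;
                int ey = y; while (ey + 1 < ny && Free(solid, used, x, ex, ey + 1, ey + 1, z, z)) ey++;
                int ez = z; while (ez + 1 < nz && Free(solid, used, x, ex, y, ey, ez + 1, ez + 1)) ez++;

                for (int i = x; i <= ex; i++)
                for (int j = y; j <= ey; j++)
                for (int k = z; k <= ez; k++)
                    used[i, j, k] = true;

                centroids.Add(new Vector3((x + ex) * 0.5f, (y + ey) * 0.5f, (z + ez) * 0.5f));
            }
            return centroids;
        }

        static bool Free(bool[,,] s, bool[,,] u, int x0, int x1, int y0, int y1, int z0, int z1)
        {
            for (int i = x0; i <= x1; i++)
            for (int j = y0; j <= y1; j++)
            for (int k = z0; k <= z1; k++)
                if (s[i, j, k] || u[i, j, k]) return false;
            return true;
        }
    }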

    Which led me to discover this mathematical problem, which (or a variant of it) may or may not be useful:
    https://curiosity.com/topics/the-il...-a-well-known-mathematical-problem-curiosity/

    1. I was a bit uncomfortable with the dismissal of long baking times, for some reason, but I couldn't fully see the rationale until now.
    a. Baking is usually applied to the whole level (global), at a "relatively" high resolution and high precision.
    b. Runtime is generally focused on "seen" geometry, with resolution falling off from close to far. That is, it's local; we don't care about the whole level. It also uses a lot of approximations and shortcuts.
    c. This does not make baking methodology any more useful at runtime, because if the baking time is 10h, even 1% of it, applied locally, isn't fast enough. What it does is recast the problem slightly differently.
     
  19. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    451
    Those are basically the same. By far the largest issue to solve for realtime-ish global illumination is light transport. Light transport basically means "how much light moves from position x to position y (indirectly)". If you're not trying to solve light transport every frame, you have already baked all lights into your geometry and nothing is dynamic (until rebaking).

    >""View" is itself a occlusion/pathfinding and radial problem, if we can find the path from light to view (that is the space overlap), does that make a solution from RTGI that only solve for view?"

    That seems correct. However, solving "light to rendered pixel" is what shadow maps already do. You need to solve "light to random wall to rendered pixel", preferably with multiple random-wall steps. That makes the path search space massive.

    >" Global light seems like it can be handled through "mostly" AO."

    That is only a good approximation if your "indoor" environments are small with lots of openings (small-ish houses). If you want a (large) cave or hangar that gets dark inside, this does not work.


    "Local light realtime global illumination" seems semi-solved. If you can live with some constraints (~10 meter radius max, light transport is delayed/slow, takes a real lot of memory <or> limited view range) - that seems to describe HXGI as it is/was working. It is however very hard to scale up (since radius * 2 == volume * 8).

    Giving the volumes multiple levels of detail (either sparse voxels or overlapping larger grids) probably helps a bit performance-wise, but the complexity, and therefore the glitches, go up a lot.

    "Skylight realtime global illumination" seems a lot harder still. The average distance of light travelling goes up 50x+. The average "amount" of light sources goes up 50x+ (the entire sky). So as you said, you'll likely need special handling for this compared to "local" lighting.

    And another fun challenge: if you spread anything temporally, make sure you can still sample reasonably in geometry-sparse regions for (quickly) moving objects.

    (I really should make more coherent posts)
     
  20. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    Yeah, another assumption was static-only geometry (for now).

    The thing is, I think we can just sample the skydome's direction (assuming it's encoded in something like a cubemap) and use AO to mask out where it doesn't contribute (which is how we do it now). Except in this case there is no bounce; but then we move from global to local, and even there we can use cubemaps/probes with extra data (like depth) to get a local environment (basically what we already do).

    I think it can be "solved", at a cost, with a refreshed local probe (basically we reproject the probe back onto the environment, then use the result as new input to accumulate into the probe). And since we know which "spaces" are occluded (rather than which geometry), we can start propagating the sampling volume and finding new centroids based on estimated energy spent (that is, a certain penetration into the occluded region is possible, based on how light loses power per bounce).
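
    A toy version of that refresh loop (ReflectionProbe.RenderProbe is real Unity API; the cadence and the one-extra-bounce-per-refresh reading are my assumptions, and the probe's refresh mode must be set to ViaScripting):

    Code (CSharp):
    using UnityEngine;

    public class ProbeFeedback : MonoBehaviour
    {
        public ReflectionProbe probe;
        public int framesBetweenRefreshes = 8; // let lighting settle in between

        void Update()
        {
            // Each capture sees the scene as currently lit, which already
            // includes the previous capture's contribution, so every
            // refresh accumulates roughly one more bounce of indirect light.
            if (Time.frameCount % framesBetweenRefreshes == 0)
                probe.RenderProbe();
        }
    }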

    Essentially, I'm moving the focus from geometry (aka surfaces) to the space between them (aka a set of potential paths). GI is generally thought of as surfaces emitting and receiving, which is a hard problem, because rays are basically random from the geometry's point of view. But space is basically a bunch of potential rays, so by manipulating space you are manipulating the set of all paths as one bundled primitive (a radial space); it simplifies things, since the surfaces are implicit in the boundary of that space.
     
  21. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    41
    I'm not ruling out a potentially new paradigm; however, I suspect that what you gain in a simple volume description, you lose in the complexity of what it contains. A large sphere of empty space could contain a complex set of light rays with varying direction, colour, intensity, etc.

    I do find the general concept of global vs local lighting interesting. You could have lights with a large sphere of influence represented as a very coarse grid, as their effect will be very diffuse at great distances, and smaller lights represented as fine but sparse grids that would also give higher quality reflections.
     
  22. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    451
    This really sounds like what Lexie has done here with HXGI: use a volume with light sources injected into it, and propagate that lighting frame by frame instead of calculating the entire light state in a pixel shader during the final render. It limits light complexity, since you only store a fixed amount of lighting per voxel, and thus improves performance. But there are probably problems with overlapping multi-colored/directional lights.
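
    (A toy scalar version of such a propagation step, just to illustrate the frame-by-frame spreading; real volumes of this kind, HXGI's included, store directional data per voxel, and this sketch is my own simplification. Inject lights into src each frame, call this, then swap src/dst:)

    Code (CSharp):
    public static class LightVolume
    {
        // solid[x, y, z] == true means the voxel blocks light.
        public static void Propagate(float[,,] src, float[,,] dst, bool[,,] solid)
        {
            int nx = src.GetLength(0), ny = src.GetLength(1), nz = src.GetLength(2);
            for (int x = 1; x < nx - 1; x++)
            for (int y = 1; y < ny - 1; y++)
            for (int z = 1; z < nz - 1; z++)
            {
                if (solid[x, y, z]) { dst[x, y, z] = 0f; continue; }
                // Average the 6 neighbors with a loss factor; solid voxels
                // contribute nothing, which is how walls block the spread.
                float sum = src[x + 1, y, z] + src[x - 1, y, z]
                          + src[x, y + 1, z] + src[x, y - 1, z]
                          + src[x, y, z + 1] + src[x, y, z - 1];
                dst[x, y, z] = 0.9f * sum / 6f;
            }
        }
    }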
     
  23. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    3,067
    I'm not sure it's a new paradigm though; I think the term is "light field", and we have been doing it in many forms for some time:
    1. Filtered cubemaps or SH probes effectively gather the "light field" between geometry into a single "view" point.
    2. The G-buffer translates geometry data into light-friendly data, the depth buffer being the representation of the geometry in that structure; it's precise enough that you could run physics on it.
    3. You can combine 1 and 2 to get a complete scene representation from a single point.
    4. By multiplying view samples you can get the entire scene represented with all its data. It's basically an alternative, complete representation from the light-data perspective.

    It does have limitations:
    1. 3D geometry is potentially "infinitely" precise, being vector data. View-based light fields are rasterizations and are resolution dependent, and the resolution of detail falls off with distance, which means you need multiple samples at multiple places to resolve details (hence arrays of probes are necessary).
    2. Being view based, unless the space is convex you don't have parallax information. With convexity and distance information, you can actually modulate the view from its origin and reconstruct a new view (this observation led to the box-projected cubemap trick). Doing so with a concave shape introduces missing data at the parallax; that's the visibility/occlusion problem.

    My idea comes from observing point 2: parallax information is akin to shadow if you treat the view as emitting light. So in order to reconstruct that data I need to sample in that "view shadow", and we can find where that data lives using shadow-map techniques; instead of computing the shadow map at the fine detail of textures, we do it on a volume decomposition of space, which is potentially coarser and therefore faster. Since lights are akin to views, we can find the potential set of influence of a light by chaining the occlusion of each sampling point until the ray potential is exhausted. And we can also limit computation to the joint set of view and light sampling occlusion, to avoid contributions that simply aren't seen. I'm assuming that computing occlusion is faster than computing GI bounces, so we are potentially dramatically reducing the data needed to calculate contributions toward the camera.

    But that's just a hypothesis for now. Thanks for the discussion, it made things clearer for me too lol. And potentially it can evolve into a new way to represent geometry without keeping the "geometry data", if we can bake it into the "light field". lol
     
  24. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    166
    Lexie, you do know that I will buy this the moment it is released, right?

    btw, you should really add HDRP support (if it is not planned)
     
  25. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    533
    I'm not touching HDRP until it's out of preview/beta; there have already been major overhauls of it over the last few updates. It is not stable enough to build anything on, IMHO.
     
    hopeful likes this.
  26. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    166
    Maybe the camera related stuff will change a bit more but for GPU compute shader related things, it won't change that much. But in general I agree with you :D