
HXGI Realtime Dynamic GI

Discussion in 'Tools In Progress' started by Lexie, May 24, 2017.

  1. OP3NGL

    OP3NGL

    Joined:
    Dec 10, 2013
    Posts:
    267
    The best solution to all procedural level problems, but it's not really ready for use yet *sobs*
     
    tapawafo and PhilippG like this.
  2. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    Working on it.
     
    IronDuke, MitchC, PhilippG and 3 others like this.
  3. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    Yeah I've seen this, it's more for path tracing though. My GI is calculated independently of what the camera is looking at, so I can't really apply much from this paper. Also, it takes 10 ms to calculate this on the newest GPU, so I don't think it's really ready for games.
     
    one_one, Howard-Day and hopeful like this.
  4. Yuki-Taiyo

    Yuki-Taiyo

    Joined:
    Jun 7, 2016
    Posts:
    72
    Please, can you tell me whether performance could be optimized by clipping emissive meshes, so they don't generate light beyond a given distance?
     
    strich likes this.
  5. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    The lighting is only calculated within a bounds volume. If you have a top-down game, for example, that doesn't need very much height, you can make the bounds smaller on the Y axis to speed up the propagation part of the GI.
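
    For example (purely illustrative, using made-up component and field names rather than HXGI's actual API), shrinking only the Y extent of the bounds for a top-down level might look something like this:

    ```csharp
    using UnityEngine;

    // Hypothetical sketch: nothing here is a real HXGI field, it just stands in for
    // whatever size the GI volume ends up using. The point is that propagation cost
    // scales with the voxel count, so a flatter bounds is cheaper at the same XZ coverage.
    public class TopDownGIBounds : MonoBehaviour
    {
        public Vector3 levelCenter = Vector3.zero;
        public Vector2 levelSizeXZ = new Vector2(128f, 128f);
        public float playableHeight = 16f; // much smaller than the horizontal extent

        public Bounds GetGIBounds()
        {
            return new Bounds(levelCenter,
                new Vector3(levelSizeXZ.x, playableHeight, levelSizeXZ.y));
        }
    }
    ```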
     
    nxrighthere, Martin_H and Adam-Bailey like this.
  6. Yuki-Taiyo

    Yuki-Taiyo

    Joined:
    Jun 7, 2016
    Posts:
    72
    Actually it's for an FPS with random level generation -- and split-screen multiplayer (I'm anxious about realtime GI with 4 cameras... will it hurt performance?)
     
  7. Ogdy

    Ogdy

    Joined:
    Dec 26, 2014
    Posts:
    21
    If the scene is not too large, so that the same voxelization can be shared by everyone, I don't see any problem. Maybe in the future it will even be possible to handle cascades from different cameras.

    4 Cameras in split screen means the same amount of pixels to render, so it's the same as having a single camera.
    However, I can't really tell for more advanced effects such as his reflections implementation.


    By the way, it has been a while since we last got any news :p. I hope he hasn't abandoned the project :eek:
     
  8. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    2,657
    He's currently busy with another project (it might be worth checking his Twitter for screenshots), so HXGI development is on hold for now.
     
    Ogdy likes this.
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I think it's the voxelization that costs the most, i.e. the geometry; the geometry is split across 4 locations with 4 cameras, so it might not be the same at all.

    Cross-posting from the SEGI thread:
    Right now I don't know exactly how HXGI does its stuff at all; the closest thing I know of that matches his description is this WebGL GI experiment:
    http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/
     
    Mauri likes this.
  10. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    Depends on how large the area is. There are two parts to the GI: a GI volume, and then a camera script that samples a GI volume.

    The GI volume handles all the voxelization, cascades and light propagation.
    The camera samples the computed GI data and calculates the specular contribution.

    If the level can be contained within a single GI volume then you could share it across all 4 cameras. It wouldn't really add any extra cost, as you're rendering 4 lower-res cameras (same amount of pixels).

    If you need to give each camera its own volume then you will run into performance issues and probably use up most of your VRAM.
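
    In rough pseudo-Unity terms (the class names below are invented for illustration, not the asset's real components), the split looks like this:

    ```csharp
    using UnityEngine;

    // One volume does the heavy lifting; each split-screen camera just gets a cheap
    // sampler that points at the same volume. (Stand-in names, not HXGI's actual API.)
    public class GIVolume : MonoBehaviour
    {
        public Bounds bounds;
        // voxelization, cascades and light propagation would all live here
    }

    public class GICameraSampler : MonoBehaviour
    {
        public GIVolume volume; // assign the single level-wide volume to all 4 cameras

        // This component only reads the already-computed GI data and adds the specular
        // contribution for its own camera, so 4 low-res views cost roughly the same as
        // one full-res view (same total pixel count).
    }
    ```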
     
    Yuki-Taiyo and neoshaman like this.
  11. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    If you are smart about your voxelization then that step doesn't cost that much. If you want good performance it is recommended to use proxy geometry rather than voxelizing all the high-poly meshes. The issue with having multiple GI volumes is VRAM. For performance, you could do things like only update one volume per frame.
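
    A minimal sketch of the "one volume per frame" idea (the GIVolume type and its Revoxelize method are made up for illustration, as in the earlier sketch):

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    // Round-robin scheduler: each frame only one GI volume is refreshed, so the
    // per-frame cost stays roughly constant no matter how many volumes exist.
    public class GIVolumeScheduler : MonoBehaviour
    {
        public List<GIVolume> volumes = new List<GIVolume>();
        int next;

        void Update()
        {
            if (volumes.Count == 0) return;
            volumes[next].Revoxelize();        // refresh just this volume now
            next = (next + 1) % volumes.Count; // the others keep their last result
        }
    }

    public class GIVolume : MonoBehaviour
    {
        public void Revoxelize()
        {
            // stand-in for re-running voxelization / propagation for this volume
        }
    }
    ```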
     
    Yasunokun and neoshaman like this.
  12. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    After doing a lot of testing with the cascade version I have come to regret the direction I took this project in.
    One of the issues cascades introduced was that emissive surfaces start acting really awful outside of the first cascade. This is because the surface gets turned into a giant light-emitting voxel even if the light source was small. For example, a tiny 0.5m cube ends up being a 4m voxel... This leads to a lot of excess light.

    Having the world scale up to large voxels really doesn't work very well on any large complex indoor scenes, which is basically my whole game. It starts introducing a lot of the same problems you get with a cone-traced method. I really don't want to limit the draw distance of my game to 1-2 cascades.

    So instead of using cascaded light volumes to represent the GI, I'm going to switch over to using a sparse data structure. I really should have done this first...
    Rather than having a grid of equal distribution, the system will instead distribute more detail to areas near surfaces (where it's most needed). This means that the lighting will be the same quality right near the camera as it is 250m+ away.

    Another great benefit of using a sparse structure is that it will take up a lot less VRAM compared to the cascade version. With this extra VRAM I should be able to store the world as quads rather than voxels. This means each face can have different color / emission properties rather than averaging everything to a cube.
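
    To make the cube-vs-face distinction concrete, here's a rough guess at the kind of layout change that implies (field names are mine, not the actual HXGI data layout):

    ```csharp
    using UnityEngine;

    // Old style: one averaged value for the whole voxel, so a red book next to a white
    // wall smears into pink.
    public struct VoxelCell
    {
        public Color averagedAlbedo;
        public Color averagedEmission;
    }

    // New style: one small record per occupied face, so each side of the voxel keeps
    // its own albedo and emission. Only occupied faces are stored in the sparse structure.
    public struct VoxelFace
    {
        public Color albedo;
        public Color emission;
        public byte faceAxis; // 0..5 => +X, -X, +Y, -Y, +Z, -Z
    }
    ```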

    Another benefit of the smaller VRAM footprint is that I can actually do progressive voxelization now. The system will only re-voxelize small areas at a time, cutting down the cost of the voxelization step. There will be a way to move chunks to the front of the queue, as well as settings for how many chunks per frame can be voxelized. It would have taken up too much VRAM to do this correctly before.
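
    A minimal sketch of how such a re-voxelization queue could be driven (all names and the per-frame budget are illustrative; the real work happens on the GPU):

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    // Dirty chunks go into a queue, a small per-frame budget drains it, and callers can
    // push an urgent chunk to the front. "VoxelizeChunk" is a placeholder.
    public class ProgressiveVoxelizer : MonoBehaviour
    {
        public int chunksPerFrame = 2;                       // tunable budget
        readonly LinkedList<Vector3Int> queue = new LinkedList<Vector3Int>();

        public void MarkDirty(Vector3Int chunk, bool urgent = false)
        {
            if (urgent) queue.AddFirst(chunk);               // "move to the front of the queue"
            else queue.AddLast(chunk);
        }

        void Update()
        {
            for (int i = 0; i < chunksPerFrame && queue.Count > 0; i++)
            {
                var chunk = queue.First.Value;
                queue.RemoveFirst();
                VoxelizeChunk(chunk);                        // only this small region is rebuilt
            }
        }

        void VoxelizeChunk(Vector3Int chunk)
        {
            // stand-in for the real GPU voxelization of one chunk
        }
    }
    ```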

    The downside of using a sparse data structure is that it will be more expensive to sample the light data in the final gather, as it needs to traverse the data structure to find the GI data for lighting the fragment. I'll also have to defrag the memory manually, but the speed of being able to do progressive voxelizations should more than make up for it.

    If I continue to use light propagation to calculate the GI, the sparse data structure should increase the speed of light, as a cell can vary in size. The way it currently calculates GI, the light moves like sound waves, 1 cell at a time. If an area has no voxels near it, it will be a large cell, so the light will have traveled further in that update step.
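
    Toy numbers to illustrate that point (made up, just to show how the update count relates to cell size):

    ```csharp
    using UnityEngine;

    // Propagation advances one cell per update, so the number of updates needed to
    // cross a span depends on how many cells represent it, not on the distance alone.
    public static class PropagationSteps
    {
        public static int UpdatesToCross(float distanceMeters, float cellSizeMeters)
        {
            return Mathf.CeilToInt(distanceMeters / cellSizeMeters);
        }
    }
    // e.g. UpdatesToCross(16f, 1f) == 16 with uniform 1 m cells,
    // but  UpdatesToCross(16f, 8f) == 2 if the sparse tree collapses the empty span into 8 m cells.
    ```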

    I actually have an idea for a new method for calculating the GI progressively; it should be able to calculate 1 bounce per frame, rather than updating the light 1 voxel per frame. It should be able to capture the skybox lighting and also have a lot better indirect shadows. The cost of calculating the bounce lighting from shadow-casting lights should also be a lot cheaper! Overall it should increase the quality of the GI to something similar to light baking. But this part is just a concept right now; looking forward to trying it out.

    My side project is just about finished, so I should be getting back to working on this mid next week.
     
    Last edited: Sep 30, 2017
  13. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    I've been very interested in your project; sad to see the cascades didn't really work out.

    What type of sparse structure are you thinking about? SVO, brickmaps, KD-trees or something else?
     
  14. brisingre

    brisingre

    Joined:
    Nov 8, 2009
    Posts:
    277
    Welcome back!

    What's the side project you've been working on?
     
  15. TooManySugar

    TooManySugar

    Joined:
    Aug 2, 2015
    Posts:
    864
    Cool! I like the almost light-baking quality, because this could lead in the end to super fast lightmap baking. :D
     
  16. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    I'll be using a method similar to OpenVDB. So far that style of adaptive voxel tree has the best compression vs access cost.

    Maybe.
     
    Zuntatos likes this.
  17. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    Can't talk too much about it, but it will be releasing within a few months. I'll share the details about it and how it was accomplished after it's released. It basically forced me to learn how compute shaders really work.
     
    brisingre and hopeful like this.
  18. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    I was able to get a few hours to dedicate towards testing the new way of calculating the data.
    So far the numbers I'm getting suggest it will be a feasible idea. It's looking more likely that it will take 2-4 frames to calculate 1 light bounce for a chunk. That is still a lot faster than using LPV to calculate the lighting (old method). But there are a lot of optimizations still to come that may get it down to 1 frame per bounce.

    Here is an example of what the voxel data looks like now. Rather than being stored as voxels, they are stored as quads. This means it will be possible to have different surface materials for each face of the voxel. The actual normal of the real surface is still stored as well. Before, light could only bounce along 1 of 6 directions (along each axis); storing the real surface normal will allow the light to bounce correctly off the surface even though it's a voxel.
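
    Roughly what that buys you, as a sketch (not the actual shader code): with the true normal stored per face, the diffuse bounce can use a proper Lambert term instead of snapping to one of the 6 axis directions.

    ```csharp
    using UnityEngine;

    // Illustrative only: weight the bounced light by the stored true surface normal,
    // so light leaving a sloped surface behaves correctly even though the geometry
    // lives in an axis-aligned voxel.
    public static class VoxelBounce
    {
        public static Color DiffuseBounce(Color albedo, Color incomingRadiance,
                                          Vector3 lightDir, Vector3 trueSurfaceNormal)
        {
            // standard Lambert term, evaluated with the stored true normal
            float nDotL = Mathf.Max(0f, Vector3.Dot(trueSurfaceNormal, -lightDir));
            return albedo * incomingRadiance * nDotL;
        }
    }
    ```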



    The new method should also make it easier to calculate transparency and how it contributes to GI.
     
    Last edited: Oct 2, 2017
  19. tweedie

    tweedie

    Joined:
    Apr 24, 2013
    Posts:
    311
    Your pace of development is always impressive. Looking forward to seeing how this goes :)
     
    hopeful likes this.
  20. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Followed your twitter. Looks like you're pretty much a compute shader expert by now haha.

    These new ideas look pretty exciting. Definitely looking forward to seeing more.
     
  21. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    I haven't been able to share too much about this project because it's a surprise. But the whole game loop runs on the GPU, as it needed crazy multithreading to be possible.

    Looking forward to sharing more info on how its done once it's public.
     
    Martin_H likes this.
  22. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Love the idea of an entire game loop on the GPU haha. What is the CPU doing? Maybe it can provide some extra graphical effects to complete the irony :)
     
    Martin_H likes this.
  23. strich

    strich

    Joined:
    Aug 14, 2012
    Posts:
    346
    Lexie, do you think you might be able to provide an ETA on when a beta could be made available, given the new direction?
     
  24. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
    It sounds like there is still quite a bit of road left

    keep-calm-it-is-coming-when-it-s-ready.png
     
    xVergilx, TooManySugar and hippocoder like this.
  25. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yeah, please chill, people. I'd hate to see any childish behaviour (which can happen, judging by the SEGI thread and various terrain threads), so I keep an eye out. Everyone make a cup of tea or something :)
     
    code-blep and Martin_H like this.
  26. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    Just curious, what is the advantage of the OpenVDB storage type over an SVO?
     
  27. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,001
    Make a cup of tea on International Coffee Day? Heresy! :p
     
    hippocoder likes this.
  28. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    They contain more data per node. An octree contains 8 children per node, whereas the OpenVDB version contains 16. Overall they are pretty similar.

    Really hard to say. Our old engine/graphics programmer has rejoined the team (he made custom engines/renderers for Dustforce and Devil Daggers). I'm trying to convince him to work on this lighting with me to speed up the process. This new method for calculating the GI data is actually way simpler than my old one. It's not hard to make; it's just whether or not current/older GPUs are up to the task. It solves so many of the problems my old method had that required lots of hacks that made everything unstable.

    If you are looking at releasing your game within the next 6 months I'd advise against counting on this being a part of your game; actually, I'd advise against anyone counting on this being a part of their game. Long-range realtime GI with little to no light bleeding is a problem I have to solve for my game. I might find some optimizations that work only for games with similar requirements to mine, making this lighting useless in other situations.

    For example, my world is fairly static but gets generated at runtime. It might turn out a fast pre-computed step is needed, thus imposing similar runtime restrictions to Enlighten. Also, our level data gets completely optimized to the lowest possible triangle count. It might turn out that you also need really optimized proxy mesh data.

    I don't want to have restrictions like that, as I do have destructible objects in my game. I'm just pointing out there are optimizations I can make that will only work for my game and might render this useless to you.

    We chill

    Networking and sound mostly. I haven't bothered looking at the CPU load, but we do play a lot of sounds.
     
    Last edited: Oct 1, 2017
    Reanimate_L, brisingre and hippocoder like this.
  29. TooManySugar

    TooManySugar

    Joined:
    Aug 2, 2015
    Posts:
    864
    Yes, please stop asking for a beta release. This already happened with SEGI: people begging for a beta and then coming to cry when things go a little south. IMO this should not go public until it's pretty much at the final stage.
     
    hopeful likes this.
  30. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,785
    Ah I see, I've never worked with voxels. Thanks for the info :)
     
  31. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    2,657
    Yeah, maybe people - who work on a realtime GI solution in the future - should stop posting their WIP, if it's still waaaay far away from being finished. This way people won't get hyped and thus won't get disappointed.
     
  32. chiapet1021

    chiapet1021

    Joined:
    Jun 5, 2013
    Posts:
    605
    I don't know, it's a damned if you do, damned if you don't situation. It's important for Lexie to post early enough to gauge interest so he can decide whether it's worth it for him to release his solution and support the community. Obviously, that also means people will clamor for the solution to be released right away as well, even with it being in a non-final state.

    I think it's good to post progress to let us know how things are progressing, and I appreciate Lexie being transparent about not committing to a release until he's certain that's the decision he wants to make. I think the onus is on us as a community to help temper expectations with those who are overly eager or impatient to try it out before it's ready.
     
    SteveB and buttmatrix like this.
  33. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    2,657
    I'm sure that, after a few failed/never-released ones (such as this candidate) and after SEGI got released, it's safe to say the interest is still there. This factor shouldn't be a problem at all.
     
  34. chiapet1021

    chiapet1021

    Joined:
    Jun 5, 2013
    Posts:
    605
    It’s all relative, though. How much interest could inform not only whether to release the asset, but how to price it, what features to focus on, etc. And sharing progress early can help prioritize the early feature set, rather than just making assumptions on interested customer use cases.

    There are pros and cons to both approaches (talk about it super early or wait until right before you’ll release it). I don’t know that either is right or wrong. It’s just what the developer chooses to do that makes the most sense to them.
     
    SteveB likes this.
  35. SteveB

    SteveB

    Joined:
    Jan 17, 2009
    Posts:
    1,451
    Well, as with any product (and we can't just rely on past experience at every given moment), customer awareness and interest has to be built, and that can't be done at the last minute (Sega Saturn).

    Lexie is showing such enthusiasm and continued progress that clearly the intent is there, and the talent is very much there.
     
    TooManySugar and chiapet1021 like this.
  36. TooManySugar

    TooManySugar

    Joined:
    Aug 2, 2015
    Posts:
    864
    I didn't say that. The WIP forum is exactly for that: showcasing stuff you're working on. But the fact that it is WIP is simply incompatible with a release, even more so if the thing in progress is a GI solution.
     
  37. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    My contribution and interactions with this world are through my work. I would rather share my travels as I go than forget the road that got me there. For me that is sharing my WIP.

    Knowing when something as complex as realtime GI is close to finished is pretty hard to gauge; I thought I was close when I posted it. As I expanded the GI to support the features I needed for my game, I realized that the view distance wasn't going to be enough, even when using the max volume size with 1m voxels.

    I tried cascades and they worked great for outdoor scenes. But when putting it back into my game, the light bleeding and the emissive surfaces losing all detail weren't acceptable for me.

    I understand that for a lot of people/projects what I have now is probably fine; even the first version without cascades was probably fine. It must be frustrating to see a GI system working well enough for your game and the creator saying they are throwing it out and starting again. With a few weeks' work I could probably get the cascade system cleaned up for others to use. But that is a lot of time for me. I work full-time on this stuff. Dedicating two weeks' work destroys my burn rate (financially and mentally), as I'm not making progress on the things that matter to me.

    Gauging the community's interest is really important for me. I understand that there is a huge need for realtime GI, but at the prices people expect for assets on the Asset Store, it's hard to make money from it. People are fine spending $10 on an asset that took a day to make, but anything over $100 is seen as too expensive even if it took 2 years of R&D.
    I'd have to spend x hours a week supporting it and answering emails. I value money disproportionately less than my time. So for it to be worth releasing, it needs to make a lot more than what that time away from what I want to do is worth.

    "Then just put it on git in its current state if money isn't important to you"
    My name is attached to this. I'd rather make a real impact rather then another git page that dies. Also I'm currently in a state where I can value my time more then money, But as I burn through my savings that opinion will change out of necessity. So I cant just give away parts of my research for free.
     
  38. Mauri

    Mauri

    Joined:
    Dec 9, 2010
    Posts:
    2,657
    I have to admit that is a good read. And, maybe you're right...

    I never said the opposite. In fact, I have much respect for what Lexie achieved/achieves. Clearly something not all of us could come up with. But with showing off such stuff comes anticipation; people get hyped... It's like games, really.

    Anyhow, I won't post any further here on this thread. Good luck.
     
    Last edited: Oct 3, 2017
    SteveB likes this.
  39. SteveB

    SteveB

    Joined:
    Jan 17, 2009
    Posts:
    1,451
    ...sure it's like any product in existence. You'll never stop people getting excited, but then again that's the goal of showing progress.

    I like being excited, and of course I've dealt with disappointment many times...I'd rather just be the former and not worry about the latter. :)
     
  40. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    I'd love to give the current state a try; my project is procedural top-down, with no far view distances needed. I'd also be willing to spend $100+ on this. Bleeding would be an issue though.
     
    Mauri likes this.
  41. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    You're welcome to comment here, Mauri.

    I got a little more time to work on the new voxelization. Because the data is stored a little differently than before, the merging of face data is a little harder to do. It's also really important to keep the face count as low as possible, so merging unnecessary faces is very important.

    On the left is a bunch of books that voxelize to a lot of faces. On the right are the merged values. This should remove around 10-50% of unnecessary faces depending on your scene. This will speed up the GI as well as keep the VRAM usage down.
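
    The merge itself is conceptually similar to greedy meshing; a very reduced 1D sketch of the idea (not the actual GPU implementation):

    ```csharp
    using System.Collections.Generic;

    // Walk a row of voxel faces that share a plane and collapse runs with identical
    // values into one wider quad. Real merging has to respect material/emission
    // differences, or it would introduce the bleeding mentioned elsewhere.
    public struct Face { public int start, length; public uint packedColor; }

    public static class FaceMerger
    {
        public static List<Face> MergeRow(uint[] rowColors) // one colour id per 1-voxel face
        {
            var merged = new List<Face>();
            int i = 0;
            while (i < rowColors.Length)
            {
                int j = i + 1;
                while (j < rowColors.Length && rowColors[j] == rowColors[i]) j++;
                merged.Add(new Face { start = i, length = j - i, packedColor = rowColors[i] });
                i = j;
            }
            return merged;
        }
    }
    ```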

    Storing the data in a sparse data structure brought the VRAM usage for the voxel data down to 0.146 MB for this scene. The old system at the same resolution would have needed 2 MB just to store it per voxel, and 12 MB to store each face value the old way. This is why having directional face data was out of the question before.


    Note that normal snapping and position snapping have been turned off to better illustrate the books.
     
    Last edited: Oct 3, 2017
    tapawafo, SteveB, chiapet1021 and 3 others like this.
  42. scheichs

    scheichs

    Joined:
    Sep 7, 2013
    Posts:
    73
    Hey Lexie, just as a monetization idea: why not split HXGI into a Light version (e.g. without cascades) and a Pro version with cascades? Once the Pro version is ready you could give the Light users the chance to upgrade to Pro for the price difference, or something like that. Wouldn't that be a win-win for you and interested people?
     
    PhilippG likes this.
  43. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
    s̶a̶v̶a̶g̶e̶ ̶+̶1̶
     
    Last edited: Oct 3, 2017
  44. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Very cool Lexie. That is a massive VRAM difference!

    Even if nothing releases I still enjoy reading about your findings because I will need to find a way to do lighting for my game one way or another and understanding various techniques is very useful. But I'd also drop some cash for your efforts...

    Here's a thought, looking at your pics...
    Assuming a sparse voxel structure benefits when larger level volumes are used instead of going down to the smallest detail volumes, this structure leaves the door open to 'lossy compression' in a way, doesn't it? What I mean is, if you round off voxel face colours so very similar colours end up being the same, you'll end up with larger faces. If you can control the rounding intervals, then you have a colour quality slider that trades colour precision for performance.
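
    For concreteness, the rounding step I'm imagining would be something like this (purely a sketch of the idea, not anything in HXGI):

    ```csharp
    using UnityEngine;

    // Quantize face colours to a step size before merging, so near-identical neighbours
    // become exactly identical and can collapse into larger faces. Coarser steps mean
    // fewer faces (faster) at the cost of colour precision.
    public static class ColorQuantizer
    {
        public static Color Quantize(Color c, float step) // e.g. step = 1f / 32f
        {
            return new Color(
                Mathf.Round(c.r / step) * step,
                Mathf.Round(c.g / step) * step,
                Mathf.Round(c.b / step) * step,
                c.a);
        }
    }
    ```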

    An interesting thought?
     
  45. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    That was not meant to be a dig at anyone...

    So the sparse voxel data allows me to make all the voxel data in the world the same size. Even with a large world the size should be pretty small, as it's only storing the face data rather than wasting VRAM on empty space.

    It needs to retain as much detail as possible about the overall structure of the world. I'd have to limit merging so that it doesn't mess with the world representation (otherwise you get bleeding); that might not be worth the trade-off, as it's a more complex merge than the one I do currently. But it's an idea for sure. I'm looking into how virtual point lights are placed and merged to see if it's something I could apply to this voxel data. We'll see how it goes.

    Yes, I've thought about this. It's kinda hard to separate everything though; it's all very intertwined.
     
  46. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Interesting. It goes against my intuition on this kind of data structure. It's like octrees, right? Thinking in terms of 2D quadtrees for a moment... If you hit a tile that says "you need to check my children", then you need to drill down. But if all four children are orange, doesn't the parent quad say "don't bother checking children, I'm orange" when checked? The resolution in this case is actually the same, right?

    Well I guess I just have a misunderstanding of your solution and should stop distracting you on the forum haha.
     
  47. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    In that version the parent needs to calculate an average color and see how far it deviates from all its children, to decide whether it's worth saying "hey, my children are all orange". Or you could just kneel down and ask the child :)
     
  48. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Ah, I was actually thinking that every colour would be rounded regardless of whether it's in a parent or child node, so the parent would just need to know if all children are the exact same colour, stored during the voxelisation step. But yes, it is trading work in one place for work in another. Not as simple as I first thought.
     
  49. strich

    strich

    Joined:
    Aug 14, 2012
    Posts:
    346
    With this approach I assume dynamic objects such as NPCs aren't affected by the GI?

    Is there much of an impact when modifying the scene at runtime and having to rebuild the merged faces?
     
  50. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    478
    Lexie can confirm, but my understanding is that everything can both contribute to and be lit by the GI. However, due to light propagation (speed of light) limitations causing a ripple effect when GI contributors change/move, you may prefer for fast-moving dynamic objects not to contribute to the GI by being voxelised, but rather just have them be lit by the GI by sampling the light volume created in the voxelisation step.
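
    In rough sketch form (hypothetical names; Lexie's actual setup may well differ), that distinction could look like this on the object side:

    ```csharp
    using UnityEngine;

    // A fast-moving object opts out of voxelization (so it never injects light into the
    // volume) but still receives GI by letting its shader sample the propagated light
    // volume. All names here are stand-ins, not HXGI's real API.
    public class DynamicGIReceiver : MonoBehaviour
    {
        public bool contributeToGI = false; // excluded from the voxelization step
        public Texture lightVolume;         // the propagated GI data (a 3D texture)
        public Bounds volumeBounds;         // world-space area the volume covers

        MaterialPropertyBlock block;

        void Update()
        {
            if (lightVolume == null) return;
            if (block == null) block = new MaterialPropertyBlock();
            // Hand the volume and its bounds to this object's shader; the shader turns
            // the fragment's world position into volume UVWs and samples the GI there.
            block.SetTexture("_GIVolume", lightVolume);
            block.SetVector("_GIVolumeMin", volumeBounds.min);
            block.SetVector("_GIVolumeSize", volumeBounds.size);
            GetComponent<Renderer>().SetPropertyBlock(block);
        }
    }
    ```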

    As a side note, is it correct to keep calling it voxelisation now that it's more like... Quadification? Lol.
     
    Lexie and buttmatrix like this.