HXGI Realtime Dynamic GI

Discussion in 'Tools In Progress' started by Lexie, May 24, 2017.

  1. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Image from tracing the new Sparse Voxel Octree with anisotropic voxel data.



    Although the data doesn't look too different for most of the voxels, the color difference is a lot more noticeable on the RGB test cube. The anisotropic data really shines with emissive surfaces and for the lighting data that will be stored in each voxel.



    In my old system, this cube's emission colors would have been averaged together, creating a white/gray emissive cube. With anisotropic data I'll be able to calculate the lighting from emissive surfaces a lot more accurately.
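
    To make the idea concrete, here is a minimal HLSL sketch of how anisotropic voxel data can be sampled. This is an illustration of the general technique, not HXGI's actual code, and the struct layout is an assumption: each voxel keeps six colors, one per axis-aligned face, and a lookup blends the three faces the sample direction points at, weighted by the squared direction components.

```hlsl
// Minimal sketch of anisotropic voxel sampling (NOT HXGI's actual code;
// struct and layout are assumptions). Each voxel stores six colors, one per
// axis-aligned face; a lookup blends the three faces the sample direction
// points at, weighted by dir^2 per axis.
struct AnisoVoxel
{
    float3 face[6]; // +X, -X, +Y, -Y, +Z, -Z
};

float3 SampleAnisoVoxel(AnisoVoxel v, float3 dir) // dir must be normalized
{
    float3 w = dir * dir; // per-axis weights; they sum to 1 for a unit vector
    return w.x * (dir.x >= 0.0 ? v.face[0] : v.face[1])
         + w.y * (dir.y >= 0.0 ? v.face[2] : v.face[3])
         + w.z * (dir.z >= 0.0 ? v.face[4] : v.face[5]);
}
```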
     
    Last edited: Oct 28, 2018
  2. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    That's fantastic. Dynamic emissive surface lighting is the biggest missing piece of my visuals.

    I made this to test emissive blocks, but without the glow it really doesn't reach its potential.
     
  3. Demhaa
    Joined: Jun 23, 2018
    Posts: 35
    Is it possible to make it colored per voxel vertex? Seems overkill in theory though.
     
  4. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Might be hard to calculate that, but yeah, way overkill. It would double the memory requirements for not much gain.
     
    neoshaman likes this.
  5. sledgeman
    Joined: Jun 23, 2014
    Posts: 389

    I see "Minecraft" here ... :)
     
  6. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I've done some optimizations that sped up the octree generation by 2x and the cone trace function by 2x compared to my old SVO system. I was a little worried to begin with that the SVO would be too slow with the extra work needed to calculate and sample the anisotropic data, but with these new optimizations I have a lot of room to work with for the lighting calculation and sampling.

    There is one more big optimization for the tracing that I have yet to finish. It was one of the main inspirations behind rewriting the SVO from scratch: some extra data is calculated during the octree generation that should give another big speed-up for tracing.

    Once that last optimization is implemented I'll be able to move on to calculating the lighting data, so look forward to some screenshots soon.
     
    Last edited: Oct 31, 2018
    N00MKRAD, ftejada, MaximKom and 3 others like this.
  7. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    That sounds like a huge win. Optimisation news is some of my favourite news to see :D. Looking forward to seeing how your new optimisation idea goes.

    With SEGI, I was able to modify the voxelisation shader to account for my non-standard use of vertex properties. Does HXGI also have a voxelisation shader that could be similarly tweaked? Although I suppose I am prematurely assuming/hoping you will be selling HXGI eventually...
     
  8. neoshaman
    Joined: Feb 11, 2011
    Posts: 6,493
    It's all done on the GPU?
     
  9. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    It looks kinda similar to the SEGI voxelize shader, you just have to account for the anisotropic data writes. It's a pretty simple setup though.

    Yes, it all takes place on the GPU.
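
    For anyone wondering what "accounting for the anisotropic data writes" can look like, here is a hedged sketch; the buffer name and layout are assumptions, not HXGI's or SEGI's actual code. Instead of one color per voxel, the albedo is accumulated into the three face slots the surface normal points toward.

```hlsl
// Hedged sketch of an anisotropic voxel write during voxelization (buffer
// name and layout are assumptions, not HXGI's or SEGI's actual code).
// Albedo lands in the three face slots the surface normal points toward.
// NOTE: last-write-wins for brevity; a real pass needs atomics / an average.
RWStructuredBuffer<float4> _VoxelFaces; // 6 entries per voxel: +X,-X,+Y,-Y,+Z,-Z

void WriteAnisoVoxel(uint voxelIndex, float3 albedo, float3 normal)
{
    uint base = voxelIndex * 6;
    float3 w = normal * normal; // per-axis weights, sum to 1 for a unit normal
    _VoxelFaces[base + (normal.x >= 0.0 ? 0 : 1)] = float4(albedo * w.x, 1);
    _VoxelFaces[base + (normal.y >= 0.0 ? 2 : 3)] = float4(albedo * w.y, 1);
    _VoxelFaces[base + (normal.z >= 0.0 ? 4 : 5)] = float4(albedo * w.z, 1);
}
```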
     
    neoshaman likes this.
  10. Oniros88
    Joined: Nov 15, 2014
    Posts: 150
    I am REALLY looking forward to this.

    Tried and tried and tried SEGI for months, but it leaks lots of light, gives DirectX crash errors, and specular reflections from emissive lights look horrible, so I have to end up using expensive shadow-casting lights as well.

    Unity's realtime GI is even more horrible, with all the light probe placement and the hours of baking, not to mention that I lose the ability to switch lights or procedurally generate anything.

    This asset looks so clean, and if it's even more performant than SEGI it's the PERFECT thing for my project. Is any test version or buyable product being released any time soon? Because I might hold off on the graphical development of my project and stick to database and gameplay work just so I can wait for this. It's like a godsend with everything I need.
     
  11. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Thanks for the interest. I'd like to reiterate that there is no release date for this; time will tell if it becomes public, so I'd ask you not to plan your schedule around anything that I make.
     
    knup_ likes this.
  12. Oniros88
    Joined: Nov 15, 2014
    Posts: 150
    I am willing to buy whatever build you have, even if it doesn't have all the final features. We're desperate now: SEGI is giving DirectX crashes all over the place, Unity's precomputed GI ruins lots of our gameplay features (procedural level sections, full lighting control, REAL reaction to light instead of s***ty, imprecise, expensive light probes...), and what you have already achieved is way more than enough for our needs. In fact even the first features you showed, without the combined SSR + approximated reflections or all the other fancy things, would really, really save our day. Our game puts a lot of weight on light versus darkness, changes in lights, and map structure that influences them, among other things.

    We have been working on this project for almost a year with SEGI as the foundation, but now SEGI is a no-go, since it randomly and continuously crashes with d3d11.dll access violation errors (the crashes stop if we disable SEGI). Now we are completely lost, as half of our project's features depend on a completely dynamic lighting solution and not the crappy baking + light probes Unity has. We can't even move a damned door without it looking bad if we don't have a dynamic GI solution.

    Thanks for your attention.
     
  13. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Quick direct lighting test to confirm that the data is tracing correctly and that the deferred light injection pass is working. Not really representative of anything I'm actually going to use though.


    (Note: no tone mapping is used, so the skybox contribution looks a little flat; it's also a flat color rather than a skybox texture for now.)

    Next step is to fill the sparse radiance data by tracing some rays for each leaf node in the octree, then mipmapping that data back up the tree. Every update of this step creates an extra bounce, resulting in infinite light bounces.
    I've nearly finished this step, but my spare time is running a little low this week.
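
    A very reduced sketch of that gather step (all names here are assumptions, and a dense 3D texture ray march stands in for the real sparse octree trace purely to keep the example self-contained): each leaf shoots a few rays, averages what it sees from the previous update, and a later pass mipmaps the result back up the tree.

```hlsl
// Reduced sketch of the per-leaf gather (names assumed; a dense 3D texture
// march substitutes for the actual sparse octree trace). Reading the PREVIOUS
// update's radiance is what adds one extra bounce per update.
Texture3D<float4> _PrevRadiance;          // rgb = last update, a = occupancy
SamplerState sampler_PrevRadiance;
StructuredBuffer<float3> _LeafPositions;  // leaf centers in volume UVW space
RWStructuredBuffer<float3> _LeafRadiance; // gathered incoming light per leaf

static const float3 kDirs[4] =
{
    float3( 0.577,  0.577,  0.577), float3(-0.577,  0.577, -0.577),
    float3( 0.577, -0.577, -0.577), float3(-0.577, -0.577,  0.577)
};

[numthreads(64, 1, 1)]
void GatherLeafRadiance(uint3 id : SV_DispatchThreadID)
{
    float3 uvw = _LeafPositions[id.x];
    float3 incoming = float3(0, 0, 0);
    for (uint d = 0; d < 4; d++)
    {
        for (uint t = 1; t < 64; t++) // march until an occupied voxel is hit
        {
            float4 s = _PrevRadiance.SampleLevel(
                sampler_PrevRadiance, uvw + kDirs[d] * (t * 0.01), 0);
            if (s.a > 0.5) { incoming += s.rgb; break; }
        }
    }
    _LeafRadiance[id.x] = incoming / 4.0; // a separate pass mips this up
}
```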

    After that I need a function to sample the sparse radiance volume, with some way to interpolate the data from nearby leaf nodes, and the main system for diffuse lighting is done!

    The plan is to mix this lower-detail diffuse GI with some screen-space GI+AO to pick up the finer detail of emissive surfaces and details that don't get represented in the voxel grid. Quantum Break used a similar method for their game, and the mix of a sparse radiance volume + screen-space GI worked really well.

    Then all that is left is to modify Unity's screen-space reflections to fall back to tracing the sparse octree when they fail to find a suitable screen-space sample, and the system is complete.
     
    Last edited: Nov 9, 2018
    Sannyasi, bigd, RockSPb and 18 others like this.
  14. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I got the basic system done for generating the data for the sparse radiance volume.
    I still need to work on the running average of each face's data. I'm not super happy with the compression I'm using to store the HDR color; it tends to shift the hue a little too much. I'm storing the lighting data as HSV (7/7/18 bits) for now. I think I need a little more precision in the hue if I want to improve the running average's accuracy; other methods of storing HDR colors didn't work well with a running average.
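
    A sketch of what a 7/7/18-bit HSV packing could look like; the exact encoding is an assumption, with hue and saturation as 7-bit normalized values and the HDR value as 18-bit fixed point over an arbitrary [0, 1024) range. Seven bits quantize hue into only 128 steps, which is where a running average starts to drift, as described above.

```hlsl
// Assumed 7/7/18 HSV packing, not HXGI's actual encoding. Layout:
// bits 31..25 = hue, bits 24..18 = saturation, bits 17..0 = HDR value.
uint PackHSV(float3 hsv) // x = hue [0,1], y = sat [0,1], z = HDR value
{
    uint h = (uint)(saturate(hsv.x) * 127.0 + 0.5);
    uint s = (uint)(saturate(hsv.y) * 127.0 + 0.5);
    uint v = min((uint)(hsv.z * (262143.0 / 1024.0) + 0.5), 262143u);
    return (h << 25) | (s << 18) | v;
}

float3 UnpackHSV(uint p)
{
    return float3((p >> 25) / 127.0,
                  ((p >> 18) & 0x7F) / 127.0,
                  (p & 0x3FFFF) * (1024.0 / 262143.0));
}
```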

    It's not as fast as I would have liked, but I have a lot of optimizations to go on this part.


    (voxel world getting lit by its radiance data)

    I quickly hacked together a shader to sample nearby radiance data per pixel. This probably won't be the way I actually sample the data, as it introduced a little too much noise into the system; some temporal filtering/smoothing would have to be done to remove the noise, and I'd rather have a system that can interpolate the radiance volume directly so there isn't any ghosting. The whole thing runs at 60fps at 1080p on my laptop GPU, and I have a lot of optimizations to go.


    (Top image is my sparse radiance GI system, bottom is Unity's GI for comparison)

    I didn't want to spend a long time baking Unity's GI for this comparison. It took 5 minutes to bake on the settings I used, and it showed me that my GI data is pretty close; I'm sure it would have slightly more accurate AO if I increased its settings a bit more.

    There is a bug with how emissive surfaces get generated and mipmapped through the data, so I can't show that off just yet.

    This will probably be the last update for a while. We have crunch coming up on one of our projects so my time will be a little thin for the next few weeks.

    Overall I'm pretty happy with the sparse data structure; it uses around 10-15% of the memory, so it will finally let me get past the 256^3 size limit of all the other methods I've tried so far.

    The world is split into 64^3 chunks, so if you need to revoxelize an area it's fairly fast, as the volume is pretty small. If you need to revoxelize more than one chunk, the system queues them up and spreads the work over many frames.
     
    Last edited: Nov 16, 2018
    m4d, OCASM, Tenebris_Lab and 14 others like this.
  15. sledgeman
    Joined: Jun 23, 2014
    Posts: 389
    Now it doesn't look like Minecraft anymore :p
    The sparse radiance GI system looks really nice. Because of the noise, it reminds me a little of an unbiased renderer. Would you mind making a build, or a WebGL demo, for your followers, just to play around with and test on their laptops / PCs / devices? (y)
     
    Lexie and N00MKRAD like this.
  16. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    The last couple of parts I've added are really unoptimized. I was kinda rushing to make sure that this concept of calculating GI actually works, as I've never seen radiance data calculated this way before; the best way to describe it is a hybrid of cone tracing and light propagation volumes without the downsides of either. Once I optimize these steps I can release a public build; right now it's not representative of the performance I want from the system.
     
    Last edited: Nov 17, 2018
    ekergraphics, Tzan, Shinyclef and 2 others like this.
  17. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    Oh man, I really do hope you release something public; I've been following you here and on Twitter for a looong time! You really seem to be on to something here.
     
  18. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I got directional lighting support added and fixed a few bugs; the sampling results are a lot better now.
    I did a comparison with Unity's progressive lightmapper to check the difference in quality. It looks like the compression on the lighting data might need some adjusting: the hue and saturation for that blue bounce lighting might be a little off, and that's causing less white light to be generated, or I'm just sampling the diffuse in the wrong color space. I'll look into it later. I also think I'm sampling the skybox texture incorrectly, as the baked version has brighter sky contribution; I must be doing something wrong with the HDR decoding for the cubemap.

    Overall it's pretty close. If I were to increase the exposure on the camera, the back corner would be all lit up as well; it's all HDR colors, so it works well with adaptive exposure.

    It took Unity around 3 minutes to bake this scene with the progressive lightmapper. Mine took about half a second of updating the radiance data to get to this quality (it starts off black, and within a few frames it has calculated 1-2 bounces). I also think my normal maps are capturing the lighting a lot better than Unity's lightmaps, so that's cool.

    I've included a render from SEGI to show how standard cone tracing fails at doing indoor areas accurately. The lighting data gets mipmapped, and that results in lighting from outside leaking into the room. There are some ways to alleviate this downside at the cost of performance, but it's pretty much impossible to remove completely. My game has a lot of interconnecting indoor and outdoor areas, so it's impossible for me to use SEGI.

    I hope this comparison shows the reason for this journey and why I didn't want to settle for SEGI or my LPV version.


     
  19. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    I thought the LPV was pretty good, but I have less understanding of its shortcomings. In any case, that looks really nice. Is that a fairly dim emissive? Would a bright one light up the room?
     
  20. scheichs
    Joined: Sep 7, 2013
    Posts: 77
    Looks super awesome!!
     
  21. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    LPVs function pretty similarly to Minecraft's lighting, so light can creep around corners; it's not really calculating line of sight, it's just moving light from one voxel to another. Also, any volume larger than 256^3 takes up too much storage.
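
    A toy illustration of that LPV-style spreading (not HXGI code, and not Lexie's actual LPV implementation): each cell just pulls light from its six neighbors and attenuates it, and nothing ever checks line of sight, which is exactly why light creeps around corners.

```hlsl
// Toy LPV-style propagation, purely illustrative. Out-of-bounds loads return
// 0 on D3D, so the volume edges simply fade out.
Texture3D<float4>   _PrevLightGrid;
RWTexture3D<float4> _LightGrid;

[numthreads(4, 4, 4)]
void Propagate(uint3 id : SV_DispatchThreadID)
{
    // No visibility test anywhere: light hops cell to cell around corners.
    float4 light = _PrevLightGrid[id + uint3(1, 0, 0)]
                 + _PrevLightGrid[id - uint3(1, 0, 0)]
                 + _PrevLightGrid[id + uint3(0, 1, 0)]
                 + _PrevLightGrid[id - uint3(0, 1, 0)]
                 + _PrevLightGrid[id + uint3(0, 0, 1)]
                 + _PrevLightGrid[id - uint3(0, 0, 1)];
    _LightGrid[id] = light / 6.0 * 0.9; // attenuate so light eventually dies
}
```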

    That is just the bounce lighting from the sun; if it were brighter, the whole room would be lit up more. Normally you would have adaptive exposure to give people the illusion of an HDR color range, so the sky would be blown out and the inside would look bright enough to see. It's all HDR lighting, so as long as you have HDR enabled on the camera you can change the exposure when applying your color correction, or just have adaptive exposure enabled.


    (same shot but with a higher exposure - Neutral tone mapping was used)

    But here it is with an emissive thing as well.

     
    Last edited: Nov 20, 2018
  22. jefferytitan
    Joined: Jul 19, 2012
    Posts: 88
    This may sound silly, but any chance we could get access to your simple test scene so we can compare your results to the options that we have? It's hard evaluating things without a way to compare apples to apples.
     
  23. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I need to optimize the last few things I've added; the radiance data could be generated a lot faster. I was more interested in seeing if this method would work, so I kinda rushed through that step. Once that's done I'll have a better idea of performance and I'll be more comfortable making a playable build. The performance is currently faster than SEGI and the quality is close to Unity's progressive lightmapper, if that puts anything into perspective.

    Edit: Oh, just the scene. Sure thing, I'll upload it.

    I did a comparison of mine / Unity's progressive lightmapper / SEGI a few posts up, but I'll upload the scene so you can do it yourself.
     
    Last edited: Nov 20, 2018
    ftejada, Lex4art and Shinyclef like this.
  24. jefferytitan
    Joined: Jul 19, 2012
    Posts: 88
    That would be great too, but I actually meant just the Unity files for the test scene without any of the HXGI stuff. I want to understand the bad cases for other methods better so we can compare them to yours.
     
  25. scheichs
    Joined: Sep 7, 2013
    Posts: 77
    The only thing that concerns me: are you going to leave us in the dark with these pics again for some months? I think I can't bear it...
     
  26. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Here is a link to the scene. If you're able to get good results with SEGI, could you take an image of the settings you used? I have never been able to get good results with it, and it would be amazing to have a good settings file to work from for comparisons. I just copied the settings used in the included scene on High and changed the voxel scale to match mine (1 voxel = 0.125 units; mine runs at 1 voxel = 0.25 as they are anisotropic). And I don't want to mess with blockers, as it's not realistic to place those in the world IMO.

    Note: I also used Unity's tone mapping set to Neutral (not included), and the normal map + skybox I used can't be redistributed, so I had to remove them.
     
    Last edited: Nov 20, 2018
    Shinyclef and jefferytitan like this.
  27. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I am really busy for the rest of the year; some exciting opportunities are happening, plus all the family holidays, so my spare time is pretty sparse. But I'll keep working on it when I get the time.
     
    ZeKJ, knup_ and Shinyclef like this.
  28. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    OK I understand, this makes perfect sense. These pics are really great, the light is very convincing! Are the emissives 'free' in this solution as they were before?
     
  29. SkutteOleg
    Joined: Nov 22, 2014
    Posts: 5
    Tweaked a SEGI preset from my project to look kinda like yours.
    Sreenshot18.png
    Voxel size had to be specifically '27' for minimum light leaking tho ¯\_(ツ)_/¯
    SEGI.png
    So probably not all that helpful.
     
    Pr0x1d and Tenebris_Lab like this.
  30. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    Yes.


    Thanks heaps for those settings, it looks a lot better for sure. Although the settings work for that small room, they're not really applicable to every scene; SEGI runs at 12-13fps at 1080p on my machine with the above settings, whereas mine runs at 60fps+ at 1080p. I'm not sure at what point it's a fair comparison, but I'll leave that for you guys to decide when it becomes available.
     
    ftejada, ekergraphics and chiapet1021 like this.
  31. jefferytitan
    Joined: Jul 19, 2012
    Posts: 88
    Maybe it's just a magic number where the voxels happen not to intersect any important walls?
     
    neoshaman and Lexie like this.
  32. Cascho01
    Joined: Mar 19, 2010
    Posts: 1,347
    "Half a second" for 1-2 bounces means your solution is not "realtime" at this state, right? :(
     
    Last edited: Nov 21, 2018
  33. jefferytitan
    Joined: Jul 19, 2012
    Posts: 88
    I think you have the numbers mixed up. If 1-2 bounces take 3 frames (for example) at 60fps, that is 1/20th of a second; however, it doesn't reach the quality in the screenshot until half a second. Also, I gather that quality would be maintained until you move too much or the lighting situation changes too much. Does that sound right, Lexie?
     
  34. Duende
    Joined: Oct 11, 2014
    Posts: 200
    Hello, does this asset have a release date? It looks promising. Very good work, Lexie.
     
  35. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    It's a realtime solution; it runs at 60+fps on my old laptop GPU while updating the radiance data, and 100+ if I stop updating the radiance volume. Still a lot of optimizations to go though.

    Every frame, each voxel in the sparse radiance volume looks out into the world and calculates the incoming lighting, then multiplies this result by its diffuse color, and that becomes its outgoing light. Each surface that can see light basically turns into an emissive surface for the next frame, so the next time the radiance volume updates and other areas see this voxel, it now has a small emissive property and sends its incoming light back out. It's a feedback loop where each update creates a bounce of light.

    If two surfaces are really far away from each other, it can take 2, maybe 3 updates for the light to travel from one side of the room to the other, creating one bounce. You could update the radiance volume more than once per frame to speed this up, though, or even turn off the radiance updating altogether to have a baked radiance volume.

    In the end the system is actually creating infinite bounces (some floating-point/compression issues will likely cap that out at some point), but most scenes look convincing enough after 4 bounces. For reference, the images from Unity's progressive lightmapper are only 4 bounces; their system doesn't support more than that.
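
    A compact way to see why this feedback loop converges to "infinite bounces" (a sketch, under the simplifying assumption of one scene-wide average albedo $\rho < 1$): treat a single radiance update as applying a transport gather $T$ to the previous result, with $E$ the injected direct/emissive light:

$$L_{n+1} = E + \rho\, T L_n \quad\Longrightarrow\quad L_n = \sum_{k=0}^{n} (\rho T)^k E \;\to\; (I - \rho T)^{-1} E \text{ as } n \to \infty.$$

    Each update appends one more term of this series (one more bounce), and because $\rho < 1$ the sum converges instead of blowing up, so the practical cap really does come from precision/compression rather than the math.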

    I'll make some videos soon to show it off running in realtime with dynamic spaces.
     
    Last edited: Nov 22, 2018
  36. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    No release date; I've been working on solutions for years to solve my specific needs, so I really can't say if/when I'll release this.
     
    Last edited: Nov 23, 2018
    Duende likes this.
  37. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I originally rushed the radiance gather step to make sure my idea would work, and it was the most expensive part of the shader. I've gone back and optimized it while making the results propagate a lot faster. It now runs about 13 times faster than before without losing any quality, which means I can easily update the bounces more than once per frame.

    Next step is to figure out how I want to sample the data from the sparse radiance volume; the method I'm using now isn't really the best way to do it, and it's a little more expensive than I'd like (similar to cone tracing). I could just render it out at half res and then upscale the results; I would expect pretty fast performance doing it this way (< 2ms at 960x540 on my GTX 660). The other method is trying to figure out how to interpolate the sparse radiance data, which might end up introducing some strange artifacts, so I'm not sure if that is the best idea either. I'll have to spend some time messing around with a few techniques to find what works best.
     
    Last edited: Nov 22, 2018
    Tzan, Shinyclef, neoshaman and 7 others like this.
  38. neoshaman
    Joined: Feb 11, 2011
    Posts: 6,493
    Is the radiance still SH, or is it cube faces?

    I mean, what if you had sparse SH3 in empty space that the voxels could query as a way to sample the environment radiance? Then you would only update the sparse SH and use one query per face.
     
  39. Duende
    Joined: Oct 11, 2014
    Posts: 200
    Thank you for responding; it's sad that there is not even a distant release date.
     
  40. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I store them kinda like the ambient cubes from HL2; SH3 doesn't play well with moving-average stuff. I believe ambient cubes are faster to sample than SH3 and the quality is pretty similar, without all the downsides of compressed SH3; they take up the same storage as well (I'd have to store SH3 at a minimum of 16-bit precision, although 32-bit would be better, and that would be twice the size).
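
    For reference, an HL2-style ambient cube is just six colors, sampled with the same squared-direction face weighting as the anisotropic voxel sketch earlier in the thread. The nice property for this use case is that each face can be updated independently with a simple running average; a hypothetical sketch (the blend rate is an arbitrary assumption, not HXGI's format):

```hlsl
// Hypothetical HL2-style ambient cube with a per-face running average
// (blend rate and layout assumed, not HXGI's actual format).
struct AmbientCube
{
    float3 face[6]; // +X, -X, +Y, -Y, +Z, -Z
};

void AccumulateFace(inout AmbientCube c, uint faceId, float3 newSample)
{
    // Exponential moving average; the 1/16 blend rate is an arbitrary choice.
    c.face[faceId] = lerp(c.face[faceId], newSample, 1.0 / 16.0);
}
```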

    Yes, this is kinda what the second option implies. I can sample the radiance directly and it's really fast (<1ms at 1080p on my old laptop), and I calculate the radiance in empty spaces as well. The issue is that the data is stored sparsely rather than in a 3D texture, which gets interpolation for free; without interpolating the data from neighboring nodes, the results look blocky.



    Quantum Break has some great slides on how they interpolated their sparse radiance volume, so I'll try implementing that. My gut feeling is that it will be faster, but the results won't look as nice as the per-pixel tracing.

    I could use both though: do a downsampled per-pixel trace of the depth buffer with some upscaling and noise reduction, then have anything transparent/forward-rendered sample the sparse radiance volume with interpolation.

    After the latest optimizations to the radiance update step, I did some testing to see how fast the per-pixel tracing is.

    HXGI

    External GTX 1070
    • 1080p - 120fps
    • 540p - 300fps

    GTX 970M (laptop GPU around the speed of a GTX 660)
    • 1080p - 65fps
    • 540p - 130fps
    I think half-res rendering + upscaling is pretty important for lower-end GPUs or VR: for 1080p it would render the lighting at 540p and then upscale it, and any pixels that can't sample the data correctly would do a full-res trace. I want to have a lot of headroom to render the actual game when it's played on low-end machines; only supporting 10-series cards and up is not a valid option IMHO. These tests also show that most of the cost is in the per-pixel sampling of the radiance data, so optimizing this step will have pretty large gains.
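
    A sketch of that upscale-with-fallback idea (all resource names and the depth tolerance are assumptions, not HXGI's actual code): each full-res pixel blends the nearby half-res lighting samples whose depth matches its own, and if none match, it flags itself for a full-resolution trace in a follow-up pass.

```hlsl
// Assumed half-res upscale with full-res fallback, not HXGI's actual pass.
Texture2D<float3>   _HalfResGI;
Texture2D<float>    _HalfResDepth;
Texture2D<float>    _FullResDepth;
RWTexture2D<float3> _Output;
RWTexture2D<uint>   _NeedsFullTrace; // consumed by a follow-up trace pass

[numthreads(8, 8, 1)]
void Upscale(uint2 id : SV_DispatchThreadID)
{
    float depth = _FullResDepth[id];
    uint2 base = id / 2;
    float3 sum = float3(0, 0, 0);
    float weight = 0.0;
    for (uint i = 0; i < 4; i++)
    {
        uint2 tap = base + uint2(i & 1, i >> 1);
        if (abs(_HalfResDepth[tap] - depth) < 0.001) // assumed tolerance
        {
            sum += _HalfResGI[tap];
            weight += 1.0;
        }
    }
    // No depth-compatible low-res sample: fall back to a full-res trace.
    _NeedsFullTrace[id] = (weight == 0.0) ? 1 : 0;
    _Output[id] = (weight > 0.0) ? sum / weight : float3(0, 0, 0);
}
```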

    I did the same test using @SkutteOleg's settings for SEGI. Honestly the settings are way too high, but you have to do this to remove the light bleeding; if you were using standard cone tracing you probably wouldn't go this overboard and would just live with the light bleeding. But here are the results for fun.

    SEGI with @SkutteOleg's extreme settings

    External GTX 1070
    • 1080p - 33fps
    • 540p - 96fps

    GTX 970M (laptop GPU around the speed of a GTX 660)
    • 1080p - 12fps
    • 540p - 40fps
    Edit: @Duende - The system is close to being finished, but I won't set any release date. I have a lot of other commitments that I have to meet before I have time to work on this project. Our timeline for Spire (the game I'm making this for) won't need it until mid next year, so until then it's not a priority; I just enjoy working on it, so I do it in my spare time.
     
    Last edited: Nov 24, 2018
    Tzan, ftejada, Shinyclef and 2 others like this.
  41. Duende
    Joined: Oct 11, 2014
    Posts: 200
    Oh well, then there are plans to market it. It's a start. :)

    Thank you for your attention, you are doing a great job.
     
  42. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I got the radiance data copying over into the new sparse chunk when it gets revoxelized; this allows me to have dynamic changes to the scene.



    This is just a test, not representative of the final results.

    I still have a lot of work to go: the denoiser isn't great, and I'm using standard TAA for now until I make my own temporal sampling for the lighting data, which is why there is so much ghosting. All of that will be fixed later; I'm just trying to get the base system functioning.

    I can increase the responsiveness at the cost of performance; this was recorded on my old laptop, so I didn't want to push that slider too far.

    I still need to try out interpolating the radiance data rather than tracing the sparse volume.
     
    Last edited: Nov 28, 2018
    ajaxlex, eobet, Yuki-Taiyo and 16 others like this.
  43. Shinyclef
    Joined: Nov 20, 2013
    Posts: 502
    Looks fantastic. :) Even more amazing to hear you have a plan for the ghosting already. That's something you don't see baked lighting do!
     
  44. macdude2
    Joined: Sep 22, 2010
    Posts: 686
    This looks fantastic! For the performance cost you're talking about, this is really, truly incredible! Just wondering, what resolution are you using here? It seems like there are some issues at the seams of objects; it's relatively unimportant, but I wonder if they could be mitigated by a higher voxel resolution? Also, does this implementation have cascades?
     
  45. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    I'm not sure what issues you're talking about near the corners. I'm still working on the pixel sampler for the radiance data and should be able to make it look a lot better when I focus on that, so that might fix the issue you're seeing.

    Edit: the voxel resolution is 0.25. Those blocks in the middle have a width and depth of 1, so 4 voxels.

    This video was recorded at 720p, as it was captured on my old laptop. It's rendering the lighting at half res and then upscaling it; right now that process is pretty simple, I just wanted to get something in place. It's doing some aggressive smoothing, so that might also be what you're seeing.

    The system doesn't use cascades; instead it stores everything in a sparse data structure. This allows me to scale it up to really large scenes without a massive increase in VRAM, as it only stores data where it's needed most (near objects).

    If something changes, it only revoxelizes the space nearby, so you don't have to revoxelize the entire scene like a lot of other cascade systems have to do. It also stores far-away objects at the same resolution as nearby objects, so you don't suffer the crazy light bleeding cascade systems introduce.
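
    A hypothetical sketch of what such a sparse chunk lookup can look like (layout and names assumed, not HXGI's actual structure; chunk coordinates are assumed pre-offset so they are non-negative): an indirection table maps a chunk coordinate to a slot in a pool of allocated 64^3 bricks, so memory is only spent where geometry exists and a local change only re-voxelizes one brick.

```hlsl
// Assumed sparse chunk layout, purely illustrative.
Texture3D<uint>   _ChunkTable; // chunk coord -> pool slot (0xFFFFFFFF = empty)
Texture3D<float4> _VoxelPool;  // 64^3 bricks stacked along Z (assumed layout)

float4 SampleSparseVoxel(float3 worldPos, float voxelSize)
{
    int3 voxel = (int3)floor(worldPos / voxelSize);
    int3 chunk = voxel >> 6;  // divide by 64 (chunk size in voxels)
    uint slot = _ChunkTable[chunk];
    if (slot == 0xFFFFFFFFu)
        return float4(0, 0, 0, 0); // empty space: nothing stored at all
    int3 local = voxel & 63;       // coordinates inside the chunk
    return _VoxelPool[int3(local.x, local.y, local.z + slot * 64)];
}
```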
     
    Last edited: Nov 28, 2018
    Shinyclef, macdude2, Neviah and 6 others like this.
  46. macdude2
    Joined: Sep 22, 2010
    Posts: 686
    That's really, really cool! I'm very, very excited to see this GI at work in an actual game! I've included an image below; it honestly might just be that there is a lack of AO, and I'm sure just adding HBAO would make the seams look a lot more natural.
     

    Attached Files:

  47. neoshaman
    Joined: Feb 11, 2011
    Posts: 6,493
    http://nothings.org/gamedev/ssao/
    You mean a lack of bevel?
     
  48. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    That's a really cool link.

    With higher settings it looks closer to path traced.
    Right now I'm not sampling the skybox with the correct colors, so the contribution from that is a little low. My voxelization pass results in slightly darker/more saturated colors, so that also results in a darker image. Once those are fixed, these two images should look pretty close together.

    The two biggest differences are the near-field light bounces from the sphere and the indirect shadows from the rightmost purple cube. The sphere could be fixed, but that would introduce bleeding into the mix, and I don't think I could ever get indirect shadows that good in HXGI.

    You could run the system with settings like this, but you'd most likely limit it to newer cards, or to lower resolutions for older cards. I'm focusing on getting the low-end settings working well, as that's what I'll be using for our game. With textures + some real geo and some image effects, the difference on low settings won't be that noticeable.
     
    Last edited: Nov 29, 2018
    Flurgle, Shinyclef, hopeful and 2 others like this.
  49. macdude2
    Joined: Sep 22, 2010
    Posts: 686
    That's incredible! I can't believe there even exists an algorithm that can produce lighting like this in the computation time you're talking about. Honestly, the soft shadows you have are more than good enough, and it's definitely not going to be noticeable with textures and other effects going on. So I assume the performance is about 1/2 to 1/3 as good with a voxel resolution of 9?

    I'm also curious what you mean by bleeding? Doesn't the system calculate infinite bounces? Why would another bleeding term be necessary?

    I'm very, very excited to see you complete this!
     
  50. Lexie
    Joined: Dec 7, 2012
    Posts: 646
    The voxel res is still the same as the video; the sampling settings just got increased, so it's the same data as before. I'm not sure exactly how much of a performance cost putting the settings up this high will be; I need to get the denoiser/temporal sampling working better before I can give you exact numbers.

    The reason the sphere isn't casting light onto the surface near it is that the voxels are kinda large and actually connect the sphere's voxels with the cube next to it, so there isn't any empty space between them in the voxel world. For me, light bleeding through walls is way worse than light not bouncing, so I don't propagate lighting between connected voxels; there has to be empty space between them to receive bounce lighting. This allows me to have one-voxel-thick walls with zero light bleeding. I'd rather have that functionality than the alternative, but it's easy to add a setting for it.
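
    A toy illustration of that rule (not HXGI code; names and ranges assumed): a gathered sample only counts if the ray crossed at least one empty voxel before hitting something, so voxels that touch never exchange light and a one-voxel-thick wall cannot bleed.

```hlsl
// Toy air-gap rule, purely illustrative. Coordinates are assumed in range.
Texture3D<float4> _Voxels; // rgb = outgoing radiance, a = occupancy

float3 GatherAlongRay(int3 start, int3 stepDir)
{
    int3 p = start + stepDir;       // the immediate neighbor
    if (_Voxels[p].a > 0.5)
        return float3(0, 0, 0);     // touching voxel: no light exchange
    for (uint i = 0; i < 32; i++)
    {
        p += stepDir;
        float4 v = _Voxels[p];
        if (v.a > 0.5)
            return v.rgb;           // hit after an air gap: accept the light
    }
    return float3(0, 0, 0);         // nothing hit within range
}
```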
     
    ZeKJ, Shinyclef, macdude2 and 2 others like this.