SEGI (Fully Dynamic Global Illumination)

Discussion in 'Assets and Asset Store' started by sonicether, Jun 10, 2016.

  1. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Hey there, I just gave SEGI a spin and have the very same bleeding issue as described here - has anyone solved this problem yet?
     
    RB_lashman likes this.
  2. tweedie

    tweedie

    Joined:
    Apr 24, 2013
    Posts:
    311
    The leaking is a product of the technique; sadly, the only solutions are to either make the walls thicker or use the GI blockers (see the rough numbers sketched below).
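    As a rough rule of thumb (general voxel-GI reasoning, not an official SEGI number - the values below are placeholders), a wall needs to be at least a voxel or two thick at the cascade that covers it, and the voxel size is just the volume size divided by the resolution:

    Code (CSharp):
        // Illustrative only - plug in your own GI volume settings.
        public static class WallThicknessExample
        {
            public static float MinWallThickness(float voxelSpaceSize, int voxelResolution)
            {
                float voxelSize = voxelSpaceSize / voxelResolution;  // e.g. 50 / 256 = ~0.2 units
                return 2f * voxelSize;                               // two voxels, so ~0.4 units here
            }
        }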
     
    RB_lashman, PhilippG and buttmatrix like this.
  3. PhilippG

    PhilippG

    Joined:
    Jan 7, 2014
    Posts:
    257
    Thanks :) Any experience using GI blockers? Are they expensive to use? Would you place them like walls?
     
    RB_lashman likes this.
  4. Arganth

    Arganth

    Joined:
    Jul 31, 2015
    Posts:
    277
    Does anyone have experience with how many Unity units thick the walls should be?
     
    RB_lashman likes this.
  5. AndyNeoman

    AndyNeoman

    Joined:
    Sep 28, 2014
    Posts:
    938
    Hi all,

    Someone mentioned last week putting the SEGI component on an empty GameObject rather than on the moving player/camera. It got me thinking: would people share any best practices or things they have picked up along the way? Or even just link to the info that might already be contained in the thread.

    The camera issue has me confused. If I put SEGI on its own object, it creates a camera that interferes with my game camera. How do you deal with this, since SEGI needs a camera component?

    For now I have just reverted to having SEGI on my camera, but I would prefer to have it set up correctly.
     
    RB_lashman likes this.
  6. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    I guess you just need to assign a camera to the camera variable in the SEGI script... This shouldn't be hard to do.
     
    RB_lashman and AndyNeoman like this.
  7. AndyNeoman

    AndyNeoman

    Joined:
    Sep 28, 2014
    Posts:
    938
    I thought that, but it is a RequireComponent dependency, so you cannot just override it with a public camera reference. There are also the voxel camera and shadow camera, so there are a few bits to understand. I am sure you could change it all, but then you have effectively forked the git repo and cannot update to a new version without issues. I would rather try and understand it first.
     
    RB_lashman likes this.
  8. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Yeah, I am just looking into it, it is indeed!
    Will post if I manage to do it!
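    Something along these lines, I think - just a sketch of the idea, not the actual SEGI source (the class and field names are made up), but the gist is to swap the implicit GetComponent<Camera>() / RequireComponent pattern for a serialized reference you can point at the game camera:

    Code (CSharp):
        using UnityEngine;

        // Hypothetical sketch - SEGI's real code differs. The idea is to hold an explicit
        // camera reference so the effect can live on an empty GameObject instead of the camera.
        public class SEGICameraProxy : MonoBehaviour
        {
            [SerializeField] private Camera targetCamera;   // assign your game camera in the Inspector

            private void OnEnable()
            {
                // Fall back to the main camera if nothing is assigned.
                if (targetCamera == null)
                    targetCamera = Camera.main;
            }

            private void LateUpdate()
            {
                // Keep this object (and whatever volume follows it) centred on the rendering camera.
                transform.position = targetCamera.transform.position;
                transform.rotation = targetCamera.transform.rotation;
            }
        }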
     
    RB_lashman and AndyNeoman like this.
  9. Deleted User

    Deleted User

    Guest

    I have to truly thank Sonic Ether for opening up the source for this under MIT; that is genuinely generous of you, and thanks a lot!

    I might (slim chance) be able to do something with this; I've created LPVs for various engines and I've been studying SVOGI techniques. Please correct me if I'm wrong, but is there a specific reason it's DX11 only? I have tested examples (GPU Gems etc.) of voxel octrees (for indirect lighting) that have been done in GL 4.3 upwards, so I would aim to make it GL compatible.

    Second on the list is performance. At the moment I've not looked at the code, so as to the nitty-gritty of what SE has done, I haven't a clue; all I really know is it looks amazing. I have to admit it's the infinite bounce thing that gets me - I've no idea how you could do that and expect reasonable performance. Although I would be aiming for around 1.5-2 ms on a GTX 980 (otherwise for games it becomes a hard sell).

    At best, on rudimentary tests of another system, you'd be looking at 4 bounces max via a "semi-static" approach, and for fully dynamic you'd be looking at one or two bounces at best to come anywhere near performance requirements.

    There would be downsides:

    There are probably going to be some artifacts, like light leaking and potentially a bit of ghosting on non-static objects, but hopefully not enough to really care too much about. AO / IS (indirect shadows) would only work properly on static geometry.

    The biggest thing would be: can I make it look as good, all said and done? Well, that's the question. I'm pretty sure I can make it quicker, and I'm pretty sure many could have too (including SE), but after that who knows? Also the other question, and this is what I really want to know before I even spend an hour looking at this: is Unity going to get in the way? Would I be better served focusing my efforts on a GI solution for an engine that's open source?

    You don't have to really answer, these are just initial thoughts.

    Very interested in learning the approach though and seeing how many walls I can hit. Again, SE, thank you for this, it's very helpful.
     
    Last edited by a moderator: Oct 14, 2017
  10. AndyNeoman

    AndyNeoman

    Joined:
    Sep 28, 2014
    Posts:
    938
    Interesting read, I'm really looking forward to your additions and improvements. I am using SEGI on a fully outdoor natural environment (rainforest etc.); the water system used (Aquas) has been masked out as it produces strange flickering, but the look I get is much better for day/night cycles than anything else I have tried.
     
  11. Lewnatic

    Lewnatic

    Joined:
    Sep 29, 2012
    Posts:
    209
    Are there any platforms/forums/Discord groups where we can track which features the community is working on? Something like this would be very useful for further project improvements. Think of it like the Blender way.

    Each tester/dev could submit revisions and decide which class/feature they want to improve or work on.
     
    Last edited: Oct 14, 2017
    RB_lashman and AndyNeoman like this.
  12. Deleted User

    Deleted User

    Guest

    I have Aquas so I can check it out and see what's going on.

    @Lewnatic

    It becomes a matter of purpose: I want to use it in a semi-open-world game, which means I'll potentially need to add more tradeoffs than other branches, and our visions might not match.
     
    Last edited by a moderator: Oct 15, 2017
    AndyNeoman and RB_lashman like this.
  13. Deleted User

    Deleted User

    Guest

    OK, just out of curiosity: one of these is LPV and the other one is SEGI. Which one do you prefer?
     
    Last edited by a moderator: Oct 17, 2017
    IronDuke, RB_lashman and Martin_H like this.
  14. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    789
    I really prefer the bottom image.
    More contrast, better "color bleeding".
    So which one is which?
     
    RB_lashman, Martin_H and Deleted User like this.
  15. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,686
    The bottom one seems warmer, the top one colder.
     
    RB_lashman and Deleted User like this.
  16. Deleted User

    Deleted User

    Guest

    I probably should have explained: it's more about the technicalities like light bleeding, AO coverage, etc. What looks best is still perfectly valid of course, because that's what lighting solutions are about. Although in terms of warmer/colder, that's nothing more than a bit of colour grading and IBL contribution - for example, this is option 1 again but with a more dusky setup:
     
    Last edited by a moderator: Oct 17, 2017
    RB_lashman and Vagabond_ like this.
  17. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    I guess the top is LPV, and I would prefer it because the bottom one seems to have some light bleeding issues (at least it looks so). If this is correct I really would prefer the LPV, as the image can be greatly improved using image effects!

    If the bottom is LPV, I would say I'd prefer whichever gives fewer artifacts and can be used over larger distances!

    P.S. The top one looks clearer!
     
    RB_lashman and Deleted User like this.
  18. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
  19. Deleted User

    Deleted User

    Guest

    Yep, the top one is my LPV solution; it has some unfortunate issues. On AMD cards it somehow crashes the display driver, and I never managed to figure that one out before moving back to an Nvidia card. It does suffer from light leaking, but if your meshes are thick enough and placed correctly it's not that bad.

    It's not as easy to set up as SEGI: you have to wrap your terrains / buildings / interiors in a bounding box. That's because it does support point and spot lights, but it's also for variable quality transitions - light leaking is less noticeable, and quality doesn't need to be quite as high over a vast-distance terrain.

    I've never seen LPVs without some light leaking though; on the pro side it is pretty lightweight. I did manage to have a look through SEGI. I got around 10 FPS on an open-world scene, but it doesn't appear to be the algorithm causing it.

    70+ ms was taken up by culling - not an extensively difficult issue to fix, but there also seem to be a lot of issues with noise, light leaking and ghosting, amongst a plethora of other things.

    What I essentially wanted to do is have voxel cone tracing for indoors with a bounding box that switches to LPVs for terrains etc. Although an end-to-end VXCT solution would be best, as you can use the cascaded grid not only for reflections but also for large-scale AO that helps you blend the two. Epic have distance field AO which can do something similar, so I could potentially mix a "skylight" with cascaded voxels in a separate pass to deal with that in LPV.

    I know LPV, and it's not actually that hard to implement all things considered, but I honestly know little about cone tracing or cascaded grid solutions (bar LPV cascades).

    There is another MIT-licensed voxel cone tracing algorithm out there which I'm going to try; if not, then I'll see if I can get some help fixing my solution.

    I did send a message to AMD, but of course, as the little fish, they never replied.

    I do want a GI solution, but I'm not sure if starting with SEGI is the best idea. I'll mull it over; it'll happen one way or another, so bear with me. I'm not aiming for all-out best quality, just something that can retain the quality in my screenies indoors and out.
     
    Last edited by a moderator: Oct 16, 2017
    RB_lashman and Martin_H like this.
  20. Ryunis

    Ryunis

    Joined:
    Dec 23, 2014
    Posts:
    24
    Without knowing which of the images was SEGI, I far prefer the bottom one. The indirect lighting just seems so much more "tight" and accurate. It's best seen in the way the light from the windows bounces off the floor and illuminates the ceiling, as well as the door in the back. In the top one the lighting seems washed out and low-resolution in comparison.
    That's just my two cents, and completely disregarding performance.
     
    RB_lashman, hopeful and Martin_H like this.
  21. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    So cool to see you work on this now, got me to watch the thread again :). I doubt SEGI fits my project, but it's still interesting tech to keep an eye on. I prefer the bottom render.
     
    RB_lashman likes this.
  22. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,686
    When the scenes have similar lighting color, the SEGI lighting looks so much better so far as I can see. I don't have a trained eye for leaks and such, but just basically looking at lighting, like on the doorway at the end of the hall and the glare on the planter ... the bottom pic is the more appealing one.
     
    RB_lashman likes this.
  23. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    I think an important thing to keep in mind is that while playing a game, the kinds of things one notices can shift a lot compared to screenshot A/B comparisons. E.g. in games like Rage the texture streaming can be very apparent and is a real immersion breaker for many people, when every time they look around quickly, half the view's textures pop back in over a few frames. In screenshots none of that is visible.
    I could very well imagine that some tradeoffs that look very bad in screenshots - like more noise in the calculated light solution as a tradeoff for lower latency in lighting changes - actually make for a better experience during gameplay, if you use sufficiently noisy textures that easily hide the artifacts, or if your scene is very fragmented by design, like a detailed forest landscape.
    So I would urge people working on this not to forget the "real-world use cases" and, if possible, to post video comparisons instead of screenshots.
    I really wish I had the time to actively play around with this too, but I fear I wouldn't have much to contribute code-wise anyway.
     
    hopeful and RB_lashman like this.
  24. Deleted User

    Deleted User

    Guest

    Yeah, I'll have a crack at it and compare it to other techniques I've seen under an MIT license, like the one from Wicked Engine. Ultimately though, speed is what matters: when you have 20+ NPCs, 10,000 pieces of foliage, terrain, a weather system, water and TOD, then you're going to be happy with acceptable.

    Even in my "semi" open-world game it wouldn't be odd to see a few thousand meshes of various sorts on screen at any one time, so budgets are tight and every 0.1 ms counts.

    At the end of the day this is for a game, and it has to be at least better than Enlighten for it to be worth it.
     
    Last edited by a moderator: Oct 16, 2017
    Alverik, Martin_H and RB_lashman like this.
  25. GamerPET

    GamerPET

    Joined:
    Dec 25, 2013
    Posts:
    370
    I'm sure this was posted before but... why did SEGI go free? Where can I find the story? :D

    Thanks
     
    RB_lashman likes this.
  26. TooManySugar

    TooManySugar

    Joined:
    Aug 2, 2015
    Posts:
    864
    RB_lashman likes this.
  27. GamerPET

    GamerPET

    Joined:
    Dec 25, 2013
    Posts:
    370
    Kin0min, IronDuke, hopeful and 2 others like this.
  28. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
    @Lexie is working on a GI solution, but there is no confirmation whether it will be made publicly available.

    EDIT: From your post, LPV = light proxy volume?
     
    RB_lashman likes this.
  29. Deleted User

    Deleted User

    Guest

    Nope, cascaded light propagation volumes, similar to what Crytek used in Crysis etc. It seems Lexie's is another LPV solution; it's cool for an open terrain, but in a lot of cases (for indoors at least) you'd be better off using Enlighten.

    It's a shame Unity didn't add the option to use Enlighten in a bounding box, which in short limits where it pre-computes and applies its radiance data. Then you could have just blended in something like an LPV solution for outdoors, and the job would be a good one.
     
    Last edited by a moderator: Oct 16, 2017
    RB_lashman likes this.
  30. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,686
    Thank you. I got a good laugh out of that. :)
     
    RB_lashman likes this.
  31. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,686
    Could you do that with additive scenes? One scene with baked GI, the other using dynamic GI?
     
    RB_lashman likes this.
  32. Deleted User

    Deleted User

    Guest

    Sure, of course you can. If that's the way your game is designed (as in Skyrim), where everything is divided into cells and every interior is a separate scene, there is no real reason you couldn't leverage LPVs for exterior scenes and lightmap the interiors.

    It's just a matter of storing pre-computation and/or lightmap data per scene. It would be one of the fastest, most efficient ways of doing things, the only major issue being it's a limited use case.

    Trying to do this where you blend indoor/outdoor would be tricky, especially with a time-of-day system: not only do you have two different techniques that look different from each other in terms of quality, but even Enlighten only supports colour changes on the fly, as every position has to be re-calculated.

    So in that case, within the bounding box you'd be better off using something like ray tracing or photon mapping and then blending separate lightmaps at different points (because Enlighten can't match photon mapping in terms of quality). Then you'd have a LUT or specified colour ratios at certain points to match the TOD system (for your LPVs). IBL will introduce its own colour additions on top, so you don't have to worry about it too much.

    You'd also have to wrap your interiors in a GI blocker.

    I'm not really sure how that would work out, or if people would notice the sudden contrast between one bounce from an LPV solution and something capable of producing 100+ bounces. I have to admit though, photon mapping is very quick; on my dev machine I can lightmap an entire village at full resolution in minutes, so if it was restricted to interior/blend duties it wouldn't cause hair-pulling levels of wait time.

    Interesting thoughts. I'm not 100% sure how I want to approach it ultimately; at the moment nothing is off the table. I'd like to get to know voxel cone tracing a little better before I decide.
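    For the Skyrim-style cell setup, the scene-streaming side is at least simple. A minimal sketch (the scene name is hypothetical; each scene just keeps whatever lighting data it was baked or set up with):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.SceneManagement;

        // Sketch of streaming a baked interior cell on top of a dynamically lit exterior.
        public class InteriorCellLoader : MonoBehaviour
        {
            [SerializeField] private string interiorScene = "Tavern_Interior"; // hypothetical scene name

            public void EnterInterior()
            {
                // Additive load keeps the exterior (LPV / dynamic GI) scene resident.
                SceneManager.LoadSceneAsync(interiorScene, LoadSceneMode.Additive);
            }

            public void ExitInterior()
            {
                // Drop the interior cell (and its baked lighting data) when heading back outside.
                SceneManager.UnloadSceneAsync(interiorScene);
            }
        }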
     
    Alverik, hopeful and RB_lashman like this.
  33. Deleted User

    Deleted User

    Guest

    Just for reference here's all the screenies I did when testing out various solutions:

    LPV (ShadowK) + radiance cascade volumes

    Screen1.jpg
    Screen3.jpg

    SEGI:

    Screen2.jpg
    UE4 Lightmass:

    Screen5.jpg
    UE4 LPV (Lionhead) - TBH I gave up on this one after one wall came out completely different to the rest. Talk about light leaking to the max!

    Screen4.jpg
     
  34. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Judging by the images only (where no artifacts are visible), I would say that your solution (both images at the top) gives the best result - just great for the current scene! Any chance of providing a build, just to give it a try?!
     
    TooManySugar and RB_lashman like this.
  35. Deleted User

    Deleted User

    Guest

    At some point, yeah, I'll release it as a public alpha since it's already based on some MIT stuff; I could charge for it, but morally I'd rather just give back to the community. I need to find out why it's crashing, and while it might look good in a screenshot, there are a lot of artifacts whilst moving - at this point it's probably a toss-up with SEGI.

    The point of all of this is that I'm trying to decide what's best (with a bit of input, of course). In all fairness Enlighten can beat every one of these solutions (as odd as it sounds); it really just needs something like a long-range AO solution to traverse between indoor and outdoor.

    Soon my friends, sooooonn..!
     
    Alverik, coverpage, FPires and 5 others like this.
  36. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    Is your method not also similar to that? From my testing, cone-traced GI has the worst light bleeding of all of them. As soon as the cone needs to trace more than 1-2 m away, it starts sampling heavily mipped voxel data. You don't notice this in small rooms though, as it doesn't have to trace very far (your screenshots).

    I'm in the middle of moving over to a more sparse LPV approach right now. That way I can still have a large view distance without using up all my VRAM. I'm also playing around with a realtime radiance caching system as well. Not sure if I can get the performance I need from that version though.

    I think a low-detail GI solution + screen-space GI is the way forward for realtime GI solutions, IMO. It can give detail close to/better than cone tracing and still hit high frame rates (until path tracing becomes a thing).

    Edit: also, Enlighten isn't really an option, as this needs to be realtime with no pre-compute.
    Also, the 70 ms on culling is SEGI voxelizing the scene.
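    To put a number on the mip point (standard cone-tracing arithmetic, not Lexie's or SEGI's exact code): the sample footprint widens with distance, so the trace has to read coarser and coarser mips of the voxel volume:

    Code (CSharp):
        using UnityEngine;

        public static class ConeTraceMipExample
        {
            // distance: how far along the cone the sample is taken.
            // coneHalfAngleRad: half the cone aperture, in radians.
            // voxelSize: world-space size of one voxel at mip 0.
            public static float SampleMip(float distance, float coneHalfAngleRad, float voxelSize)
            {
                // Diameter of the cone's cross-section at this distance.
                float sampleDiameter = 2f * distance * Mathf.Tan(coneHalfAngleRad);

                // Pick the mip whose voxel footprint roughly matches that diameter.
                return Mathf.Max(0f, Mathf.Log(sampleDiameter / voxelSize, 2f));
            }
        }

    With 0.1-unit voxels and a 60-degree cone, a sample 2 units away already has a ~2.3-unit footprint, i.e. roughly mip 4-5 - voxels well over a unit wide - which is where the leaking and blur come from.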
     
    Last edited: Oct 18, 2017
  37. arnoob

    arnoob

    Joined:
    May 16, 2014
    Posts:
    155
    I think you are definitely right; from my tests, SEGI does really well on rather large structures, but poorly on small details. However, I wasn't aware there were decent screen-space GI solutions for Unity - do you have any recommendations for assets that can do that?
     
    RB_lashman likes this.
  38. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
    this is the one for now lol
     
    arnoob and RB_lashman like this.
  39. Lexie

    Lexie

    Joined:
    Dec 7, 2012
    Posts:
    646
    None that are public yet. It's more an observation from doing lots of research on GI. Screen space does a pretty good job at reconstructing the smaller detail if you have some lower-detail GI to go with it.
     
    arnoob and RB_lashman like this.
  40. Deleted User

    Deleted User

    Guest

    It is pretty much exactly the same. Cone tracing does have a lot of bleed (worse than mine), and I also seem to get very inconsistent results, which is a baseline negative of cone tracing as a whole, due to using "beams" / cones / thick rays, whatever you want to call them.

    LPV does have light bleeding though, and I know Enlighten isn't "real-time" - hence I already mentioned the pre-computation step; that wasn't the point. I was mentioning how one could blend it with another solution to gain the benefits of both, depending on how your game is set up.

    I actually did notice it in small rooms as well with SEGI's implementation, and yes, I use a sparse voxel octree with mine. I still think CryEngine / Lumberyard's approach is best: it just fires thousands of rays traced through voxels / shadow maps to gather occlusion / indirect lighting.

    But it has separate modes; one is lightweight, where the bounced light is sampled directly from shadow maps without the use of compute shaders.

    Then they have a full-shebang version where several layers of radiance are voxelised with opacity, direct lighting is injected, and it is sampled during the ray-tracing pass.

    I've honestly never actually tried to implement a cone tracing solution, so I'm just very interested to learn about it. From what I'm finding out it seems like it's the least preferred method, but I can try various techniques to see how it ends up - if nothing more, just to learn. I'll not spend too much time on it, as there are more "proven" techniques out there.

    As for the 70 ms overhead: it should never take 30 minutes on a GTX 1080 (that's when I gave up) to voxelise a scene (it doesn't even take that long in an offline renderer for the scene size). I've had entire forests with VXGI that never had issues, so something was wrong.

    Although, interestingly enough, it only happened on the Blacksmith scene; in every scene where I've used my own shaders for foliage etc. it has never happened, and on top of that I was getting around 100 FPS on a small scene with an extended cone length.

    What was more confusing is how the cascaded version ran slower than the other SEGI version - something I'll look at.
     
    Last edited by a moderator: Oct 18, 2017
  41. Abuthar

    Abuthar

    Joined:
    Jul 12, 2014
    Posts:
    92
    Just showing off a little more SEGI eye-candy. My project uses 1 model at a time so performance is great and we get high quality bounce, occlusion, and emissive lighting :)

    http://gph.is/2gR97Cl

    EDIT: Is there a way to embed gifs in the forum? Would probably be more convenient than clicking a link :\
     
    Last edited: Oct 19, 2017
    RB_lashman and coverpage like this.
  42. f1ac

    f1ac

    Joined:
    May 23, 2016
    Posts:
    65
    Not sure about embedding .gifv, but for Chrome there is an extension called Hover Zoom+ that shows images/gifs/videos when you hover over a link.
     
    RB_lashman likes this.
  43. Martin_H

    Martin_H

    Joined:
    Jul 11, 2015
    Posts:
    4,436
    How much is the performance of SEGI roughly affected by the rendered screen resolution versus the spatial voxel resolution? I'm wondering what SEGI could do for a low-res 2D pixel-art game if you make the voxel resolution in the screen-depth axis really flat and use textured meshes with depth for sprites, as if the pixels were extruded out into individual cubes.
    Or is this all crazy talk, and should one really use a custom screen-space GI solution that works similarly to SSAO instead?
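    On the cost question, a back-of-the-envelope split (general voxel-GI reasoning rather than SEGI-specific numbers; the resolution and texture format below are placeholders): the voxel volume and its mips cost the same regardless of screen resolution - they grow with the cube of the voxel resolution - while the per-pixel cone-tracing pass is what scales with screen resolution, so a low-res pixel-art target mainly saves on that second part:

    Code (CSharp):
        using UnityEngine;

        public static class VoxelBudgetExample
        {
            public static void LogEstimate()
            {
                int res = 256;           // voxels per axis (placeholder)
                int bytesPerVoxel = 8;   // e.g. an RGBAHalf 3D texture (placeholder)

                long baseVolume = (long)res * res * res * bytesPerVoxel;  // 128 MiB at 256^3
                long withMips = baseVolume * 8 / 7;                       // a full mip chain adds ~1/7

                Debug.Log("Voxel data: ~" + (withMips / (1024f * 1024f)).ToString("F0")
                          + " MiB, independent of screen resolution");
            }
        }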
     
    arnoob and RB_lashman like this.
  44. OnlyVR

    OnlyVR

    Joined:
    Oct 5, 2015
    Posts:
    55
    Alverik and RB_lashman like this.
  45. TooManySugar

    TooManySugar

    Joined:
    Aug 2, 2015
    Posts:
    864
    Alverik and RB_lashman like this.
  46. Abuthar

    Abuthar

    Joined:
    Jul 12, 2014
    Posts:
    92
    Kind of a random question, but does anyone know of translucency/SSS shaders that are compatible with SEGI and its emissive lighting? As far as I've tried, UBER and Alloy do not contribute to emissive lights when it comes to SSS, and the Pre-Integrated Skin Shader doesn't work with SEGI at all.
     
    RB_lashman likes this.
  47. mattis89

    mattis89

    Joined:
    Jan 10, 2017
    Posts:
    1,151
    Hello! Is this asset still alive and kicking? Does it work with Unity terrain? I have a big one, 8k x 8k in size.
     
    arnoob and RB_lashman like this.
  48. buttmatrix

    buttmatrix

    Joined:
    Mar 23, 2015
    Posts:
    609
  49. zenGarden

    zenGarden

    Joined:
    Mar 30, 2013
    Posts:
    4,538
    Nice article.
    With CryEngine/Lumberyard SVOGI, this is another solution that works for large open worlds without baking anything.
     
    RB_lashman likes this.
  50. Tenebris_Lab

    Tenebris_Lab

    Joined:
    May 23, 2017
    Posts:
    35
    SEGI is great - an unexpected surprise in the world of Unity's non-real-time solutions.

    Some issues but mostly very good results.

    LuxWalker_Day.jpg
    LuxWalker_Dusk.jpg

    This scene above consists of a model loaded at run-time. All vegetation is added during the session. Fully dynamic because of SEGI.

    We have some really interesting people on this thread. Clever minds working together or just snooping should make for some nice development. I hope someone with some skillz makes something even more awesome.

    Having projects like SEGI allows for some to get good results easily, and for others to learn a tremendous amount and make some crazy things.



    screen_1920x1080_2017-10-17_10-54-53.png screen_1920x1080_2017-09-21_20-53-40.png

    There are no lights in the night shots, only invisible or visible geometry with emissive materials.
     
