
SEGI (Fully Dynamic Global Illumination)

Discussion in 'Assets and Asset Store' started by sonicether, Jun 10, 2016.

  1. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    In my brutally honest opinion: in general I do not see any global illumination going on - it looks like built-in hemisphere ambient, without any SSAO-style techniques.

    About the pink-ish picture:
    I don't see a gradient in the shadow area; it all looks like the same shadow strength to me. I would expect a gradient depending on how many indirect bounces reach the position.
    The ground on the left is reflecting something, not sure what (the roof? which shouldn't be as lit as it is in the picture anyway).
    Not sure why everything is pink-purple-ish.

    About the picture before that (the one about perf improvement):
    Also missing most of the gradient I would expect.
    There is some 'rim-shadowing' going on at some of the pillars.
    The roof on the left side has something I would best describe as ambient occlusion, but it also looks rather inconsistent. May be the same as the rim-shadowing.

    Edit: scheichs' pic in the post above this looks like it has 1 light bounce with a ~2-3 meter range
     
  2. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    The pink was from overriding the sky color to better illustrate the GI part of the image. It was deliberate. I apologise for the confusion :)

    The version from scheichs' pic was based on someone else's fork of SEGI (https://github.com/CK85/SEGI) which writes the Unity shadow map into the voxel data. I've since rebased on Sonic Ether's version, which does not do this. Hence the change in behaviour. As someone else pointed out, that approach resulted in behaviour that was just wrong.

    The slight 'rim-shadowing' is due to reconstructing that missing gBuffer. That is definitely a point to be improved upon. It's been a source of great pain.

    Below is a better on/off comparison pic, without anything weird going on, to better illustrate the state of this thing.

     
  3. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Hey, since you're reverse engineering SEGI, would it be possible for you to make a guided tour of the code (like a high-level chart), explaining what each part does and how they are connected together? It might allow more bystanders to hop in and contribute!
     
    arzezniczak, ftejada and Dzugaru like this.
  5. N00MKRAD

    N00MKRAD

    Joined:
    Dec 31, 2013
    Posts:
    210
    Both kinda look like it's just direct lighting with ambient light on top of it.

    There's no visible light bouncing and everything looks really flat :/
     
  6. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    You're not wrong there.... Everything is working.... Just the pathtracer is failing to pick up the colour as it's doing its thing... This part of it is baking my noodle, tbh. I've double-checked everything so many times and am just not making much progress there.



    Documenting this thing is a good idea. I'll see what I can do there.
     
    ftejada and neoshaman like this.
  7. Demhaa

    Demhaa

    Joined:
    Jun 23, 2018
    Posts:
    35
    Yeah, one thing that bugged me about SEGI was that the shadows were based on primitive shapes. Will it get to the point where the shadows are the proper shape?
     
  8. scheichs

    scheichs

    Joined:
    Sep 7, 2013
    Posts:
    77
    For further discussion, can we agree to disable environment lighting in the lighting settings and disable the torch group? It's hard to determine any GI effect if there are too many different things contributing to the scene lighting.
     
  9. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Agreed.

    In the meantime I figured out why it wasn't working right. You need to invert the depth buffer... I'd adapted that function to enable stereo rendering support... Must have spaced on the need to invert the depth buffer while I was there :D

    Note, this fix isn't in the repo yet. I'm still fiddling.

     
  10. scheichs

    scheichs

    Joined:
    Sep 7, 2013
    Posts:
    77
    Yeah! Looks much better now! :)
     
  11. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    Does this version of SEGI include cascades, thus defeating the view distance issue? You seem to have revamped it a lot
     
  12. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Right now? No, it does not.... Mostly because I was keeping things as simple as possible while learning this beast.
    Could it? Yes, if people would prefer cascades.

    It'll only be a few days' work to port it over.

    Quick show of hands, people: cascades, yay or nay?
     
    ivanmotta, Shinyclef and S_Darkwell like this.
  13. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    Oh, if cascades can be combined with reflections and I can rely on the lighting at long distances, so I can get rid of the blur and fog I use to conceal the limits of the effect, that would be great.
     
  14. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Reflections are the complication I was avoiding by holding off on cascades till now.... SE never finished that part of it, so I'll have to figure out a way to work that out myself. But I enjoy being mentally challenged, so it's all good :)
     
  15. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    I've got a problem with the latest install. It throws "<SEGI> is not attached to a Main camera. Ensure that the attached camera has the MainCamera tag." I have my only camera in the scene with that tag and with the SEGI post-process effect attached to it.
     
  16. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Usually if that shows up when it's not supposed to, you're either in scene view or have attached segi-nkli.cs to the camera rather than adding it as a PostProcessing2 effect.

    Here's a version that removes that warning in scene view (but expect some slight weirdness, it currently only renders correctly in game view)

    https://github.com/ninlilizi/SEGI/releases/tag/v0.9.4b3
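
    Roughly, the check behind that warning can be sketched like this (an illustrative C# snippet with a made-up class name, not the actual SEGI_NKLI code):

    using UnityEngine;

    public class MainCameraTagWarningExample : MonoBehaviour
    {
        void OnEnable()
        {
            var cam = GetComponent<Camera>();

            // Scene-view cameras never carry the MainCamera tag, so don't warn for them.
            if (cam == null || cam.cameraType == CameraType.SceneView)
                return;

            // Only warn for game cameras that are genuinely missing the tag.
            if (!cam.CompareTag("MainCamera"))
                Debug.LogWarning("<SEGI> is not attached to a Main camera. Ensure that the attached camera has the MainCamera tag.");
        }
    }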
     
  17. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    Thanks, working now!

    By the way, is there any way to make an object contribute only as emissive to GI? I would like to place ambient lighting, and I managed an invisible shader that emits light by setting the albedo as cutoff, but the object still contributes to occlusion, causing some "strange" shadowing on the walls and ceilings it sits on. So is there any way to make it so an object only contributes its emissive properties?
     
  18. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    I have been quietly observing your progress, deciding not to ask too many questions or make too many requests, lest I put pressure on your shoulders and make you want to give up... But since you asked... huge YES to cascades!

    My top priorities are 'mood' and performance. Basically I want pretty glowing emissive surfaces with acceptable framerates. Cascades sound like a huge win for my case.
     
    chiapet1021 and ftejada like this.
  19. PROE_

    PROE_

    Joined:
    Feb 20, 2016
    Posts:
    32
    If cascades would be performant enough for open worlds :rolleyes:
    Definitely yesss.
     
    ftejada likes this.
  20. ftejada

    ftejada

    Joined:
    Jul 1, 2015
    Posts:
    695
    Even if you do not need it yourself, I think it would still be appropriate, since you can also have medium-sized interiors, and if you can achieve an increase in performance with only a small loss of quality in the distance, that seems worthwhile.

    On the other hand, I do not know how SEGI currently behaves when parts of the scene are not being drawn because of occlusion culling... Does SEGI account for this and gain performance from working with fewer meshes, or is the performance cost the same with occlusion culling as without it?

    Sorry if it's a stupid question, but the doubt came to me.
     
  21. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    Dunno what it is, but I might have to revert back to the old non-post-process-stack SEGI.

    With the old one I could achieve completely dark rooms and a bit better distribution of light, with 80 fps and light shadows turned on, in Unity 2017.

    On this one I can't manage more than 50 fps, even with subsampling and resolution turned to low and lights off.


    Sad, because I really loved the idea of integrating SEGI into PPSv2 and I could finally use it in 2018, but I must focus on other work areas.
     
  22. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Don't worry about discouragement. A bunch of internet strangers telling me I'm doing it wrong is massively encouraging to a shader noob like myself. Even if it's just to avoid the embarrassment of forever being remembered as the girl who couldn't SEGI.

    Essentially there are 2 parts to SEGI... The voxelization and the path tracer. The voxelization basically uses a geometry shader to write to a 3D texture. Geometry shaders are kinda slow whatever you do, but 3D textures have the advantage that you can move around inside the volume and still render based on your current location within it... Which makes them easy to cache, because provided you're not near the edge of it you can just keep re-using the same volume.
    The pathtracer is the tricky bit... But it already uses culling and is not affected by scene complexity. It simply works on the per-pixel lighting of the current viewport. The determiner of performance here is how many times it samples the 3D texture, which is calculated by ViewportWidth*ViewportHeight*Cones*ConeTraceSteps... Again, this isn't affected by scene complexity, and the 3D texture is mipmapped, which already smooths out the detail as you get further from the camera. As it is... An empty scene will take just as much to trace as one with lots of geometry because of it.
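
    To put rough numbers on that formula, here's a tiny illustrative C# sketch (the settings used in the comment are made up, not SEGI defaults):

    public static class ConeTraceCostExample
    {
        // Per-frame 3D-texture sample count = ViewportWidth * ViewportHeight * Cones * ConeTraceSteps.
        public static long SamplesPerFrame(int viewportWidth, int viewportHeight, int cones, int coneTraceSteps)
        {
            return (long)viewportWidth * viewportHeight * cones * coneTraceSteps;
        }
    }

    // Example: 1920 x 1080 with 6 cones and 8 steps per cone is ~99.5 million samples per frame,
    // and that number stays the same whether the scene is empty or full of geometry.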
     
    Last edited: Oct 9, 2018
  23. gvrocksnow

    gvrocksnow

    Joined:
    Aug 19, 2013
    Posts:
    4
    @Ninlilizi: Just downloaded your latest release and it works great! One thing I found is that I needed to change the ordering from "PostProcessEvent.BeforeStack" to "PostProcessEvent.AfterStack" in segi for the bloom effect to work correctly. Thank you for updating this asset. Another yes for cascades.
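
    For reference, that ordering is just the event argument on the PostProcessing v2 attribute; roughly like this (illustrative class names, not the actual SEGI_NKLI declaration):

    using System;
    using UnityEngine.Rendering.PostProcessing;

    [Serializable]
    [PostProcess(typeof(MyEffectRenderer), PostProcessEvent.AfterStack, "Custom/MyEffect")]
    public sealed class MyEffectSettings : PostProcessEffectSettings { }

    public sealed class MyEffectRenderer : PostProcessEffectRenderer<MyEffectSettings>
    {
        public override void Render(PostProcessRenderContext context)
        {
            // AfterStack injects this effect after the built-in stack (bloom etc.) has run,
            // instead of before it as with PostProcessEvent.BeforeStack.
            context.command.BlitFullscreenTriangle(context.source, context.destination);
        }
    }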
     
    hopeful likes this.
  24. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    About 50% done with cascades.

    I've recently been reading up on signed distance fields and stuff, and realised the pathtracer traces the full depth of the voxel volume per sample, i.e. behind any solid geometry in the scene. So I figured that if I sample the depth texture and do a bit of math to align it with the voxel depth, I can have the pathtracer bail out early once it's gone beyond the newly determined depth stop, saving a few samples per pixel.
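
    The bail-out logic, sketched in C# just to show the idea (the real work happens in the cone-tracing shader, and these names are made up):

    public static class EarlyExitExample
    {
        // Walk the cone's steps and stop once the accumulated distance passes the
        // per-pixel limit reconstructed from the depth texture (aligned to voxel space).
        public static int StepsActuallyTraced(float[] stepLengths, float depthStopInVoxelSpace)
        {
            int traced = 0;
            float travelled = 0f;
            foreach (float step in stepLengths)
            {
                travelled += step;
                if (travelled > depthStopInVoxelSpace)
                    break;              // samples beyond the determined depth stop are skipped
                traced++;
            }
            return traced;
        }
    }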

    This has today resulted in an ~8% increase in performance in the demo scene without losing any detail.

    @gvrocksnow thanks for the tip. I've changed that in the repo now :)
     
  25. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    So the 3D texture stores plain colour? Do you think it would work with SH instead (and have directional data)? Would that improve performance?
     
  26. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    Oh that's brilliant! Fantastic work!
     
  27. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Spent a day reading up on SH.... And I reckon you could probably achieve passable results using SH and Light Propagation Volumes in a material shader.

    This explains it http://ericpolman.com/2016/06/28/light-propagation-volumes/
     
    chiapet1021, neoshaman and ftejada like this.
  28. Demhaa

    Demhaa

    Joined:
    Jun 23, 2018
    Posts:
    35
    Have you looked at Lexie's HXGI thread? It's really interesting
     
    ftejada likes this.
  29. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    SH is fast, but I feel like it's an unnecessary step. The gains from an SH implementation will be small, but there will definitely be visual loss. Given that SEGI is already an approximation by sparse voxels, I wonder if another layer of approximation is necessary. GPUs are getting faster at a steady pace, and the original SEGI can handle a 1080p render at 60 FPS on a GTX 1070 now with no issues (I built an AAA-grade FPS scene to test it a few months back). I just don't see an SH implementation as the "way" to go. The default Unity light probe uses SH and, well, enough said.

    Maybe the optimization should be targeted at getting better visuals at less cost than the original SEGI, not at compromising visuals for a few more FPS. Lighting oddities can be really disturbing to look at, and they mostly come from conventional optimizations (lower sampling, approximated sampling, etc.).

    I understand that some sort of visuals vs. performance trade-off is inevitable, but taking SH feels like beating a dead horse. Then again, this is just me hating SH and I could be wrong :D
     
  30. chiapet1021

    chiapet1021

    Joined:
    Jun 5, 2013
    Posts:
    605
    I believe SH was proposed as a way to help address SEGI's current light leaking challenges, right? I'm assuming there are other potentially viable methods.
     
  31. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    If that's the case, then I guess we are stuck with SH.
     
  32. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Either way.... It doesn't hurt to learn about it as an approach.

    Actually.... I've been mentally mulling over UAV windowing a secondary 3D texture.... That way I could compute a large number of samples into the secondary texture over many frames... Cache that result.... And then retrace the secondary texture as a single sample per frame.... That way I could conceivably achieve something close to 30-odd samples with the per-frame overhead of 2-3.

    Though, it's just a theory that this would even work at this point.
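
    The resource side of that idea is just a 3D render texture with random-write (UAV) access that a compute shader could accumulate into over several frames; a minimal sketch (resolution and format are assumptions, not repo code):

    using UnityEngine;
    using UnityEngine.Rendering;

    public static class AccumulationVolumeExample
    {
        public static RenderTexture Create(int resolution)
        {
            var volume = new RenderTexture(resolution, resolution, 0, RenderTextureFormat.ARGBHalf)
            {
                dimension = TextureDimension.Tex3D,   // a 3D texture rather than a 2D one
                volumeDepth = resolution,
                enableRandomWrite = true,             // exposes it as a UAV to compute shaders
                useMipMap = false
            };
            volume.Create();
            return volume;
        }
    }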
     
    Last edited: Oct 11, 2018
    ftejada and hopeful like this.
  33. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, that's not actually true: SEGI uses a grid of single colors. SH lets you get directional data instead of single colors, so it's an improvement, as you get more data out of the voxel grid.

    For example, in order to combat light leaking in SEGI you would need blockers, and blockers are the size of a single voxel. Using SH you would only sample in the direction of the normal; that is, a single wall can receive different light from a single voxel thanks to the directional data, so you have less need for blockers. You can apply the same techniques as SEGI with SH.

    You need to understand the techniques to understand why something works and why it doesn't.
    SH in Unity means either sparse light probes or LPPVs; those are static baked lighting in different structures for different uses. SH itself can have many uses.

    LPV is another approximation, but there it's not the SH that is responsible for the visuals, it's the propagation technique. SEGI uses cone tracing to sample and update the volume.

    Basically GI needs 3 systems:
    1 - a data structure
    2 - a sampling method
    3 - an update method

    1. Data structure
    a. how do you store light primitive in a single point?
    - color constant: lightmap, segi, cubemap GI
    - SH: LPPV, tetrahedral SH, LPV
    - surface: enlighten
    b. How do you store the primitive?
    - texture: lightmap
    - graph: tetra SH
    - volume texture: SEGI, LPV, LPPV
    - surface lists: enlighten
    - Cubemap: Cubemap GI
    c. other possible data structure
    - binary tree (RTU)
    - BVH (RTX)
    - Octree (SVOGI, some HxGI experiment)
    - cubeface colors (like in Half life 2)

    2. Sampling method
    - texture look up: lightmap, segi, LPV, LPPV, cubemap GI
    - graph traversal: tetra SH
    - list of surface: enlighten
    - raytracing with tree traversal (see 1c except last)

    3. Update method
    - List query: enlighten
    - propagation: LPV
    - Cone tracing: SEGI
    - render scene: cubemap GI
    - none: LPPV, tetra SH, lightmap
    - raytracing with tree traversal (see 1c except last)

    HxGI uses SH and gets good results even on low-end hardware, but then it's a tweaked LPV method.

    What you have to ponder is those 3 points and the trade-offs they introduce. SEGI is expensive but kinda accurate because:
    - you need to perform voxelization of the scene
    - you cone trace every viewport pixel

    It's not because of how it stores the structure with plain directionless color. HOWEVER, SH2 (which has enough resolution) can increase the VRAM requirement; it's 7 floats per RGB channel, aka 21 floats. It's always a trade-off. But we might actually be able to cut down the resolution thanks to the directional data.

    Or use an SH buffer as a way to have directional discrimination (i.e. of single colors) for blocking geometry (to prevent light leaking), which would be cheaper.
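
    To make the "directional data per voxel" point concrete, here is a minimal sketch of a 2-band (L0 + L1) SH entry, i.e. 4 coefficients per colour channel instead of one flat colour (my own illustration, not how SEGI or HxGI store their data):

    using UnityEngine;

    public struct ShVoxelExample
    {
        // 4 SH coefficients per colour channel: 1 constant (L0) + 3 linear (L1).
        public Vector4 r, g, b;

        // Evaluate the stored radiance for a given unit direction,
        // so one voxel can return different light for different normals.
        public Color Evaluate(Vector3 dir)
        {
            var basis = new Vector4(0.282095f,          // Y(0,0)
                                    0.488603f * dir.y,  // Y(1,-1)
                                    0.488603f * dir.z,  // Y(1,0)
                                    0.488603f * dir.x); // Y(1,1)
            return new Color(Vector4.Dot(r, basis),
                             Vector4.Dot(g, basis),
                             Vector4.Dot(b, basis));
        }
    }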
     
    DragonmoN, imblue4d, ftejada and 2 others like this.
  34. Ciano77

    Ciano77

    Joined:
    Mar 6, 2014
    Posts:
    5
    Hi Ninlilizi,
    first of all, thanks for your time on this project. I downloaded the latest version of SEGI, v0.9.4b3, but I have a compiler error:
    "Assets/SEGI/SEGI_NKLI.cs(19,35): error CS0246: The type or namespace name `ParameterOverride' could not be found. Are you missing an assembly reference?" I tried with different versions of Unity 2017 and also got a job system error, so I switched to the current version of Unity, 2018.2.12f1. Do you have any idea how to resolve this problem?

    Thanks.
     
  35. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    It currently doesn't compile with 2017.... With 2018, you need the PostProcessing2 stack installed.
     
  36. Ciano77

    Ciano77

    Joined:
    Mar 6, 2014
    Posts:
    5
    Thanks Ninlilizi,
    now it compiles just fine with PostProcessing 2; I got it from GitHub. Anyway, I get these issues, and when I press Play, just the error on Graphics.Blit ("unknown material").

    Thanks,
    Luciano

    [attached screenshot: upload_2018-10-13_6-45-36.png]
     
  37. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Just in case some people are interested, I found these:
    https://www.in.tu-clausthal.de/fileadmin/homes/CG/data_pub/paper/GISTAR.pdf
    A comprehensive survey of global illumination techniques

    http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/
    WebGL Deferred Irradiance Volumes

    http://jcgt.org/published/0005/04/02/
    Real-Time Global Illumination Using Precomputed Illuminance Composition with Chrominance Compression
    https://copypastepixel.blogspot.com/2017/04/real-time-global-illumination.html

    I was trying to come up with a simpler GI solution too, using cubemaps instead of a volume for low-end hardware.
    - The reason we use a volume is that it's a simple lookup by position: for a given position of a pixel or object, you just look up the light inside the corresponding cell of the volume. It's O(1), easily cacheable, and has a fixed cost that doesn't depend on scene complexity; the only difficulty is in updating the structure. But at the same time you end up using a lot of empty space for what's essentially coherent data, since the only relevant data is at the surfaces of the lit, bouncing objects and the lights.
    - Going for cubemaps is based on a simple observation. GI in itself is simple, especially with the trick of accumulating over frames to get cheap multiple bounces for free. The really hard part of GI is that it's the sum of all light visible to a point, and that visibility takes time to compute. Assuming a simple convex scene, a cubemap is basically a representation of the scene, i.e. the visibility of all surfaces from a point in space, and since the scene is convex, every point can see every other point. Cubemaps being textures, we can use the mipmaps to get the gathering for free. It's not a new idea; it's the system behind box-projected cubemaps, and you can re-render the scene with the projected light to get a cheap bounce.
    - Now the problems with that idea are that:
    1. scenes aren't convex
    2. re-rendering is costly.
    - Addressing two: since the cubemap is a representation of the scene, we can just use it to compute the light directly, so we have the equivalent of the G-buffer. We store albedo and normal, which give us the direct light, and world position to compute shadows to mask the direct light.
    - Then accumulate the result into a bounce map using mipmaps, and use the final composition of the bounce map as the final box-projected lighting. The problem is the whole "using mipmaps" idea: I don't know if we can actually compute the mipmaps of a shader result in real time (rather than only reading the stored ones before computation), and I don't know the cost of computing mipmaps at runtime anyway.
    - Then there is the whole "scenes aren't convex" problem to solve. I'm not sure how that works yet; probably using a complex graph to find which region each point falls into and sampling a corresponding cubemap, and then merging the results of cubemaps when necessary? Would that still be cheaper?

    Looking at SEGI, I was wondering if we could replace the volume with cubemaps to skip empty space, but then you would lose the O(1) query and have a complex data structure for non-convex scenes...


    Another idea is to use an indirect G-buffer: for each lightmap pixel, there is a corresponding 8x8 G-buffer tile that represents the most contributing directions. Each texel of the tile holds albedo, world normal and world position and computes direct lighting; then we sum all the texels of the tile to get the final lightmap pixel color... (Maybe we can store the individual indirect points in a lookup texture and have the indirect map only store the UV and weight mask of that indirect light; that will decrease the size of the data.)
     
    Last edited: Oct 14, 2018
    guycalledfrank likes this.
  38. Ciano77

    Ciano77

    Joined:
    Mar 6, 2014
    Posts:
    5
    Very interesting articles!
    For sure this technique is useful for today's hardware and also for the low end. Anyway, I think that the voxel solution is the future; maybe we need some algorithm to optimize away the empty space, but think about one thing: if you already voxelize the scene for rendering, you can reuse the same voxels for physics... see the multi-material physics of FleX by Nvidia.
    I'm trying to learn voxel solutions and the SEGI concept, to try to end up with something useful.
     
  39. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I was wondering if we could use a simple version of that with vanilla SEGI, but jump-start the screen-space computation using the cubemap for the viewport rays and skip some empty space? Unless it's already doing that, lol. Or maybe we don't have to accelerate ALL rays, only rays that are contained inside the visibility of the cubemap (and leave occluded ones to be raymarched).

    Just a random idea; I haven't thought too deeply about it because I would have to fully understand all the SEGI code at a low level, lol. I'm making assumptions based on what's been said about how it works.

    However, after the cubemap idea I proposed a tile-based indirect lighting method. I have thought a bit more about it and run some math...
    - You would need a preprocessed map of the scene's sample points (albedo.rgb, worldNormal.rgb, worldPosition.rgb).
    - You would need a lightmap buffer (rgb) and a discrete light buffer (rgb).
    - You would need an indirection tile map that stores the contributing sample points for each lightmap point (RGB, with RG as UV and B as weight).

    - The size of the indirection tile map is about how many sample points (per tile) can contribute to a single lightmap point. So the size is lightmap.size x n², with n being the size of the tile. A tile of size 8 is 64 contributing sample points; a tile of size 4 is 16. The weights might survive compression, but the UVs need to be exact.
    - The sample point map, since it is indexed by UV, is bound by the precision of the UV, so an 8-bit RG map can index a 256² map, which means 65,536 points MAX per scene, in 3 maps. Does that sound like enough? (Rough math sketched below.)
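
    The rough math from the two points above, as a tiny sketch (hypothetical helper, just to show the arithmetic):

    public static class TileBudgetExample
    {
        // Indirection tile map texel count: one n x n tile of contributing samples per lightmap texel.
        public static long IndirectionTexels(int lightmapSize, int tileSize)
        {
            return (long)lightmapSize * lightmapSize * tileSize * tileSize;
        }

        // Max addressable sample points for a given UV precision,
        // e.g. 8 bits per channel -> 256 * 256 = 65,536 points; 16 bits -> ~4.3 billion.
        public static long MaxSamplePoints(int bitsPerUvChannel)
        {
            return 1L << (2 * bitsPerUvChannel);
        }
    }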

    The process of computing (static scene) GI would look like this:

    Pass 1.
    Compute the discrete light buffer from the scene sample points, use lightmap buffer to bounce back light, store it.

    Pass 2.
    a. (compute-heavy version?) Using the indirection tile map, sample the discrete light buffer and sum the samples per tile, storing the result for each lightmap buffer pixel.
    b. (memory-heavy version?) Store the samples in an indirect map buffer instead of summing them (still in tiles). Apply mipmapping to that buffer.

    Pass 3.
    a. Following 2a, use the lightmap in the scene.
    b. Following 2b, use the indirect map buffer, and sample the mipmap directly at the correct LOD.
     
  40. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Additional note:
    The overall data cost is 4 x 256 RGB lighting maps, 1 lightmap, and 1 indirection map that is lightmap size x tile size (which controls the quality of the GI). The lighting has a fixed cost; the GI-gathering indirection map takes all the cake :oops: But the compute cost only needs to be updated when the light changes.

    Since we need a loop for the gathering, it's better to store the UV indirection in both RG and BA, giving a single read for two samples, and then move the weights into a separate RGBA weight texture where we can read 4 weights in a single go. That can reduce the fetching cost.

    I still don't know how practical it will be in reality, but basically I would need to find a way to distribute sample points in the world, give them proper weights, then use something to create the indirection map that controls the visibility of those samples. I'm not there yet.

    Another property is that we can create a high-quality version and downsample it quite easily. If we use 16-bit precision for the indirection map we can address 4 billion points! The tile can be of any size, but is limited by the max texture size. Tiles themselves can be ordered by weight (they are technically random lists), so downsampling just means cutting off the smaller, less contributing weights. Given that, we can store raw hi-res data, then generate maps according to our needs.

    It doesn't compete with SEGI, as SEGI is fully dynamic and I rely on precomputation.

    Which means the next question is: how do we compute and update the sample points at runtime, as cheaply as possible?
     
  41. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    Hey, I think it's time you started your own thread. This thread is getting too long :D

    Also, give it a cool name.
     
  42. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    NIGIRI
    Ninlilizi's GI Realtime Indirect

    :oops: ok i'm out
     
    Shinyclef likes this.
  43. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,686
    Or just NinSEGI.
     
  44. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Yeah, but there's less of a pun (nigiri being a type of sushi).
     
  45. greengremline

    greengremline

    Joined:
    Sep 16, 2015
    Posts:
    183
    If you do add a bake step, please don't tie it to scenes, and make it asynchronous with the ms-per-frame user-controllable, because if SEGI is based on baking and requires scenes, or the bake is synchronous, it will make SEGI useless for the 90% of procedural games that use prefabs and need to keep frames smooth while loading.

    And if you're not making a procedural game, you won't need to use SEGI.
     
  46. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Following up my brain dump ...

    Turns out I don't need a cubemap to store a cubemap...
    I can just store all the cubemaps and their faces in a single lookup texture. This allows a complex scene representation.

    Then I use the indirection texture idea to point at a single cubemap, then use the normal to sample the lighting at the correct mipmap for gathering. And bounce back lightmap GI in a similar way.

    Then I can update the cubemap faces asynchronously to match ONLY geometry changes in the scene (lighting is done in the cubemap G-buffer), and use another cubemap texture only for dynamic objects (fewer objects to render, therefore a higher update rate) and just compose the two.

    It's easier to place cubemaps and link visibility from them, so we can create those maps (cubemap lighting and indirection) at runtime, which works for PCG. We can also have an additional fast (graph?) structure on the CPU to decide which cubemap to sample based on proximity and other parameters, i.e. discriminate which cubemaps are relevant contributors to the lighting.

    It might potentially be cheap enough for mobile. NOW I ONLY have to try implementing a proof of concept :eek:

    Thanks @jjejj87 for starting that train of thought :p


    Edit:
    I can use variable-size cubemaps too... or... I don't need cubemaps; I can just use SH for radiosity-only GI. Though that might work better with prebaking.
     
    Last edited: Oct 15, 2018
  47. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117
    Not true. The current Unity GI system fails in many scenarios:
    If you want time of day with GI -> need SEGI
    If you have a scene that has both indoors and outdoors -> need SEGI
    If you want a door or any dynamic object with GI -> need SEGI
    If you want to load levels async for an open world without matching all scene lighting -> need SEGI
    If you can't afford to wait 12 hours for a 100m x 100m scene precompute -> need SEGI
    If your scene file grows exponentially due to lightmap textures -> need SEGI
     
    hopeful likes this.
  48. greengremline

    greengremline

    Joined:
    Sep 16, 2015
    Posts:
    183
    That's fair, I hadn't considered those.

    Still, making it async and not requiring scenes would allow it to cover the majority of use cases.
     
  49. peeka

    peeka

    Joined:
    Dec 3, 2014
    Posts:
    113
    Hi, I am on Unity 2018.2.11f1. When I try the Sponza Atrium demo on Ninlilizi's SEGI-0.9.4b3, I get this error spam, and eventually it uses up all my system memory and Unity freezes/crashes.

    I am running VR, single-pass deferred; the demo has the same error when I change the camera to either Forward or Deferred.

    Any idea what could be wrong?

    Thanks
     

    Attached Files:

  50. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Hey guys.
    First, apologies for disappearing for a bit.... Needed to take a week out for mental health reasons. All those 10-hour days staring at shaders are not conducive to one's sanity ^_^

    For this... You are adding it as a PostProcessing2 effect and not just attaching the script to the camera?

    This is a problem with generating the sun depth texture when running in VR. I'm aware of this issue, know exactly what the problem is, and finding a solution is on my todo list. Hang tight :)
     
    scheichs and neoshaman like this.