Feedback I'd like this GI solution in Unity, thanks a lot :)

Discussion in 'General Discussion' started by hippocoder, Apr 6, 2019.

Would you like this?

  1. Yes

    94.9%
  2. Yes

    68.4%
  3. Yes

    69.2%
Multiple votes are allowed.
  1. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
  2. iamthwee

    iamthwee

    Joined:
    Nov 27, 2015
    Posts:
    1,505
    Yeah, shame the project is crowdfunded and runs in Blender.
     
  3. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    271
    Raytracing against SDF spheres and cubes with physics has been fast since 2012.
    http://madebyevan.com/webgl-water/
     
  4. Darthlatte

    Darthlatte

    Joined:
    Jan 28, 2017
    Posts:
    25
    Hi, how exactly does this work? Do you just set the sunlight to pure red and use a gradient sky with all blue colors and bake the lighting or what? The results look good, but I fail to understand how it works... If someone could write a mini guide or give some hints on this I would be very happy :)
     
  5. Adam-Bailey

    Adam-Bailey

    Joined:
    Feb 17, 2015
    Posts:
    228
    I've been meaning to expand that little test project to release as an example but have had absolutely no time.

    Exactly that. Sunlight set to RGB[255,0,0], ambient light to RGB[0,255,0], and then any other lights (controlled as one) to RGB[0,0,255].

    Bake lighting as normal if just baking direct lighting. If baking indirect lighting then all static geometry will need a plain white texture.

    That gives you a lightmap where the three lighting types are baked to R, G, and B respectively. You can then use those channels as masks in a shadergraph shader.
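    As a sketch of the recombination step (not Adam-Bailey's actual shader; in Unity this would be a multiply/add chain in Shader Graph, and all names below are made up for illustration):

```python
# Each baked channel acts as a grayscale mask for one light group, so
# the groups can be tinted and dimmed independently after a single bake.

def relight(albedo, mask, sun, ambient, lights):
    """mask = baked lightmap texel: R = sun, G = ambient, B = other lights."""
    return tuple(
        a * (mask[0] * s + mask[1] * amb + mask[2] * l)
        for a, s, amb, l in zip(albedo, sun, ambient, lights)
    )

# A texel lit only by the sun: mask (1, 0, 0) selects the sun color.
texel = relight(
    albedo=(1.0, 1.0, 1.0),
    mask=(1.0, 0.0, 0.0),
    sun=(1.0, 0.9, 0.8),      # warm daylight tint, chosen for the demo
    ambient=(0.2, 0.3, 0.5),
    lights=(1.0, 0.6, 0.2),
)
```

    Tinting `sun`, `ambient`, and `lights` at runtime then relights the whole scene from the single bake.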
     
  6. Darthlatte

    Darthlatte

    Joined:
    Jan 28, 2017
    Posts:
    25
    I understand, and thanks for the explanation :) So in the shadergraph, the values for direct/indirect lighting are lerped based on the RGB mask? I would love to see an example if/when you have the time ;-)
     
  7. DMeville

    DMeville

    Joined:
    May 5, 2013
    Posts:
    400
    +1, I would like this GI solution. I need something for a time-of-day system with dark areas (caves, houses) over large areas, and baking stuff is dumb. I know it's been a few months since this was posted, but I'm sad that no one from Unity has popped in.

    I've been lightly following the different GI systems for Unity for a long time: SEGI, HXGI, and I even got hyped years ago about SpectraGI with their impressive video but no actual product, popping in every few months hoping there had been a breakthrough in performance and the systems had actually been released to the public. I'm going through the preproduction phase of a new project that could really use a dynamic realtime GI solution now, so I stumbled on this thread.

    I've tried SEGI in the past but it was slow and artifact-y (maybe it's better now; this was years ago). HXGI hasn't released anything (and sadly I'm doubtful they will), and there are a few other half-baked systems that probably can't support anything more than demo scenes. Pretty desperate at this point.

    What other options are out there currently? Short of trying to do it yourself, or hiring someone smart enough to do it for you?
     
    Last edited: Jun 10, 2019
    iamthwee and joshcamas like this.
  8. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,271
    Does this type of post-injection work in Unity games?

     
  9. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    5,629
    I mean, that's all ReShade does, really, and there are ReShade presets for Hearthstone even.
     
  10. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,278
  11. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,271
    Adios Enlighten! (and good riddance!) Nice to see Unity finally being forced to develop an actual realtime solution!
     
    SunnySunshine and hippocoder like this.
  12. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    1,766
    Maybe it's a good thing that I wasn't able to bake anything with Enlighten.

    I don't need to rebake everything nao. (⌐ ͡■ ͜ʖ ͡■)

    Too bad it's an eternity before an actual realtime GI solution gets out of preview.
     
    hippocoder likes this.
  13. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,278
    Steady on, give Unity some positive reinforcement! There are people there who invested a lot of work and effort. Sometimes things don't go to plan, sometimes they do.

    Such mob :D
     
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    I was skeptical when @hippocoder said GI was solved for realtime (at a good-enough level, I mean), but damn, things have greatly evolved between the start of this thread and now. RTX + existing solutions + DDGI = @hippocoder was right. Will he ever pardon my sin? I shall never doubt a moderator again.
     
    Martin_H and hippocoder like this.
  15. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    25,278
    My child... sins cannot be absolved but should you go right ahead and make sure Unity does a good job of it, I'll forgive you :p
     
    xVergilx and neoshaman like this.
  16. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    BTW, I think I'm going to start implementing the idea floating around my head: the realization that any 2D texture is actually a 3D one thanks to its mipmap chain, and that, given how diffuse light spreads, it can actually be a good-enough representation.

    Also, I delved into light field courses and discussions, and they state that the 5D plenoptic equation (xyz plus two viewing angles; think a light field volume, generally a light probe array in games) is overkill, and we only need 4D (xy plus the two angles). Another way to put it: you don't (always) need a light probe volume; all you need is a cubemap, because that's essentially an empty 3D texture, and the plenoptic equation will fill the inside just fine. PS: I assume you store SH in the texture, not mere pixel colors.
    https://en.wikipedia.org/wiki/Light_field#The_4D_light_field
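    As a toy illustration of the "store SH, not pixel colors" point (my own sketch, not from any course; a single L1 band, one sample, and made-up names):

```python
import math

# L1 spherical harmonics: one constant plus three linear basis functions.
C0 = 0.282095  # Y_0^0
C1 = 0.488603  # Y_1^{-1}, Y_1^0, Y_1^1

def sh_project(samples):
    """Project (direction, radiance) samples into 4 L1 SH coefficients.

    Directions are unit vectors; the 4*pi/N weight assumes the samples
    cover the sphere uniformly.
    """
    coeffs = [0.0, 0.0, 0.0, 0.0]
    weight = 4.0 * math.pi / len(samples)
    for (x, y, z), radiance in samples:
        for i, b in enumerate((C0, C1 * y, C1 * z, C1 * x)):
            coeffs[i] += radiance * b * weight
    return coeffs

def sh_eval(coeffs, direction):
    """Reconstruct radiance arriving from `direction`: in a shader this
    is one texel fetch plus a dot product."""
    x, y, z = direction
    basis = (C0, C1 * y, C1 * z, C1 * x)
    return sum(c * b for c, b in zip(coeffs, basis))

# All light coming from straight up (+Y):
coeffs = sh_project([((0.0, 1.0, 0.0), 1.0)])
```

    Storing these four coefficients per texel is what lets one sample stand in for a whole sphere of incoming diffuse light.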


    I dunno about you, but that opens a lot of opportunity for weak and ultra-cheap hardware to get decent (diffuse) lighting (and probably GI compute) at small cost with no extra hassle (no 3D texture sampling, just one sample).

    Which means the main headache now is just knowing how to update those structures to get RTGI.

    EDIT:
    When you can't stop thinking and realize the mipmap solution makes so much sense for heightfield terrain... I thought you would need to raymarch the heightfield, but no! The light field solution works cheaply because it encodes the spherical lighting environment at one point; raymarching is only needed when you store pixel colors instead of SH. So you just need one single sample per point. Now you have a flat light field associated with a heightfield: all you need is to reconstruct the probe position from the heightmap and the 2D position, and sample that from the shader. DONE! There is no light inside the terrain anyway, and since probes interpolate just fine, you can sample the mipmaps too to get basically a column of lighting. You probably need a distribution policy to pick the sampling height of the mip probes, though, depending on the range of height variation, since each mip level covers a bigger and bigger area. That could probably work in tandem with the mipmap data of the heightfield anyway (which would store the max height of the area covered), but then you need to "march" the mip "column" to find the relevant probe, since the data is arbitrary in height; probably could use a pointer to the next mip height in another channel?
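    The heightfield idea boils down to something like this (a hypothetical sketch; the names and the `lift` nudge are mine, not a worked-out implementation):

```python
# One probe per heightmap texel, stored in a flat 2D "light field"
# texture, so a shading point only needs (x, z) to find its probe.
# The probe's world height comes from the heightmap itself; `lift`
# keeps the probe just above the surface.

def probe_position(heightmap, x, z, lift=0.5):
    """World-space position of the probe for heightmap texel (x, z)."""
    return (x, heightmap[z][x] + lift, z)

heightmap = [
    [0.0, 1.0],   # row z = 0
    [2.0, 3.0],   # row z = 1
]
```

    A mip chain over the same 2D texture would then give the "column of lighting" for points above the surface.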
     
    Last edited: Jun 21, 2019
  17. iamthwee

    iamthwee

    Joined:
    Nov 27, 2015
    Posts:
    1,505
    @UT I want these solutions now, production ready and available for my mac mini. Plz hurrie.



     
  18. DMeville

    DMeville

    Joined:
    May 5, 2013
    Posts:
    400
    I'm sure many of you have seen this talk and blog posts about DDGI from nvidia:

    https://www.gdcvault.com/play/1026182/
    https://devblogs.nvidia.com/dynamic-diffuse-global-illumination-part-i/

    Apparently they've been working on it for 5 years, and it looks pretty good imo. This talk was given at GDC at the same time as the talk in the original post of this thread, and while it's not quite as fast as 0.6ms at 4K on Xbox One X, maybe they've seen the other talk and made some hefty improvements since then. Although, since it's Nvidia tech, I wonder how nicely it plays with other graphics cards.

    I messaged the speaker, Morgan McGuire, on Twitter asking about availability and Unity integrations or betas or anything to get my hands on code and start playing, as I'm getting to the point in my project where this is something I would like solved, and I couldn't find any information other than these two links. He said they'll have updates at SIGGRAPH in a few weeks. I have my fingers crossed that means actually releasing some code before 2021.

    (https://twitter.com/CasualEffects/status/1148397983177826305)
     
    Last edited: Jul 9, 2019
    elbows, Total3D and OCASM like this.
  19. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    Yep, I have seen this. It's basically using RTX to update a light probe volume, WITH a nifty idea to control light leaking, which is the main contribution of the technique (it makes updating light probes viable).

    Now the big thing to optimize is this update pass; that's the main thing if you want to port it elsewhere. DDGI uses RTX, but the idea has been implemented with other techniques (see the HXGI thread). That's where you can pillage any other technique depending on your hardware budget and quality target. I think even old-school light propagation volumes could do (low-quality lighting), which could probably be optimized further using light field theory to skip empty space.

    edit: Optimizing the structure that controls light leaking (the visibility texture) might be another improvement. Also, for low end, trying a non-grid light probe structure (tetrahedral), which allows sparser updates.
     
    Last edited: Jul 10, 2019
    DMeville likes this.
  20. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    239
     
    DMeville likes this.
  21. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,271
    @OCASM Thanks for the video, but wow that is a lot of technical mumbo jumbo to me. I hope it means something to someone in a position to make stuff happen. :D
     
    OCASM and DMeville like this.
  22. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    OCASM likes this.
  23. DMeville

    DMeville

    Joined:
    May 5, 2013
    Posts:
    400
    'RTX' means hardware-accelerated raytracing, right? Which only works if you have an RTX-level Nvidia card (or similar)? Personally, I'd really like a GI solution that can scale down and work on older cards too, as the majority of players don't have that kind of hardware yet, and probably won't for years still. I could be misunderstanding, though.
     
    Last edited: Jul 11, 2019
  24. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    239
    HDRP is intended for high-end PCs and Unity's real-time GI is intended for 2021 and beyond. By then NVIDIA, AMD and consoles (maybe even mobile) will support hardware ray tracing.
     
  25. DMeville

    DMeville

    Joined:
    May 5, 2013
    Posts:
    400
    Sure, but what about projects targeting LWRP or wanting to release sooner than 2021? No GI for them? Clearly from the OP it can be done acceptably without hardware-accelerated raytracing (0.6ms at 4K on an Xbox One X!), and it could be done yesterday. The dream would be to have it running on modern hardware and let those with RTX cards accelerate it, making it run even faster or at higher quality; that way everyone wins.
     
    Last edited: Jul 11, 2019
    angrypenguin, christoph_r and Metron like this.
  26. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    To be more precise, we should probably say "Nvidia's hardware accelerated BVH traversal shader for raytracing".

    So while the video uses RTX, the broad lessons they draw from RTX should apply to other implementations of acceleration structures, since they all face the same problems.

    AMD hasn't been in a hurry because, apparently, you can get good performance using compute shaders and implementing the acceleration structures there; that's more work, though. On Nvidia it's just there, and therefore conducive to experiments that focus on optimizing the tracing part (scene sampling) rather than the (ray) acceleration part. The core of that video is agnostic to the tracing method; it also focuses on efficient and accurate light transport.

    Basically, RTX is (kinda) a "scene sampling" implementation that deals with visibility over each point's hemisphere. As long as you solve that problem efficiently (for your target hardware and target quality), you should probably be cool.

    For example:
    - Enlighten solved it on weak hardware, on the CPU, by prebaking (offline raytracing) the visibility of coarse surfaces and storing that visibility per surface. At runtime it computes the lighting at each surface, then resolves the final GI by querying surface lighting through that surface's visibility structure.
    - Voxel solutions deal with visibility by storing it (coarsely) in 3D textures and marching the result at runtime.
    - Light probes just store the resolved result of offline tracing in an angular structure (SH), which allows querying with one sample per pixel.
    - DDGI uses raytracing to update light probes at runtime (decoupled from resolution and framerate), but the update method can be anything (voxels, prebaking like Enlighten, compute, or RTX).

    So, to answer your question: it would scale if they kept what they have and JUST changed the update method (i.e. replaced RTX with another solution), provided you find one (or several that together scale across all hardware, or a specific solution per machine, or you handle the trade-off with quality).

    The real problem is efficiency, and right now RTX is the proven most efficient method that lets raytracing reach realtime at a good-enough cost.


    In fact, I'm exploring that by (ruthlessly) approximating the visibility using a box-projected cubemap that stores the addresses of the points to sample, with ruthlessly approximated light transport (so not accurate). Stay tuned for when I get results back.
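    The "only the update method changes" point could be sketched like so (all names hypothetical; the temporal blend mimics DDGI-style hysteresis, and `fake_trace` stands in for RTX, a voxel march, or a prebaked visibility query):

```python
# The probe storage and the temporal blend stay fixed, while `trace`
# is pluggable, which is what makes the scheme portable across hardware.

def update_probes(probes, directions, trace, hysteresis=0.9):
    """Blend freshly traced radiance into each probe's cached value."""
    for probe in probes:
        for d in directions:
            new = trace(probe["position"], d)
            old = probe["radiance"].get(d, 0.0)
            probe["radiance"][d] = hysteresis * old + (1.0 - hysteresis) * new

# Stand-in tracer: constant light from above, darkness from below.
def fake_trace(origin, direction):
    return 1.0 if direction[1] > 0 else 0.0

probes = [{"position": (0.0, 1.0, 0.0), "radiance": {}}]
update_probes(probes, [(0, 1, 0), (0, -1, 0)], fake_trace)
```

    Swapping `fake_trace` for a real tracer changes cost and quality but leaves the probe structure and the shaders that sample it untouched.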
     
    DMeville likes this.
  27. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    239
    To quote @Jesper-Mortensen :

    "To address HDRP and LWRP support we are going to integrate the features that make sense for the pipeline in question. Some features require hardware capabilities not available in LWRP so those will have to remain HDRP only. Also, the pipelines are quite different in nature so to integrate efficiently we need to do it separately for each pipeline in order to achieve optimum performance."

    https://forum.unity.com/threads/enlighten-deprecation-and-replacement-solution.697778/#post-4701119

    And to quote myself:

    "It should be exclusive to the HDRP so it's well optimized and not held back by the limitations of the LDRP. That's the point of having different pipelines in the first place. For the LDRP they could have a different, cheaper technique like LPVs."
     
    pcg, christoph_r and DMeville like this.
  28. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    4,165
    For the curious, I found the DDGI paper; it's super readable and goes more in depth than the video and blog:
    http://www.cim.mcgill.ca/~derek/files/DDGI-highres.pdf

    - Apparently they also propose an accelerated raytracing structure that basically traces through the light probe structure itself (not RTX, not a BVH; basically cubemap hopping)
    - They don't use SH like a typical LPPV; they use full cubemaps, laid out in octahedral encoding, in an atlas.
    - They use a G-buffer at the cubemap level to compute the lighting and accumulate it over time
    - You don't need to use a grid; any linked probe structure will do (tetrahedral, box-projected cubemaps, etc.)
    - Results with 1m spacing are very close to ground truth
    - It does look trivial to implement a simple version.
    - It's kinda close to my own hypothetical and untested (yet) solution; both use a cubemap atlas as the visibility structure (but differently) plus a texture G-buffer. The main difference is that mine treats the cubemap as sample addresses into a G-buffer stored in a lightmap, defines shadows by sampling the analytical skybox through the visibility structure, and replaces tracing with box projection. Both light objects via an async structure that is then simply sampled by the geometry at runtime.
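    The octahedral layout mentioned above is the standard mapping from a unit direction to a point in the [-1, 1]^2 square (this is my sketch of that encoding, not code from the paper):

```python
# Fold a unit direction onto the octahedral square, so a probe's full
# sphere of directions fits in one small square tile of an atlas.

def oct_encode(x, y, z):
    """Map a unit direction to octahedral coordinates in [-1, 1]^2."""
    n = abs(x) + abs(y) + abs(z)       # project onto the octahedron |x|+|y|+|z| = 1
    u, v = x / n, y / n
    if z < 0.0:                        # fold the lower hemisphere outward
        u, v = ((1.0 - abs(v)) * (1.0 if u >= 0.0 else -1.0),
                (1.0 - abs(u)) * (1.0 if v >= 0.0 else -1.0))
    return u, v
```

    The inverse mapping at lookup time is similarly cheap, which is part of why the atlas layout works well for per-frame probe updates.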
     
    DMeville likes this.