
Official Enlighten deprecation and replacement solution

Discussion in 'Global Illumination' started by Jesper-Mortensen, Jun 19, 2019.

Thread Status:
Not open for further replies.
  1. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    These forums are not for Unity's business discussions, IMHO. I too am curious, but this is not the place.

    Be aware that shutting down the middleware as a *product* is one thing; paying through the nose for ongoing SDK support is a different, more nuanced detail. However, these forums are not for gossiping, which helps none of us.

    What Unity needs to do to quell this sort of anxiety from customers is to be more forthcoming about the new solution instead, with videos, talks, etc.

    But we will not be discussing Unity's business moves this directly, or gossiping.

    Use this thread to talk about the new solution.
     
    keeponshading and Alverik like this.
  2. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    231
    Our licence is with Geomerics/ARM and it runs until 2023, at which point we have to remove Enlighten to comply with the contract. The current feature set will be available until then in the built-in pipeline (i.e. 2020.4 LTS).

    We are focusing our efforts elsewhere, as we don't see Enlighten matching our vision for the future.
     
    Alverik likes this.
  3. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    There is a big difference between

    1. supporting a widely used, best-in-class technology like Enlighten in your engine, with the latest capabilities and bug fixes collected from your users,
    2. developing something better to become independent, and
    3. only then replacing it, once it has lost the "in preview" tag,

    and the official communication...

    Live with the current, long-unimproved integration until 2021, and then maybe we have something better in preview by 202x.


    And no, I don't think there is any RTGI breakthrough that can replace a prebaked solution in terms of quality and performance.
    Why should you spend part of your frame budget on something that could be precalculated, at higher quality, through hassle-free, first-class pre-processing? Who does this in real life?

    It's a bit like charging your electric car from another car while driving around, limited to 30 mph/fps, instead of charging it at home overnight and driving to work at 100 mph/fps or faster. :)


    Sorry, I sent this before I read:
    "Use this thread to talk about the new solution."

    We started by trying to discuss a new solution, but there was absolutely no reaction to it.

    Back to business.
     
    Last edited: Jul 4, 2019
    hippocoder likes this.
  4. Jesper-Mortensen

    Jesper-Mortensen

    Unity Technologies

    Joined:
    Mar 15, 2013
    Posts:
    231
    We are not yet ready to talk about the new solutions. This thread is to inform you well ahead of time about the deprecation of Enlighten.
     
    Alverik, Flavelius, Rowlan and 3 others like this.
  5. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Thanks for the heads up. And please do keep us informed, even if it seems a bit bumpy at times; it's only bumpy because there's been too long a gap between information updates.

    Appreciate all the hard work; it must be frustrating for developers not to be able to share sooner.
     
    Alverik likes this.
  6. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    801
    @hippocoder this thread is about the deprecation and the new solution (as per the title). Ultimately, Unity's business decisions end up being all of our business decisions. Like others here, I totally support and understand moving away from Enlighten and towards a better-integrated solution.

    @Jesper-Mortensen thanks for the clarification about the reason for removing Enlighten. I suggest changing the wording of the opening sentence of this whole discussion to reflect that, to not cause more confusion.
     
    joshcamas and hippocoder like this.
  7. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Sorry, but biz chat is not allowed in this thread. The decision was already made; anything else isn't helping you, Unity, me or anyone, especially without facts that can never be found out.

    If you want to talk about Unity's business in general, use General Discussion for that. That's the final moderator decision and I'm sure it's the right one (it's not silencing anyone - just take it to General if it's important to you).
     
    Alverik and fherbst like this.
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    That's no longer accurate:
    https://morgan3d.github.io/articles/2019-04-01-ddgi/overview.html
    It's still kind of "prebaked" (it uses a light probe proxy volume, which will probably be baked with the offline GI baking solution) but with new insights that allow real-time updates of the light probe proxy volume (mostly a visibility structure). The decoupling of the data from the update structure is another one of the leaps made (it allows LODing the technique across many machines), along with the move to "infinite bounce" using multiple passes.

    In fact, reading between the lines of the blog post Unity made, there is a huge emphasis on a "light probe structure without light leaking"; that's your cue about where they are going. They will probably use insights from the plenoptic function to further compress the data (as light field research progresses in parallel). Another thing they said, about automatic probe placement, is very similar to a recent paper about Enlighten (on Hellblade's lighting solution).
    https://www.siliconstudio.co.jp/mid...enlighten-global-illumination-that-scales.pdf

    Also, Remedy and Crytek have successfully implemented compute-based BVHs (see the Neon Noir demo running on an AMD GPU).
    https://www.cryengine.com/news/how-we-made-neon-noir-ray-traced-reflections-in-cryengine-and-more
    https://gpuopen.com/wp-content/uploads/2018/05/gdc_2018_tutorial_graphics_tech_northlight.pdf

    And Unity is probably looking to leverage the talent they have, like Seb Lagarde and Natalya Tatarchuk, to fine-tune the light transport using insights from these breakthroughs. This will ensure top-notch rendering without jumping through hoops with someone else's tech. For example, Enlighten uses a surface-approximation-based method that loses directionality and flattens the image; I'm pretty sure Unity wants to work around that limitation.

    RTX hardware features are just a cherry on top that will accelerate updates.
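
    To make the "no light leaking" point concrete: the DDGI paper linked above stores mean and squared-mean hit distances per probe and weights each probe's contribution by a Chebyshev visibility test, which is what suppresses leaking. A minimal C# sketch of that weight, with illustrative names (this is the paper's idea, not any Unity API):

    Code (CSharp):
    using UnityEngine;

    public static class DdgiSketch
    {
        // Down-weights a probe that likely cannot "see" the shading point.
        // meanDist / meanSqDist come from the probe's depth statistics in the
        // direction of the point; distToPoint is the probe-to-point distance.
        public static float ChebyshevVisibilityWeight(float meanDist, float meanSqDist, float distToPoint)
        {
            if (distToPoint <= meanDist)
                return 1f; // probe sees at least this far: fully visible
            float variance = Mathf.Abs(meanSqDist - meanDist * meanDist);
            float delta = distToPoint - meanDist;
            return variance / (variance + delta * delta); // Chebyshev upper bound
        }
    }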
     
    Alverik and DMeville like this.
  9. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
    Well, with the biz chat side of things off the table, I still remain very confused about the HDRP timing: why is it losing support a year before the old built-in renderer?

    The answer, even if it cannot be stated now, really better be that we are going to start seeing preview versions of the new solution in some 2020 versions of HDRP. Otherwise there will be a painful year, just at the very time I thought HDRP was going to settle down a bit (by coming out of preview).

    I'm not sure that will turn out to be the answer. It could just as easily be that HDRP is scheduled to get new features during 2020 that would be incompatible with Enlighten, or would require development time to make compatible that Unity doesn't want to invest in a dead end.

    Oh, I dunno. I am very glad to see the back of Enlighten, but the timing seems really awkward to me. I think I would rather have known the information either much sooner than we were given it, or not until Unity were ready to give a little more info about their future HDRP-compatible solution. I'm now going to end up feeling nervous and insecure for who knows how long. I can live with this scenario for some months, but eventually I will probably end up begging for more info about the future.
     
  10. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
    Although to be fair, it's always possible the timing will work out OK, since the timing woes I speak of are not solely in the hands of the GI Lighting team.

    For example, if the Unity Editor and HDRP hit all the milestones I need them to by the time 2019.3 & HDRP 7.x have matured and turned into 2019.4 LTS, then I will actually be able to stick with the LTS version and avoid the awkward year (or however long the gap will actually be). But it will only take one crucial thing to slip to 2020.1 and not be backported for my sentiment about terrible timing to return. Given how many systems are in heavy development and flux at the moment (e.g. DOTS), I think more than a little luck will be required to dodge this!

    Anyway, sorry for moaning; I appreciate the work being done on future solutions. And I'm a fan of the bleeding edge, and of previews that are not ripe, so I will always do anything I can to encourage early public releases.

    I suppose it boils down to the fact that the main message from Unity ends with the sentiment "we are here to help you make the transition", but it's not actually possible to make the transition yet, is it? The best that can be achieved at the moment is help coping with the interim period in some way. Until I know what actual realtime GI solution I can transition to, the transition will have to wait, and I will just be begging for info to help me cope during the long awkward pause.
     
    Alverik and Rich_A like this.
  11. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
    By the way, my own situation is very lucky: there are a number of ways I can cope, and I even have the rare luxury of changing my product timescales to match the dev tools/platform realities. So I will survive 2020 whatever happens, without requiring special assistance coping with a transition that cannot really begin yet. But I doubt my situation is that common, and my brain still craves information at the earliest opportunity with which to make sensible choices going forward.
     
  12. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
    Oh thinking about it further, I guess the 'help you make the transition' stuff from Unity at this stage is more about the baked side of things, as opposed to the (precomputed) realtime GI side of things.
     
  13. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935


    You can additionally enable ray-traced GI in Unity's experimental DXR build, in the Unity office and BMW scenes (via the UI you only get ray-traced AO, ray-traced reflections and ray-traced shadows).
    In both scenes the frame rate drops to 3-8 fps on the lowest quality settings for ray-traced GI (Titan RTX and Quadro RTX 6000); without it, around 40 fps.
    The quality of the realtime ray-traced GI is around 10 percent of an offline precalculated one.

    Disable all four ray-traced playthings, precalculate reflection probes (or use realtime ones), use light probes, bake indirect and use Enlighten, and you can generate the same quality with better AA at 140 fps on a GTX 1080.

    Only ray-traced reflections offer a little benefit. They are great. But look at Blender EEVEE's screen-space reflections:
    there is no noticeable difference in 90 percent of use cases.

    With a good pipeline you generate all this overnight while you have a nap.
    So it is 3 fps versus 140 fps, while prebaked has the higher quality.

    With a baked pipeline you have a lot of customers who can play at 40 to 140 fps.

    If your game is based only on the wonder tech you described above, you have 0.005 percent of those customers, who can play at 8 fps, if you ever reach comparable quality.

    The Neon Noir demo has the same limitations.

    Don't get me wrong:
    great tech, perfect for selling hardware.

    You can write a simple full-GI ray tracer in 3 days.
    After that, you see what actually has to be calculated.
    A fast full-GI ray tracer needs around 5 to 15 years.

    We have been using full-GI realtime ray tracing in 4K since 2013 (up to 600 CPU nodes of HPC, with tiled image clustering over optical fiber).

    The games in your link use at most one or two of those four ray-traced DXR implementations, as a tech demo.
    When you play the games, you play on precalculated lighting.
     
    Last edited: Jul 4, 2019
    Bordeaux_Fox likes this.
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    @keeponshading

    They haven't implemented a non-DXR/RTX solution yet; they say they are looking into it. So the results you have probably don't reflect the final solution.

    If you look at the first link I provided, you would see that the cost of tracing a full scene using the LPPV update is well within a 60 fps budget, can be async, can update only what changed, and is GPU accelerated. And it would work on low-end hardware, either by disabling the realtime update, making it slower, or reducing the resolution of the supporting data structure.

    The whole point of this thread is to announce they are working on it. It's not what's in the preview.

    EDIT:
    What my post demonstrated was a bunch of known solutions Unity can use to provide realtime rates of GI.
    DDGI (the first link) demonstrates that a solution for RTGI exists; the other links demonstrate how they can pillage other solutions to combine with and ameliorate DDGI.
     
    Last edited: Jul 4, 2019
  15. Rich_A

    Rich_A

    Joined:
    Nov 22, 2016
    Posts:
    338
    I would be entirely fine with a HDRP lighting solution that requires a DXR-compatible graphics card on the developer side.
     
    OCASM likes this.
  16. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    Metro Exodus does diffuse and specular GI at 1080p @ 60 fps.

    Unity's current DXR implementation is purely experimental. It's not useful for anything but playing around.
     
  17. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    So fine. Hope they make an Asset soon.)
     
  18. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    The first time I used reflection probes (~2017), I was so disappointed. It was a mobile game based on a single chrome ball that should reflect the environment, so I placed a single reflection probe inside the ball. Since it renders a cubemap, it multiplied my single efficient render pass by seven (the main camera plus 6 faces for the probe), killing performance on average devices when probing in real time, even with really low resolution probes.

    Since Unity can now do single-pass 360 stereo renders (which would add up to 12 cube faces), can I please expect single-pass probe renders too?
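
    For what it's worth, a realtime probe can already amortize its six face renders over multiple frames through scripting. A minimal sketch using Unity's documented ReflectionProbe API (exact frame counts vary by version):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class AmortizedProbe : MonoBehaviour
    {
        void Start()
        {
            var probe = GetComponent<ReflectionProbe>();
            probe.mode = ReflectionProbeMode.Realtime;
            probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
            // Render one cubemap face at a time instead of all six at once.
            probe.timeSlicingMode = ReflectionProbeTimeSlicingMode.IndividualFaces;
            probe.RenderProbe(); // the update is spread across several frames
        }
    }

    It doesn't make the probe free, but it removes the 7x single-frame spike described above.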
     
  19. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    People avoid refreshing cubemaps even on faster hardware, and mobile uses a type of rendering (tile-based) where context switching is costly (frequent moves from local to global memory). I wouldn't expect much more than we already have for real time.
     
    Alverik likes this.
  20. 5argon

    5argon

    Joined:
    Jun 10, 2013
    Posts:
    1,554
    This thread is such a good read about pain points, since I've mostly worked on 2D games.
     
  21. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Imagine if the new solution was fully dynamic and just required baking object UVs in some manner. That'd be cool, especially with slow but smooth updates.
     
  22. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    Have you tried this, around 2017?
    The examples show the fully dynamic behaviour. The quality is that of any Cycles render.

    The pain here is that I cannot see any downsides or costs. :)

    The hardest part to accept is that it has this speed and this full-GI feature set, without exception, completely PBR based.

    Core Features
    • Unidirectional path tracing with multiple importance sampling
    • Multi-core CPU rendering with SIMD acceleration
    • GPU rendering with NVIDIA CUDA & AMD OpenCL
    • Multi-GPU support
    • Unified rendering kernel for CPU and GPU
    • Apache License v2
    Interactivity
    • Designed for interactive updates
    • Fast object, shader, light changes
    • Tiled and progressive rendering
    Layers & Passes
    • Render layers for decomposing the scene
    • Render passes for geometry and lighting
    • Shadow catcher
    • Holdout mattes
    • Denoising
    Geometry
    • Meshes
    • Hair curves
    • Volumes
    • Instancing
    • Multi-core BVH build
    • Fast BVH refit updates
    Subdivision and Displacement
    • Adaptive subdivision
    • Catmull-Clark and linear schemes
    • Displacement
    • Bump mapping
    Camera
    • Perspective and orthographic cameras
    • Panoramic and fisheye cameras
    • Stereoscopic rendering
    • Depth of field
    Motion Blur
    • Cameras
    • Object transforms
    • Meshes and hair curves
    Shading
    • Physically based
    • Node based shaders and lights
    • Principled BSDF
    • Production tricks
    • Open Shading Language (CPU only)
    Volumes
    • Absorption, scattering and emission
    • Smoke and fire
    • Subsurface scattering
    • Homogeneous and heterogeneous
    • Principled Volume
    Lighting
    • Global illumination
    • Point, sun, spot and area lights
    • Mesh lights
    • Environment light
    • Sky model
    • Light portals
    Textures
    • Image textures
    • Environment textures
    • Procedural textures
    • Bump and normal maps
    Cross Platform
    • Windows, Linux and Mac OS X
    • 32 and 64 bit
    https://www.cycles-renderer.org/about/

    You only send geometry, materials, light source properties and textures from Unity, and within milliseconds you see interactively in the Scene view what you will get.

    Meanwhile, all the available denoisers, like D-NOISE, the Intel denoiser and OptiX, are integrated too. So even the in-editor preview is noise-free after a few seconds, and it makes bakes fast as hell, as you already know from Bakery.

    Also: full source and daily improvements from the open source community. The latest is E-Cycles, which makes Cycles faster than Octane for all my scenes.
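
    To illustrate the kind of live link being described (purely hypothetical: none of these types exist in Unity or Cycles, and the real integration work would be far larger):

    Code (CSharp):
    using UnityEngine;

    // Hypothetical bridge: push scene data to an external path tracer and let
    // its progressive, denoised preview stream back to the editor.
    public interface IRendererLink
    {
        void SendGeometry(Vector3[] verts, int[] tris, Vector2[] uvs);
        void SendMaterial(string name, Color baseColor, Texture albedo);
        void SendLight(string type, Vector3 pos, Color color, float intensity);
        void RequestProgressiveRender();
    }

    public static class PathTracerBridge
    {
        public static void PushScene(MeshFilter[] meshes, Light[] lights, IRendererLink link)
        {
            foreach (var mf in meshes)
            {
                var mesh = mf.sharedMesh;
                var mat = mf.GetComponent<Renderer>().sharedMaterial;
                link.SendGeometry(mesh.vertices, mesh.triangles, mesh.uv);
                link.SendMaterial(mat.name, mat.color, mat.mainTexture);
            }
            foreach (var l in lights)
                link.SendLight(l.type.ToString(), l.transform.position, l.color, l.intensity);
            link.RequestProgressiveRender();
        }
    }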
     
    Last edited: Jul 7, 2019
  23. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    Benchmarking (values are assumptions based on personal experience):

    1) Quality. Mainly the full-featured GI feature set (see above) versus PLM's "feature parity from GPU to CPU soon". Sorry, but PLM delivers a subset of around 20%: no advanced directional modes... You get all of these easily, plus better translucency, better SSS, thickness, caustics... all there, ready to add more realism to LWRP and HDRP shaders when you want to bake. This makes the biggest difference: best in class, versus "we plan to have something very limited soon, and in 2021 something we cannot talk about yet".
    Even when all the DXR and voxel GI solutions become available, baked variants will always allow more quality and many more fps, especially on low-end hardware; and on the highest-end hardware you can spend your frame budget on innovative things instead of on something that could be precalculated and previewed in real time.

    2) Cost of development. Developing and maintaining a platform-independent, multi-GPU solver with this feature set, as it is already available: open source, versus a mostly home-brewed solution with help from a vendor like AMD. AMD and NVIDIA have been powering Cycles development for the last 4 to 5 years.
    Around 10 man-years initially, and another 3 man-years to keep the full feature set running.
    Available today.

    3) Cost of use and content creation. My assumption is that Unreal/V-Ray and Unity/Octane will go in a similar direction to deliver best-in-class baking and preview rendering.
    The solution above scales freely to as many GPU/CPU nodes as you can access; no licensing, and no limiting to 1-4 GPUs like V-Ray and Octane, while reaching the same speed and quality.
    Baking large game worlds (Kingdom of... up to Cyberpunk 2077 style), or improving quality in realtime movie generation, industrial BIM, or car visualization, runs up to $300,000 per project.
    You pay only the hardware rent, and speed scales linearly to reach the highest quality. Costs come down by around 50%.

    4) Future-proofing. By solving the full feature set in precalculation, it's easy to concentrate development on feeding it into realtime GI playables.

    5) Competition. Now that HDRP has closed the gap with the competition, think even further: the one on top could be whoever delivers overall visual quality, platform-independently and for less money, scalable without extra cost, to all customers as fast as possible.

    With the freed-up resources you could then make best-in-class moves like:
    - a post-processing stack better than YEBIS; PPv3 is close, but there is still potential in DoF, bloom, flares and glares;
    - best-in-class networking;
    - making texture and lightmap streaming (Granite; I think you own it now) freely available, while additionally supporting AmplifyTexture too, because it is better to have alternatives.

    That would be cooler than reading blog posts with strange Geomerics/SiliconStudio/Unity stories between the lines.

    I think Unity made its way to the top by delivering the best quality and ease of use for everyone, in terms of platform and budget. It has worked out so far; it's probably good to stay on this course.
     
    Last edited: Jul 7, 2019
  24. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    The baked replacement for Enlighten is the Progressive Lightmapper. The realtime replacement for Enlighten doesn't exist yet, which is more what this thread is about.

    The people needing dynamic solutions are those left in the lurch, particularly if they will be updating their game with HDRP.

    Cycles is clearly just not happening.
     
  25. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    Currently the Progressive Lightmapper is in bad condition, and the last 3 years of development and results speak for themselves. Even with all the planned features, it is far behind the competition and the needs of high-end use cases.
    Bakery solves a lot but has no realtime preview.

    This could be solved easily, and precalculation for realtime GI as a replacement for Enlighten (and more, through extensions) is easily possible too, using the method described above.

    I don't see any chance in going the direction shown in the last two blog posts.
    But let's wait for the white rabbit out of the black Copenhagen hat.
    It's no problem for me, because I mostly use lightmaps and custom bakes from Cycles and V-Ray, whose quality and speed are out of reach in Unity, plus realtime GI in Unity with some Bakery. So I need three solutions to reach the required quality and interactivity, and one of them is deprecated now.
    Unreal's V-Ray activities deserve a separate look.

    Let's see what will happen. I am only trying to show other fast ways out.

    Getting one real argument against Cycles would be helpful, and I really tried hard to find one, except that it would surely rival the Octane integration.

    Please make it clearer.
     
    Last edited: Jul 7, 2019
  26. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    They already said in the opening post: no stuff like UVs.
    From the blog we can infer it's probably probe based.

    The big unknown is the method used to update the probes, i.e. the ray tracing method, which can be a mix of many techniques. Currently there is voxel update, mesh-based (compute BVH) update, hardware acceleration (RTX BVH), and maybe a cubemap G-buffer (sample direct lighting on a cubemap, accumulate it back into the probe, then reproject the probe onto the cubemap)?
     
    Alverik, DMeville and hippocoder like this.
  27. jRocket

    jRocket

    Joined:
    Jul 12, 2012
    Posts:
    687
    Would you care to describe what exactly is wrong with it? I've been using the progressive lightmapper and it seems okay to me, besides the indirect probe lighting.
     
    Alverik likes this.
  28. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    Following are the biggest issues...

    No realtime preview of the result.
    - the most limiting factor for quality and iteration

    No predictable or physically plausible, PBR-based output:
    - bad indirect light transport from IBL skies into car interiors or archviz interiors
    - no MIS, no portals
    - wrong light falloffs
    - wrong light probe intensities, and leaks in light probes
    - not possible to bake translucency or SSS for statics, nor thickness or transmission maps for realtime
    - not possible to bake to vertex color for hard-to-unwrap objects
    - not possible to bake bent normals
    - normals and height not used
    - bad directional bake quality, only the dominant direction

    Results are far from plausible, state-of-the-art baking and rendering as in Cycles, Mitsuba, V-Ray, iray:
    - light leaking
    - seams
    - unusable packing algorithm

    - no possibility to rebake only changed scene parts or changed scene lights
    - no possibility to easily rebake parts of a scene
    - needs rebaking after upgrades

    - CPU performance around 40 to 60 times slower than competitors like Cycles, Bakery, V-Ray
    - bad GPU stability; falls back to CPU most of the time
    - baking time cannot be predicted or budgeted for big projects

    - no multi-GPU scalability
    - no network clustering
    Last edited: Jul 8, 2019
    Bordeaux_Fox likes this.
  29. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I think it's not a thread about baking performance anyway. It's an announcement of a future RTGI solution, beyond Enlighten.
     
  30. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Still valid feedback for Unity staff, I guess.
     
    neoshaman likes this.
  31. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    I think it is a common misunderstanding that these are separate tasks.
    Baked GI solutions must match (precalculated) realtime GI solutions in lots of use cases.
    If the baked GI is not mastered and correct, there is little chance.
    Nor is there any realtime GI solution that can handle the broad needs of Unity's use cases alone.
    The best I know is CryEngine's voxel GI, but you will have lots of fun trying to do interior scenes, or mobile, AR and VR, with it.

    So, first:
    have a perfect full-GI bake pipeline. Allow streaming and interpolation of textures, light probes, lightmaps... for different time-of-day calculations.
    Then add a switch to DXR replacements for ray-traced AO, GI, shadows and reflections; enable them additively or not, depending on your needs and frame budget.
    Then add some realtime GI method, like CryEngine's voxel GI, for bigger outdoor areas, working additively within those bounds for nature scenes with heavy instancing.

    Baemm. All Unity targets become possible by using the fitting combination in LWRP and HDRP, as the sketch after this post illustrates.

    So there is a development cascade, and everything needs to work together, grounded in a perfect baked/precalculated GI calculation, if you are targeting "best in class".

    HDRP makes this possible now, and it's made with "best in class" thinking. Everything is physically based and corrected now.
    So: your move, GI team. :)
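
    A hypothetical sketch of that cascade as additive layers, chosen per target and frame budget (illustrative only, not Unity API):

    Code (CSharp):
    using System;

    [Flags]
    public enum GILayers
    {
        None          = 0,
        BakedFullGI   = 1 << 0, // lightmaps, light probes, reflection probes
        RaytracedAO   = 1 << 1, // DXR replacements, added on capable hardware
        RaytracedGI   = 1 << 2,
        RaytracedRefl = 1 << 3,
        VoxelGI       = 1 << 4, // large outdoor areas with heavy instancing
    }

    public static class GICascade
    {
        public static GILayers ForTarget(bool hasDXR, bool isMobileOrXR)
        {
            var layers = GILayers.BakedFullGI;              // always the foundation
            if (hasDXR)
                layers |= GILayers.RaytracedRefl | GILayers.RaytracedAO;
            if (!isMobileOrXR)
                layers |= GILayers.VoxelGI;                 // skip on mobile/AR/VR
            return layers;
        }
    }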
     
    Last edited: Jul 8, 2019
  32. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I don't think you're reading the thread or the blog. Unity has its solutions lined up: PLM, plus a replacement for realtime GI in 2021. There's no confusion anywhere, only disappointment at the timescales, because they didn't have the foresight to act sooner.
     
  33. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    I could not manage to read further than:
    • Easy authoring: We need to remove the dependency on authoring suitable UVs and other surface based authoring.

    Joking aside: let's simply use the realtime ray-traced UVs and packing from the Cyberpunk 2077 tech demo.

    That is far too late, and reading my thoughts above you can probably see why I think so. It is also far from realistic for 2021 if they keep going in this direction.
    If you revisit the blog posts about networking and PLM from the last 3 years, you also see that it is necessary to keep an eye on community-contributed thoughts and solutions. Today Unity is a platform with community- and feedback-driven development. When you get feedback like in this case, there is a reason for it, and it is probably better to fly a little lower to see what's down there.

    Cheers.
     
    Last edited: Jul 8, 2019
    hippocoder likes this.
  34. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    Probes are too coarse for next-gen global illumination. They're fine for the LWRP, but not for the HDRP. Anything but per-pixel ray tracing will not cut it, especially for character close-ups.
     
  35. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    325
    Please correct me if this is wrong, but I don't believe we're anywhere near having the ray tracing power to do diffuse GI this way. You would need a dense hemisphere of rays for each pixel. I believe the general idea is to use ray tracing to update probes for diffuse, and to supplement that with per-pixel rays for specular if you want it.

    There might be ways to use a few additional short rays to interpolate the probe field more intelligently.

    Edit: There is, of course; that's what DDGI is.
     
    Last edited: Jul 9, 2019
    hippocoder and neoshaman like this.
  36. konsic

    konsic

    Joined:
    Oct 19, 2015
    Posts:
    995
    Could you implement a distance field indirect shadow pipeline?

    I think it's the most robust and fastest GI approach today.

    https://youtu.be/AShFlzgeFaA?t=12
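
    For readers unfamiliar with the technique in the video: distance-field shadows sphere-trace a signed distance field toward the light and darken by the closest miss along the ray. A minimal C# sketch; the sdf parameter is a hypothetical stand-in, since Unity has no built-in scene SDF:

    Code (CSharp):
    using System;
    using UnityEngine;

    public static class DistanceFieldShadow
    {
        public static float SoftShadow(Func<Vector3, float> sdf, Vector3 p,
                                       Vector3 toLight, float maxDist, float softness)
        {
            float shadow = 1f;
            float t = 0.05f;                      // start offset avoids self-hit
            while (t < maxDist)
            {
                float d = sdf(p + toLight * t);   // distance to nearest occluder
                if (d < 0.001f)
                    return 0f;                    // ray blocked: full shadow
                shadow = Mathf.Min(shadow, softness * d / t); // penumbra estimate
                t += d;                           // sphere-tracing step
            }
            return shadow;
        }
    }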
     
    Last edited: Jul 9, 2019
  37. keeponshading

    keeponshading

    Joined:
    Sep 6, 2018
    Posts:
    935
    What do they contribute to GI?
    All static objects are baked here, plus dynamic ones via probes, like here.

    NGSS was working on these.
    Now that we can calculate SDFs for meshes, it should be possible.

    I think the main problem is receiving shadows from things outside the screen. So for the centered room demo they are fine.

    But the only thing they do to the precalculated GI is darken it.
     
    Last edited: Jul 9, 2019
  38. alexandre-fiset

    alexandre-fiset

    Joined:
    Mar 19, 2012
    Posts:
    702
    Actually that does not sound right; it should be: if you are shipping before 2022, and not on newer platforms.

    We understand it is not really your fault, but will we have a precomputed or realtime GI preview this year? We're on a two-year project now, and switching our lighting workflow 4 months before launch clearly raises a red flag on our end.

    We're kind of wondering what to do at this point...
     
  39. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    Metro Exodus already does ray-traced GI (diffuse, specular and indirect shadows; even normal maps are taken into account) at 1 spp at its highest settings. Granted, you can lower it to 0.5 spp and it still looks pretty great. Still a massive difference between that and probes. You would need a super dense grid of probes to even approximate ray-traced GI, and it still wouldn't be close.

    The tech also supports arbitrarily shaped emissive surfaces (with shadows). Probes don't.

    https://gdcvault.com/play/1026159

    Except it's only useful for rigid and procedural objects. Characters are out of the question. It also requires a lot of memory (SDFs are stored as 3D textures).
     
    konsic and SamOld like this.
  40. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    To be frank, I expect optimization on that front too. An SH grid is a 5D plenoptic function, and from light field research we know it can get down to just 4D (i.e. a plane); that is, we can skip empty space in convex areas. This would allow probe density to get close to regular lightmap pixel density. Also, probes make a good cache for ray results anyway. I haven't read the material on the new radial basis functions for specular, so who knows what else can be done.
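
    The reduction being referred to, in rough notation (in empty, convex regions radiance is constant along a ray, so the 5D field collapses to a 4D light field, e.g. a two-plane parameterization):

    Code (LaTeX):
    % 5D plenoptic function -> 4D light field when rays are unoccluded:
    \[ L(x, y, z, \theta, \phi) \;\longrightarrow\; L(u, v, s, t) \]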
     
  41. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    325
    Consider me corrected. I didn't realise that hardware-accelerated ray tracing was that fast. That's a lot of rays per frame. As somebody targeting the high-end-laptop-and-above market, this is frustratingly just out of reach today.
     
    OCASM likes this.
  42. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    Any examples of such tech handling stuff like indirect shadows or arbitrarily shaped emissive surfaces? Not to mention dynamically changing environments.

    Remember that the HDRP GI solution is intended for 2021 and beyond. By then RT capable cards will be much more common and affordable.
     
  43. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    DDGI, linked above in one of my posts; that was the big NVIDIA reveal at the last GDC.

    But I was commenting on the density aspect: it can be optimized further using light field theory.

     
    Last edited: Jul 10, 2019
  44. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    DDGI produces very coarse results. It has all the problems I mentioned before. Light field rendering produces good results, but it's for the most part a pre-rendering technique, not very suitable for dynamic environments.
     
  45. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
  46. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I'm referring to light fields as the mathematical idea (see the video in the spoiler), not the typical (image-array-based) implementation you would traditionally see. A grid of light probes is effectively a representation of a light field (the probes being x, y, phi, theta, i.e. encoding the 4D plenoptic function) with low angular resolution. As such we can infer that we don't need the full grid (the 5D function), as the math is the same (as demonstrated in the video). Which means you can redistribute the data density closer to surfaces by skipping empty space, which means less coarse data sampling.

    I think something got lost in translation.
    - My first answer was only about the density of data sampling: I said we could improve the technique by redistributing the data into a less coarse representation, taking advantage of what we know about the plenoptic function.
    - Then you kind of moved the goalposts to something else, which DDGI covers in the context of that post in isolation. It does handle everything you mention.
    - But then we got back to the coarseness of the data, which I think is a red herring anyway. These techniques aren't mutually exclusive: caching coarse data means you can better spend the cost of the technique you want (direct ray tracing) on the areas where it matters, like high-frequency areas, which means better quality overall. DDGI still uses plain ray tracing to update the (diffuse) probes and for specular. The key here is to understand that a caching structure isn't mutually exclusive with ray tracing.

    It's not about blindly pitting one technique against another; knowing how they work, we can pick them apart and compose the best of each.
     
    SamOld likes this.
  47. SamOld

    SamOld

    Joined:
    Aug 17, 2018
    Posts:
    325
    There's also a trade-off between spatial and angular resolution. When doing per-pixel ray tracing you can't afford a high hemispherical density. @OCASM's example of Metro Exodus impressed me because they're getting far better results (and far more rays) than I would have intuitively guessed, but it's not perfect. The low ray count means it has a relatively high chance of missing small or far-away details in the environment. Probes give a lower spatial resolution but a much lower chance of missing angular detail, because there's budget for more rays. What matters most here probably depends on the type of environment you're in.

    It's easier to compensate for low spatial resolution by adding ambient occlusion than it is to compensate for low angular resolution; there's no way to get that missing information back.

    It might be that the ultimate solution is an amalgam of the two. Short per-pixel traces could capture very local detail, while the further-away, lower-frequency information could come from a probe field. I built a quite successful prototype of something a bit like that a few months ago, but it was using voxels, not ray tracing. It seems to be a good principle.

    I've not played with ray tracing hardware, so I don't have a good grasp on what's affordable yet.
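
    A rough sketch of that amalgam, with hypothetical stand-ins for the two gathers (nothing here is a real Unity or DXR API):

    Code (CSharp):
    using UnityEngine;

    public static class HybridGI
    {
        // Short rays return local radiance plus how much of the hemisphere
        // escaped unoccluded (a bent-cone style occlusion term).
        public delegate (Color radiance, float openness) ShortTrace(Vector3 p, Vector3 n, float maxDist);
        public delegate Color ProbeLookup(Vector3 p, Vector3 n);

        public static Color IndirectLight(Vector3 p, Vector3 n, ShortTrace near, ProbeLookup far)
        {
            var (local, openness) = near(p, n, 0.5f); // high-frequency, close-range GI
            Color distant = far(p, n);                // low-frequency probe field
            return local + distant * openness;        // openness gates the coarse term
        }
    }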
     
    Last edited: Jul 11, 2019
    OCASM likes this.
  48. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    I think the main difference is that ray tracing "just works", while probes still require careful setup and can't handle all the cases RT can (at the moment, at least). That results in a higher computational cost, but I think the benefits are worth it.
     
  49. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    The reason I invoke the plenoptic function is that low angular resolution is compensated by a higher density of "angular probes", as each probe captures some rays at its position; missing details are captured by neighbors. So if you use the plenoptic function reduction (from a 5D grid to 4D planar) to skip space inside convex areas, you also get denser probes, at relevant places, capturing angular data with the increased density, at the same old cost.

    Anyway, I feel the techniques are orthogonal: they complete each other and don't really involve a trade-off of choosing one or the other.

    What do you mean? Old-school volumes did, but DDGI's contribution was to include a visibility structure that removes that constraint entirely for legacy probes.

    From the DDGI blog:
    https://morgan3d.github.io/articles/2019-04-01-ddgi/overview.html

    The blog goes into great detail explaining all of this.

    And ray tracing still needs the visibility structure to be "baked", or reconstructed at run time using BLAS and TLAS BVHs. I guess an "optimized light field" placement would work just the same: you would build the structure similarly (but instead of a BVH, you reconstruct a convex division of space and place the probes at the boundaries).

    Also, DDGI is STILL using ray tracing; that's why it "just works" too! They are complementary.

    Basically, what you are missing is that DDGI traces from the screen (specular) AND from the probes (around 300 rays per diffuse probe). Vanilla RT only traces from the screen.

    Also, most implementations of RT still use old techniques to jump-start the ray tracing: Frostbite uses the SSR mask coverage to increase the sample count where SSR failed to resolve (i.e. spending the budget only where it matters, on off-screen objects), and Metro uses AO to mask where the cost should be spent, on top of using BRDF analysis to know where over the hemisphere to sample, to get faster results with fewer rays.

    They both "just work", and they are not in opposition to each other; they are just various accelerations of the same technique. Adding DDGI doesn't remove traditional RT, it merely augments it. In fact you can combine them in so many ways: for example, a coarse grid with a slow update for far-away GI, plus short rays for close-up GI with dynamic objects, seems like a reasonable idea. Most dynamic effects happen at close range, where high-density per-pixel sampling is needed, while small lights are unlikely to affect distant objects much, so caching and sampling them is best.

    Now what I'm curious about is what happens if we use lightmap-based DDGI instead of volume-based sampling. That's almost per-pixel ray tracing, but cached in SH texels over time.
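
    A minimal sketch of that decoupled probe update loop, every type a hypothetical stand-in (this is not Unity or DDGI source):

    Code (CSharp):
    using UnityEngine;

    public interface IProbe
    {
        Ray SampleDirection(int index);                   // e.g. spherical Fibonacci
        void BlendRadiance(Vector3 dir, Color radiance);  // hysteresis blend over frames
        void BlendVisibility(Vector3 dir, float hitDist); // mean & squared-mean depth
    }

    public interface IProbeScene
    {
        IProbe NextDirtyProbe();                          // change-driven or round-robin
        bool Trace(Ray ray, out Color radiance, out float hitDist); // RTX or compute BVH
    }

    public static class ProbeUpdater
    {
        // Runs on its own budget, independent of the per-pixel (specular) pass.
        public static void Update(IProbeScene scene, int probesPerFrame, int raysPerProbe)
        {
            for (int i = 0; i < probesPerFrame; i++)
            {
                IProbe probe = scene.NextDirtyProbe();
                for (int r = 0; r < raysPerProbe; r++)
                {
                    Ray ray = probe.SampleDirection(r);
                    if (scene.Trace(ray, out Color radiance, out float dist))
                    {
                        probe.BlendRadiance(ray.direction, radiance);
                        probe.BlendVisibility(ray.direction, dist);
                    }
                }
            }
        }
    }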
     
    Last edited: Jul 11, 2019
    hippocoder, keeponshading and SamOld like this.
  50. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    Feature check:

    DDGI:
    - Coarse diffuse GI (no leaking, but no shadows either).

    RTGI:
    - Per-pixel area lights/shadows from direct sources and emissive surfaces
    - Per-pixel shadowed diffuse and specular GI

    In the best of cases DDGI still requires RTGI for specular and AO for diffuse. Might as well unify your solution and reap the quality and maintenance benefits.

    DDGI gives better probes than what we currently have, but they're still probes. They're fine if high quality is not your priority. So sure, they could be used for far-away regions, but I wouldn't use them for areas close to the camera.
     
    keeponshading likes this.