
Graphics Erebus - Real-Time Ray Tracing SDFs; UE4/5 Style DFAO and Soft Shadows [HDRP/URP/BUILT-IN]

Discussion in 'Tools In Progress' started by SuperDrZee, Aug 12, 2021.

  1. SuperDrZee

    SuperDrZee

    Joined:
    Aug 12, 2021
    Posts:
    41
The baking part was actually a typo. 'Now' has now been corrected to 'no'. (I'm dyslexic; mixing up similar words is part of the condition.)

And while there are other shadow rendering methods, I would actually be willing to bet money you won't find any that are nearly as performant as Unreal's method or our own (a 4000-meter draw range, not limited by resolution, in under 3 ms, real time, no baking). Which is the entire point of this: not that it produces shadows, but that it does so much faster than rasterizing meshes to textures. We've put this out there a lot. o_O

In regard to the AO, again, if you provide a solution that doesn't use RTX and supports infinite range on screen (SEGI and VXAO are limited by clip map extents) with 20–30-meter ray tracing distances, works in real time and isn't screen space, I'd be very interested in seeing it. :)

I'll also be the first to admit my background was in procedural meshing and GPGPU programming. I may not be a lighting master, and while AO isn't full-blown GI, it's a logical first step when developing a software ray tracing engine. RTAO and other more modern hardware methods haven't been made obsolete. And do realize what you're saying.

SSRT has the luxury of being able to access Unity's entire rendering pipeline, not only to draw the g-buffer but to render the probes within it. In order to work with SDFs the way we are, the entire direct lighting pipeline needs to be built up by hand, as we use zero non-compute shaders and zero meshes. This means all of the work has to be done by us.

We don't have the luxury of bootstrapping reflection probes and letting Unity do the real legwork while we just casually trace the results. We also don't want this because of its shortcomings. You also mention SEGI, for instance, which is notorious for its lack of object support, range, transitions between clip maps, lack of anisotropic voxels, and horrible self-occlusion when objects are moving, among a lot of other things an SDF solution would not have.

Unlike the plugins you mentioned as well, we need to have access to every light, as well as an optimized metric for figuring out light blocking / shadowing, before we could even start working on indirect lighting. Lumen is heavily reliant on the feature sets we have now. Without SDF shadows, there's no way to tell if a pixel is actually receiving light or being blocked by another object. o_O (Even SEGI relies on a very naive 'draw every mesh in the scene again from the directional light's perspective.')

Anyways, sorry to have offended you...? I'm manic with atypical autism, and the dyslexia doesn't help with conveying ideas. Given that, seriously, if I seem to overshoot my accomplishments or jump the gun with something like this, it's just hard for me to control my enthusiasm with these things, or to rationalize a reasonable timeline for something this big.

We had to build a full instancing system with object management capable of handling a million-plus objects, 3D texture atlases, our own data formats, icon generators, a really robust BVH-based SDF converter, light managers, camera and renderer managers, along with all the UI that goes with it, and a lot of other crap. It definitely wasn't as simple as 'shoot rays at a buffer or Unity probes', or 'write geometry shader > trace voxels > profit'.

If you believe this is all simple and pointless and we suck, I mean, all I can really say is that's fine, and sorry if something offended you. :) I hope your day gets a little better, man.
     
    Bwacky, QGGameDesign, Incol3 and 7 others like this.
  2. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    633
Ignore the troll, people. Thanks again, devs, for your hard work, and I can't wait for release!
     
  3. sirleto

    sirleto

    Joined:
    Sep 9, 2019
    Posts:
    146
Maybe too harshly written by SOIL. Too reproachful. But I also wondered about the screenshots above: what exactly is going on? They look unremarkable, to say the least.

Can you perhaps link to past posts in this thread where I can reread what this tech is about?

I mean, comparing SEGI with a 4000 m draw distance obviously makes no sense, for either party (accusers or defenders).

But calling 3 ms SSS something not yet existing sounds rude toward assets like NGSS with its frustum SSS.

The point SOIL raised correctly is: where are the lighting effects? Bounce light, anything. The screenshots so far show neither better results than others in regard to visual fidelity, nor better performance.

Either of which would already be very welcome :)
     
    Last edited: Dec 5, 2022
  4. SuperDrZee

    SuperDrZee

    Joined:
    Aug 12, 2021
    Posts:
    41
Those last screenshots really aren't anything special. Someone had asked about seeing an interior view, and that's about the sum of that.

For me, I'm just not sure how to elaborate on this: while SEGI also has a clip map, there are huge differences between the approaches, and it really isn't fair to SEGI or to this to compare the two. But that comparison is what was thrown at me.

    "But calling 3ms SSS something not yet existing sounds rude against assets like NGSS with his frustum SSS."

Also, sorry if I'm misreading you, but while we do support SSS shadows, those are not what I'm referring to. NGSS does not have distance field ray traced shadows, unless this has changed in the past few months.



None of the shadows here are screen space or use anything remotely similar to NGSS. All the shadows are ray traced in real time at around 3 ms. (Screen space shadows take 0.5 ms.) No RTX, all custom software ray tracing. I'm really curious where I said SSS, as I would like to correct this ASAP.

In case the significance of this is still lost: on my PC, Unity takes 14-40 ms for 500 meters, or a quarter of the view-space distance on screen. So 5 to 10 times faster for 4 times the distance is what I'm trying to sell here. Again, no screen space involved.

Distance field shadows are akin to ray traced shadows; they can draw well off screen, unlike SSS. And while I'm not the only person to have developed something similar, I haven't personally seen anything available for Unity that offers DF soft shadowing, and that's the only thing I am claiming does not yet exist within Unity's ecosystem. (NGSS offers a PCSS filter modded into Unity shadows, and SSS; no ray tracing anything.)
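If it helps to picture what a DF soft shadow trace is actually doing, here's a minimal C++ sketch of the textbook technique (after Inigo Quilez) with a stand-in analytic SDF. It's just the idea, not our actual shaders, which sample baked 3D distance textures instead:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in scene SDF: distance from a point to the nearest surface.
// A real renderer samples baked 3D distance-field textures instead.
float sceneSDF(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f; // unit sphere
}

// Textbook SDF soft shadow (after Inigo Quilez): sphere-trace from the
// shaded point toward the light. How close the ray comes to any surface,
// relative to the distance traveled, sets the penumbra; 'k' controls
// hardness (larger = sharper edges).
float softShadow(float ox, float oy, float oz,  // shaded surface point
                 float dx, float dy, float dz,  // normalized dir to light
                 float maxDist, float k) {
    float res = 1.0f;                    // 1 = fully lit, 0 = occluded
    float t = 0.02f;                     // offset avoids self-shadowing
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float d = sceneSDF(ox + dx * t, oy + dy * t, oz + dz * t);
        if (d < 1e-4f) return 0.0f;      // hit an occluder: full shadow
        res = std::min(res, k * d / t);  // narrowest unoccluded cone
        t += d;                          // safe sphere-tracing step
    }
    return res;
}

int main() {
    // A ray grazing past the sphere lands in the penumbra (~0.2).
    std::printf("shadow = %.3f\n",
                softShadow(1.05f, -2.0f, 0.0f, 0.0f, 1.0f, 0.0f, 6.0f, 8.0f));
}
```

Note there's no shadow map anywhere in that loop: cost scales with the march, not with the light's range or a texture resolution, which is where the long draw distances come from.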

As for more information about all of this, the opening post of the thread explains it.
     
    Last edited: Dec 5, 2022
  5. TKDR

    TKDR

    Joined:
    Apr 14, 2016
    Posts:
    2
Have y'all not even read what Erebus is? Have you looked at the initial post? Erebus is real-time, fully dynamic, distance field ambient occlusion. In the grayscale debug screenshots you can see exactly what it is that Erebus does. That ambient occlusion is not screen space. It's capable of utilizing off-screen information at distances as high as twenty to thirty meters, as the dev describes above. Imagine how flat that scene would look if the ambient lighting were totally uniform, except for some modulation from the surface normals' direction relative to the sky, as with default non-baked ambient lighting in Unity.

    Erebus is not a GI solution. My understanding is that GI will be worked on and released later in a package called Aruna (?). Both will utilize the same fundamental technology, however: 3D per-instance distance fields and a global distance field, constructed dynamically on the fly.

    As someone who's used distance field tech in UE4, I can confirm for the haters that this asset will be a game changer for procedurally generated and/or open world games in Unity that are looking for ambient lighting with a realistic sense of depth and occlusion. Erebus will be more impactful on the final look of many games than any of the features that HDRP currently possesses.
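To make "distance field AO" concrete, here's a minimal C++ sketch of the classic SDF AO loop from the literature, with a stub scene SDF. It illustrates the principle only; I'm not claiming it's Erebus's exact implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in scene SDF: a unit sphere resting on a ground plane at y = -1.
// A real renderer samples per-instance / global 3D distance textures.
float sceneSDF(float x, float y, float z) {
    float sphere = std::sqrt(x * x + y * y + z * z) - 1.0f;
    float plane = y + 1.0f;
    return std::min(sphere, plane);
}

// Classic distance-field AO: step outward along the surface normal.
// Wherever the SDF returns less than the distance stepped, geometry
// (on-screen or off) is closing in, so the ambient term gets darkened.
float dfao(float px, float py, float pz,   // surface point
           float nx, float ny, float nz,   // surface normal
           int steps, float maxRange) {
    float occ = 0.0f, weight = 1.0f;
    for (int i = 1; i <= steps; ++i) {
        float t = maxRange * float(i) / float(steps);
        float d = sceneSDF(px + nx * t, py + ny * t, pz + nz * t);
        occ += weight * std::max(t - d, 0.0f) / t;  // blocked fraction
        weight *= 0.7f;                             // near samples dominate
    }
    return std::max(1.0f - occ, 0.0f);  // 1 = open sky, 0 = enclosed
}

int main() {
    // Ground point beside the sphere: ambient is partially occluded
    // even though the occluder could be entirely off screen.
    std::printf("ao = %.3f\n",
                dfao(1.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 5, 2.0f));
}
```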
     
  6. sirleto

    sirleto

    Joined:
    Sep 9, 2019
    Posts:
    146
Thank you for explaining again what your product is about!

    I guess the SSS was my misunderstanding.

I find the big terrain screenshot nicely uniform, but I think a comparison to NGSS frustum SSS must be allowed, IMO.

Obviously SSS has no off-screen support, and if you want AO-like behaviour you need something like a hacked SEGI, which obviously is either slow, super imprecise, or not really real-time.

So I'm happy to look forward to Erebus plus the other GI product you were talking about above; I hadn't understood that it is a separate product.
     
    Last edited: Dec 5, 2022
    hkalterkait and SuperDrZee like this.
  7. SuperDrZee

    SuperDrZee

    Joined:
    Aug 12, 2021
    Posts:
    41
For sure, you definitely can. :) I'd say a fairer comparison would be to Unity's built-in shadow mapping methods rather than to SSS, unless I'm missing something that makes DF soft shadows more like SSS shadows. But really, it's apples to oranges; I get that to most people, shadows are shadows. ;)

Edit** Sorry, didn't see your last edit.

But yah, you summed it up pretty well with those last parts.

This is for people who want large-scale, artifact-free AO and shadowing and can't fare well running SEGI's expensive voxelization shaders every frame. And that's not to take away from SEGI; it's just that we had to get a lot more hands-on with our clip map to even get it working, and as a result it's a lot more responsive.

Our plan is a full-blown GI solution built on all of this; the hurdles in even getting it this far were a real challenge for what started as a solo dev thing. Even with Chris, it's taken a lot to get it here. In order to do the bounce light, we needed really optimal shadows hitting the ranges Lumen hits now, just so we know what's actually receiving light, even off screen, in order to actually bounce/reflect it.

So, the logical first step was to build up the foundation first, which is what UE4 initially offered with their SDF tech. Then, once that's finished, like it is now, we can start building off of it to support reflections and indirect diffuse, which we're already actively looking at. I think we're leaning toward two separate packages: this being our NGSS, if you will, and Aruna being closer to H-Trace or SEGI, though that's sort of up in the air atm as we work our way into the next phase.

This obviously won't be for everyone. If you're looking for very cheap ray tracing effects that can handle long ranges stably for shadows and AO, that is what Erebus intends to deliver for the time being. :)

Edit** Figured I'll just continue this post rather than double posting.

I will be the first to say this was all hella experimental. I go through the posts on here sometimes, and over the past year it's gone from pretty bad to pretty close to where it should be, especially in these past few months. So yah, definitely a fish out of water, especially getting started.

The examples below aren't as half-assed as the arch-viz scene conversion stuff. For me personally, all I wanted to show there was what the same room's AO would look like with the AO making proper, closer contact.

Here's the demo scene we pushed with the asset package. (We can't include a bunch of stuff from our terrain, so we opted for something simple and non-rights-breaching.)

While it's nothing special per se, maybe it can help illustrate what the AO, as well as our shadowing stuff, does.

Top: AO on; below: AO off. (Shadows in both images are DF soft shadows.)


Top: shadows and AO on; below: shadows and AO off.

So, while it's not magic, for interiors and scenes populated with tons of artificial assets, like buildings or cities where light baking isn't applicable, it can bring out a lot of lost detail, especially in completely unlit areas. (Yah, I know these shapes are really simple; imagine it's a warehouse ;) )

The shadows and AO can account for geometry occluded by something on screen as well as off, which I guess is the major thing that makes any of this worthwhile. Both are ray traced after passing every object in the scene through our culling pipelines; once we know what affects which pixels on screen, we then ray/cone trace the SDF.

There's a lot more to it, but in the end this works similarly to how RTX effects work, in that it traces data sets derived from every mesh we intend to trace. Just rather than BVHs, we use our own acceleration structures and SDFs.
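To make the cull-then-trace idea a bit more concrete, here's a rough CPU-side C++ sketch. The names and structure are illustrative only (the real pipeline runs on the GPU, per screen tile), but the shape is the same: cut the scene down to the few instances that can affect a ray, then trace only those SDFs.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One SDF-carrying instance: a bounding sphere for culling, plus a
// (here elided) handle into the 3D texture atlas holding its SDF.
struct SdfInstance {
    float cx, cy, cz, radius;  // world-space bounding sphere
};

// Coarse culling: keep only instances whose bounding sphere comes within
// 'influence' meters of a ray (a shadow ray, an AO cone, ...). Each pixel
// then sphere-traces a short pre-culled list instead of every object.
std::vector<const SdfInstance*> cullForRay(
        const std::vector<SdfInstance>& scene,
        float ox, float oy, float oz,        // ray origin
        float dx, float dy, float dz,        // normalized ray direction
        float maxDist, float influence) {
    std::vector<const SdfInstance*> kept;
    for (const SdfInstance& inst : scene) {
        // Closest point on the ray segment to the sphere center.
        float vx = inst.cx - ox, vy = inst.cy - oy, vz = inst.cz - oz;
        float t = std::clamp(vx * dx + vy * dy + vz * dz, 0.0f, maxDist);
        float px = ox + dx * t - inst.cx;
        float py = oy + dy * t - inst.cy;
        float pz = oz + dz * t - inst.cz;
        float reach = inst.radius + influence;
        if (px * px + py * py + pz * pz < reach * reach)
            kept.push_back(&inst);
    }
    return kept;
}

int main() {
    std::vector<SdfInstance> scene = {{0, 0, 5, 1}, {50, 0, 0, 1}};
    // A ray straight down +Z keeps the first instance, culls the second.
    auto kept = cullForRay(scene, 0, 0, 0, 0, 0, 1, 100.0f, 0.5f);
    std::printf("instances to trace: %zu\n", kept.size());
}
```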

You can see what I mean about the unlit areas a little better below.

    I find this last image is a pretty good example of what modulating a fixed ambient factor by just large-scale AO can offer.


    Moving a point light around aimlessly. (Just to show the shadowing)


    Moving an occluder around. (Also, pretty aimlessly. Strapped for time atm)

And like TKDR mentioned, it mostly shines in large-scale organic scenes, especially procedural ones.

Top: everything on; bottom: everything off.

We never really talked much about it, but we also semi-support specular occlusion, which can be seen on the spheres in this image.

So no, this won't deliver the visuals that full-blown GI promises, and I won't lie and claim it was ever intended to either. ;)

What it was designed to do is give people who want to build large dynamic scenes a middle ground where stability and responsiveness to scene changes are high priorities, which makes the other solutions within Unity's current offerings inapplicable.

Or maybe for the odd user who has lots of lights and needs a cheaper shadow rendering method, or someone looking for very long-range shadowing in general. (We now support shadowing from point, spot and directional lights, all with variable softening settings local to the light and no hard limits on counts.)
     
    Last edited: Dec 9, 2022
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
Something I wondered: can anisotropic SDFs be used in some way? Like nesting a big regular SDF with small anisotropic bricks on the surface of an object, or vice versa, or are anisotropic SDFs a dead end in every way? I had this weird idea of an oriented anisotropic SDF: the alpha channel holds a crude direction toward the closest surface, and RGB holds the scaled basis aligned to it, a kind of ellipsoid-oriented SDF. What do you think, since you are more accustomed to implementation?
     
    one_one likes this.
  9. SuperDrZee

    SuperDrZee

    Joined:
    Aug 12, 2021
    Posts:
    41
I think it sounds interesting! Funnily enough, I've been thinking a lot on the topic due to the distortions that come up when an SDF gets scaled by a non-uniform factor, and anisotropic distances are what I came up with as a possible solution too. Aside from it probably being a bit involved, I don't think it's a dead end per se. I think maybe even just including the direction could be enough.

Instead of sampling the distance, you could just fetch this vector and apply the volume scaling to it, then treat the length of the scaled vector as the new distance... I think the distance could still maintain high accuracy too. That's very smart! Really cool idea! :)
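Roughly what I mean, as a quick C++ sketch with a stub local SDF. Illustrative only; the correction is only exact when the gradient points straight at the closest point, and the strictly conservative sphere-tracing bound would still be d * min(sx, sy, sz):

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for the object's local-space SDF (in practice a 3D texture
// fetch from the instance's slot in the atlas).
float localSDF(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f; // unit sphere
}

// Central-difference gradient: the direction toward the closest surface
// is already implicit in a plain SDF, no extra channels needed.
void localGradient(float x, float y, float z, float eps,
                   float& gx, float& gy, float& gz) {
    gx = localSDF(x + eps, y, z) - localSDF(x - eps, y, z);
    gy = localSDF(x, y + eps, z) - localSDF(x, y - eps, z);
    gz = localSDF(x, y, z + eps) - localSDF(x, y, z - eps);
    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    gx /= len; gy /= len; gz /= len;
}

// Rebuild the offset to the nearest surface (distance * direction),
// stretch it by the instance's non-uniform scale, and use the stretched
// length as the corrected world-space distance.
float scaledDistance(float x, float y, float z,
                     float sx, float sy, float sz) {  // per-axis scale
    float d = localSDF(x, y, z);
    float gx, gy, gz;
    localGradient(x, y, z, 0.01f, gx, gy, gz);
    float ox = d * gx * sx, oy = d * gy * sy, oz = d * gz * sz;
    return std::sqrt(ox * ox + oy * oy + oz * oz);
}

int main() {
    // Sphere stretched 2x along X: distance along X doubles as expected.
    std::printf("%.3f\n", scaledDistance(2.0f, 0.0f, 0.0f, 2.0f, 1.0f, 1.0f));
}
```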


On another note: if anyone plays with our asset store demos, stick to Built-In. HDRP and URP need some tweaking; we've left a note in the description, with the intent of getting the materials and LODs set up to better match Built-In. This is just something stupid and small we overlooked.

We were also supposed to add billboards but somehow overlooked that, so it's pretty unperformant as well. We should have that sorted in a day or two, with new links here as well as that fixed in the store description. Thanks and sorry!
     
    neoshaman likes this.
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
To be frank, I rubber-ducked myself by asking that question. I have been pondering this for a while, but somehow posting here unlocked new ideas, lol.

Basically, SDFs are super efficient: it's a direction vector scaled by the value of the point sample to get a new point to sample. Everything more asks more memory and more computation for little gain, except maybe in corner cases like rays parallel to a surface, which is why I was looking for extra data.

But after posting my speculation, I realize the data is already in the basic SDF. An SDF has a gradient, so if you sample the derivative you get the vector toward the closest surface by default, and by sampling the derivative of the derivative (the divergence) you get more information about how the gradient will evolve. For example, using the first derivative you get a vector toward the surface; the second tells you whether the vectors toward the surface are parallel, diverge, or converge, which lets you make a more informed guess when defining the next sample strategy. No need for extra data or a new distance primitive.
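In code, both probes fall straight out of central differences on a plain SDF. A minimal C++ sketch with a stub SDF, just to show the idea:

```cpp
#include <cmath>
#include <cstdio>

// Stub SDF (a real implementation samples a 3D distance texture).
float sdf(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f; // unit sphere
}

// First derivative: central differences recover the gradient, i.e. the
// outward direction from the closest surface, from any plain SDF.
void gradient(float x, float y, float z, float e,
              float& gx, float& gy, float& gz) {
    gx = sdf(x + e, y, z) - sdf(x - e, y, z);
    gy = sdf(x, y + e, z) - sdf(x, y - e, z);
    gz = sdf(x, y, z + e) - sdf(x, y, z - e);
}

// Second derivative: the divergence of the gradient (the Laplacian).
// Positive: nearby surface directions spread apart (convex occluder);
// near zero: locally flat; negative: they converge. Usable as a hint
// for choosing the next sampling step, as described above.
float laplacian(float x, float y, float z, float e) {
    return (sdf(x + e, y, z) + sdf(x - e, y, z)
          + sdf(x, y + e, z) + sdf(x, y - e, z)
          + sdf(x, y, z + e) + sdf(x, y, z - e)
          - 6.0f * sdf(x, y, z)) / (e * e);
}

int main() {
    float gx, gy, gz;
    gradient(2.0f, 0.0f, 0.0f, 0.01f, gx, gy, gz);
    // Outside the sphere the gradient field diverges: Laplacian > 0.
    std::printf("grad x = %.2f, lap = %.2f\n",
                gx / 0.02f, laplacian(2.0f, 0.0f, 0.0f, 0.01f));
}
```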
     
    QGGameDesign likes this.
  11. nehvaleem

    nehvaleem

    Joined:
    Dec 13, 2012
    Posts:
    436
All of the demos listed in the asset store description are unavailable. Any info on when these will be up again? I would love to try them out before buying the tool.
     
    QGGameDesign likes this.
  12. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
Came here to say this too. People are not realizing the implications of this asset (and especially of the future Aruna, which is essentially Unity's Lumen instead of yet another voxel GI). This is leagues (and I mean it) above any other shadowing or AO asset ever released. Not even comparable.

I have tried the new Fortnite version, which also uses SDF shadowing and AO (alongside GI, in their case), and it just feels like witchcraft. Shadows of tiny things perfectly rendered while you are seeing the WHOLE island, without loss of FPS. Huge objects still contributing to AO at scales I've never seen before, many times the max radius you usually see in screen-space AO, all while small things still contribute to detailed occlusion (a tradeoff you usually had with screen-space AO). And the GI part is just magic.
     
    Last edited: Dec 12, 2022
    QGGameDesign, sirleto and TKDR like this.
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
Fortnite was always using SDF shadows and AO. What you mean is SDF GI, and Lumen is a collection of screen-space shadows, Nanite shadows and SDF shadows. SDF shadows don't solve small details like grass. Technically SDFs are in the voxel family too, but I get what you mean.
     
    sirleto likes this.
  14. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
We removed the demos and paused the asset; both will be back online in the store as soon as possible. We need to fix some bugs we missed with HDRP before we can give you access again. Sorry about the delay!
     
    LudiKha, QGGameDesign and knxrb like this.
  15. TyGameHead

    TyGameHead

    Joined:
    Apr 29, 2017
    Posts:
    25
    MostHated likes this.
  16. GuitarBro

    GuitarBro

    Joined:
    Oct 9, 2014
    Posts:
    180
  17. TyGameHead

    TyGameHead

    Joined:
    Apr 29, 2017
    Posts:
    25
lol, sorry, didn't know how it fully works, my bad. But why not an SDFGI or DFGI to match it? That would be a nice touch for the future.
     
  18. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
Yeah, we already have something similar, just for characters, but full scenes are possible too if the resolution is not that high. It looks like they also use some kind of cascade system, so I would just combine the real-time SDF textures, though sure, they still eat performance... basically it's like real-time voxel GI, just with SDFs.
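For anyone unfamiliar with the cascade idea: concentric volumes around the camera, each covering twice the extent at the same texel count, and the sampler picks the finest one containing the point. A hypothetical C++ sketch (the volume fetch is a stub; this is the concept, not our code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// One level of a camera-centered global SDF: each cascade covers twice
// the extent of the previous one at the same texel count, so near
// geometry is detailed while far geometry still exists somewhere.
struct Cascade {
    float extent;  // half-size in meters of the region this volume covers
    // Hypothetical fetch: real code trilinearly samples a 3D texture.
    float sample(float x, float y, float z) const {
        return std::sqrt(x * x + y * y + z * z) - 1.0f; // stub scene
    }
};

// Pick the finest cascade that still contains the sample point.
float sampleGlobalSDF(const Cascade* cascades, int count,
                      float camX, float camY, float camZ,
                      float x, float y, float z) {
    float m = std::max({std::fabs(x - camX), std::fabs(y - camY),
                        std::fabs(z - camZ)});  // Chebyshev distance
    for (int i = 0; i < count; ++i)
        if (m < cascades[i].extent)
            return cascades[i].sample(x, y, z);
    return 1e9f;  // outside every cascade: treat as empty space
}

int main() {
    Cascade stack[3] = {{10.0f}, {20.0f}, {40.0f}};
    // A point 15 m out skips cascade 0 and lands in cascade 1.
    std::printf("d = %.2f\n",
                sampleGlobalSDF(stack, 3, 0, 0, 0, 15.0f, 0.0f, 0.0f));
}
```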
     
    TyGameHead likes this.
  19. TyGameHead

    TyGameHead

    Joined:
    Apr 29, 2017
    Posts:
    25
    Nice, good to know.
     
  20. Peter-Bailey

    Peter-Bailey

    Joined:
    Oct 12, 2012
    Posts:
    36
This asset is very exciting. Unity shadows are long overdue for an upgrade, and this looks promising. Not sure why it wasn't done before, considering SDF quality and performance over shadow mapping. It doesn't seem to have as many rendering artifacts and limitations as shadow mapping, and it performs better? Would it require any special shaders? Could the shadows and AO be integrated into custom Shader Graph shaders? How large are the SDF files?
     
  21. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
btw, there is a new video out from a few days ago,
     
  22. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
another new video (posting here, since most of their activity seems to be on Discord only):
    "Erebus Preview - New Unity Lighting System - DF Ray Traced AO & DF Ray Traced Shadows - Tutorial"
     
    sirleto, blueivy, GuitarBro and 2 others like this.
  23. Wolfos

    Wolfos

    Joined:
    Mar 17, 2011
    Posts:
    951
How does the performance scale? The "Shogun" demo doesn't seem to run very well (about 8 ms for Erebus), despite not really looking like much.

8 ms is far too much for a 60 FPS target (it's similar to Lumen's "epic quality" mode), so are there performance modes? It would be nice to see those included in the demos so we can measure performance, and probably to turn off whatever effect makes the Shogun demo perform so poorly (even with Erebus off). It's barely rendering anything; it's not really a realistic workload.
     
  24. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,900
Unfortunately I have to confirm that the demo runs pretty poorly here on my RTX 2060: 15 fps with Erebus enabled, 19 fps with it disabled (which still leaves a ghosted AO on top of the final image). I would expect 90 fps given the simplicity of the demo scene.
     
  25. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
    The "Shogun" demo is only for faster systems, it's very high poly, it's not optimized, but the current Erebus version is now faster, I will soon update the demo...
     
    sirleto likes this.
  26. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
I uploaded a new version which should be a bit faster now. Erebus is still not fully optimized, so I think we can get it down a few more ms.
     
  27. sirleto

    sirleto

    Joined:
    Sep 9, 2019
    Posts:
    146
On one hand, 8 ms is terribly high.
    On the other hand, with current tools this is the reality when aiming for high visual quality (BiRP).

Rough numbers for 3 MPix on a mainstream RTX 2070:
    - 1ms unity shadows close
    - 1ms screen space shadows medium to far (NGSS)
    - 1ms unity post process AO
    - 1ms low Res screen space GI (or approx. via HBAO)
    - 2+1ms voxel based GI + reflections (I use well tuned SEGI)
... and maybe some others matching here

    So the more features this can realistically replace, the more ms one can afford to pay for just one tech.

    Ideally it stays below 6ms or else it's just not enough bang for the buck.
     
  28. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
I agree that at the moment it eats too much performance, but it's not 8 ms; it's around 3-4 ms. If I turn Erebus off I still get 5 ms (in full HD), because the scene is high-poly and it runs on HDRP, which normally isn't as fast as URP or Built-in.
    So, I think if we double the speed to 1-2 ms it would be totally acceptable for me.
     
    GuitarBro, sirleto and hopeful like this.
  29. GuitarBro

    GuitarBro

    Joined:
    Oct 9, 2014
    Posts:
    180
Of course, ms means nothing without listing the hardware used to achieve it. ;) I'm assuming this number is targeting roughly mid-range hardware?
     
    sirleto likes this.
  30. sirleto

    sirleto

    Joined:
    Sep 9, 2019
    Posts:
    146
Well, if Wolfos uses an RTX 2070 and gets 8 ms, then that equals Nexusmaster getting 4 ms with an RTX 3080... as it has approximately twice the fill rate and bandwidth, assuming both use identical resolutions.

Obviously there was a reason why I mentioned 3 MPix and a 2070 for my numbers:
    1920x1080 = ~2MPix
    2560x1220 = ~3MPix
    2560x1600 = ~4MPix
    3840x2160 = ~8MPix

    RTX 2070 = 100% (just a base value of power)
    RTX 3080 = 200% (roughly speaking but not far from reality)
    RTX 4090 = 400%

This works nicely as (see the little calculator sketch below):
    1080p = N fps on an RTX 2070
    1600p = the same N fps on an RTX 3080
    2160p = the same N fps on an RTX 4090
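The same rule of thumb as a tiny C++ sanity-check calculator (cost scales with megapixels pushed and inversely with relative GPU throughput; the numbers are the rough ones from above, nothing more):

```cpp
#include <cstdio>

// Rule of thumb: GPU frame cost scales with megapixels rendered and
// inversely with relative GPU throughput (RTX 2070 = 1.0 here).
float expectedMs(float measuredMs,
                 float fromMPix, float fromGpu,
                 float toMPix,   float toGpu) {
    return measuredMs * (toMPix / fromMPix) * (fromGpu / toGpu);
}

int main() {
    // 8 ms at ~2 MPix on a 2070 maps to ~4 ms on a 3080 (2x throughput),
    // matching the estimate above.
    std::printf("%.1f ms\n", expectedMs(8.0f, 2.0f, 1.0f, 2.0f, 2.0f));
}
```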
     
  31. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,684
    @SuperDrZee

So, what is the current status of this plugin? I see the most recent update was in Feb 2024, but I guess all discussion moved off to Discord...?

    After digging through the docs, it kind of looks like there have been some bug fixes, but perhaps no new development. Is this correct?

(FWIW, a lot of the images demonstrating Erebus on the first page of this thread ... are no longer showing up.)

    (Update: Erebus did get another patch today, 4/10/2024. So there is somebody there. They're just not monitoring this thread, it seems.)
     
    Last edited: Apr 10, 2024
    sirleto likes this.
  32. unity_8p6oRXXqiWbRTQ

    unity_8p6oRXXqiWbRTQ

    Joined:
    Dec 20, 2018
    Posts:
    22
The devs are very active on Discord; support is also there.