Discussion in 'Works In Progress' started by Lexie, May 24, 2017.
Looks interesting, might try it tomorrow. Neat tests.
Yeah, it's cool. BTW, it comes with tons of sample scenes, but I had to increase the quality settings, FYI.
Want to throw cash at the screen for an HDRP version. It'll probably never happen, but here's my vote.
@hippocoder I didn't see an email on his Github, but maybe someone could message him with an issue or maybe there's another way? If he knew the interest, maybe he'd consider it.
Is this only for LW, or what pipeline is this for?
There's a neat little trick written here on how to find almost every users email address on GitHub...
This way I was able to find his email address here.
(Who cares about data protection anyway? /s)
If someone contacts him, invite him to come to the forum and open a topic. His project is very interesting.
Consider it done
Looks interesting! Though I wouldn't say it "runs well", just "runs". To get a decent look you have to go to Very High resolution, and at that I'm around 10 FPS. I have to say it looks nicer than the basic SEGI to my taste... if only it would run fast too.
I took a break from work and had a go at a custom implementation of DDGI. There was a talk at GDC on it this year. It's pretty fast and the results are nice for indirect lighting only. This method doesn't fare too well with emissive surfaces or tight spaces, but you could design your spaces around these limitations.
Changes to the scene are almost instant; lighting nearly fully resolves after a few frames. It's pretty crazy how responsive it is.
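That few-frames convergence is typically achieved with temporal hysteresis: each frame, newly traced radiance is blended into the stored probe value. Here's a minimal sketch of the idea — the 0.9 hysteresis value and the probe layout are illustrative assumptions, not Lexie's actual code:

```python
# Sketch of temporal probe blending (hysteresis), the usual way DDGI-style
# systems make lighting "nearly fully resolve after a few frames".
HYSTERESIS = 0.9  # fraction of the old value kept each frame (assumption)

def update_probe(stored, newly_traced, hysteresis=HYSTERESIS):
    """Blend this frame's traced radiance into the stored probe value."""
    return [hysteresis * s + (1.0 - hysteresis) * n
            for s, n in zip(stored, newly_traced)]

# After k frames of a constant new value, the remaining error decays by
# hysteresis**k, so a sudden scene change is ~65% resolved after 10 frames.
probe = [0.0, 0.0, 0.0]    # stored RGB radiance
target = [1.0, 0.5, 0.25]  # radiance after a sudden lighting change
for _ in range(10):
    probe = update_probe(probe, target)
```

Lower hysteresis reacts faster but flickers more under noisy traces; that's the speed/stability knob.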
My GI and DDGI are pretty similar, main difference being that I store the data in a sparse data structure and place more light probes near surfaces to better handle the light transport. The downside is sampling a sparse data structure is slower and I tend to place more light probes in the scene so it takes longer to update the volume.
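To illustrate the sparse-vs-dense trade-off described above: a sparse volume can be stored as a hash map keyed by quantized cell coordinates, so memory is only spent where probes were actually placed (e.g. near surfaces), at the cost of an extra indirection per sample. All names and the cell size here are assumptions for illustration:

```python
# Sparse probe storage sketch: dict keyed by integer cell coordinates.
CELL_SIZE = 1.0  # world units per cell (assumption)

def cell_key(pos, cell_size=CELL_SIZE):
    """Quantize a world position to an integer cell coordinate."""
    return tuple(int(c // cell_size) for c in pos)

class SparseProbeVolume:
    def __init__(self):
        self.probes = {}  # cell key -> stored irradiance (RGB)

    def place(self, pos, irradiance):
        self.probes[cell_key(pos)] = irradiance

    def sample(self, pos):
        # Hash lookup instead of direct array indexing: this indirection is
        # what makes sparse sampling slower than a dense fixed grid.
        return self.probes.get(cell_key(pos), (0.0, 0.0, 0.0))

vol = SparseProbeVolume()
vol.place((0.2, 1.7, 3.4), (1.0, 0.9, 0.8))  # probe placed near a surface
lit = vol.sample((0.6, 1.1, 3.9))            # same cell -> hits the probe
dark = vol.sample((10.0, 10.0, 10.0))        # empty cell -> fallback value
```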
Sounds like DDGI is becoming the winner, because if it's that responsive by default, probably could spread the cost really thin?
I'll have to do some more testing before I can call that one. It's a trade-off between speed and accuracy.
This stuff looks promising. I don't see the black splotches which are happening with voxels.
The person in the video mentions DXR. Does this solution currently require an RTX card, and is their system actually rebuilding the BVH every frame!? I'm a bit confused, as he says fully dynamic, but rebuilding and refitting a BVH is slow!?
In the video version it would be rebuilding the BVH each frame and uses DXR to handle the ray traces. In my version I voxelized the scene and used that data to handle the ray traces, so it wouldn't need an RTX card. I have no idea what kind of cost rebuilding the BVH has. I haven't had the chance to mess around with an RTX card yet.
A realtime GI without needing an RTX card would be awesome. When do you plan to release your GI or DDGI?
Unity's working on DDGI FYI, but no idea what shape or form that takes or when really. @rizu spotted it when doing the rounds on github.
It was me that did the spotting, unless there was some other spotting too
Oops, must've been you. I lose track of all Unity's GitHub stalkers. Wonder when this feature will drop?
Dang! They've hidden the talk behind a membership. It must have been a bug that I could access it freely before... Too late, watched it already.
I feel like RTGI is becoming "how to encode and access sampling targets (from the scene) above a given lighted point", i.e. basically solving hemisphere visibility above each point. Every technique is now using multi-pass bouncing, which reduces the whole problem to a sample-gathering issue, which is more GPU friendly than "ray path finding". It also incidentally allows a predictable budget (samples × lighted points), is caching friendly, and lets you spread compute as the gathering converges, no matter which order the samples are accessed in. All the trouble is about the scene representation used to get that visibility check (basically AO), and it's precompute friendly in static environments. I bet we haven't seen the end of it, as cheaper techniques might be discovered, or smart compositions of various techniques to balance the strengths and weaknesses of each.
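The "sample gathering" framing can be sketched in miniature: irradiance at a point is estimated by firing a fixed budget of cosine-weighted rays into the hemisphere and averaging what they see. The `radiance_along` callback is a stand-in for whatever scene representation (voxels, BVH, probes) answers the visibility query; the normal is assumed to be +Z to keep the sketch short:

```python
import math, random

def cosine_hemisphere_dir(rng):
    """Cosine-weighted random direction in the hemisphere around +Z."""
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def gather_irradiance(radiance_along, samples, seed=0):
    """Monte Carlo gather: with cosine-weighted sampling the cosine term
    cancels against the pdf, so the estimate is just the mean of the
    sampled radiance. Total budget = samples x lighted points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += radiance_along(cosine_hemisphere_dir(rng))
    return total / samples

# Uniform white sky, no occluders: every sample sees 1.0, so the estimate
# is exactly 1.0 regardless of sample count.
est = gather_irradiance(lambda d: 1.0, 128)
```

Because each sample is independent, the gather can be spread across frames or reordered freely — which is the "caching friendly, spread compute" property mentioned above.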
Wait, Unity is preparing a dynamic global illumination in runtime?
And is an approximate release date known?
Hardware-accelerated programmable ray tracing is now accessible through DXR, VulkanRT, OptiX, Unreal Engine, and Unity.
Does DDGI require an Nvidia-only DXR card?
The DDGI fundamentals seem really hardware agnostic; I'm not sure I understand everything correctly yet. While it can benefit from a DXR card, I don't think you're gonna need it.
I guess the question is... although the API may support non-DXR hardware, is it smart enough to give you a halfway worthwhile result if you don't have that hardware muscle backing it up?
Apparently equivalent cards without DXR are 40% slower and more susceptible to scene complexity.
If anything that makes me disappointed that dedicated hardware is only 40% faster!
There is the Unity experimental HDRP DXR that supports realtime ray-traced GI.
The release date is unknown (around the end of 2019), but I think it will be before we get Lexie's implementation, unfortunately.
Oh, thank you very much, I had no idea. It's very good news that Unity is working on this. I would also prefer Lexie's implementation, but I need a realtime GI. If Lexie releases his asset before Unity, it could surely make a lot of money before Unity launches a free solution.
Faster AND stable; refer to the Digital Foundry video on the matter. Also, that's with brute force; I expect techniques to develop to speed that up... which is what DDGI is basically doing anyway. And in fact I think DDGI can be further accelerated, we just need to ditch the raytracing at step 1 in some way lol
The question is, can it be optimised using better APIs/firmware updates, or do we need to wait for new hardware? I can imagine if you applied similar cheats as we use now (e.g. spreading raycasts over multiple frames, using simplified geometry for raycasting, etc etc) it would do well, but is that an achievable thing with the existing hardware?
One of the main benefits of DDGI is that it does scale well across different hardware. In the talk he mentioned he had an example running on Xbox One class hardware. Interestingly, the performance is limited by memory bandwidth, not ray tracing performance. He also mentioned that the ray tracing step is more efficient than using rasterization: ray tracing at very low resolution, even with compute shaders, beats rasterization on the same hardware, because at that point there are many triangles per pixel, which is bad for rasterization efficiency but fine for raytracing.
The limitation of DDGI is that you need a lot of probes in a fixed grid: if the grid is fixed you don't need to traverse a BVH to find the correct probe, which is faster. The issue is that you can then only store and update each probe at a low resolution and colour bit depth, because otherwise you run out of memory and GPU bandwidth due to the sheer number of realtime probes needed.
On modern GPUs, with this technique you run out of memory and bandwidth before you run out of compute shader performance, even without dedicated RTX hardware.
DDGI doesn't produce high-quality AO or reflections, unlike brute-force path tracing, so I'm guessing Nvidia will use RTX cores to add ray-traced AO and reflections at high resolution as a post process, which is something RTX cores will be far faster at than compute shaders.
Interestingly, a lot of the ray tracing needed for DDGI is done in screen space and without a BVH, so I don't think the RTX cores could actually accelerate it much anyway.
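The "no BVH traversal" point is the key trick of the fixed grid: the eight probes surrounding a shading point are found with plain index arithmetic, then blended trilinearly. A minimal sketch — grid layout, origin, and spacing here are illustrative assumptions (real DDGI also weights by visibility and normal, which is omitted):

```python
# Fixed-grid probe lookup: direct indexing + trilinear blend, no tree walk.
def probe_lookup(grid, origin, spacing, pos):
    """grid[ix][iy][iz] holds a scalar irradiance value; returns the
    trilinear blend of the 8 probes surrounding pos."""
    # Position in probe-grid space
    f = [(p - o) / spacing for p, o in zip(pos, origin)]
    i = [int(c) for c in f]              # lower-corner probe index
    t = [c - ic for c, ic in zip(f, i)]  # interpolation weights in [0,1)
    ix, iy, iz = i
    tx, ty, tz = t
    acc = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((tx if dx else 1 - tx) *
                     (ty if dy else 1 - ty) *
                     (tz if dz else 1 - tz))
                acc += w * grid[ix + dx][iy + dy][iz + dz]
    return acc

# 2x2x2 grid: probes valued 0.0 on the x=0 face, 1.0 on the x=1 face.
grid = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
mid = probe_lookup(grid, origin=(0, 0, 0), spacing=1.0, pos=(0.5, 0.5, 0.5))
```

The lookup cost is constant per sample, which is why the memory/bandwidth cost of storing the dense grid becomes the bottleneck instead.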
Also I think there is probably a way to optimize static-scene probe tracing: because the scene doesn't change around a probe, there should be a way to cache that data smartly and sample it later, since it's mostly single hit, and only raycast changes or dynamic objects. Think of it like a realtime Enlighten bake. Also, raycasting could probably be replaced with rasterization if the scene density isn't that high with sparse probes. I say that because if we find a smart way to cache the data (say a cubemap storing world positions, I don't know), we would only need to sample the energy difference. Now DDGI in its current form is probably best for high end, but those optimizations could translate to very low end hardware, which doesn't have the scene density that makes rasterization impracticable, and can't rely on light field probe density.
I know that this is your own thing and that we have no right to ask anything from you.
But given the latest poopstorm Unity is in regarding realtime GI, please consider releasing this soon and hopefully on HDRP as well.
Yeah, now would be an excellent time to swoop in with a killer GI solution. Would make many people (including myself) very happy lol.
Damn, 2021 for a new Unity GI system, crazy. I guess now would be a good time to release something.
That would be fantastic! You've got a window of time where Unity's dropped the ball and most consumer PCs aren't good enough to do raytracing. Also I think many people would go for a decent system and stick with it rather than having to pivot their game during development to use whatever Unity's new system is.
To be honest, this looks awesome!
Also, keep in mind, raytracing is more or less "bruteforce" solution to the GI problem.
I do not think just dumping GPU performance for RTX cores is a smart idea.
There should be a proper software realtime GI solution without nasty price tag attached to it.
Now that Enlighten is pretty much trash / gone, I think you can't have a better time to introduce this solution to the market.
Not to mention that RTX will probably never hit built-in renderer.
He winked, you all saw it!
On a forum, I think a wink is a legally binding promise
Yeah... Just keep in mind that talks of releasing one of his older attempts were already raised last year and... well... we all know how that went.
Yes, winks are contractually binding. This is why I myself prefer to use the dubiously ambiguous green-grin.
Yeah, I would never trust a green grin.
I need a grin lantern to see clearly into all this non ambiguous promise binding stuff
Does anyone know if hybrid approaches are possible? For example if you could use a GI solution as the initial pass when scene or lighting changes dramatically so the raytracing doesn't have the appearance of slow convergence. Or alternately if you could get faster frame times by only raytracing certain objects in the scene (for example metallic or translucent objects).
Edit: NVM, I'm dumb, thought about real hardware raytracing.
Yes it's possible, DDGI is such an approach.
I.e. use a regular light probe volume, augmented with a visibility structure to prevent leaking, to get baked GI. Update (more or less slowly) that structure with any raytracing update method.
- It means you can stream GI from disk for quick initialization.
- Or you can trace only once at initialization, and use the result.
- Or stream and update when needed using tracing.
- Or update with raytracing and inject light with a non-raytracing solution and have the raytracing still pick it up.
- Well, any mix and match you can think of.
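One mix-and-match variant in miniature: probes start from a baked/streamed snapshot so there is no visible convergence from black, then a trace pass refreshes a small budget of probes per frame to spread the cost. Every name here is a hypothetical illustration, and the trace is stubbed:

```python
# Hybrid probe update sketch: baked initialization + budgeted round-robin
# retracing, so startup is instant and the per-frame cost is bounded.
def init_from_baked(baked):
    """Quick initialization: copy the streamed/baked GI straight in."""
    return list(baked)

def update_some_probes(probes, trace, budget, frame):
    """Re-trace only `budget` probes per frame, round-robin over the set."""
    n = len(probes)
    for k in range(budget):
        i = (frame * budget + k) % n
        probes[i] = trace(i)
    return probes

baked = [0.2, 0.2, 0.2, 0.2]   # baked irradiance per probe
probes = init_from_baked(baked)
trace = lambda i: 1.0          # stub: pretend a light just turned on
for frame in range(2):         # 2 frames x budget 2 covers all 4 probes
    update_some_probes(probes, trace, budget=2, frame=frame)
```

With this scheme, how stale a probe can get is simply `probe_count / budget` frames, which makes the latency/cost trade-off explicit.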
I'm working on a custom implementation of irregular grids to store the world as triangles instead of voxels. Some of the work I've been doing to calculate realtime GI is let down by the resolution of the voxel grid. Sparse voxel octrees allowed me to store a higher-res voxel version of the world, but getting the resolution required was still too much VRAM.
So I've moved on to storing the world as triangles instead. This is a lot faster to generate, takes up a lot less VRAM, and should be faster to traverse once it's done, so big wins all around.
On the left is the traversal cost, on the right is the triangles with their ID.
Little sky occlusion test by sending out 128 rays per pixel to confirm the tracer is working. It's not really feasible to do a brute-force trace like this; it was just a little test for fun.
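The sky occlusion test above, in miniature: for each pixel, fire N uniformly sampled upper-hemisphere rays and record the fraction that reach the sky. The `is_occluded` callback is a hypothetical stand-in for Lexie's triangle-grid traversal:

```python
import math, random

def sky_visibility(is_occluded, n_rays, rng):
    """Fraction of uniform upper-hemisphere rays that miss all geometry."""
    hits_sky = 0
    for _ in range(n_rays):
        # Uniform direction on the upper hemisphere: cos(theta) uniform in
        # [0, 1) gives uniform density over solid angle.
        u1, u2 = rng.random(), rng.random()
        z = u1
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * u2
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if not is_occluded(d):
            hits_sky += 1
    return hits_sky / n_rays

rng = random.Random(0)
open_sky = sky_visibility(lambda d: False, 128, rng)         # nothing blocks
half_wall = sky_visibility(lambda d: d[0] > 0, 128, rng)     # wall on +x side
```

At 128 rays per pixel this is pure brute force, as noted above; a real renderer would amortize far fewer rays per pixel over many frames.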
This should also fix light leaking eventually!?
It will help create a more accurate representation of the lighting, so any lighting artifacts would be due to compression of the lighting data rather than an issue with the traces through a voxel representation of the world.