Discussion in 'Assets and Asset Store' started by sonicether, Jun 10, 2016.
Well done Sonic. I'm always pleased to see your updates .
Alright, I've just submitted version 0.83 to the Asset Store, which simply provides compatibility with Unity 5.5 and its new inverted depth buffer functionality. It's still compatible with older versions of Unity.
It seems as though the Asset Store is quite busy with a lot of new submissions and updates, so it may take a little longer than usual to be approved.
This is great. Maybe someone on 5.6 beta can indicate if it works with that too.
I will right after v0.83b is out
So, Unity 5.6 (Beta and Final) will support Vulkan and, according to Unity, it will provide a "rendering performance improvement out-of-the-box up to 60%". Do you think SEGI could benefit from it (a better/higher framerate, for example)?
60?! Jeeeez glorious...
I'm guessing there are very few use cases where you see 60% improvement. Maybe on integrated GPU/CPU situations, would be my guess. Not that that isn't impressive still for those limited situations, but I think the vast majority of configurations won't see anything near that level.
Still, free performance is free performance.
Oh sure, not a wholesale 60%, but simply the fact that it 'can' be that high in some case speaks to its performance range; even a 20% boost would be wonderful!
Anybody using SEGI with Opsive's TPC?
Once I activated SEGI, moving the camera in third-person view becomes really choppy,
even when the framerate is high.
Anybody have an idea?
Need a lot more info than that. TPC is just a character controller, and while in and of itself could cause performance issues (without SEGI), adding SEGI doesn't have a direct impact on TPC.
Your base scene without TPC or SEGI
Your base scene with TPC only
Your base scene with SEGI only
Your base scene with both TPC and SEGI
Note your settings for SEGI (screenshot would help too)
Beyond that there's not much else to offer.
Strangely, it just stopped after I started the scene in full screen once.
Hmm, I'll try to narrow it down more.
Well as always, never have the Scene View or Profiler windows visible in the editor, and Maximize the Game View on Play. You could always make a standalone build too to compare.
Remember SEGI is a high end graphics feature, so it's going to be expensive period. There are a TON of settings that you can play with to get the fidelity and performance you need.
Find a Quality Preset in SEGI that you like, that gives you what you want, then start turning things down until it stops looking how you want it.
Alternatively, try lower Quality Presets first that look pretty close to what you desire out of your ideal image quality before playing with individual settings.
While tuning get a better idea how each setting contributes to the final image, and which settings are more resource intensive than others.
Consider reducing the size of the Voxel Volume to start...this increases near detail greatly at the cost of possibly no GI in the distance (you may not even notice this anyway), and then you could turn down settings even further as you'll have more voxels closer to the camera for greater fidelity
Make sacrifices to visual quality if it means your scene runs faster; remember you're already ahead of the 'game'...as you have realtime global illumination in your game!
EDIT: Added one important step regarding Quality Presets!!
Guys, I could really use the help of this community to understand the voxelization process of SEGI; in particular, whether SEGI works with geometry shaders. If geometry is built on the GPU, will it be voxelized correctly and receive appropriate lighting?
@sonicether has been busy and maybe hasn't seen my question.
Sorry about that! A good general rule is that if your geometry can cast shadows "automatically" without any extra steps, it can be voxelized. Though, with a mesh that is only computed in a geometry shader, it is only "visible" within that shader and ceases to exist once it's done being rendered. You might want to check out this thread https://forum.unity3d.com/threads/get-mesh-generated-with-geometry-shader.189378/
So, it looks like your two options are either to move your mesh generation to a compute shader and outputting mesh data via a structured buffer, or the voxelization shader will have to be modified to also generate this geometry in its geometry shader. I'd recommend the first option, since it'll allow that data to be used elsewhere without having to be generated more than once and you'll be able to do additional things with it (like shadows).
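As a rough sketch of the first option, the C# side might look something like this (assuming a hypothetical compute shader with a `GenerateMesh` kernel that writes float3 positions into a structured buffer; the names and vertex format here are made up for illustration):

```csharp
using UnityEngine;

// Sketch: generate mesh data once in a compute shader and keep it in a
// ComputeBuffer so voxelization, shadows, etc. can all read the same data.
public class ComputeMeshGenerator : MonoBehaviour
{
    public ComputeShader meshGenerator; // assumed to contain a "GenerateMesh" kernel
    const int vertexCount = 1024;       // multiple of the kernel's thread group size
    ComputeBuffer vertexBuffer;

    void Start()
    {
        // One float3 position per vertex (12 bytes); extend the stride for normals/UVs.
        vertexBuffer = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        int kernel = meshGenerator.FindKernel("GenerateMesh");
        meshGenerator.SetBuffer(kernel, "_Vertices", vertexBuffer);
        meshGenerator.Dispatch(kernel, vertexCount / 64, 1, 1); // assumes [numthreads(64,1,1)]

        // The same buffer can now be bound to other shaders (e.g. the
        // voxelization pass) without regenerating the geometry:
        // someMaterial.SetBuffer("_Vertices", vertexBuffer);
    }

    void OnDestroy()
    {
        if (vertexBuffer != null) vertexBuffer.Release();
    }
}
```

The key point is that the buffer outlives the dispatch, unlike geometry emitted inside a geometry shader, which disappears once that draw call finishes.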
0.83 is up...so nice to have SEGI back in my project...
The only downside now is that I don't have SEGI in Scene-View (I was using BeholdR which is now a dead asset).
There's another one called Scene-View FX, but that hasn't been updated since May, and I'm reading complaints of it being borked in 5.5
As I understand it, Image Effects can now be rendered in the Scene View in Unity without a separate plugin, but perhaps this functionality needs to be added to SEGI itself?
0.83 works great in 5.6b2 !!!
ExecuteInEditMode, maybe? There's also ImageEffectAllowedInSceneView.
Hey guys, here are some comparison images with SEGI enabled/disabled
@Vagabond_ nice images, but to me (not an expert, though) it appears self-shadowing is gone from the palm versus the first image, and everything appears a bit too bright. Is this just a SEGI effect?
I mean yea I hope it's trivial to add to SEGI?
Yes it probably is but I doubt sonicether is going to add it because not everyone wants to run it in their editor and I don't think Unity has a toggle for them. You'll probably have to do it yourself.
I'd love to, but I'm not entirely sure where to add it in the SEGI script.
I see he already has:
...in his script, and I added (to start):
...but now I'm not entirely sure what else to do. I've been Googling to no avail
**To Add SEGI Image Effect to the Scene-View**
In the SEGI script at the very top...
...right after this part:
[AddComponentMenu("Image Effects/Sonic Ether/SEGI")]
..simply add this!
If nothing happens in your Scene View, I believe this is a Unity bug (that I don't know how to fix). Creating a fresh scene does in fact work exactly as you'd expect:
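Based on the attributes mentioned earlier in the thread, the line to add is presumably [ImageEffectAllowedInSceneView], so the top of SEGI.cs would look roughly like this (a sketch; the actual class declaration in SEGI.cs may differ):

```csharp
// Sketch of the top of SEGI.cs after the change. The attribute names come
// from this thread; the rest of the class declaration is assumed.
[ExecuteInEditMode]
[ImageEffectAllowedInSceneView]
[AddComponentMenu("Image Effects/Sonic Ether/SEGI")]
public class SEGI : MonoBehaviour
{
    // ...existing SEGI implementation...
}
```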
Got it to work in a fresh scene...but not in my existing one (gah!!).
I've noticed this with Unity, where lighting (any, not just SEGI) stops behaving correctly in a scene. I'll try a reimport to see if that 'resets' things...
Hi, the self-shadowing is there, but it is blended with the indirect illumination (bounced light) and the occlusion strength setting is not set to a high value... if you look at the trees on the right, the self-shadowing is much more visible...
The second image uses only SEGI and the third image uses SEGI and the new image effects stack from Unity...
Any word on when SEGI will work with point lights? Is this "in the future" a near future, or some distant, unforeseen five-projects-from-now future?
**Updated my post above regarding Scene-View Image-Effects with a How-To**
Awesome! Thanks for looking into it.
Yea absolutely man! I mean S*** a HUGE part of my process involves being able to see the lighting interactively while building; SEGI absolutely requires Scene View rendering!
SteveB - wow, it works! Thanks ).
Yay! You're welcome though I didn't do very much. At least you and I don't need to worry about Scene-View Fx now!
Has anyone been able to create a windows 64bit build in v5.5.0f3?
I get this on attempting to create a build.
Assets/SEGI/SEGI.cs(384,22): error CS1061: Type `UnityEngine.RenderTexture' does not contain a definition for `generateMips' and no extension method `generateMips' of type `UnityEngine.RenderTexture' could be found. Are you missing an assembly reference?
OK, so I've changed the five or so occurrences of the deprecated .generateMips to .autoGenerateMips in SEGI.cs and can now create a Win64 build, and the results look OK to me. Might need another #if around it if it's yet another 5.4/5.5 incompatibility?
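If a guard is needed, a sketch of one would be (here `volumeTexture` is a placeholder for whichever RenderTexture in SEGI.cs is affected):

```csharp
// Sketch: guard the renamed RenderTexture property across Unity versions.
// "volumeTexture" is a placeholder name, not the actual field in SEGI.cs.
#if UNITY_5_5_OR_NEWER
        volumeTexture.autoGenerateMips = false;
#else
        volumeTexture.generateMips = false;
#endif
```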
That's strange; when you import the scripts into 5.4/5.5, Unity should automatically replace all .generateMips occurrences with .autoGenerateMips.
Is anyone else having this issue?
Yeeeeup got that exact error as well...did what gmatthews did (change the four instances of generateMips) and now it builds.
That said, the other thing I probably should bring up is the state of its usage in VR. As expected, we're currently required (remember this is 5.5) to switch from Single Pass to Multi-Pass to get SEGI to work, and this in turn halves my framerate and, according to SteamVR, also drops frames.
Thing is, lower settings in SEGI have zero impact. Insane or Low, I still get 45 fps period. I can't tell if this is a Unity thing, SEGI thing or VR thing.
Either way, SEGI appears to be capable of running in VR if we could simply get it to run in Single Pass mode.
If you're getting a constant 45fps, you should check your SteamVR Performance settings, and make sure 'Allow interleaved reprojection' is NOT ticked - it cuts your render framerate to 45 and reprojects it to 90, and is meant mainly for machines that can't hit a constant 90 (though for some weird reason it seems to be enabled by default).
Tried it and it didn't work. Besides, when a machine is hitting 90, it's hitting '90', so this wouldn't be an issue. SEGI is already heavy, so having to do twice the work is just a no-go.
I appreciate the idea and it was worth a look-see!
New Year has come. What are the plans for SEGI in the next months?
Maybe someone here has tested and could elaborate a bit?
Well, besides working on getting cascades polished up, there are a few other things I'm focused on.
Support for sharing resources between instances of SEGI will come soon. This, of course, will greatly improve the efficiency of SEGI with VR or multiple views.
I'd also like to get cubemap support working soon. This will mean that ambient light from a cubemap sky will properly reflect the colors in the cubemap, and reflections will fallback to sky cubemap reflections instead of just the solid "skylight" color. Support for local cubemaps is not planned for now.
I've also recently come across something very interesting. This article is well worth a skim through: http://momentsingraphics.de/?p=127
I'd like to experiment with using blue noise instead of white noise for diffuse GI tracing. This will lead to a more pleasing stochastic sampling result, and will also diminish the noticeable "blobs" when using stochastic sampling and bilateral blur (you can see in the article above that blurred blue noise has a much less noticeable structure than blurred white noise). This may even lead to being able to get away with fewer samples per pixel.
So, those are my plans for the next few months. Of course, I'll keep everyone updated on how everything goes.
Oh, and as far as Vulkan providing a speedup with SEGI, I suppose it's possible, depending on how Unity "automagically" leverages Vulkan with general object rendering, that SEGI may see a performance increase with regards to voxelization. Of course I'll have to see for myself.
Hey Cody, how about the point and spot lights support? Any news?
In regard to that blue noise article, you can see an actual implementation here, for people who want to see real-world usage (in Unity):
Starting at 14:40, and it doesn't stop there; they come back to it for many more cases, e.g. at 17:00.
Looks pretty efficient and awesome IMHO
The more I think about this asset, the more I think we should have a "baked" voxel volume we can stream on demand, letting the engine just do the lighting and skip voxelization at runtime. It looks like that would make the engine real-time on mid-range computers too, which would be insane to even think about.
@sonicether could you explain how SEGI might bake lightmaps, or is that simply not possible?
A naive baker would be like this:
- It's a screen-space solution; all you need is to match the screen-projected coordinates to each texture UV... which means you need to be exhaustive in terms of viewpoints.
- Since it's voxelized (I don't know if it's froxel-based, though), maybe we can simply enumerate the triangles intersecting a voxel, then create "viewports" strategically to get the accumulation in that voxel for a given direction.
Knowing more now how SEGI works, baking seems to make little sense. At that point one might as well use Enlighten as, after giving it another go and learning exactly how it works and how best to achieve great results quickly, I'm now able to get very quick bakes that look great. I still prefer the way SEGI renders my scenes, but I'm no longer disappointed in Enlighten and it's obviously fast AF.
All that said, upcoming support for VR and Cascades will keep me firmly entrenched in SEGI lit worlds!
Are the demos using the latest version and does this ever have a chance to run on mac with opengl 4.1?
Great to see those short term goals! I think these features make a lot of sense.
In regards to baking, comparisons to Enlighten are obvious, but let's not forget that baking for procedural/real-time environments would be amazing and would allow for experiences impossible to achieve with Enlighten or any other way.
I didn't even consider its value to procedural worlds, so I guess I'm back on board with baking!
I wonder about this too. I'm working on procedural things and would love to be able to use this, but the current Windows only limitation is enough to make it not suitable for my multiplatform desktop projects.
I don't know the specifics of how SEGI works, but with Vulkan and Metal Compute on OSX in Unity 5.6, I have my fingers crossed!
Macs do not support Shader Model 5 equivalent features (at least with OpenGL 4.1) which means that they aren't capable of running SEGI. Perhaps Vulkan and Metal Compute support will change this and I will be able to use features from these APIs to do the things that SEGI needs to do in order to work.
The information that is needed for cone tracing is simply the position of the shaded pixel in voxel-space, and the world normal. Yes, this information can be determined in screen-space, but it could also be determined in lightmap-texture-space. I haven't looked into the specifics yet, but this is essentially what would need to be done for baking. The more difficult aspect of baking is that the voxel volume would have to be moved around and the scene revoxelized in order to calculate indirect lighting for all surfaces in the scene.
Since the faster part of the algorithm is the actual diffuse GI tracing in screen-space, if the voxel GI data itself could be precalculated/baked and streamed as it's needed, we would be able to skip the slowest part of SEGI (which is voxelization and preparation/mipmapping of the GI volume). I think this would be a simpler and more promising approach than fully baking all lighting, since we wouldn't have to worry about light probes/reflection probes and shading dynamic objects or characters.
I know nothing about streaming this kind of data from the disk, but I'm sure it's something I could manage if I spent enough time learning about it.
Actually, while I was looking for an injection point for screen-space shadows, I came across this: https://docs.unity3d.com/ScriptReference/Rendering.LightEvent.AfterShadowMap.html
The challenge with point and spot light injection into voxel GI data is not the light itself, but the shadows. If I can attach a command buffer to a point or spot light and use this LightEvent to get the shadowmap after it's rendered, then I could use that shadowmap to inject the light data with proper shadows.
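A minimal sketch of that idea (the component name and the global texture name `_SEGIShadowMap` are made up for illustration):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: grab a point/spot light's shadowmap right after it is rendered,
// so a later voxelization pass can sample it as a global texture.
public class ShadowMapGrabber : MonoBehaviour
{
    public Light targetLight; // the point or spot light to capture
    CommandBuffer shadowCmd;

    void OnEnable()
    {
        shadowCmd = new CommandBuffer();
        shadowCmd.name = "SEGI shadowmap capture";
        // CurrentActive is the shadowmap that was just rendered for this light.
        shadowCmd.SetGlobalTexture("_SEGIShadowMap", BuiltinRenderTextureType.CurrentActive);
        targetLight.AddCommandBuffer(LightEvent.AfterShadowMap, shadowCmd);
    }

    void OnDisable()
    {
        if (targetLight != null && shadowCmd != null)
            targetLight.RemoveCommandBuffer(LightEvent.AfterShadowMap, shadowCmd);
    }
}
```

This only captures the shadowmap itself; as noted, the light's view/projection matrices would still need to be reconstructed manually to sample it during voxelization.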
The other information I need is the view/projection matrices for transforming from world space to light shadowmap space in order to sample these shadowmaps during voxelization. I haven't found any way to get this information from the inner workings of Unity's rendering, so I may have to calculate it manually. That's where the real challenge will be.
And of course, adding shadow casting lights to the voxelization step will slow things down. The other option for providing point/spot light shadows during voxelization is to do an actual cone trace to determine visibility instead of sampling the light's shadowmap. It'll take some testing to see which approach will end up being faster.