SEGI (Fully Dynamic Global Illumination)

Discussion in 'Assets and Asset Store' started by sonicether, Jun 10, 2016.

  1. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Example:

    Declare them like this:
    RWTexture3D<float4> tracedTexture0;
    RWTexture3D<float4> tracedTexture1;


    Write to like this:
    tracedTexture1[voxelCheckCoord0] = float4(0, 0, 0, 0);


    Read from like this:
    gi.rgb = tracedTexture0[voxelCheckCoord].rgb;


    At the start of any loop that writes the values, add this attribute:
    [allow_uav_condition]


    And in the .cs, bind it as a UAV like this:
    tracedTexture0.enableRandomWrite = true;
    context.command.SetRandomWriteTarget(1, tracedTexture0);
    context.command.SetRandomWriteTarget(2, tracedTexture1);


    Then you can read/write to your heart's content.

    The only caveats: you have to write an entire texel at once; you can't write individual r/g/b/a elements. And if you're doing any major calculations, a compute shader will be faster than doing them in the fragment shader.
     
    jefferytitan and neoshaman like this.
  2. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    You can mask the compute to set "individual" elements though?
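    Something like this, I mean (a rough sketch; voxelGrid is just a hypothetical RWTexture3D<float4>):

    #pragma kernel CSMain

    RWTexture3D<float4> voxelGrid;

    [numthreads(8, 8, 8)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        // Load the whole texel, change one channel, store the whole texel back.
        float4 texel = voxelGrid[id];
        texel.g = 1.0;
        voxelGrid[id] = texel;
    }

    (Not atomic, of course, so overlapping threads would still race.)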
     
  3. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    Thanks! In your code you use tracedTexture0 and tracedTexture1. Does this mean that you have to write to a copy of the texture, not the same texture? I suppose it's not surprising, just not clear how compute shaders work.
     
  4. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Na..... That's just because I've been experimenting with it using a couple of secondary grids.... Because SCIENCE!
    I just copied and pasted from Visual Studio because I'm also lazy. You can r/w the same texture as much as you like.

    In regards to compute shaders, I don't know if the per-element restriction also exists there. I only know about it due to shader compile errors saying as much when playing with the fragment shader. The documentation is somewhat lacking here, and I had to google my way through frustrated forum threads.
     
  5. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Just a question about the passes in the final shader: some seem unnecessary, don't they bloat and slow down the code? I mean, there is a pass specifically tagged as unused, but there is no visible mechanism to disable it :eek:
     
  6. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    While they are in the code... They're not actually called from the .cs... So, no, there is no performance impact from them being there; they're effectively disabled already.
     
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I had no idea we could call passes individually in code :rolleyes: I need to stop writing small local shaders and start going to the shader-fu level like you.
     
  8. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    If you check the .cs source inside the OnRender() function... You'll see lots of Blit commands... That's where the passes are called from.
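    In general you can run any single pass of a material that way (a minimal sketch, not the actual SEGI code; the material and the pass index here are placeholders):

    using UnityEngine;

    public class SinglePassBlit : MonoBehaviour
    {
        public Material effectMaterial; // a multi-pass shader on this material

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            // The last argument picks which pass of the shader runs;
            // passes that are never referenced are simply never executed.
            Graphics.Blit(src, dst, effectMaterial, 2);
        }
    }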
     
    jefferytitan and neoshaman like this.
  9. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    This is the frustration that I get. I haven't found anywhere a good general end-to-end reference for shaders. There are too many architectures and versions and optional capabilities and instruction limits, etc. For example, I had a couple of (what I thought were good) ideas for which I couldn't quite find the references I needed to get started. Like:
    • For a big bank of security monitors can you preserve a low-res G-buffer each of the static geometry and simply add dynamic objects and re-light them?
    • Could you similarly choose a couple of spots in each room with good visibility and use a low res render of them to enhance Screen Space Raytraced Reflections by catching rays that go out of range of the primary camera? I love SSRR but it suffers badly from not working for an obvious case - flat surfaces facing the camera.
     
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Yes, that's easy.

    A multires approach like cascaded shadow mapping is quite a good idea, I like it; maybe you should look at cascaded shadow mapping as a start.

    But that basically answers both your questions. It looks like a multi-pass shader render pipeline.
     
  11. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    That's where the tears begin lol. I don't recall exactly what problems I encountered, but I think the vast majority of Unity examples out there concern the standard rendering pipeline, plus the odd extra texture and post-processing shader for fun. At the time it wasn't quite clear how to skip large chunks of the rendering pipeline, do them out of order, etc. Maybe it's easier in 2018. Also, I recall many wails from the forums about accessing the G-buffer only to find whole channels were missing.

    If you have any words of wisdom on how to save an old G Buffer, and then continue rendering to it later, it would be much appreciated.
     
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The key is that you don't access the G-buffer, you create an equivalent. Since you have a fixed-view scene, you won't update everything every frame. So you just apply the concept of a G-buffer rather than accessing Unity's G-buffer, like SEGI did with shadows: it implemented its own depth texture.

    I mean, what's a G-buffer? It's just a depth image, a colour image and a normal image. Then you just do standard shading on them by sampling those images. So you can prebake these images if the views are fixed or unvarying, or cache them at runtime. To blend dynamic objects, you render them similarly and compare their depth to the image you blend them into.

    In fact you can look at the SEGI code to see how it implemented custom shadows and the like; I commented some of those shaders right here!

    I think blit, render textures and camera replacement shaders are a good place to start.
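    A rough sketch of the capture side, assuming a dedicated camera and a replacement shader of your own (this is not SEGI's actual code; the shader name is a placeholder):

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class BackgroundCapture : MonoBehaviour
    {
        public Shader depthNormalsShader; // hypothetical replacement shader
        public RenderTexture captureRT;

        Camera captureCam;

        void Start()
        {
            captureCam = GetComponent<Camera>();
            captureCam.targetTexture = captureRT;
            captureCam.enabled = false; // render on demand, not every frame
        }

        // Call only when the cached view actually needs refreshing.
        public void Capture()
        {
            // Renders the scene once with every material substituted by the
            // replacement shader, matched via the "RenderType" tag.
            captureCam.RenderWithShader(depthNormalsShader, "RenderType");
        }
    }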
     
    jefferytitan likes this.
  13. Pode

    Pode

    Joined:
    Nov 13, 2013
    Posts:
    145
    I'm in the following situation with our current project:
    - archviz rendering (so an indoor scene, nothing outdoor) => GI is very important
    - 15 to 20 pieces of furniture that can be dynamically swapped (changed)
    - all of that furniture is made like this: a 3D model and its attached textures (everything in those textures can be pre-baked) => that's where I can add the step of baking an SDF per model
    - linux/osx machines

    There's nothing on the market that can address this situation. SEGI would be the closest.

    With MRT, you can write from a fragment shader to a 2D RenderTexture.
    You can then store all the 'slices' of your 3D texture in your RT (each slice sits next to its neighbour).
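    The slice indexing is simple arithmetic, something like this hypothetical helper (assuming a res³ volume packed as res slices side by side in one row):

    // Map a 3D voxel coordinate into a 2D atlas of side-by-side slices.
    // The atlas render target is (res * res) x res texels.
    uint2 VoxelToAtlas(uint3 voxel, uint res)
    {
        return uint2(voxel.z * res + voxel.x, voxel.y);
    }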
     
  14. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    The trick with G-buffers and some channels being missing when they shouldn't be... is that certain channels are only written when certain other rendering options are either on or off... I forget which applies to which, but I discovered this peculiarity when working on SEGI.

    If you want to emulate your own G-buffers, I'm working on that for my forward path implementation. There's already an example of gBuffer0 emulation in my code... And I'm gonna add in support for gBuffer1 emulation next.
     
    Shinyclef and jefferytitan like this.
  15. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    @ The person with the VR memory leak. Sorry, can't remember who. That's now fixed in the repo head.
     
    Shinyclef likes this.
  16. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    Just to clarify, I assume the standard shaders are all hardcoded to use the "proper" Unity G buffer. Therefore I would create render textures which are formatted the same as the appropriate G buffers. I would then need to replace any standard shaders that use the G buffer with variants that use my texture instead?
     
  17. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Is there anyone watching this thread *really* good at math who'd be up for helping me with my staggered tracing logic?
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Depends which math lol

    My personal analysis is that it's not just a shader problem you are facing, it's a render pipeline problem. Unity's G-buffer isn't just a shader and a texture format, it's also part of a process with a certain order of operations, which is why they are doing SRP now. We didn't need SRP to emulate them, it was just more costly.

    So in your case, you want to capture the scene and then do partial rendering on it: there is custom culling, custom resolution and custom rendering. The G-buffer just does it all in one pass, but if you want to reuse the "background buffer" and only add dynamic objects on top, that's not how the render pipeline works at all; it doesn't "add" objects onto the deferred buffer.

    So next you would want to capture the buffer into a texture and then use it. There is an option that does that already: https://docs.unity3d.com/Manual/SL-CameraDepthTexture.html. But then you still have to make shader variants yourself, so you can compare depth, have a lighting shader ready, and do the pass yourself: https://docs.unity3d.com/Manual/SL-DepthTextures.html

    But Unity makes it simple to write variants of the standard shader with ShaderLab's surface shaders: they already do the lighting, and you basically just write (pass the values) to a "surface" that gets lit properly, so your depth comparison would be made there. https://docs.unity3d.com/Manual/SL-SurfaceShaders.html
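    The depth comparison itself would look roughly like this in a fragment shader (a sketch only; _CameraDepthTexture is Unity's built-in depth texture, while the cached depth/colour textures are hypothetical names for buffers you captured yourself):

    // Assumes UnityCG.cginc is included and the camera has depthTextureMode set to Depth.
    sampler2D _CameraDepthTexture;   // Unity's built-in depth texture
    sampler2D _CachedDepth;          // hypothetical: saved background depth, stored linear 0..1
    sampler2D _CachedColor;          // hypothetical: saved background colour
    sampler2D _MainTex;              // live render of the dynamic objects

    float4 frag(v2f_img i) : SV_Target
    {
        // Linearize the live depth so it compares against the cached one.
        float liveDepth = Linear01Depth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
        float cachedDepth = tex2D(_CachedDepth, i.uv).r;

        // The dynamic object wins wherever it sits in front of the cached scene.
        return liveDepth < cachedDepth ? tex2D(_MainTex, i.uv) : tex2D(_CachedColor, i.uv);
    }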
     
  19. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    I'm good at specific aspects of maths, probably not the exact ones you need. ;) Maybe post what you're currently stuck on, and someone like me who has specific strengths may be able to help.
     
  20. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Right, so I have a nearly functional implementation of ray caching across frames.
    Nearly, in that the important bit works great.... But I have a problem I could use some help with, which is calculating the voxel coordinates to read/write the cache data with. I basically suck at math, so I'm hoping there's a mathematically literate sort out there who might have some suggestions?

    At present the implementation is as follows:

    C#
    Code (CSharp):
    //Render diffuse GI tracing result
    context.command.SetRandomWriteTarget(1, tracedTexture0);
    context.command.SetRandomWriteTarget(2, tracedTexture1);
    context.command.SetRandomWriteTarget(3, tracedTextureA0);
    //context.command.SetGlobalTexture("tracedTexture0", tracedTexture0);
    context.command.Blit(RT_gi1, RT_gi2, material, Pass.DiffuseTrace);

    tracedTexture1.IncrementUpdateCount();
    if (tracedTexture1.updateCount >= tracedTexture1UpdateCount + 32 || tracedTexture1.updateCount < tracedTexture1UpdateCount)
    {
        tracedTexture1UpdateCount = tracedTexture1.updateCount;

        //context.command.SetComputeTextureParam(transferIntsTraceCacheCompute, 0, "Result", tracedTexture0);
        //context.command.SetComputeTextureParam(transferIntsTraceCacheCompute, 0, "RG0", tracedTexture1);
        //context.command.SetComputeIntParam(transferIntsTraceCacheCompute, "Resolution", 512);
        //context.command.DispatchCompute(transferIntsTraceCacheCompute, 0, 512 / 16, 512 / 16, 1);

        transferIntsCompute.SetTexture(1, "Result", tracedTexture0);
        transferIntsCompute.SetTexture(1, "RG0", tracedTexture1);
        transferIntsCompute.SetInt("Resolution", 512);
        transferIntsCompute.Dispatch(1, 512 / 16, 512 / 16, 1);

        clearCompute.SetTexture(0, "RG0", tracedTexture1);
        clearCompute.SetInt("Res", 512);
        clearCompute.Dispatch(0, 512 / 16, 512 / 16, 1);

        clearCompute.SetTexture(0, "RG0", tracedTextureA0);
        clearCompute.SetInt("Res", 512);
        clearCompute.Dispatch(0, 512 / 16, 512 / 16, 1);

        //context.command.SetComputeTextureParam(clearCompute, 0, "RG0", tracedTexture1);
        //context.command.SetComputeIntParam(clearCompute, "Res", 512);
        //context.command.DispatchCompute(clearCompute, 0, 512 / 16, 512 / 16, 1);

        //context.command.SetComputeTextureParam(clearCompute, 0, "RG0", tracedTextureA0);
        //context.command.SetComputeIntParam(clearCompute, "Res", 512);
        //context.command.DispatchCompute(clearCompute, 0, 512 / 16, 512 / 16, 1);
    }
    Shader
    Code (HLSL):
    Pass //0
    {
        HLSLPROGRAM
        #pragma vertex VertSEGI
        #pragma fragment Frag
        #pragma fragmentoption ARB_precision_hint_fastest
        #pragma multi_compile_instancing
        #if defined (VRWORKS)
            #pragma multi_compile VRWORKS_MRS VRWORKS_LMS VRWORKS_NONE
        #endif

        int FrameSwitch;

        sampler2D NoiseTexture;

        float4 Frag(VaryingsSEGI input) : SV_Target
        {
            float2 coord = input.texcoord.xy;
            float2 uv = input.texcoord;

            //Get view space position and view vector
            float4 viewSpacePosition = GetViewSpacePosition(coord, uv);

            //Get voxel space position
            float4 voxelSpacePosition = mul(CameraToWorld, viewSpacePosition);
            voxelSpacePosition = mul(SEGIWorldToVoxel0, voxelSpacePosition);
            voxelSpacePosition = mul(SEGIVoxelProjection0, voxelSpacePosition);
            voxelSpacePosition.xyz = voxelSpacePosition.xyz * 0.5 + 0.5;

            //Prepare for cone trace
            float3 worldNormal;
            if (ForwardPath) worldNormal = GetWorldNormal(coord).rgb;
            else worldNormal = normalize(SAMPLE_TEXTURE2D(_CameraGBufferTexture2, sampler_CameraGBufferTexture2, coord).rgb * 2.0 - 1.0);

            float3 voxelOrigin = voxelSpacePosition.xyz + worldNormal.xyz * 0.003 * ConeTraceBias * 1.25 / SEGIVoxelScaleFactor;

            float3 gi = float3(0.0, 0.0, 0.0);
            float3 traceResult = float3(0, 0, 0);

            const float phi = 1.618033988;
            const float gAngle = phi * PI * 1.0;

            //Get blue noise
            float2 noiseCoord = (input.texcoord.xy * _MainTex_TexelSize.zw) / (64.0).xx;
            float4 blueNoise = tex2Dlod(NoiseTexture, float4(noiseCoord, 0.0, 0.0));

            float depth = GetDepthTextureTraceCache(uv);
            blueNoise *= (1 - GetDepthTexture(uv)) * 50;
            int tracedCount;

            //Trace GI cones
            int numSamples = TraceDirections;
            uint3 voxelCoord = uint3(0, 0, 0);
            float latitude;
            float longitude;

            for (int i = 0; i < numSamples; i++)
            {
                float fi;
                if (i > 1) fi = (float)i + blueNoise.x * StochasticSampling;
                else fi = (float)i * StochasticSampling;
                float fiN = fi / numSamples;
                longitude = gAngle * fi;
                latitude = asin(fiN * 2.0 - 1.0);

                float3 kernel;
                kernel.x = cos(latitude) * cos(longitude);
                kernel.z = cos(latitude) * sin(longitude);
                kernel.y = sin(latitude);

                kernel = normalize(kernel + worldNormal.xyz * 1.0);

                //voxelCoord = float3(voxelOrigin.xy + kernel.xy + blueNoise.xy, 255 - voxelOrigin.z + kernel.z);
                voxelCoord = float3(uv.x, uv.y, depth * 255);
                tracedCount = DecodeFloatRGBA(tracedTextureA0[voxelCoord]).r;

                if (tracedCount.r <= 32) traceResult += ConeTrace(voxelOrigin.xyz, kernel.xyz, worldNormal.xyz, coord, 0, TraceSteps, ConeSize, 1.0, 1.0, depth);
            }

            traceResult /= numSamples;

            //Nin - Keep this for debugging
            //voxelCoord.z = depth * 256;

            // 0.03125
            if (tracedCount <= 32)
            {
                interlockedAddFloat4(tracedTexture1, voxelCoord.xyz, float4(saturate(traceResult.rgb), 2));
                //tracedTexture1[voxelCoord] = EncodeFloatRGBA(float4(traceResult.rgb, 2));
                tracedTextureA0[voxelCoord] = EncodeFloatRGBA(float4(tracedCount + 1, 0, 0, 0));
                //interlockedAddFloat4(tracedTextureA0, voxelCoord.xyz, float4(1, 0, 0, 0));
            }
            //traceResult.rgb = DecodeFloatRGBA(tracedTexture1[voxelCoord]);
            //traceResult.rgb = (tracedTexture0[voxelCoord].rgb * 0.03125 + traceResult.rgb) * 0.5;
            traceResult.rgb = tracedTexture0[voxelCoord].rgb * 0.03125;
            //traceResult.rgb = SAMPLE_TEXTURE3D(tracedTexture0, samplertracedTexture0, float3(voxelCoord.xyz));
            //traceResult.rgb = DecodeFloatRGBA(tracedTexture1[voxelCoord]) / 32;

            gi = traceResult.rgb * 1.18;

            return float4(gi, 1.0);
        }
        ENDHLSL
    }
    And this is it nearly working. I just need the magic math that is currently escaping me:
     
  21. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Oh, coordinate conversion, that's okay, I can deal with that. I'll look at it!
     
  22. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    Maybe I should spy on this too. At one stage I was playing with a post-processing effect that used the world-space coordinates to get an effect like changing the resolution of voxels rapidly, but I seem to recall it had weird rounding errors that got worse the further a point was from the middle of the screen.

    When it comes to shaders I am the proverbial coder that knows enough (and only enough) to be dangerous. ;)
     
  23. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    >weird rounding errors that got worse the further a point was from the middle of the screen
    Ah, the old floating-point precision issue... Yeah, that's a pain inherent in all 3D positional systems.

    I always think the more eyes on a problem the better. Level of skill isn't such a barrier to problem solving, it being more a conceptual than a technical pursuit. Actually, those with less hands-on experience can often see things a more seasoned dev can miss... There are fewer ingrained habits and dogmatic conventions to blind you along the way :)
     
  24. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    I suspected as much, but I'm not sure whether it's inherent or due to the particular operators or order of operations that I used. It's a shame they don't have doubles.

    I achieved a semi-voxel look by taking the screen coordinates, combining with the depth to get a world-space co-ordinate, rounding by whatever granularity I wanted, then converting back to screen space co-ordinates to get a color sample. So it could be either conversion, the rounding, imprecision of world-space coordinates, or a combination thereof. I didn't want to use actual voxels as the cost would vary wildly depending upon the granularity.
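    Roughly what I did, from memory (a sketch; the screen/world conversion helpers are assumed, not shown):

    float _VoxelSize;
    sampler2D _MainTex;
    sampler2D _CameraDepthTexture;

    float4 frag(v2f_img i) : SV_Target
    {
        // Snap each pixel's world position to a grid, then resample the colour
        // from wherever the snapped point projects back onto the screen.
        float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        float3 worldPos = ScreenToWorld(i.uv, rawDepth);      // assumed helper
        float3 snapped = (floor(worldPos / _VoxelSize) + 0.5) * _VoxelSize;
        float2 snappedUV = WorldToScreenUV(snapped);          // assumed helper
        return tex2D(_MainTex, snappedUV);
    }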

    What I like about shaders as a mid-level (I hope) user is the artistry. For example there are more ways to screen transition than there are stars in the sky. ;)
     
  25. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Be careful: depth buffers aren't always stored linearly; they can have a logarithmic curve.
     
  26. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Yeah, I have this custom function to take care of that.

    inline float SEGILinear01Depth(float z)
    {
        // Rescale the non-linear depth buffer value so that 0..1 spans the
        // voxel volume rather than the camera's near/far range.
        int x = voxelSpaceSize / 0.01;
        return 1.0 / (x * z + _ZBufferParams.y);
    }


    It scales it to match the voxel space size.
     
  27. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    I finally managed 90-120 fps with latest version of SEGI.

    But it's giving random d3d11.dll access violation crashes. When I disable SEGI the crashes are gone. It happened with the old version too (sonic's one). Any way to solve this? It apparently only happens on some machines, but on enough of them for it to be important. I have the latest version of Unity 2018 and all my drivers updated (clean Windows install). Actual performance is very good for me now, using the Ninlilizi version.

    Switching to Unity baked GI is a no-no
     
  28. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Weird... I've never encountered that before... Can you tell me a little more about your software/hardware environment?
     
  29. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    They are random. They used to be triggered by the Post Processing Stack v2 vignette effect, so we changed that effect. Now they've started to happen just randomly (independent of code, just walking around for example), and disabling SEGI apparently stopped them. Tried with both old sonic's SEGI and with this new one of yours. Our game uses Steam too. My machine is a third-generation i7, 8 GB of RAM, GTX 1070. The other machine this happens on is an i5, 12 GB of RAM, GTX 970. Mine was actually completely cleaned some days ago. We have 2-3 more testers on which this problem is not present, who have both a mid-range and a high-end rig. Every one of us ran memtests and dxdiag and everything seems to be fine.

    I was always able to solve other crashes, but with this d3d11.access violation thing, i'm at a loss.

    Here is an example error log we get. It's on Unity 2017, but we tried every version of Unity to no avail:

     
  30. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Weird... Are you all running fully updated Windows 10?
    Any particular AV software?
    Latest Nvidia driver?
    Any game overlay injectors like GeForce Experience or Twitch or such?
     
  31. Oniros88

    Oniros88

    Joined:
    Nov 15, 2014
    Posts:
    150
    Yes, fully updated. Tried both with and without AV, which is Avast. Both default and fully updated Nvidia driver. Clean Windows install twice. Tried basically everything.
     
  32. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Hmmm.... Guess we need to find the commonality between the machines with the issue somehow... With that type of crash it's probably a good idea to first file a report on the Unity issue tracker site.
     
  33. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    If you attach the Visual Studio debugger to the editor process... it might be possible to get a more detailed stack trace. I think the editor has debug symbols available somewhere.
     
  34. jefferytitan

    jefferytitan

    Joined:
    Jul 19, 2012
    Posts:
    88
    I have some experience dealing with crash bugs in a production game. I'd also suggest looking at which features may cause or exacerbate the crashes. Off the top of my head I suspect the voxelisation, both in terms of memory usage and non-standard operations. It may be worth looking at whether using a low voxel resolution or turning off voxel updates after a few frames has some effect. Obviously neither is a workaround, but it may point in a direction.
     
  35. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    Maybe related but probably not: I was having d3d11 access violation crashes on exiting my game if a compute shader was in use; that got fixed only recently, in 2017.4.10.
     
    jefferytitan likes this.
  36. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Finally figured out the math... IT'S ALIVE!

    *Cackles maniacally*

    Check this out.... 132 cones... For the price of 4! ... Ray caching in action


    How it works: it samples x cones per frame for 128 frames.... Then over the next 32 frames it copies the cache into the live buffer.... Then it starts again... You can still move around within the voxel volume without invalidating the cache. And at 60fps the tracer cache refreshes ~
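    The cycle logic boils down to something like this (an illustrative sketch, not the actual repo code; the method names and per-frame cone count are placeholders):

    // Accumulate for 128 frames, transfer the cache for 32, then repeat.
    const int AccumulateFrames = 128;
    const int TransferFrames = 32;
    int frameIndex;

    void Step()
    {
        if (frameIndex < AccumulateFrames)
            TraceConesIntoCache(4);           // hypothetical: x cones this frame
        else
            TransferCacheChunkToLiveBuffer(); // hypothetical: 1/32nd of the cache

        // Moving within the voxel volume doesn't reset frameIndex,
        // so the cache survives camera movement inside the volume.
        frameIndex = (frameIndex + 1) % (AccumulateFrames + TransferFrames);
    }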
     
    Last edited: Nov 1, 2018
  37. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    Oh wow, this makes me want to take a new look at SEGI. Now, how about something similar for the voxelisation step... Voxelise smaller parts of the scene per frame?

    Was the cone tracing step a very heavy step? Does this have a big overall performance impact?
     
  38. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Cone tracing is the major GPU cost... It's a big deal.
     
    Shinyclef likes this.
  39. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    *hug* You're doing a great service for humanity.
     
  40. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Here, have a shot of the Sponza... 4 samples per frame ^_^
    300+ fps at 1080p on a GTX 1060.

     
  41. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    Is this with or without cascades? I'm hoping my large open worlds will be doable! It's very pretty :). What a huge performance breakthrough, wow, well done.
     
  42. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    With cascades... Just pushed this to the repo head so people can give it a spin and give me some feedback.

    This feature isn't done yet... Consider it the first pass of many. My goal is to bring performance to a level where it becomes usable in VR.
     
  43. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hi,

    Why do I still think the result is inverted? Where it should be occluded seems to be lit, and vice versa!
    I'm not sure though. Might give it a try later!

    However, I'm glad about the effort you're putting into this :)
     
    TooManySugar and Ninlilizi like this.
  44. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Thank you for pointing that out... I'll look into that.
     
  45. Ninlilizi

    Ninlilizi

    Joined:
    Sep 19, 2016
    Posts:
    294
    Oh... That might have done it... What do you think?

     
  46. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Yep, that looks way more correct to me :)
     
    Ninlilizi likes this.
  47. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    Think you could show us a screeny with an emissive material as a light source?
     
  48. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hi, I just downloaded the zip from git and got this error:

    The script 'SEGI_NKLI' does not derive from MonoBehaviour, but has been added to Game Object 'Main Camera'. Please remove the script from the Game Object to avoid crashes.

    I have Post Processing Stack 2.0.16 installed!

    I am using Unity 2018.3b7. Which version is preferred for testing?
     
  49. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,148
    Hey, I am getting like 0.1 fps in the editor.

    The editor actually freezes for short periods!

    What should I try to optimize?
     
  50. Shinyclef

    Shinyclef

    Joined:
    Nov 20, 2013
    Posts:
    505
    I can confirm. I have the same. Just loading the low poly scene. Also had the same error message: 'SEGI_NKLI' does not derive from MonoBehaviour...