Octree rendering experiment

Discussion in 'General Graphics' started by ArConstructor, Aug 12, 2015.

  1. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    Recently I stumbled upon a blog about Ecstatica (in Russian), where one of the posts describes a simple octree drawing algorithm (I later learned it's called "front-to-back splatting"). The author of that post even provided a demo and the source code of his C++ implementation, which made me wonder whether something similar could be done in a managed environment.

    Unfortunately, I have very little experience with C/C++, so trying to analyze the original author's code quickly bogged me down (plus, the code deals with more than just rendering). In the end, I decided to write a very primitive octree renderer from scratch, working just from the basic ideas outlined in the original post.

    Fortunately, I already had some octree/point-cloud data to work with (from my old experiments with point-cloud rendering), so after a few days I had a basic implementation working. Of course, I'm hardly an expert at C# or optimization, so this is more a proof of concept than a measure of the method's theoretical efficiency in Mono/.NET.
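
    For the curious, the core idea boils down to roughly the following (a bare-bones sketch, not the actual code from the repository -- a real implementation projects the node bounds properly and also skips whole subtrees whose screen rectangle is already covered):

    Code (CSharp):
    // Minimal front-to-back octree splatting sketch (illustrative only).
    // Children are visited nearest-first, and a pixel is written at most once,
    // so anything behind the front-most surface never touches that pixel.
    public class OctreeNode
    {
        public OctreeNode[] Children; // null for a leaf
        public uint Color;
    }

    public class Splatter
    {
        int width, height;
        uint[] pixels;  // output color buffer
        bool[] covered; // which pixels are already written

        // childOrder[i] is the index of the i-th nearest child for the current
        // view; it depends only on the view direction, so it's computed once
        // per frame (identity order used here as a placeholder).
        int[] childOrder = { 0, 1, 2, 3, 4, 5, 6, 7 };

        public void Render(OctreeNode node, int x, int y, int size)
        {
            // Skip nodes that fall entirely outside the screen.
            if (x + size < 0 || y + size < 0 || x >= width || y >= height) return;

            if (size <= 1 || node.Children == null)
            {
                // Node projects to (roughly) one pixel: splat it if still empty.
                int px = x < 0 ? 0 : (x >= width ? width - 1 : x);
                int py = y < 0 ? 0 : (y >= height ? height - 1 : y);
                int i = py * width + px;
                if (!covered[i]) { covered[i] = true; pixels[i] = node.Color; }
                return;
            }

            // Recurse into the children in front-to-back order.
            int half = size / 2;
            for (int i = 0; i < 8; i++)
            {
                int c = childOrder[i];
                if (node.Children[c] == null) continue;
                // Child offsets here assume an axis-aligned orthographic view;
                // a real renderer would project the child's bounds instead.
                int cx = x + ((c & 1) != 0 ? half : 0);
                int cy = y + ((c & 2) != 0 ? half : 0);
                Render(node.Children[c], cx, cy, half);
            }
        }
    }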

    If anyone is interested, I've put my scripts in a repository, along with a Unity project and demo builds. (Note: you'll need at least 1 GB of free RAM to run the demo.)


    Of course, even with models that aren't very detailed, it can hardly reach real-time frame rates (I have a 4-core 3.7 GHz processor), so it's not usable for any kind of game... Perhaps it's pointless to try to write software renderers in a managed language, but I wish there were some voxel rendering library usable from C# :) While it certainly has major drawbacks compared to conventional rendering (polygons on the GPU), even such a primitive voxel renderer has some lucrative features:
    * Consistent, evenly distributed detail (both textural and geometrical) and the absence of hard straight edges. Coupled with finite thickness, this gives models a more "substantial", pleasing-to-the-eye look (a person simply interprets the model as a volumetric image of finite resolution). Of course, polygonal models can look like that too, but you need practically subpixel mesh/texture resolution to get there.
    * In the case of a CPU renderer, all the data resides in main memory, so there's a less strict limit on the number of "batches" (individual objects with individual looks).
    In short, such rendering (if it were fast) would be a natural fit for games going for a "pre-rendered" aesthetic. Using voxel models (instead of actual 2D renderings at a fixed number of angles) lets one compose the level more freely in a true 3D sense, and the player can view objects and the scene from arbitrary angles without storing multiple rotated scene backgrounds.

    So... any opinions? Do you think a fast enough C# voxel renderer can actually be made, or that some tricks could make one usable in a game -- like keeping a buffer of the visible portion of static objects that shifts and updates as the isometric camera moves (see the sketch below), or drawing some models on the CPU while rendering others as point clouds?
    I'm mostly posting this to share the results of my experiment, but I'm curious to hear the thoughts of other developers.
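
    To make the buffer-shifting idea a bit more concrete, here's roughly what I have in mind (an illustrative sketch only):

    Code (CSharp):
    // Hypothetical "shift + update" for a static isometric view: when the camera
    // pans by a whole number of pixels, reuse last frame's colors by copying them
    // at an offset, then re-render only the strips that scrolled into view.
    using System;

    public static class ScrollBuffer
    {
        public static void Shift(uint[] buf, int width, int height, int dx, int dy)
        {
            var shifted = new uint[buf.Length];
            for (int y = 0; y < height; y++)
            {
                int sy = y + dy;
                if (sy < 0 || sy >= height) continue; // rows scrolled in: re-render
                for (int x = 0; x < width; x++)
                {
                    int sx = x + dx;
                    if (sx < 0 || sx >= width) continue; // columns scrolled in
                    shifted[y * width + x] = buf[sy * width + sx];
                }
            }
            Array.Copy(shifted, buf, buf.Length);
            // ...then splat only the nodes whose projection falls in the
            // uncovered strips along two edges of the screen.
        }
    }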
     
  2. jason-fisher

    Joined:
    Mar 19, 2014
    Posts:
    133
    That's really cool. Have you tried revisiting this with Compute Shaders and AppendBuffers? It's starting to look like you're heading towards the Dreams engine (PS4 -- a few dozen compute shaders, "cloud of cloud of pointcloud" engine).
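
    If it helps, the C# side of an append-buffer dispatch is pretty small -- something like this (a sketch; the kernel name "CSSplat" and the buffer names are made up, and a real version would pair this with an indirect draw of the appended count):

    Code (CSharp):
    // Sketch of driving a point-splatting compute kernel from C#. The kernel
    // "CSSplat" and buffer names are placeholders; the API calls are standard Unity.
    using UnityEngine;

    public class SplatDispatch : MonoBehaviour
    {
        public ComputeShader splatShader; // assumed to define a kernel "CSSplat"
        ComputeBuffer pointsIn;           // input octree/point data
        ComputeBuffer splatsOut;          // append buffer the kernel writes into
        ComputeBuffer argsBuffer;         // receives the appended count for indirect draws
        const int PointCount = 1000000;

        void Start()
        {
            pointsIn   = new ComputeBuffer(PointCount, sizeof(float) * 4);
            splatsOut  = new ComputeBuffer(PointCount, sizeof(float) * 4, ComputeBufferType.Append);
            argsBuffer = new ComputeBuffer(4, sizeof(int), ComputeBufferType.IndirectArguments);
        }

        void Update()
        {
            splatsOut.SetCounterValue(0); // reset the append counter each frame
            int kernel = splatShader.FindKernel("CSSplat");
            splatShader.SetBuffer(kernel, "_PointsIn", pointsIn);
            splatShader.SetBuffer(kernel, "_SplatsOut", splatsOut);
            splatShader.Dispatch(kernel, PointCount / 64, 1, 1);

            // Copy the number of appended splats into the args buffer so a later
            // indirect draw can consume however many splats the kernel produced.
            ComputeBuffer.CopyCount(splatsOut, argsBuffer, 0);
        }

        void OnDestroy()
        {
            pointsIn.Release();
            splatsOut.Release();
            argsBuffer.Release();
        }
    }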
     
  3. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    Um, I'm not sure comparing front-to-back octree splatting with clouds of point clouds is quite right -- these algorithms have very little in common. An octree splatter like this, if ported to the GPU, would be much closer to an octree raycaster than to a point-cloud rasterizer.
    The Dreams engine idea is pretty interesting, though. I wonder whether they achieve real-time simply through the PS4's brute force, or whether their engine actually implements some sort of occlusion culling.

    I suppose that for certain kinds of games, even ordinary point-cloud rasterization (i.e. no compute/geometry/tessellation shaders) would be viable on a modern gaming video card. By "ordinary rasterization" I mean just submitting the points as point-topology meshes -- see the sketch below.
    If I remember correctly, my GPU (an AMD Radeon HD 8670D, about 4 years old) was able to render approximately 10 million points at 30 FPS -- so I could, in principle, try a "point cloud style" in my particular project, but it would mean the game would only be playable on high-end computers. However, I suppose at some point GPGPU support will become essentially ubiquitous, and a cleverly organized renderer would then be fast enough to run point-cloud games on consumer hardware.
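
    A minimal sketch of what that looks like (illustrative; large clouds would have to be split into chunks, since a Unity mesh is limited to 65535 vertices):

    Code (CSharp):
    using UnityEngine;

    // Build a mesh that the GPU rasterizes as raw points -- no compute,
    // geometry or tessellation shaders involved.
    public static class PointCloudMesh
    {
        public static Mesh Build(Vector3[] positions, Color32[] colors)
        {
            var mesh = new Mesh();
            mesh.vertices = positions; // one vertex per point
            mesh.colors32 = colors;    // per-point color, read by the vertex shader
            var indices = new int[positions.Length];
            for (int i = 0; i < indices.Length; i++) indices[i] = i;
            mesh.SetIndices(indices, MeshTopology.Points, 0);
            return mesh;
        }
    }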

    Right now, I have zero experience with compute shaders, so I haven't revisited these ideas, unfortunately.
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,243
    Which is actually about on par with, or even a little faster than, the PS4's GPU. The main thing Dreams is doing is making heavy use of temporal anti-aliasing and rendering a greatly reduced number of samples each frame. They might also be doing some reprojection tricks to reduce the rendering requirements, like this:
    http://voxels.blogspot.jp/2015/05/sparse-voxel-octree-svo-reprojection.html
     
  5. jason-fisher

    Joined:
    Mar 19, 2014
    Posts:
    133
    @ArConstructor --

    These notes/screenshots are good reading -- http://advances.realtimerendering.com/s2015/AlexEvans_SIGGRAPH-2015-sml.pdf -- as is the associated presentation.

    What's most interesting to me are the debug screenshots showing the time spent per compute shader; the shader names reveal some insight into their pipeline structure.

    I think they are subdividing the frustum into a coarse voxel grid, turning point-cloud octree CSG lists into brushes (which the artist/editor can create and control in real time), and then splatting those with a painterly pixel shader. The frustum grid controls LOD, and the local octrees store the edits that affect the cells, with a single object having ~100,000 edits.

    No, it isn't quite the same technique, but what you have here is as good a starting point as any towards something along those lines. :)
     
  6. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    Ah, right. I watched that presentation a while ago and forgot they were using temporal antialiasing. Interestingly, the slides mention that (due to the severe undersampling they use) the result may have a "ghostly" look, but in the demos I've seen so far everything looks pretty solid. Maybe it's just not apparent due to slow camera movement and/or video compression? Or perhaps they use some sort of hole-filling postprocessing.
    In any case, their method seems a perfect fit for a dream-like, painterly aesthetic, but it would be interesting to see whether other visual styles can be achieved this way too.
     
  7. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    I made a simple point-cloud reprojection test (on the CPU), and it seems that reprojection on its own doesn't help much. At the very least, a gap-closing post-process is necessary, or the temporal antialiasing mentioned in the Dreams presentation (though I have absolutely no idea how they managed to make it work for stochastic splats).
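
    The test amounted to roughly this kind of forward reprojection (a simplified sketch, not my exact code):

    Code (CSharp):
    using UnityEngine;

    // Forward-reproject last frame's pixels into the new view: unproject each
    // pixel using its stored depth, transform it with the combined
    // inverse(prevViewProj) * nextViewProj matrix, and scatter it into the new
    // frame. Collisions are resolved with a depth test; pixels nothing lands on
    // stay empty -- and those are exactly the gaps that need filling.
    public static class Reprojector
    {
        public static void Reproject(uint[] prevColor, float[] prevDepth,
                                     uint[] nextColor, float[] nextDepth,
                                     int w, int h, Matrix4x4 prevToNext)
        {
            for (int i = 0; i < nextDepth.Length; i++) nextDepth[i] = float.MaxValue;

            for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int src = y * w + x;
                // Reconstruct a normalized-device-coordinate point from the old
                // pixel and move it into the new view (MultiplyPoint divides by w).
                Vector3 p = prevToNext.MultiplyPoint(new Vector3(
                    2f * x / w - 1f, 2f * y / h - 1f, prevDepth[src]));
                int nx = (int)((p.x * 0.5f + 0.5f) * w);
                int ny = (int)((p.y * 0.5f + 0.5f) * h);
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;

                int dst = ny * w + nx;
                if (p.z < nextDepth[dst]) { nextDepth[dst] = p.z; nextColor[dst] = prevColor[src]; }
            }
        }
    }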
     
  8. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,243
    The whole point of temporal anti-aliasing is that you can feed in horribly noisy input and get smooth, soft results out. The style of Dreams lends itself well both to that softness and to the ghosting artifacts common with the technique.
     
  9. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    From what I've read about temporal antialiasing, it finds a match between the pixels of the current and previous frames, and then does a linear blend between their screen positions (maybe I've got that wrong). If they render only a fraction of the points each frame, there is literally zero overlap, so it's a mystery to me how they manage to find a match (maybe they look for neighbors instead of exact matches?) and how their TAA deals with the gaps between points. :confused:
     
  10. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,243
    Current frame and an accumulation buffer, not really the previous frame. It's somewhat implementation-dependent, but the accumulation buffer is essentially all previous frames ever rendered. It may or may not be the image that was displayed to the user the previous frame.

    Yes, it's doing neighbor checks, as well as using a camera-relative velocity buffer to adjust the lookup into the accumulation buffer.

    Dreams being a custom compute-based renderer means they can limit the areas they need to spend time tracing to just the areas that have changed.
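
    In rough pseudocode, the per-pixel resolve looks something like this (a simplified CPU sketch of generic TAA, not Dreams' actual code; buffer layout is illustrative):

    Code (CSharp):
    using UnityEngine;

    // Generic TAA resolve: reproject into the accumulation buffer via the
    // velocity buffer, clamp the history sample to the current pixel's 3x3
    // neighborhood, then blend mostly-history with a little of the current frame.
    public static class TaaResolve
    {
        public static void Resolve(Color[] current, Color[] history, Vector2[] velocity,
                                   Color[] output, int w, int h)
        {
            for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int i = y * w + x;

                // Step backwards along the velocity to find where this surface
                // was last frame in the accumulation buffer.
                int hx = Mathf.Clamp(x - (int)velocity[i].x, 0, w - 1);
                int hy = Mathf.Clamp(y - (int)velocity[i].y, 0, h - 1);
                Color hist = history[hy * w + hx];

                // Neighborhood clamp: reject history that no longer resembles
                // anything near the current pixel (this is what limits ghosting).
                Color lo = current[i], hi = current[i];
                for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                {
                    int nx = Mathf.Clamp(x + dx, 0, w - 1);
                    int ny = Mathf.Clamp(y + dy, 0, h - 1);
                    Color c = current[ny * w + nx];
                    lo = new Color(Mathf.Min(lo.r, c.r), Mathf.Min(lo.g, c.g), Mathf.Min(lo.b, c.b));
                    hi = new Color(Mathf.Max(hi.r, c.r), Mathf.Max(hi.g, c.g), Mathf.Max(hi.b, c.b));
                }
                hist.r = Mathf.Clamp(hist.r, lo.r, hi.r);
                hist.g = Mathf.Clamp(hist.g, lo.g, hi.g);
                hist.b = Mathf.Clamp(hist.b, lo.b, hi.b);

                // Exponential accumulation: over time this averages many
                // jittered samples per pixel, smoothing the noisy input.
                output[i] = Color.Lerp(hist, current[i], 0.1f);
            }
        }
    }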
     
    Last edited: Apr 22, 2017
  11. ArConstructor

    Joined:
    May 27, 2010
    Posts:
    24
    I see. Thanks for the clarification!