
WIP: Space Shooter including procedural elements (and hopefully Rift support)

Discussion in 'Works In Progress - Archive' started by joergzdarsky, Sep 29, 2013.

  1. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    I was experimenting with procedural generation of a universe with procedurally calculated planets you can land on, using Ogre3D and C++. I had been working on that for quite a while but didn't continue for a few months, until I stumbled upon the Golem video of EVE VR with Oculus Rift support.

    This remotivated me to work on that topic again, but to completely change the setting and goal. The idea is to do a mixture of the EVE VR setting but add procedural elements:
    - Asteroids (predefined meshes) at quasi-random locations (I plan to use random functions and Perlin noise to realize asteroid field elements).
    - At least one procedural planet in the scene, done with a cubesphere and octree, that the player can get close to.
    - Inside cockpit view and Oculus Rift support.

    I am currently testing the Unity 3D features to get used to Unity (remember, until now I was only using Ogre3D with C++ and Axiom3D with C#), playing with the features and building up a scene just for fun.
    Right now I made a simple sphere with an Earth texture, added a custom shader, a sun, and a few objects from the NASA website (to test Unity's import), and created an asteroid mesh with Blender (being also new to Blender). Additionally I randomly placed a few asteroids all over the scene. Everything is for testing purposes so far.

    The next steps will be to decide at which scale I'll be using Unity's units (remember I need to place a procedural planet in the scene), and to start with the implementation of the procedural generation (hopefully I can reuse some of my C# code where I created a procedural planet with horizon culling and all that stuff with Axiom3D).

    In parallel I want to create the cockpit and the controls to move through the scene, but with free look in the cockpit for the Oculus support. The cockpit mesh is my worst pain (besides the common issue of large environments and floating point precision); no idea how to model that yet. I'm looking for something like this. Any ideas where or how to start (a Blender tutorial which explains the strategy for doing inside/cockpit meshes)?

    $phzx.jpg

    As soon as the cockpit mesh and the free look are done I plan to order the Oculus dev kit (and switch to the Pro version :-/).
    But right now I am amazed by Unity 3D: fast progress, and a lot of the stuff is much handier compared to doing everything from scratch.

     
  2. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    - DONE First version of a procedurally generated planet (cubesphere)
    - DONE Planet organized in quadtrees (detailed quadtree nodes are generated at runtime)
    - DONE LOD (using the quadtree) considering the camera distance to each plane in the quadtree
    - DONE Horizon culling
    - DONE Dynamic textures (using the quadtree) - simple textures for testing purposes right now, considering only the Perlin factor and changing colors
    - DONE Simple Perlin noise implementation
    - PENDING Optimize the Perlin noise using octaves and further parameters to add detail when going down to ground level
    - PENDING Optimize the LOD to consider the Perlin noise factor too (so that hills are taken into account for LOD as well)
    - PENDING Threading for parsing the quadtree and preparing the vertices / triangle indices and the texture
    - PENDING Optimize textures and add detailed textures
    - PENDING Add shaders and other fancy stuff


    Still a very long way to go.

    With regard to threading, I already did a number of preparations to do the plane preparation (so calculating the vertices, triangles, noise etc.) outside the main thread; anyway, right now that's still not done, and I find threading in Unity a real pain compared to my first tests using Ogre3D (a rough sketch of the idea is below). :sad:
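
    Roughly the idea (just a minimal sketch, not my actual code; class and method names are made up for illustration): a worker thread does only pure math on plain arrays, and the main thread only turns the finished arrays into a Unity Mesh.

    Code (CSharp):
    using System.Collections.Generic;
    using System.Threading;
    using UnityEngine;

    public class PlanePrecalculator : MonoBehaviour
    {
        // Plain data container; safe to fill on a worker thread (no UnityEngine API calls needed).
        class PlaneData
        {
            public Vector3[] vertices = new Vector3[0];
            public int[] triangles = new int[0];
            public Vector2[] uvs = new Vector2[0];
        }

        readonly Queue<PlaneData> finished = new Queue<PlaneData>();

        void Start()
        {
            // Worker thread: calculate vertices, noise and indices off the main thread.
            new Thread(() =>
            {
                PlaneData data = CalculatePlane();
                lock (finished) finished.Enqueue(data);
            }).Start();
        }

        void Update()
        {
            // Main thread: only build the Mesh from the precalculated arrays.
            lock (finished)
            {
                while (finished.Count > 0)
                {
                    PlaneData data = finished.Dequeue();
                    var mesh = new Mesh { vertices = data.vertices, triangles = data.triangles, uv = data.uvs };
                    mesh.RecalculateNormals();
                    GetComponent<MeshFilter>().mesh = mesh; // requires a MeshFilter on the same GameObject
                }
            }
        }

        PlaneData CalculatePlane()
        {
            // Heavy work (vertex positions, noise, triangle indices) would go here.
            return new PlaneData();
        }
    }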

    Next Wednesday I'll have the luck to test the Oculus Rift. In case I get blown away, that might be the moment to switch to Unity Pro and the Oculus and continue with this :)

    $ve15.jpg

    $b4qt.jpg

    $jnv9.jpg
     
    Last edited: Feb 15, 2014
  3. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Right now I am reengineering the whole application, combining the procedural planet with the goal of creating a procedural universe at very large scales. Most techniques for large-scale universe scenes are working fine now. Currently I am concentrating on the procedural planet generation and the spaceship controls.

    - Floating Origin (to keep high precision near the camera in large-scale scenes; see the sketch below) - done
    - Scaled Spaces (to render near and very far objects at the same time) - done
    - Reference Frame Velocity (to allow travel at multiples of light speed) - done
    - Procedural Planet - ongoing
    - Cubesphere Planet - done
    - Quadtree LODing - done
    - Planet Terrain based on Simplex Noise and FBM - done
    - Dynamic Terrain Textures - done / optimization ongoing
    - Horizon Culling - ongoing (first implementation requires optimization)
    - Water Surface - pending
    - Planet Atmosphere - pending
    - Procedurally placed planets - pending
    - Procedural asteroids, fog, nebula - pending
    - Optimized spaceship controls - pending
    - Oculus Rift DK2 support (first tests work, but some crashes here and there) - pending
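
    For anyone curious, the floating origin part boils down to something like this (a minimal sketch, not the actual project code; the threshold is illustrative and a naive loop over all root transforms is used for brevity):

    Code (CSharp):
    using UnityEngine;

    // Attach to the camera. When the camera drifts too far from the world origin,
    // shift the camera and every root object back so the camera is near (0,0,0) again.
    public class FloatingOrigin : MonoBehaviour
    {
        public float threshold = 1000f;

        void LateUpdate()
        {
            Vector3 offset = transform.position;
            if (offset.magnitude < threshold) return;

            foreach (Transform t in FindObjectsOfType<Transform>())
                if (t.parent == null)
                    t.position -= offset; // move all root objects (including this camera) back
        }
    }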

     
    Last edited: Dec 31, 2014
  4. Sir-Spunky

    Sir-Spunky

    Joined:
    Apr 7, 2013
    Posts:
    132
    Nice work!

    I'm working on a similar game. I just got scaled space to work, and I'm also playing around with a DK2. Pretty cool to be able to see the real scales of planets in VR. However, my Moon looks a bit too small at the correct distance from Earth, but it might just be an illusion because of the limited resolution of the DK2.

    I'm impressed by your planets. Did you figure out how to do the planet LOD and terrain stuff on your own, or did you find some tutorials on it? I haven't really given it a go yet because it seems so hard, although I also watched the KSP talk where they explained the details (great video).

    At the moment I'm trying to implement Kepler orbits for better planetary orbits. I found various code sources online, but it's a bit tricky as I'm not very good with maths.
     
    Last edited: Jan 3, 2015
  5. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Hi Sir-Spunky,
    thanks for the kind words. Basically I worked it out myself, as I was not aware of a tutorial that covers the full topic: starting from how to efficiently create a plane (as I was a beginner in 3D programming), the differences between ways of rendering a sphere (cubesphere, icosphere, etc.) and their ups and downs, using noise functions for heights and textures, organizing everything in a quadtree and parsing it including split and merge, precalculating stuff, and how to utilize everything for terrains.
    Of course there are MANY articles and tutorials on each topic separately, so yes, although I figured out most of it myself, it was a lot of reading articles on each topic, plus some cool help from a guy over on an Ogre3D forum who did something similar but had been working on it for much longer. That's why I am currently writing a short series of articles myself that covers the whole thing and one way of doing it (as there are for sure better ways by professionals).

    This was more or less the sequence in which I worked on the planet (I don't know where you are starting):
    • Create a sphere from scratch.
    • Create a simple plane programmatically (so no asset, just C# and Unity in my case) and apply a texture (a ready-made image, not a programmatic one yet). Understand vertices, triangle indices and UV coordinates.
    • Read up on the different ways to create a sphere (search at least for cubesphere (or quadcube) and icosphere (or icosahedron)).
    • Brute-force program (don't worry about code structure) the different ways to build the sphere (I did an icosphere and a cubesphere).
    • Decide on one way of doing it (I took the cubesphere, as I saw clear advantages over the icosphere when it came to LOD and some other things). That's why the further steps are based on the cubesphere strategy.
    • Create the classes that hold vertex positions, triangle indices and UV coordinates (and further stuff later on). You will need them for precalculations.
    • Write efficient service classes that render a plane (in coordinates from -1 to 1). It should be generic enough that you have a class which calculates the vertices, indices and so on from only the four border or edge coordinates (and further parameters, like how many vertices should be created), and renders a plane anywhere in 3D space within these coordinates. You will need this (at least for the quadsphere).
    • Call this function six times and create a cube.
    • Call this function again, but also normalize the vertices. You get a sphere.
    • Multiply by a radius and you get a sized sphere (a minimal sketch of these plane/cubesphere steps is at the end of this post). Now you are basically ready to go on to the interesting stuff. Be prepared to reorganize your code again when you put everything together. But the good thing is, you won't write that much code; it's more about understanding what you have to do (at least that took time for me).
      It is important (!), talking about the class that holds your plane data (the vertices, indices and UV coordinates), that you can work on that data ON A SEPARATE THREAD!
      So avoid Unity stuff that is dependent on the main thread! Just use Vector3 coordinates in arrays/lists etc. The only thing the main thread should need to do is take the arrays that hold the Vector3 vertices, Vector2 UV coordinates and indices, and create the Unity3D mesh out of them.
    • Read up on noise algorithms and strategies (search at least: Perlin noise, simplex noise, fBm). There might be different ways of creating procedural landscapes, but I still like the simplicity of noise; it's interesting and, if cleverly combined, creates impressive results. Other great applications like Infinity and Space Engine use noise, so that can't be wrong.
    • Read further into noise algorithms (there are some interesting noise strategies for rendering rivers and such that I still have to do, but it's a great read and a cool topic). :)
    • Implement or use a noise algorithm (e.g. simplex noise) and apply it to your normalized sphere. Get a feeling for how it changes your world; play with different amplitude and frequency values etc.
    • If you want, try to implement your own dynamic texture now, to get a feeling for how UV coordinates work and to better see your noise. Again, if you want, start with one plane. Just color the vertices based on their height, create a Texture2D yourself, and apply it. I'd personally do this later, as you need to get back to this when it comes to LOD.
    • Read up on quadtrees (or maybe other strategies to organize your data) and how to split and merge. Understand it. You will need it to hold your terrain and organize your "vertices/triangle indices/UV coordinates/etc." class in a node within that tree, and it will greatly support you when it comes to LOD and rendering more detailed planes by splitting them. This is why you need a good generic service class that only requires edge coordinates for your plane: you need it while splitting and merging.
      That took me a while, in fact it was the really toughest part!! Understanding how quadtrees work and how I could use them, and a few implementations. If you have the same issues as me, keep cool and try again.
    • If you understand the concept of quadtrees, implement it. Your only goal for now is to create ONE quadtree and render ONE FLAT PLANE. Try to consider the distance of the camera, e.g. to the center of the plane.
      Then implement split and merge for the quadtree. Still, work on a flat plane. If you have both going,
      create SIX quadtrees, and create SIX planes that form your cube again. A cubesphere now consists of six separate quadtrees; each side of the cube has its own. Check that really every side is rendered correctly and splits and merges when you move the camera around. Put all six quadtrees in a class (if you want, call it "Planet" ;-) ), and have its update function parse the quadtrees for split and merge. You will later implement threads that do the precalculation work, which for now you invoke from the main thread.
    • If that works, again form your six planes into a sphere (normalize and multiply by the radius) and apply noise to make a landscape out of it.
    • Create dynamic textures and understand that you have to change/split/merge the UV coordinates in the same way as you do for the borders of each plane. Try textures that are just a colored map based on height. Then load pixels from a texture atlas. Make sure the texture matches the height data / vertex positions, and that both are correct (water should always be on the lowest level and hills should be grey, not the opposite ;-) ).
      Texturing is a topic I am also still working and reading on. I guess the best result is a combination of a color based on height together with loading from an atlas (grass/stone, etc.). But again, it's also a large topic for me. Avoiding patterns when using prepared grass textures, for example, is tough, and I haven't yet figured out how to do it so that you don't recognize the patterns.
    • From now on performance and memory are your enemies! I am still working on getting deep enough into the quadtree for near-surface details without memory or performance issues. For the size of Earth and a decent resolution you need to split a lot and traverse far down.
    • Optimize your code! Reuse noise calls for textures and vertex positioning; avoid new noise calls where possible.
    • Use separate threads for the precalculation of data, including the noise calls.
    • Throw away data that you don't need once a plane is ready to render, to take care of memory (which on the other hand might be a bad strategy when it comes to performance optimization, but anyway).
    • Read up on frustum culling and horizon culling to avoid invoking precalculations for invisible nodes when going through the quadtrees.
    • Enjoy. Procedural Planets are a great topic! :) If you want I can add a few sites that helped a lot for some things.
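
    To illustrate the "plane from four corners, normalize, multiply by radius" steps mentioned above, here is a minimal sketch (names and parameters are made up; the noise call is only indicated):

    Code (CSharp):
    using UnityEngine;

    public static class CubeSpherePlane
    {
        // Builds the vertices of one plane of the cubesphere from its four corner points on the unit cube.
        // resolution = number of vertices per edge. Triangle indices and UVs are built the same way.
        public static Vector3[] BuildVertices(Vector3 c00, Vector3 c10, Vector3 c01, Vector3 c11,
                                              int resolution, float radius)
        {
            var vertices = new Vector3[resolution * resolution];
            for (int y = 0; y < resolution; y++)
            {
                for (int x = 0; x < resolution; x++)
                {
                    float u = x / (float)(resolution - 1);
                    float v = y / (float)(resolution - 1);

                    // Bilinear interpolation between the four corners -> point on the cube face.
                    Vector3 onCube = Vector3.Lerp(Vector3.Lerp(c00, c10, u), Vector3.Lerp(c01, c11, u), v);

                    // Normalize -> point on the unit sphere; multiply by the radius -> planet surface.
                    Vector3 onUnitSphere = onCube.normalized;
                    float height = 1f; // e.g. 1f + noisePushDown * SimplexNoise(onUnitSphere) later on
                    vertices[y * resolution + x] = onUnitSphere * radius * height;
                }
            }
            return vertices;
        }
    }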
     
    Last edited: Jan 8, 2015
    Sir-Spunky likes this.
  6. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Using a color texture instead of prepared grass/stone atlas textures; in the future I plan to use both.
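
    The idea is simply to look the colour up by normalized height, roughly like this (a small sketch, names made up; a Gradient from blue over green and grey to white gives the typical look):

    Code (CSharp):
    using UnityEngine;

    public static class HeightColoring
    {
        // Builds a Texture2D where each pixel's colour is looked up from a gradient by terrain height.
        public static Texture2D Build(float[,] heights, float minHeight, float maxHeight, Gradient gradient)
        {
            int w = heights.GetLength(0), h = heights.GetLength(1);
            var tex = new Texture2D(w, h);
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    float t = Mathf.InverseLerp(minHeight, maxHeight, heights[x, y]); // 0 = lowest, 1 = highest
                    tex.SetPixel(x, y, gradient.Evaluate(t));
                }
            tex.Apply();
            return tex;
        }
    }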

    Color Texture:


    Result:

     
  7. Abs1981

    Abs1981

    Joined:
    Feb 21, 2015
    Posts:
    4
    This looks great. I've subbed to the thread with the hopes of seeing more updates soon.

    Abs
     
  8. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Not so many changes right now. I got rid of the texture seams. LOD seams are still to be fixed. Another bugfix was that planets were "jumping" when switching between differently scaled spaces (e.g. "ScaledSpace" 1:1000000 to "LocalSpace" 1:1), which is required to render close and very far away objects at the same time. There was an issue when rescaling and repositioning the planets, as I didn't consider the camera position when repositioning. Works now.

    Still focusing on the planet creation. I made a few changes to what the Unity3D main thread does and what is done by separate threads. Right now the main thread only creates the final precalculated split or merged planes of planets; there is one separate thread for parsing the quadtrees, one thread that checks a queue for anything to be precalculated,
    and a threadpool that is used to precalculate planes that are to be split or merged. Works nicely now.

    I am slowly getting desperate trying to get a working horizon culling implementation. A friendly guy over on the Inovae Studios forum gave me a hint towards this picture:
    https://inovaestudios.blob.core.win...9ac3050d37171659bed14ba21b139d006494c1841.png

    It seems to work fine until a certain distance from the planet (probably until roughly twice the planet radius), and then my implementation seems to rate everything as not visible/occluded. It correctly detects when the camera goes below rMin, so the basic distance calculations should be right. Maybe I don't see the issue in my implementation of the "Node Visibility Determination" picture?!

    Code (CSharp):
    // Returns true if testPosition is outside the horizon (occluded),
    // false if testPosition is inside the horizon (not occluded)
    public bool HorizonCullingAlgorithm3(Vector3 cameraPosition, Vector3 testPosition, float radius) {
        float planetRadius = radius;
        float cameraHeightFromOrigin = Vector3.Distance (new Vector3 (0, 0, 0), cameraPosition);
        float rMin = planetRadius + planetRadius * (-0.005f);
        // due to algorithm: planetRadius *= 1+noisePushDown*noise; noisePushdown = 0.005f, noise = [-1,1];
        float rMax = planetRadius + planetRadius * (0.005f);
        // due to algorithm: planetRadius *= 1+noisePushDown*noise; noisePushdown = 0.005f, noise = [-1,1];
        double horizonDistance = Math.Sqrt (Math.Pow (cameraHeightFromOrigin, 2) - Math.Pow (rMin, 2));
        double overTheHorizonDistance = Math.Sqrt (Math.Pow (rMax, 2) - Math.Pow (rMin, 2));
        double radiusOfTheVisibleSphere = horizonDistance + overTheHorizonDistance;
        double testPositionDistance = Vector3.Distance (testPosition, cameraPosition);
        if (cameraHeightFromOrigin < rMin)
            Debug.LogWarning ("HorizonCullingAlgorithm3:: Camera is below rMin!");
        // A point is visible if its distance is less than the radius of the visible sphere
        if (testPositionDistance < radiusOfTheVisibleSphere)
            return false; // point is not occluded
        else
            return true; // point is occluded
    }
    Pictures (not so different from the last ones): texture bleeding gone, LOD seams still there. The next topic will be real textures, after I get the horizon culling to work at some point.

     
    Sir-Spunky likes this.
  9. Sir-Spunky

    Sir-Spunky

    Joined:
    Apr 7, 2013
    Posts:
    132
    Thanks for all the detailed information. I'm not so advanced yet to consider building this on my own, but I'm following your progress with much interest. Keep up the good work!
     
  10. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Thanks for the kind words.

    I've created a description of how you can work with different spaces to realize large space scenes and show close and very far objects at once, in case anyone is interested or wants to do something similar.
    I guess KSP uses a similar strategy, but I found no good description of it anywhere. I realized this in Unity3D, it works like a charm, and it functions perfectly together with floating origin.

    I need to check if I can upload the whole Unity3D script for the "TransitionManager" somewhere, or maybe a small example project. Maybe in a Unity wiki or something?! Anyway, my blog should be ready soon, where I can upload this.

     
  11. FreakForFreedom

    FreakForFreedom

    Joined:
    Jan 22, 2013
    Posts:
    156
    That's a pretty neat image you have there. To be honest, I also researched this matter a few months ago and finally implemented this kind of transition manager (after seeing the KSP presentation). Glad to see that I was not the only one who deemed this solution viable. :)
     
  12. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Over at the Inovae Studios forums a good fellow, NavyFish, implemented the visible-sphere check to cull out a lot of planes during procedural planet generation. He created the paper that I implemented in Unity3D; it works fine!





    However, there can be problems with the visible-sphere check reporting "false" horizon results when you start near the planet surface but on an initially low-level quad / plane. The impact is that the surface does not start to split, because you check only against the raw four edges of the plane, which are behind the horizon; thus the visible-sphere culling says the plane is not visible (although you are directly in front of it). My solution approach, with the friendly help of joeydee from zfx.info, is to perform a volume check of the plane beforehand, to check whether the plane MUST be visible (and not perform the visible-sphere check in that case). This approach can be extended a little to also calculate the nearest distance from the camera to the spherical plane, and to use only this one coordinate for the visible-sphere check. But to solve the issue of the plane not splitting this is not required. If someone faces the same issue, this is how I got around it (a rough sketch of the pre-check is below).
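
    Roughly, the pre-check looks like this (just a sketch of the idea, not the exact implementation; the cone approximation, names and margins are illustrative):

    Code (CSharp):
    // patchCenterDir     = normalized direction from the planet center to the patch center
    // patchAngularRadius = half-angle covering the patch as seen from the planet center (radians)
    // The planet center is assumed to be at the origin.
    bool CameraIsAbovePatch(Vector3 cameraPosition, Vector3 patchCenterDir,
                            float patchAngularRadius, float rMin, float rMax)
    {
        float cameraHeight = cameraPosition.magnitude;
        Vector3 cameraDir = cameraPosition / cameraHeight;

        // Camera within the altitude band of the patch volume (with some margin above the terrain)...
        bool inAltitudeBand = cameraHeight >= rMin && cameraHeight <= rMax * 1.05f;

        // ...and within the cone spanned by the patch (approximated by its angular radius).
        float angleToPatch = Mathf.Acos(Mathf.Clamp(Vector3.Dot(cameraDir, patchCenterDir), -1f, 1f));
        bool insideCone = angleToPatch <= patchAngularRadius;

        // If true: skip the visible-sphere test and treat the patch as visible.
        return inAltitudeBand && insideCone;
    }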



     
    PrimalCoder likes this.
  13. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    I recently extended the algorithms to a combination of SimplexNoise and RidgedMultiFractals:

    SimplexNoise:


    RidgedMultiFractal:


    SimplexNoise and RidgedMultiFractal combined:
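
    For reference, a minimal sketch of one way to combine the two (assuming a SimplexNoise.Evaluate(x, y, z) implementation returning values in [-1, 1]; all constants are just examples):

    Code (CSharp):
    // Plain fBm: sum of simplex octaves with decreasing amplitude.
    float Fbm(Vector3 p, int octaves, float frequency, float amplitude)
    {
        float sum = 0f;
        for (int i = 0; i < octaves; i++)
        {
            sum += amplitude * SimplexNoise.Evaluate(p.x * frequency, p.y * frequency, p.z * frequency);
            frequency *= 2f;    // lacunarity
            amplitude *= 0.5f;  // gain
        }
        return sum;
    }

    // Ridged multifractal: fold the noise around zero so the former zero crossings become sharp ridges.
    float RidgedMultifractal(Vector3 p, int octaves, float frequency, float amplitude)
    {
        float sum = 0f;
        for (int i = 0; i < octaves; i++)
        {
            float n = SimplexNoise.Evaluate(p.x * frequency, p.y * frequency, p.z * frequency);
            float ridge = 1f - Mathf.Abs(n);
            sum += amplitude * ridge * ridge;
            frequency *= 2f;
            amplitude *= 0.5f;
        }
        return sum;
    }

    // Combined terrain height: gentle fBm base plus ridged mountains on top.
    float TerrainHeight(Vector3 unitSpherePoint)
    {
        return 0.6f * Fbm(unitSpherePoint, 6, 1f, 1f)
             + 0.4f * RidgedMultifractal(unitSpherePoint, 6, 2f, 1f);
    }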
     
  14. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Extended the LOD to a LODSphere approach. Around the camera there are 22 LODSpheres of different radii; a larger radius means a lower-resolution area. The strategy is to rotate all bounding boxes of the planes to the north pole, do an AABB-vs-sphere check (for each LODSphere) and, based on the result, do the split or merge (a minimal sketch of the AABB-vs-sphere test is below).
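
    The AABB-vs-sphere check itself is the standard "clamp the sphere center to the box" test; roughly (a minimal sketch, the LOD mapping and radii are illustrative, and the bounding box is assumed to be already rotated to the north pole):

    Code (CSharp):
    // Standard AABB-vs-sphere intersection: clamp the sphere center to the box,
    // then compare the squared distance against the squared radius.
    bool AabbIntersectsSphere(Bounds box, Vector3 sphereCenter, float sphereRadius)
    {
        Vector3 closest = new Vector3(
            Mathf.Clamp(sphereCenter.x, box.min.x, box.max.x),
            Mathf.Clamp(sphereCenter.y, box.min.y, box.max.y),
            Mathf.Clamp(sphereCenter.z, box.min.z, box.max.z));

        return (closest - sphereCenter).sqrMagnitude <= sphereRadius * sphereRadius;
    }

    // Example: required detail = index of the smallest LODSphere the bounding box still touches.
    int RequiredDetail(Bounds rotatedBox, Vector3 cameraPosition, float[] lodSphereRadii) // radii sorted ascending
    {
        for (int i = 0; i < lodSphereRadii.Length; i++)
            if (AabbIntersectsSphere(rotatedBox, cameraPosition, lodSphereRadii[i]))
                return lodSphereRadii.Length - 1 - i; // touches a small sphere -> high detail
        return 0; // outside all spheres -> lowest detail
    }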

    BoundingBoxes:




    LODSphere Strategy:


    And the rotation and AABBvsSphere Strategy:


     
  15. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Update: a good quality video. At the end of the video you can see that I am preparing local-space fog (based on global noise results at the player's position). After adding dark cloud particles this will give a nice effect. Some of the next topics:
    - Water Surface
    - Procedural texturing of planes
    - Atmosphere Shader
    - Noise investigation (for low level planet surface)
    - Space travel effects (warp effects, space particles)


     
    Last edited: Jun 28, 2015
  16. Abs1981

    Abs1981

    Joined:
    Feb 21, 2015
    Posts:
    4
    I really enjoy watching your updates. Please, keep up the good work!

    Abs
     
  17. Abs1981

    Abs1981

    Joined:
    Feb 21, 2015
    Posts:
    4
    Hi Joerg,

    I hope all is well! I really enjoyed reading the posts you made here: https://forums.inovaestudios.com/t/procedural-terrain-rendering-how-to/765/97

    I am not registered on that forum, so I thought I might post here in the hopes that you see it. Regarding this image:

    I wanted to suggest that instead of creating a third and much larger collider cube (as outlined in the Optional box), you might want to consider just having the ScaledSpaceToStellarSpace function check the scale of the object instead. Then, when the scale of the object is unrenderable (let's say smaller than 1:1,000,000), the script deletes the object. This will allow you to render larger objects that can still be seen (e.g. a star) and delete smaller objects that are no longer seen by the player (asteroids, stations, planets, etc.). Something like the sketch below is what I have in mind.
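
    Just a rough sketch (names and the threshold are made up):

    Code (CSharp):
    // Illustrative only: instead of a third, much larger collider cube, remove objects that have
    // been scaled below a renderable threshold during the ScaledSpace -> StellarSpace transition.
    const float minRenderableScale = 1f / 1000000f;

    void ScaledSpaceToStellarSpace(GameObject obj)
    {
        if (obj.transform.localScale.x < minRenderableScale)
        {
            Destroy(obj); // too small to ever be visible again (asteroids, stations, small planets...)
            return;
        }
        // ...otherwise rescale/reposition it for stellar space as usual, so large objects (e.g. a star) stay visible...
    }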

    Anyway, I hope that you are still continuing this, and I secretly wish that you will release a demo project for those of us "less experienced" to base our work on. Until then I will do my best to recreate what you outline in your posts, but I am only an artist and coding does not come naturally to me. :)

    Take care,

    Abs
     
  18. Testfor

    Testfor

    Joined:
    Jan 22, 2015
    Posts:
    55
    Great job, I really like your informative posts!
     
  19. MiniMe_

    MiniMe_

    Joined:
    Jan 7, 2015
    Posts:
    14
    Nice work!
    You're doing amazing work, keep it up!
     
  20. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Hi, thanks for the kind words and the support! I have started to continue a blog describing the project and the procedural topic in more detail. Still some work to do, but it's coming along. I will link it here shortly, and update this post as soon as more chapters are added.

    Abs1981, thank you for the continuous hints and additions, they are very valuable.
    Honestly, I am a little stuck integrating the transition between spaces using the colliders. That's why there were no more updates for a while. Of course there would be much more to work on in the meantime, but I want to get this topic finished and a suitable space transition integrated first, as I feel it is very essential for the overall implementation.

    The problem with the colliders and multiple spaces is that there is a risk that objects start to bounce between spaces.
    Looking at the picture in post #10 where I described the transition between multiple spaces (the behavior would be the same no matter whether I use colliders or distance checks), and reading the image from top to bottom (so the object moves from localspace to scaledspace and stellarspace), you can see that I noted for each transition "scale down by factor x / reposition nearer".
    So, while doing this, and given the fact that the size of the objects can vary a lot (remember the different scales of e.g. a moon, a large planet, and a sun), there is a risk that while repositioning an object that transitions from localspace to scaledspace, it is resized but repositioned closer to the localspace collider, and can hit the localspace collider again.
    I somehow need to manage that when an object has left the outer collider, it is not repositioned into the inner collider during the transition.

    My hope is that this can be handled by using the right functions of the colliders (hasEntered for the inner collider, hasExited for the outer collider) and the right amount of distance between the colliders and the scale factor. But so far I have not been able to get the bouncing under control (one common way to avoid it, hysteresis between the enter and exit thresholds, is sketched below).
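
    The hysteresis idea: the threshold for entering a space is clearly smaller than the threshold for leaving it, and both checks are done on the real (scale-corrected) distance, so the repositioning itself cannot push the object back across the boundary. A rough sketch (names and distances are illustrative, and the same idea applies whether the thresholds are distance checks or collider radii):

    Code (CSharp):
    const float enterLocalSpaceDistance = 80000f;   // switch ScaledSpace -> LocalSpace below this real distance
    const float leaveLocalSpaceDistance = 120000f;  // switch LocalSpace -> ScaledSpace only above this one

    // realDistance = the object's true (unscaled) distance to the camera, tracked independently
    // of the editor-space position it currently has in whichever space it lives in.
    void UpdateSpace(SpaceObject obj, float realDistance)
    {
        if (obj.inLocalSpace)
        {
            if (realDistance > leaveLocalSpaceDistance)
                MoveToScaledSpace(obj);  // rescale + reposition; cannot immediately re-enter because of the gap
        }
        else
        {
            if (realDistance < enterLocalSpaceDistance)
                MoveToLocalSpace(obj);
        }
    }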

    I will need at least one outer collider for the transition between localspace and scaledspace. Remember localspace is valid for distances of roughly zero to ~100 km from the object (at Earth scale, the transition from scaledspace to localspace would in this case happen when a player reaches the atmosphere, as it is 100 km thick), due to floating point precision issues beyond 100,000 units. So objects in localspace are VERY close. Due to the large size of objects and the large distances, I think I will need another transition from scaledspace to stellarspace, as, even if I scale objects down a lot in scaledspace, they will soon hit the 100,000 units maximum distance again, after which floating point precision issues occur in scaledspace.
    Some would argue "why, for Kerbal Space Program a ScaledSpace next to a LocalSpace was sufficient", but a) they faked the size of the planets a lot (e.g. Kerbin's radius is only 600 km, see http://wiki.kerbalspaceprogram.com/wiki/File:Kerbol_sizecomp_chart.jpg ), and b) they did "only" one solar system.
    But I by no means want to talk their work down, as I looked at and used a lot of their strategies and approaches and I think they did great with KSP; I just wonder how they work out when to move an object from scaledspace to localspace and vice versa, whether they used colliders, and if yes, how. The problem I have they should have had as well.

    So I think I would need at least two outer colliders for transitions into the next smaller scale, one for localspace and one for scaledspace. And I am still continuously looking at how to use them, as I like the approach.
    The idea to remove objects when they should be invisible is nice. This check also needs to consider the distance to the camera; just using the size might not be precise enough to decide when to remove an object.
    I need to check how this fits into my approach of creating objects (I continuously check a 3D matrix of positions around the player up to a certain distance, in a separate thread, using simplex noise, and compare the results to the list of objects already created). There could be a risk of objects being deleted and created over and over again if this is not in sync. For this approach I might need to reactivate (I implemented it already) my approach of checking every position only once and only checking new positions while the player is moving. Anyway, I am going to have a look into this, as removing objects when they are not visible is the best and most precise option.
    Apart from that I am also thinking about converting planets / suns into particles at a certain distance. This should be more resource friendly, and hopefully gives the option to render more stars at once.

    Will keep this updated. :)
     
  21. Abs1981

    Abs1981

    Joined:
    Feb 21, 2015
    Posts:
    4
    Insightful response, Joerg. :)

    So to counter the collider distance issue, why not perform a check on real-world distance as well? Bear with me here if I do not make sense, but I think that a distance check would solve your problem.

    The LocalCollider may be used to show everything at a 1:1 ratio up to a distance of 10,000 editor units. Let's say that at the 1:1 scale the editor units match real-world units, so 10,000 editor units = 10,000 real-world units (m or whatever).

    When the scene object (e.g. a planet) gets switched over to the ScaledSpace layer, it begins to move closer to the camera in editor units. So, for example (and pardon my flawed numbers because I am not doing the math), if the planet is now at 5,000 editor units because it is moving closer to the camera, then at a 1:10,000 scale it would really be 50,000,000 m away.

    So when you perform your logic check, it would be something like:

    if (planetDistance < 10000 && LocalCollider) {
        Debug.Log("Planet must be in Local space.");
    } else {
        Debug.Log("Planet is in Scaled space... and maybe add a check for Stellar space using this logic.");
    }

    I am looking forward to subscribing to your blog. If I can provide you with the occasional model, please let me know. :)

    Abs
     
  22. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    I've put the collider strategy aside, as it is really complicated in combination with the scaled space strategy.

    I have paused the planet generation (waiting for a hint on shader use for the terrain surface from a good fellow on another forum) and started on asteroid generation, which works fine right now. I didn't use Unity's LOD system but created my own. Each asteroid checks its distance to the camera (the size of the asteroid should also be considered; that's still to be done) and recalculates its mesh on demand. Each has 7 LOD levels. For the lower 4 I use an icosphere; the lowest icosphere has only 12 vertices, which I think is a fairly low amount for a round object far away, and it works quite well. I first tested a plane (4 vertices) for the farthest LOD, but the fact that I continuously needed to check and rotate the plane to face the camera every few frames made it less usable in my tests.
    So after the icospheres with 0 to 3 subdivisions depending on the LOD, I switch to a cubesphere with non-shared vertices for better lighting. The highest LOD is then made out of a cubesphere with ~60,000 vertices.
    For the deformation I first displace the vertices with a Worley noise (a cellular noise, for an organic structure) and additionally a simplex noise for some roughness on top.
    The vertices are calculated by a background thread (one per asteroid, initiated when a LOD changes). Performance is pretty good, and an asteroid field of ~1000 visible asteroids is created pretty quickly (a rough sketch of the distance-based LOD selection is below).
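
    The LOD selection per asteroid is basically just distance thresholds (a rough sketch; the distances and the rebuild call are made up, and the real version also has to factor in the asteroid size):

    Code (CSharp):
    // 6 thresholds -> 7 LOD levels; level 0 = closest / highest detail (cubesphere),
    // the higher levels use the icospheres with fewer subdivisions.
    static readonly float[] lodDistances = { 200f, 500f, 1000f, 2000f, 4000f, 8000f };

    int currentLod = -1;

    void UpdateLod(Vector3 cameraPosition)
    {
        float distance = Vector3.Distance(transform.position, cameraPosition);

        int lod = lodDistances.Length; // farther than all thresholds -> lowest detail
        for (int i = 0; i < lodDistances.Length; i++)
            if (distance < lodDistances[i]) { lod = i; break; }

        if (lod != currentLod)
        {
            currentLod = lod;
            // Worley + simplex displacement recalculated on a background thread, mesh applied on the main thread.
            StartMeshRebuildOnBackgroundThread(lod);
        }
    }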

    Besides the planet creation with shaders, I now want to revisit the positioning of objects. Right now I position suns and planets independently. Each type is positioned by simplex noise and a certain threshold on the noise result. I am going to change that so that only suns are positioned by simplex noise, and planets are afterwards created near suns at a certain orbit / distance. Lighting should then become easier. As I want to create asteroid fields not only in empty space, but also in ring formations around a planet, I need to work out how to calculate the placement of asteroids in a ring formation around a planet/center coordinate (one simple way is sketched below).
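
    For the ring formation, one simple way is to pick a random angle, a random radius inside the ring band and a small vertical jitter per asteroid (a sketch; the ring is assumed to lie in the planet's XZ plane, otherwise rotate the result by the ring's orientation):

    Code (CSharp):
    using UnityEngine;

    public static class AsteroidRing
    {
        public static Vector3[] Generate(Vector3 center, float innerRadius, float outerRadius,
                                         float thickness, int count, int seed)
        {
            var rng = new System.Random(seed); // deterministic for a given seed
            var positions = new Vector3[count];
            for (int i = 0; i < count; i++)
            {
                float angle = (float)(rng.NextDouble() * 2.0 * Mathf.PI);
                float radius = Mathf.Lerp(innerRadius, outerRadius, (float)rng.NextDouble());
                float height = ((float)rng.NextDouble() - 0.5f) * thickness;
                positions[i] = center + new Vector3(Mathf.Cos(angle) * radius, height, Mathf.Sin(angle) * radius);
            }
            return positions;
        }
    }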

    That's it for the moment.

     
    Last edited: Oct 11, 2015
  23. Fazan

    Fazan

    Joined:
    Mar 8, 2015
    Posts:
    1
    What you're doing here is just extraordinary, good luck with it!

    UPD:
    A question: earlier you mentioned that as the camera approaches a planet, it (the planet) moves to localspace. But if a planet is actually 6000 km in radius, transferring it to localspace would cause floating point precision issues. Could you please explain that part in more detail?
     
    Last edited: Oct 29, 2015
  24. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    There is a gap in the frequency of updates right now. That's because I am currently trying to switch from CPU-based planet plane generation to GPU-based. As I am relatively new to shaders this takes a while, but a good fellow is supporting me here, so hopefully I should soon be able to get back to my current planet stage, but fully done in shaders.

    @Fazan: Yes, that's indeed a problem you face. Well, there are two ways to deal with it. The first one is that you do not move the whole planet to localspace, but only the planes nearest to you, step by step.
    Or, at least for Earth-like planets, you accept that you have floating point precision issues farther away. Whether that's noticeable depends on when you switch the space / how close you are to the planet. If you are a few kilometers away, the horizon's angle already helps enough that the precision issue only occurs behind it. At least that is what I have noticed so far. But I don't say it's not an issue.
    Especially while I tried to deal with transparent water or an ocean layer, precision was indeed an issue when switching spaces. So yeah, I still need to deal with this topic further.
     
  25. Sackstand

    Sackstand

    Joined:
    May 26, 2015
    Posts:
    10
    Hi, I am taking a similar approach, but until today I couldn't get around the problem with the particle system and shifting it (floating origin). In the frame when the scene/objects are shifted, the interpolation of the particle system breaks up and creates distorted particles/flickers and so on. How did you manage this?
     
  26. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Hi Sackstand, well, I don't think I can help here. I didn't notice bigger issues when using particles to render volumetric nebulae, space particles (for movement indication) and dust (dark particles in slightly lit areas). But all my particles were static (one shot); maybe that's why they worked for me.

    Well. But even with some drawbacks I think I will still stick with the floating origin approach.
    The scaled space approach gives me more headaches. I will most likely make some changes here, as discussed with abs1981. But I am not sure if I will go the collider route; I would really love to do something more generic without caring too much about which scale an object currently is in.
    I am thinking about something where an object remains in the same layer or space, but beyond a certain threshold distance moves away more slowly the farther it gets, and gets scaled down using the angular diameter formula: https://en.wikipedia.org/wiki/Angular_diameter (a small sketch of the idea is below).
    This would be done roughly every frame or during the floating origin actions.
    Not sure if this will work; I need to investigate it further.
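
    The math behind it is simple: for small angles the angular diameter is roughly size / distance, so if the object is rendered at a clamped distance, scaling it by renderDistance / realDistance keeps its apparent size. A small sketch (illustrative only; the true position and distance would be tracked separately, e.g. in doubles):

    Code (CSharp):
    // Clamp the rendered distance of a far object while preserving its angular diameter.
    void UpdateFarObject(Transform obj, Vector3 cameraPosition, Vector3 realDirection,
                         float realDistance, float realScale, float maxRenderDistance)
    {
        if (realDistance <= maxRenderDistance)
        {
            obj.position = cameraPosition + realDirection * realDistance;
            obj.localScale = Vector3.one * realScale;
            return; // close enough: true position and scale
        }

        float shrink = maxRenderDistance / realDistance; // < 1, keeps size/distance constant
        obj.position = cameraPosition + realDirection * maxRenderDistance;
        obj.localScale = Vector3.one * (realScale * shrink);
    }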

    I haven't focused on that yet, as my current priority is completely on switching to GPU-based rendering, especially for the planets.
     
    Last edited: Dec 28, 2015
  27. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Video of the current state. I completely moved away from CPU-based terrain calculation and do everything on the GPU now.
    The terrain is now calculated in a compute shader; its results (in a compute buffer) are handed over to a vertex and surface shader where a once-created Unity "prototype" mesh is manipulated (vertex displacement, colouring etc.) for each plane.

    Works nicely, and I got rid of quite a number of problems (light showing through terrain, the planet getting culled by Unity when the planet center gets out of the camera frustum, etc.). But I need to optimize the culling to keep the vertex count and the underlying quadtrees small.

    Right now I render 32x32-vertex planes and a quadtree down to depth 20. I use the LODSphere approach by NavyFish to define the required LOD while recursively going down the quadtrees. While parsing down the quadtree I check if something can be split or merged (I then put these nodes in a split or merge queue that is worked on after the quadtree traversal has finished). For each node I do a
    1. plane volume check (check if the camera is within the spanned volume of a plane),
    2. horizon culling check,
    3. frustum culling check (well, very roughly; lacking a good strategy in Unity, it only checks if an object is behind the camera, but not whether it is within the frustum).
    Urgently required is proper frustum culling; unfortunately I haven't yet been able to implement a good frustum check in Unity while traversing down the quadtrees for splits/merges (a minimal sketch of a possible check using Unity's GeometryUtility is below).
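
    What I have in mind is something along these lines, assuming each quadtree node already carries a world-space AABB (the same one used for the LODSphere checks); Unity's GeometryUtility should do the actual work (a minimal sketch):

    Code (CSharp):
    using UnityEngine;

    public class QuadtreeFrustumCuller
    {
        Plane[] frustumPlanes;

        // Call once per frame before traversing the quadtrees.
        public void RefreshFrustum(Camera camera)
        {
            frustumPlanes = GeometryUtility.CalculateFrustumPlanes(camera);
        }

        // nodeBounds = world-space AABB of a quadtree node.
        public bool IsVisible(Bounds nodeBounds)
        {
            return GeometryUtility.TestPlanesAABB(frustumPlanes, nodeBounds);
        }
    }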

    But well, that's ongoing, and things are starting to look nice. Here's a video of the current state I recorded once I got the light issue and the culling working, which allowed me to go down to the terrain for the first time. While leaving the planet again, I noticed there is still a bug, as some parts of the planet didn't merge back to a low LOD. But it looked so beautiful that I had to keep recording and take a separate picture ;)



     
  28. USFTethys

    USFTethys

    Joined:
    Apr 4, 2016
    Posts:
    1
    Are you designing this with multiplayer support in mind? I want to create a similar project. How much customization will this game have? It looks really awesome; are you going to make this open source at all? I would be interested in helping, but I am very new to this. KSP has inspired me to try to make something with a little more depth, more exploration, and multiplayer. Good luck with your project, I have bookmarked this topic for further study.
     
  29. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    I have multiplayer in mind, yes, however it's not my current focus, as I am still at the prototype level and looking for the best way to set up these huge scenes optimally, especially when it comes to procedurally creating different kinds of celestial bodies (planets, asteroids etc., which I am currently working on) and rendering everything at a large scale with good performance.

    As far as I can see, the worst thing you can do is to start with something at a small scale Unity is comfortable with (so a few kilometers) and then later switch to larger scales. You need so many tricks to overcome the limitations that very likely the stuff you've created before won't work well anymore. So I think a long study, research and prototyping phase is good advice when you start working on this kind of project in Unity.

    Multiplayer, anyhow, I continuously keep in mind and think here and there about how it could be done. But when you stick to techniques like floating origin, reference frame velocity and so on, I think it's pretty certain you won't be able to use many of the built-in defaults in Unity, but will have to implement your own multiplayer code to sync everything yourself.
     
    PrimalCoder likes this.
  30. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    I've been reworking some parts of the procedural content generation code so that it can create either a planet or smaller objects like asteroids, with a more flexible per-object configuration when it comes to LOD depth, number of vertices, normal map etc. Works quite well now.



    However, a current problem of mine is the number of draw calls per frame. Even if you push down the number of vertices etc., you still have to draw a lot of planes (surface segments), even at the lowest LOD for an object (which is 6 plane meshes or draw calls per quadsphere). In an asteroid or planet scenario this multiplies and increases a lot with each additional asteroid to be drawn, or with each plane split to increase the LOD (for a planet or an asteroid). The worst problem you face is at a medium height above a true-scale planet, where you need to increase the number of planes for terrain height detail, but where culling techniques such as horizon culling don't yet have the huge effect they have closer to the ground.

    So without efficient rendering you can quickly get stuck in terms of performance when you create a huge planet or asteroid fields. As I am creating everything on the GPU now (which is basically a lot faster than on the CPU), batching is the way to go. But with intensive use of the DX11 ComputeShaders (which are really cool) to create the terrain on the GPU, batching in the current stable Unity release fails, as setting the buffer on the material will prevent batching (batching is currently only supported with textures and types like vector or float, but not with compute buffers). So you quickly have thousands of unbatched draw calls.

    I've requested on the Unity forums that MaterialPropertyBlock support a SetBuffer(), and thanks to Ara it finally found its way into 5.4 (still beta). So my current next step is to have everything running batched. Once I get this to work there should be a huge performance gain, as every single object is made out of the same single (flat plane) prototype mesh, which is displaced by the compute buffer in the vertex shader.

    After using the MaterialPropertyBlock with the ComputeBuffer in 5.4 I see another change coming up, as, as far as I can see, batching won't work with the DrawMesh() command which I currently use (it's ideal for procedural content). As far as I can see, batching will only work with separate game objects with a MeshRenderer applied. DrawMesh would be way better, as a game object per plane would be unnecessary overhead. But anyway, researching the usage of ComputeBuffers and getting batching to work is my current focus (a minimal sketch of the DrawMesh + MaterialPropertyBlock path is below).
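
    For reference, the DrawMesh path with the 5.4 MaterialPropertyBlock looks roughly like this (a minimal sketch; whether such calls actually get batched is exactly the open question above):

    Code (CSharp):
    using UnityEngine;

    // One DrawMesh call per plane, with the per-plane ComputeBuffer passed via a MaterialPropertyBlock
    // (MaterialPropertyBlock.SetBuffer is available from Unity 5.4 on).
    void DrawPlane(Mesh prototypeMesh, Material sharedMaterial, Matrix4x4 matrix, ComputeBuffer planeBuffer)
    {
        var props = new MaterialPropertyBlock();
        props.SetBuffer("patchGeneratedFinalDataBuffer", planeBuffer); // buffer name as used in the vertex shader

        Graphics.DrawMesh(prototypeMesh, matrix, sharedMaterial, 0, null, 0, props);
    }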
     
    Last edited: Apr 22, 2016
  31. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    After looking into MaterialPropertyBlock with the final 5.4 version I had to realize that this won't help me solve the draw call problem, as my actual problem is that setting different textures (normal map and surface map) and buffers (for height data) on the material for each single plane breaks the batching.

    Now I am trying to implement a hint by zeroyao from Unity3D in this very interesting thread about GPU instancing.
    http://forum.unity3d.com/threads/gpu-instancing.376635/page-4#post-2651648

    To sum the process up shortly: to render planet planes, I pass (per planet plane) a RenderTexture and a ComputeBuffer to a ComputeShader, which fills the ComputeBuffer e.g. with position data and the RenderTexture with normal map data. Everything is then passed to a vertex/surface shader that displaces the mesh's vertices with the ones from the ComputeBuffer, and applies the normal map and surface map RenderTextures.

    What I am now trying is to create one large tiled RenderTexture (so, a texture atlas) where I draw e.g. the normal map of a plane into each tile. So if my normal maps are 64x64 each, I create a large 1024x1024 RenderTexture I can use across many planes. The ComputeShader and the vertex/surface shaders receive the necessary offset information to write (in the ComputeShader) and read (in the surface shader) at the right location of the RenderTexture.
    Once something similar is done with ComputeBuffers, so that lots (thousands) of planes can share the same large RenderTexture and ComputeBuffer, batching should start to do its magic, resulting in a lot fewer draw calls.

    I am now facing one problem while implementing this for RenderTextures that I haven't solved yet. I implemented a SharedTextureManager that creates a new mega-RenderTexture on demand and manages which of its slots are used. If a new plane requires a RenderTexture to create e.g. a normal map, it requests a slot (= a tile of that mega RenderTexture) from the SharedTextureManager. It receives the reference to the texture and some offset information. If all slots of a texture are used, the SharedTextureManager creates a new one and adds it to the list. Then the usual process keeps going: the RenderTexture reference, including the offset information, is passed to the ComputeShader, which fills the pixels with the normal information.

    C#
    Code (CSharp):
    this.sharedTextureManager = new SharedTextureManager(nPixelsPerEdge, 1, 2, RenderTextureFormat.ARGBHalf);
    quadtreeTerrain.sharedNormalMapTextureSlot = this.sharedTextureManager.GetSharedTextureSlot();
    quadtreeTerrain.patchGeneratedNormalMapTexture = quadtreeTerrain.sharedNormalMapTextureSlot.Texture();
    ....
    ComputeShader
    Code (CSharp):
    RWTexture2D<float4> patchGeneratedNormalMapTexture;

    #pragma kernel CSMain2
    [numthreads(1,1,1)]
    void CSMain2 (uint2 id : SV_DispatchThreadID)
    {
        // Get the constants
        GenerationConstantsStruct constants = generationConstantsBuffer[0];

        [... calculate normals ...]

        // Prepare the texture ID
        uint2 textureID = uint2(id.y + constants.sharedNormalMapTextureSlotPixelOffset.x, id.x + constants.sharedNormalMapTextureSlotPixelOffset.y);

        // Create the object-space normal map
        float w = constants.nPixelsPerEdge;
        float h = constants.nPixelsPerEdge;

        // Store the normal vector (x, y, z) in an RGB texture.
        float3 normalRGB = normal.xyz / 2;
        patchGeneratedNormalMapTexture[textureID] = float4(normalRGB, 1);
    }
    After the dispatch to the ComputeShader, the RenderTexture and the offset information are passed to the vertex/surface shader. A _NormalMapOffset float4 is passed, where X and Y define the downscale of the tile compared to the overall texture, and Z and W define the UV offset.

    C#
    Code (CSharp):
    quadtreeTerrain.material.SetTexture("_NormalMap", quadtreeTerrain.patchGeneratedNormalMapTexture);
    quadtreeTerrain.material.SetVector("_NormalMapOffset", quadtreeTerrain.sharedNormalMapTextureSlot.Offset());
    Shader
    Code (CSharp):
    uniform float4 _NormalMapOffset;

    void vert(inout appdata_full_compute v, out Input o)
    {
        UNITY_INITIALIZE_OUTPUT(Input, o);
        #ifdef SHADER_API_D3D11
        // Read data from the buffer
        float4 position = patchGeneratedFinalDataBuffer[v.id].position;
        float3 patchCenter = patchGeneratedFinalDataBuffer[v.id].patchCenter;

        // Perform changes to the data
        // Translate the patch to its 'planet-space' center:
        position.xyz += patchCenter;

        // Apply data
        v.vertex = float4(position);
        o.uv_NormalMap = v.texcoord.xy;
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);
        o.objPos = v.vertex;
        #endif
    }

    void surf(Input IN, inout SurfaceOutputStandard o)
    {
        // Apply the normal map
        fixed3 normal = tex2D(_NormalMap, IN.uv_NormalMap * _NormalMapOffset.xy + _NormalMapOffset.zw);
        o.Normal = normal;
    }
    My current problem: everything works fine unless there is a "free" slot in the RenderTexture, "free" meaning that not all areas of the RenderTexture were previously written to in the ComputeShader.

    If the RenderTexture is 1x1 slots, everything is fine.


    If the RenderTexture is 1x2 slots, everything is still fine. You can see that for each RenderTexture both slots were used.


    If the RenderTexture is 1x4 slots, the texturing goes wrong; the area is textured grey. In the debugger you can see that only half of the RenderTexture was used. Although I use offsets to read from the RenderTexture, my impression is that the uninitialized RenderTexture areas, even though they are never read from in the shader, seem to lead to this behavior.


    Is that possible, or the expected behavior when using RenderTextures? I was hoping that this wouldn't become an issue, as I assumed that using the offset information and only reading areas of the RenderTexture that were previously written to should work.

    Does anyone know, please? That would really help me a lot, as I am starting to struggle a little (and I hope that getting this to work could be my breakthrough on the draw call issue)!
     
  32. joergzdarsky

    joergzdarsky

    Joined:
    Sep 25, 2013
    Posts:
    56
    Found the issue: I had one RenderTexture.Release() call in my quadtree code at the point where a quadtree node is split. Of course this must not be done when working with a shared RenderTexture.

    Works like a charm now. Only thing I now need to get under control is the bleeding due to the atlas.

    Asteroid:


    Shared RenderTexture:


    EDIT: A simple half-pixel correction while calculating the UV offset and tile scale did the trick for the bleeding (a small sketch of the correction is below). Time to move on to creating a shared ComputeBuffer now to enable batching.
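
    The correction itself is the usual half-pixel trick (a small sketch; the exact numbers depend on the atlas layout, and the resulting float4 matches the _NormalMapOffset layout from the previous post: x,y = tile scale, z,w = UV offset):

    Code (CSharp):
    // tilePixelOffset = pixel position of the tile inside the atlas; tileSize/atlasSize in pixels.
    Vector4 TileUvOffset(Vector2 tilePixelOffset, float tileSize, float atlasSize)
    {
        float scale = (tileSize - 1f) / atlasSize;                            // shrink the tile by one pixel...
        Vector2 offset = (tilePixelOffset + Vector2.one * 0.5f) / atlasSize;  // ...and start half a pixel in

        return new Vector4(scale, scale, offset.x, offset.y);
    }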



     
    Last edited: Aug 20, 2016
    PrimalCoder likes this.
  33. IO-Fox

    IO-Fox

    Joined:
    Jul 14, 2014
    Posts:
    86