
After playing minecraft...

Discussion in 'General Discussion' started by jc_lvngstn, Oct 8, 2010.

Thread Status:
Not open for further replies.
  1. Royall

    Royall

    Joined:
    Jun 15, 2013
    Posts:
    120
    @alexzzzz
    Got it working by adding extra checks for when block, light, and mesh generation is complete, instead of relying on the Start function only.
    My generation time for 7x7 visible chunks is 12.2 seconds at the moment. Is this normal, or rather slow?
     
  2. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,447
    Last edited: Feb 4, 2015
  3. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @Royall
    Sorry I was unable to get back to you about lighting sooner; however, I'm glad alexzzzz was able to help you with it (his and goldbug's posts are the main ones I figured it out from).

    Alexzzzz is right about the generation times; it does seem rather slow, but without specs on your machine it is totally a guess. I do most of my development on my laptop, which is an ancient (6-year-old) AMD Athlon II. The generation times between my laptop and my desktop are a WORLD apart. As an example, on my laptop, in a totally new project with a completely empty scene, I do not even break 200 FPS on a good day, with nothing else running in the background.

    [EDIT] Also, are those times in the editor or a standalone build? Big difference there as well.

    Two separate but connected big things that, when used together, gave me the biggest overall speed increase (a rough sketch of the combined idea follows the list):
    • Single flat array for all loaded blocks in the world - Helps speed EVERYTHING up, especially when it comes to chunk bounds checking, as there are no boundaries to check between chunks, which means no messy calculations to figure out which chunk a block is in and what the position of the block is within that chunk. I tried numerous methods to handle data on a chunk-by-chunk basis; ALL of them were slower than a single flat array.
    • Bit shifting and powers of 2 - Goldbug covers this on his blog (http://www.blockstory.net/node/56) in one of the posts, and it should definitely be used in conjunction with a flat array.
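    For illustration, here is a minimal sketch of the combined idea (not taken from anyone's actual engine in this thread; the class name, sizes, and method names are made up):

    Code (csharp):

    // Illustrative only: one flat array for the whole loaded area, sized in powers
    // of 2 so the index is a few shifts instead of per-chunk bounds math.
    public class WorldData
    {
        const int SizeXBits = 8, SizeYBits = 7, SizeZBits = 8;   // 256 x 128 x 256 blocks

        readonly byte[] blocks = new byte[1 << (SizeXBits + SizeYBits + SizeZBits)];

        static int Index(int x, int y, int z)
        {
            // Equivalent to x + 256 * (y + 128 * z), just cheaper.
            return x | (y << SizeXBits) | (z << (SizeXBits + SizeYBits));
        }

        public byte GetBlock(int x, int y, int z)          { return blocks[Index(x, y, z)]; }
        public void SetBlock(int x, int y, int z, byte id) { blocks[Index(x, y, z)] = id; }
    }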

    Also, look at your lighting and mesh generation functions; those are probably your slow points. Look for ways to skip processing segments of blocks. How and where really depends on your code, lighting setup, etc.
     
  4. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @alexzzzz
    I hate to make a post here just for this, but I'm not going to register on Vimeo just to leave a comment and I can't message you directly here. I've only ever watched the videos you have posted on this forum directly, and just recently watched a bunch of your Vimeo videos.

    Your in-game building mechanic videos are phenomenal. Is that using your voxel engine? I was unable to tell from the Vimeo page, and I noticed that most of the voxel ones are called Minecraft-like engine and these ones are not.
     
  5. RJ-MacReady

    RJ-MacReady

    Joined:
    Jun 14, 2013
    Posts:
    1,718
    O.K., I have read this entire thread but I still don't understand.... How do I render a cube??????
     
  6. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @RJ MacReady
    Question's kinda vague...

    I'll assume you are asking how to build and render a cube from code, which is pretty easy to find the answer to by searching anywhere (the forums, Google, Unity Answers) for procedural mesh generation... However, I'll give you the basic steps, no code included.

    Create an instance of a Mesh.
    Assign vertices to the mesh.
    Assign triangles to the mesh.
    Assign UVs to the mesh.

    Assign the mesh to the MeshFilter of a GameObject that also has a MeshRenderer.

    Make sure you have assigned a material to the MeshRenderer.

    In the case of a cube you have 6 faces, and each face has 4 vertices and 2 triangles built from those 4 vertices. Each vertex has a UV coordinate.
    So in total you will have 24 vertices, 12 triangles, and 24 UVs.
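    For anyone who does want to see those steps written out, here is a minimal, illustrative sketch (not taken from any engine in this thread); attach it to an empty GameObject and give the MeshRenderer a material:

    Code (CSharp):

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch: build a unit cube (6 faces = 24 vertices, 12 triangles, 24 UVs)
    // and assign it to the MeshFilter. The MeshRenderer just needs a material.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class CodeCube : MonoBehaviour
    {
        List<Vector3> verts = new List<Vector3>();
        List<int> tris = new List<int>();
        List<Vector2> uvs = new List<Vector2>();

        void Start()
        {
            // Corners are listed clockwise as seen from outside each face.
            AddFace(new Vector3(0,0,0), new Vector3(0,1,0), new Vector3(1,1,0), new Vector3(1,0,0)); // back
            AddFace(new Vector3(1,0,1), new Vector3(1,1,1), new Vector3(0,1,1), new Vector3(0,0,1)); // front
            AddFace(new Vector3(0,0,1), new Vector3(0,1,1), new Vector3(0,1,0), new Vector3(0,0,0)); // left
            AddFace(new Vector3(1,0,0), new Vector3(1,1,0), new Vector3(1,1,1), new Vector3(1,0,1)); // right
            AddFace(new Vector3(0,1,0), new Vector3(0,1,1), new Vector3(1,1,1), new Vector3(1,1,0)); // top
            AddFace(new Vector3(0,0,1), new Vector3(0,0,0), new Vector3(1,0,0), new Vector3(1,0,1)); // bottom

            Mesh mesh = new Mesh();
            mesh.SetVertices(verts);
            mesh.SetTriangles(tris, 0);
            mesh.SetUVs(0, uvs);
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }

        // One face = 4 vertices, 2 triangles, 4 UVs.
        void AddFace(Vector3 a, Vector3 b, Vector3 c, Vector3 d)
        {
            int i = verts.Count;
            verts.AddRange(new[] { a, b, c, d });
            tris.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });
            uvs.AddRange(new[] { new Vector2(0, 0), new Vector2(0, 1), new Vector2(1, 1), new Vector2(1, 0) });
        }
    }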
     
    RJ-MacReady likes this.
  7. Royall

    Royall

    Joined:
    Jun 15, 2013
    Posts:
    120
    I'm on a desktop with:
    Intel Pentium G860 dual-core @ 3ghz
    Geforce GTX 660
    16 GB RAM

    My 12-second generation of 7x7 visible chunks is in the editor though. Remember it actually generates almost 11x11 chunks (with the invisible chunks), based on alexzzzz's lighting technique.

    After seeing his video I kind of lost hope haha

    Here is the project btw (package): http://s000.tinyupload.com/?file_id=10420999754570835190
    I also switched to C# in the hope that things would speed up, but this wasn't the case...
     
    Last edited: Feb 9, 2015
  8. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @Royall

    OK, yeah, given those specs it is a bit slow. However, don't get discouraged watching alexzzzz's videos. I know it's hard, his stuff is insane, but he's been working on it for quite a while.

    OK, now on to some helpful stuff:

    1) All your calls to GetComponent and FindObjectsWithTag - These need to go; they are going to be a source of slowdown, especially with how often you are calling them. You need to store the relevant information in a way that does not require these kinds of calls.

    2) Mathf.PerlinNoise - This is also slow, especially when compared to simplex noise; switching will help.

    3) Your noise generation also needs a slight adjustment. Technically you are doing it right, but you are generating multiple samples of Perlin noise for each block, which is slow. Look into tri-linear interpolation. Using triLerp you can greatly reduce the number of blocks you have to sample, which will greatly speed up the generation process.

    4) Your lighting - As I said in a post above, you are storing everything on the chunk level, which is causing you to have to check for chunk boundaries. This is slow. Switch to a single array (preferably a flat one) that is stored somewhere you can access without the need for GetComponent or FindObjectsWithTag.

    6) Flat array & bit shifting & powers of 2 - This goes with the lighting, and terrain gen, and just about everything. You will see the biggest benefit from making this switch.

    7) Checking if a chunk has a neighbor - When a chunk is created or destroyed it should find its neighbors and store them, and also alert its neighbors that they have a new neighbor to store. This prevents constantly having to "look for" a chunk and check if it's a neighbor.

    8) Remove the logic for everything but rendering from the chunk component (which would happen anyway if you go to a flat array). Why? Because everything else should be done in a background thread, and other threads can't access Unity objects (not 100% sure this applies to components, but I think it does).

    Quick Question:
    Why is your terrain data array bigger by 2 in all dimensions than the actual size of the chunk?



    Hope that helps some man
     
    Last edited: Feb 9, 2015
  9. Royall

    Royall

    Joined:
    Jun 15, 2013
    Posts:
    120

    Thanks for your reply! I started right away!
    I have some questions and comments though:

    1) How would you check for chunk neighbours without it? I can't think of another way of finding a neighbor GameObject without looping through all of them and comparing coords...

    2) I will look into c# simplex, thanks for the tip!

    3) It's just some test noise... Not really what I want to use in the end, but I will look into the interpolation.

    4) What do you mean by this? If I have one array of terrain/light data for all visible chunks, wouldn't this be a problem when new chunks spawn and get destroyed as the character walks? How will this array shift right when new chunks spawn?

    6) I tried to convert my arrays to flat arrays. I did it like this:
    Code (csharp):

    int[] terrainData2 = new int[32768];
    int[] lightData2 = new int[32768];

    int getTerrainData(int x, int y, int z) {
        return terrainData2[x + 16 * (y + 128 * z)];
    }
    void setTerrainData(int x, int y, int z, int val) {
        terrainData2[x + 16 * (y + 128 * z)] = val;
    }
    int getLightData(int x, int y, int z) {
        return lightData2[x + 16 * (y + 128 * z)];
    }
    void setLightData(int x, int y, int z, int val) {
        lightData2[x + 16 * (y + 128 * z)] = val;
    }
    It looks like this sped things up by 2/3 seconds, which is a good start!

    8) Don't really get this part...

    And about your question: I did it to have an extra row of terrain data on each side so I can check for data outside the chunk boundaries, instead of checking the other chunk, which is what I do now.
     
  10. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @Royall

    1) There are a couple different ways to do this, but a common one is to use a singleton.
    http://unitypatterns.com/singletons/
    This will allow you to access an instance of a class (and its properties) from pretty much anywhere. Once that is set up, it's just a matter of having the needed references stored in the class.

    I use a circular flat array inside my class to store references to my chunks. I'll explain more on this in a minute.
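    A minimal sketch of that singleton pattern (the World/Instance names here are just placeholders, not anyone's actual class):

    Code (csharp):

    using UnityEngine;

    // Illustrative singleton holding world-wide data, so chunks don't need
    // GetComponent/FindObjectsWithTag to reach it. Attach once to a scene object.
    public class World : MonoBehaviour
    {
        public static World Instance { get; private set; }

        // e.g. the flat block array, light data, chunk references, etc. live here
        public int[] terrainData;
        public int[] lightData;

        void Awake()
        {
            Instance = this;
        }
    }

    // Anywhere else in code: World.Instance.terrainData[...]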

    3) Don't forget this one!! The repeated calls to PerlinNoise (Mathf or simplex) in your current code that happen for every block are probably (guessing, since I didn't check times) one of the biggest slowdowns in your code, so even if you optimize the rest of it, this is still going to lag you down till it's fixed. With interpolation you only have to sample 9 blocks for a given area (I do a 16x16x16 area). So I'm only having to generate noise for 9 blocks (with 6 octaves of noise) versus 4096 (with 6 octaves of noise) for a 16x16x16 area, which as you can see is a big difference right there. The difference is even bigger once you realize that the noise function itself is actually performing interpolation to generate the noise.

    4) I'm not too good at explaining this one. Basically, as you move, you are overwriting old data in the array with new data. Somewhere in this thread (I can't find it off hand) this is covered very well, with some nifty diagrams to help explain it if I remember correctly. Also, this is where powers of 2 and bit shifting come in handy and provide a huge speed increase. Goldbug goes over powers of 2 and bit shifting in his blog, though I don't think he covers flat circular arrays.
    http://www.blockstory.net/node/56

    I'll see if I can find the posts about the circular array in this thread somewhere and post links if I can.
    Otherwise, just search for information on circular arrays.

    6) That is exactly how it should be set up. The next step is setting it up in powers of 2 so you can use bit shifting to calculate the array index, which is much faster than multiplying and such. Bit shifting also fixes an issue with chunks that have a negative position index due to rounding.
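    For example, the multiply-based getters posted above could become something like the following (same 16x128x16 layout; this is just a sketch of the idea, not a drop-in from anyone's engine):

    Code (csharp):

    // Same layout as x + 16 * (y + 128 * z), but with shifts.
    // Works because the sizes are powers of 2: x and z are 0-15 (4 bits), y is 0-127 (7 bits).
    int[] terrainData2 = new int[32768];

    int getTerrainData(int x, int y, int z) {
        return terrainData2[x | (y << 4) | (z << 11)];
    }
    void setTerrainData(int x, int y, int z, int val) {
        terrainData2[x | (y << 4) | (z << 11)] = val;
    }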

    8) When you're doing all your generation and lighting functions in the main thread of the program, you essentially lock up the thread until whatever you are trying to do is done. You can "hide" this issue with coroutines and yield statements, but then you're just making it take longer to finish your code, because you're spacing it out over multiple frames to prevent the game from locking up, all the while causing a horrific FPS drop. This is why, when your chunks are generating, the game becomes very jittery and unresponsive till it's done.

    So to prevent the game from locking up during generation and lighting and such, you keep all of the code that handles this sort of stuff separate from any sort of Unity object (classes that derive from any Unity class). The reason you can't have the code in a class that is derived from anything Unity is that all of the Unity classes are thread-locked (can only be accessed by the main thread). So by separating this stuff out you can do MOST of the processing in a separate thread.

    So Terrain Gen, Lighting Gen, and Mesh generation should all be handled in a class that does not derive (inherit) from any unity classes. This will allow you to do these processes in a separate thread as quickly as they can without having to worry about Coroutines and yield statements to keep the game from locking up.

    The only part of generating a chunk you do in the main thread is actually assigning the vertices and uvs to a Mesh Object, and assigning that Mesh object to the MeshFilter.

    Hope that makes sense.
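    A rough sketch of that split (illustrative only; the ChunkBuilder/MeshData names are made up, and real engines usually reuse a pool of worker threads rather than starting one per chunk):

    Code (CSharp):

    using System.Collections.Generic;
    using System.Threading;
    using UnityEngine;

    // Plain data is built on a worker thread; only the final Mesh assignment
    // touches Unity objects, so it happens on the main thread in Update.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class ChunkBuilder : MonoBehaviour
    {
        class MeshData   // plain C# class, safe to fill from any thread
        {
            public List<Vector3> vertices = new List<Vector3>();
            public List<int> triangles = new List<int>();
            public List<Vector2> uvs = new List<Vector2>();
        }

        readonly Queue<MeshData> finished = new Queue<MeshData>();

        public void BuildAsync()
        {
            new Thread(() =>
            {
                MeshData data = new MeshData();
                // ... terrain gen, lighting, face culling: fill the lists here ...
                lock (finished) finished.Enqueue(data);
            }).Start();
        }

        void Update()
        {
            MeshData data = null;
            lock (finished) if (finished.Count > 0) data = finished.Dequeue();
            if (data == null) return;

            // Main thread only: push the prepared data into the actual Mesh.
            Mesh mesh = new Mesh();
            mesh.SetVertices(data.vertices);
            mesh.SetTriangles(data.triangles, 0);
            mesh.SetUVs(0, data.uvs);
            mesh.RecalculateNormals();
            GetComponent<MeshFilter>().mesh = mesh;
        }
    }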



    OK, so to try to explain the circular flat array real quick; please excuse the crappy text diagram. For simplicity's sake I am not using powers of 2 for the size of the loaded map, but it ideally should be.

    A  .  .  .  .  .
    B  .  .  .  .  .
    C  .  .  .  .  .
    D  .  .  .  .  .
    E  .  .  .  .  .
       1  2  3  4  5

    So if this were a flat array it would be flatArray[25], with position 0 holding E1, position 1 holding E2, position 5 holding D1, position 6 holding D2, 7 holding D3, etc. etc., till position 24 holds A5.

    A 5x5 map means the character can see 2 chunks in any direction (not counting the chunk they are in).

    So now, for simplicity's sake, we will say the character is at chunk position 2,2 (we are ignoring the Z plane for now).
    This means the character can see from chunk 0,0 to chunk 4,4.

    So now assume 0,0 is E1 and 4,4 is A5. This puts the character at C3. Let's say the character moves from C3 east to C4, which is 3,2.

    The character can now see from 1,0 (E2) to 5,4 (out of the array bounds). The character can no longer see ANY of the chunks in column 1, because they can only see 2 chunks away and moved 1 chunk east, so all the chunks in that column are out of viewable range.

    The new chunks would be loaded in Column 6, except there is no column 6. So we wrap around instead. Column 6 is actually Column 1.

    So the chunks that would get loaded into A-E6 actually get loaded into A-E1. Does that make any sense? If not, I'll look for those other posts; they had much better diagrams.
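    In code, the wrap-around is just a modulo on the chunk coordinates. A sketch with a hypothetical 5x5 loaded area (with a power-of-2 size the modulo becomes a bit mask, which is the faster version mentioned above):

    Code (csharp):

    // Illustrative: map a world chunk coordinate to a slot in a circular flat array.
    // The same slot gets reused as the player moves and old chunks scroll out of range.
    public class ChunkCache
    {
        const int LoadedSize = 5;                                      // 5x5 loaded chunks here
        readonly object[] slots = new object[LoadedSize * LoadedSize]; // your chunk type goes here

        static int Slot(int chunkX, int chunkZ)
        {
            int x = ((chunkX % LoadedSize) + LoadedSize) % LoadedSize; // handles negative coords
            int z = ((chunkZ % LoadedSize) + LoadedSize) % LoadedSize;
            return x + z * LoadedSize;
        }

        public object Get(int chunkX, int chunkZ)           { return slots[Slot(chunkX, chunkZ)]; }
        public void   Set(int chunkX, int chunkZ, object c) { slots[Slot(chunkX, chunkZ)] = c; }
    }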



    If you have any more questions just ask and I'll help where I can.
     
  11. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,447
    I can't message you either. :)

    The stuff on Vimeo is not as phenomenal as the stuff in the subsequent versions; I'm just feeling too lazy to capture new videos. I'm kind of fed up with voxels for the moment. The project uses polygons only; they allow more freedom and are not as scary as they seem.
     
  12. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @alexzzzz

    So apparently there is a setting to allow people to start a conversation with you. I assume it's off by default, because I have never messed with my settings and it was set up to not accept conversations. From your account page it's under contact details, and you have to check the box to allow people.

    You should definitely grab a new video or two, because I'm totally interested in seeing how it could be even more phenomenal than what you have up there. Seriously, I almost never watch videos from start to finish (I either get bored and stop, or skip through parts), and I must have watched almost all of the building mechanic videos all the way through.

    I've also considered moving away from voxels... kinda, or at least how it's all handled. I've been thinking that most of the data I need is actually stored in the mesh once it's built (positions, UVs, etc.) and that there is a way to manipulate the mesh without needing to update the underlying data structure that blocks are currently stored in. As much as I hate to use the comparison, think EQ Next in the way they edit volumes.

    The easy part is adding to a volume; the hard part I have not totally wrapped my head around yet is subtracting from a volume and adding in new triangles to create holes and new faces where there were none before. I'm sure it's easier than I'm thinking, but I have not put too much time into it yet either, as I've just kept chugging along on my standard voxel engine. Plus I'm debating how quick it would be to sift through a mesh and find the closest vertices to the area being edited.
     
  13. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    If you want to investigate this then 'Constructive Solid Geometry' is probably the term to search for.
     
    Last edited: Feb 12, 2015
  14. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @DavidWilliams

    Ooooohhhh, that is exactly the kind of stuff I was talking about.
    Thank you for the proper terminology for this, should make figuring it out a lot easier. You the Man!
     
  15. Cherno

    Cherno

    Joined:
    Apr 7, 2013
    Posts:
    515
    Ladies and Gentlemen, I am again at a point where I need your help. What I currently have is a terrain of tiles in two dimensions (no blocks, just tiles @ six vertices each with different height values based on a heightmap). No endless terrain, and I use the normal chunk-based approach to keep vertex counts manageable.

    So, the map is 256x256 tiles big. Each chunk is 16x16 tiles, so it's 16x16 chunks. These are all there permanently. Now I want to load and destroy all the other GameObjects like trees, buildings and so on based on their distance from the player, so I don't always have to have them in the scene; it could get very resource-intensive quickly.

    How am I to do this? The most basic approach would be to check if the player has entered a new tile or chunk, then iterate through all currently existing GameObjects and check which ones are farther away than the threshold, and if so, save their position, rotation and other important values to an array and/or file, and then destroy them. Likewise, to make new objects appear, I would iterate through all currently not-existing objects in an array and check their positions vs. the player's, and if it's below the threshold, instantiate them and assign all values from the array class.

    This would of course work, but it feels dumb having to go through all objects every time an update occurs. What better ways are there? I can imagine a chunk-based approach where each chunk keeps an array of the objects that are inside its boundaries, and then I would only have to iterate through those object arrays that belong to the chunks in question. This, however, would mean a fairly good amount of management in the background to ensure objects that move, are destroyed, and so on, are properly updated in their respective arrays.
     
  16. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @Cherno

    You hit the most appropriate answer on the head with your chunk-based approach. Yes, there is some management that would have to be done for this, but it's a trade-off. You're either going to have to loop through ALL objects, or keep an organized, up-to-date list of objects in a given chunk; there's no other way around it.

    Depending on how many objects you plan to have loaded on average it could be quick enough to just loop through all objects and avoid any background management, but that is a question only you can answer.

    Also, if you do decide to go the way of keeping things organized per chunk, it might be quicker to give each object a unique ID and use a dictionary over an array for accessing objects in that chunk. With arrays you have to loop through the whole thing (or some part of it) till you find the object you want; dictionaries use some voodoo magic (hashing) in the background to avoid having to loop through everything to find something.
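    A tiny sketch of that idea (the class and field names are just for illustration):

    Code (csharp):

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative: each chunk keeps its own objects keyed by a unique ID, so a single
    // object can be added, looked up, or removed without scanning a whole array.
    public class ChunkObjects
    {
        readonly Dictionary<int, GameObject> objects = new Dictionary<int, GameObject>();

        public void Add(int id, GameObject go)        { objects[id] = go; }
        public void Remove(int id)                    { objects.Remove(id); }
        public bool TryGet(int id, out GameObject go) { return objects.TryGetValue(id, out go); }

        // Still easy to iterate when the whole chunk unloads.
        public IEnumerable<GameObject> All { get { return objects.Values; } }
    }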

    Hope that helps
     
  17. Cherno

    Cherno

    Joined:
    Apr 7, 2013
    Posts:
    515
    Thanks for the suggestions. I think I will indeed try to implement a chunk-based approach to object loading.
     
  18. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    Hey,

    So I came across another voxel tutorial that I don't think has been posted in this thread before. I gave it a brief glance over and it seems useful, so I figured I'd post it here in case it can help anyone else.

    http://alexstv.com/index.php/category/voxels
     
  19. MasterAcer90

    MasterAcer90

    Joined:
    Feb 24, 2015
    Posts:
    4
    Hi everyone,

    54 pages later, :-$, but this is a great thread for voxel engines, with a lot of good ideas.

    @alexzzzz
    Thanks for the great videos of your voxel engine. Can I get or buy the project files? I'd gladly take them in exchange for a donation.
    I would be happy about an answer, and thank you in advance for your effort.

    Sincerely: Acer

    P. S. Sorry for my bad English. ;-D
     
    Last edited: Feb 25, 2015
  20. RatherGood

    RatherGood

    Joined:
    Oct 12, 2014
    Posts:
    21
    Wanted to say thank you to this thread and share my own fun with MineCraft blocks.

    Construction:


    Destruction:
     
  21. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    What does your script organization / object-oriented design look like for the typical Unity voxel project? I'm curious how you guys organize your classes and separate your logic. For instance, putting blocks in their own separate class with constants to reference them.
     
  22. overture

    overture

    Joined:
    Mar 17, 2015
    Posts:
    3
    I really wish that someone could write a nice tutorial for a voxel engine that is actually efficient and playable. I've looked at all of them, and tried the best ones. The end product often can't even run in the debugger due to garbage collector issues; clearly ironing that part out is hard.

    I found that of all the Voxel tutorials out there none give beginners/intermediates basic information about where to learn how to structure programs (you know, like a flow chart on paper).

    Some of the stuff that you guys have shared on here is phenomenal, especially the game engine videos Alex has posted.
    Would anyone be interested in writing a detailed tutorial for a basic Voxel engine that works efficiently? I would love to find out what to read to learn how to do that in C#.
     
    Last edited: Apr 4, 2015
  23. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @CorWilson

    I would have to say everyone has different organization for their scripts and classes, which varies greatly based on the way their brain works, past coding experiences, and whether they are self-taught or formally taught programmers.

    To use your example of blocks, I have a basic IBLOCK interface that all block classes implement. All references to a block elsewhere in the code are of IBLOCK type. Each block class implements IBLOCK and sets its own TypeID value. I then have a Dictionary<byte, IBLOCK> that each block class is loaded into, using its TypeID as its key in the dictionary.

    I have avoided hard-coded references to things like BLOCK_GRASS etc. because the end goal is to have block definitions be an external file that is loaded in, so new block types can be added easily and existing ones can be modified.
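    Roughly, the shape of that setup looks like the following sketch (the member names and the GrassBlock example are guesses for illustration, not RuinsOfFeyrin's actual interface):

    Code (csharp):

    using System.Collections.Generic;

    // Illustrative sketch of the described setup: an IBLOCK interface, concrete block
    // classes that set their own TypeID, and a registry keyed by that ID.
    public interface IBLOCK
    {
        byte TypeID { get; }
        bool IsSolid { get; }
    }

    public class GrassBlock : IBLOCK
    {
        public byte TypeID { get { return 1; } }
        public bool IsSolid { get { return true; } }
    }

    public static class BlockRegistry
    {
        static readonly Dictionary<byte, IBLOCK> blocks = new Dictionary<byte, IBLOCK>();

        public static void Register(IBLOCK block) { blocks[block.TypeID] = block; }
        public static IBLOCK Get(byte typeId)     { return blocks[typeId]; }
    }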

    All of the files for blocks and block-related things are in a BLOCK folder, which is in a VOXEL_ENGINE folder (which contains all that is related to the base engine), which is inside my Scripts folder (holds ALL scripts), which is inside my Assets folder. Inside my VOXEL_ENGINE folder there are also folders for Chunks, Region, IO, Lighting and Mesh, which hold all the related script files for the voxel engine. All game-related scripts are kept in another folder called GAME which resides inside the Scripts folder.

    Is this the best way? I don't know, but it's the way closest to how it works in my brain, so I structure everything like that. Hope that helps.


    @overture

    I figured i would respond to you as well.

    First, you are kinda touching on the same thing as CorWilson as far as looking for a flowchart, or any sort of organizational guideline for structuring your program. It's going to depend entirely on who is writing it, and there is no definitive "right" way, but there are lots of wrong ways.

    Second, what you are asking for in terms of a tutorial is actually a lot, and is essentially asking someone else to read through and summarize all the important details of this thread (which is possibly one of the longest-running threads on the site, at a little over 4 years), and not only this thread, but several other sites and threads that have been linked to from here as well. That is a lot. I would wager that if someone compiled everything in an orderly, straightforward fashion it could most certainly be an actual book, especially when you consider some of the non-beginner topics you would have to cover in order to create an efficient engine. Also, as far as I know there is no "best" way, and several people branch off and take different approaches to achieve similar goals depending on what they are looking for in the end.

    With that said, here are some useful links that cover pretty much everything.

    http://forum.unity3d.com/threads/after-playing-minecraft.63149/ - Start at page 1
    http://www.blockstory.net/node/56
    http://alexstv.com/index.php/category/voxels
    http://0fps.net/2013/07/03/ambient-occlusion-for-minecraft-like-worlds/

    Hope that helps.
     
    GibTreaty likes this.
  24. overture

    overture

    Joined:
    Mar 17, 2015
    Posts:
    3
    @RuinsOfFeyrin,

    Actually I think my question was a bit more basic than his; I was more asking "what is a software flow chart, what's it actually called, and where can I learn more about that?"

    As far as the tutorial goes, I was hoping someone could take part of their already completed code and explain how it works. There are a bunch of efficient voxel engines shared on the thread, but none of the authors have made tutorials.

    I have already read through most of the links you provided (save for ambient occlusion), and have completed both versions of alexstv's voxel tutorials. I think his tutorials are very good, but not quite as efficient as I'm looking for. Definitely, though, one of the best sources of info out there for how the game actually works. I'm sure there are many approaches and solutions, but I have not found one explained that is actually as efficient as the ones seen here. Some people shared source code but I couldn't understand it... lol.

    I've decided to take this free course to learn more about C#,
    http://www.microsoftvirtualacademy.com/training-courses/c-fundamentals-for-absolute-beginners

    It's very basic, but I came over from the 8 other programming and interpreted languages I already knew and neglected to go over C# properly. Thanks for your reply though, I appreciate the input.
     
    Last edited: Apr 4, 2015
  25. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    Hi everyone,

    May I ask how you are handling local lights in a dark scene together with a shader that supports a sunlight amount (0 = dark ... 1 = bright)?
    I'm talking about vertex lighting systems similar to what most of you are using (I'm using one too in my voxel engine), not interested in Unity point lights and such.

    I mean for example, imagine you have a totally dark scene (sunlight set to 0), all vertex colors (with their baked light values) will be multiplied by a 0-value sunlight, hence resulting in pitch black vertex colors.
    So how would you be able to have both bright local light sources and a completely dark sunlight without having to run the light flooding algorithm all over again for all visible chunks whenever you change the global sunlight value?

    Currently I have a working voxel engine with a basic lighting system and day/night scene support, but local light sources are obviously completely dark at night together with everything else, and using Unity's built-in lights is not an option at this point for performance reasons.
     
    Last edited: Mar 21, 2015
  26. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @OMGWare

    Here's the trick: you are actually sending two sets of information to the shader via the vertex color.

    The R,G,B channels are used to calculate and hold your local light values (conveniently this allows you to have R,G,B lighting).

    The A channel is used to store how much sunlight CAN reach the given vertex. So rather than how much sun is currently reaching that vertex, it represents what percentage of the current sunlight value can reach that block.

    You then also have a float on your shader that sets the current level of the sun (let's call it _SUN).

    Then in your shader you use the vertex A channel multiplied against your _SUN value to determine how much sunlight is affecting that vertex (we will call this _FINAL_SUN).

    Then you combine your new _FINAL_SUN with your R,G,B channels, and voila, you have sunlight that you can adjust simply by changing the _SUN variable on the shader.
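    On the C# side that boils down to something like this sketch (the names and the "_SUN" property string are only examples; your shader may expose a different property):

    Code (csharp):

    using UnityEngine;

    // Illustrative sketch: bake block light into RGB and the sun "exposure" into A once,
    // then drive day/night by changing a single material float (no chunk rebuilds).
    public static class VoxelLightBaking
    {
        // Called while building the chunk mesh, once per vertex.
        public static Color32 BakeVertexColor(byte blockR, byte blockG, byte blockB, byte sunExposure)
        {
            // RGB = local/block light, A = percentage (0-255) of sunlight that CAN reach this vertex.
            return new Color32(blockR, blockG, blockB, sunExposure);
        }

        // Called from a day/night cycle script; "_SUN" is whatever float the chunk shader exposes.
        public static void SetSunLevel(Material chunkMaterial, float sunLevel01)
        {
            chunkMaterial.SetFloat("_SUN", Mathf.Clamp01(sunLevel01));
        }
    }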


    Hope that helps and makes sense; if not, just let me know and I'll try to clarify.
     
  27. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    @RuinsOfFeyrin

    Thank you, you actually helped me understand that I already had everything set up but was using my shader slightly wrong.
    Maybe I should have an extra alpha channel for blocks affected by local lights so they won't receive sunlight and will be completely bright at night.
    Do you use an extra channel for that too?
    Furthermore, I am currently just using a single sunlight channel, but would like to expand to full RGB lights. If I understand correctly I should be running the light-flooding algorithm for the R, G, B channels separately; can you confirm this?
    Thanks for your time!
     
  28. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @OMGWare

    Not sure I exactly follow the questions; I can think of two ways to interpret them. So instead I will just give an overview of how I have it set up.

    I store a Color32 (which is R,G,B,A) for every block that is loaded (I do this in an array separate from my BLOCK array).

    For simplicity's sake you have two separate passes for lighting: sunlight, and block lighting (torches, blocks that glow, etc.).

    During your sunlight pass you are determining how much sunlight CAN affect the block, not how much currently IS affecting it; it is more like a percentage value. You assign this to the A value of the Color32 of the corresponding block. I use a value between 0-255, for a reason explained later.

    Now you have your block lighting pass. Here each block type should have a color it emits as well as an "intensity" value.
    The intensity is used to tell how far the light can travel; for simplicity it is essentially the value you would use to calculate light if you were not doing R,G,B. So now you process the light out from the block, interpolating between the color the light is emitting and black, based on the intensity. For each block you mix the interpolated value of the current light you are processing with the block's current lighting values stored in the R,G,B channels (from the Color32 mentioned above) and store the outcome back in the R,G,B channels.

    So now you have run your two passes and you have a Color32 field for each block.
    The R,G,B channels of the Color32 field hold the combined/mixed outcome of all the block lighting.
    The A channel holds the value representing what percentage of the available light can reach this block. I use a 0-255 value so that when it is passed to the shader and interpolation is done in the fragment shader, I'm getting a value between 0 and 1.

    Now when you build your mesh you use this Color32 field as the Vertex Color (mind you I'm assuming no ambient occlusion).

    Your shader should have a float value you can assign on it for the sunlight level. (Optionally it can also have an additional R,G,B field if you want the light from the sun/moon to cast a colored glow.)
    We will call this variable SUN_LEVEL and it can be between 0 and 1.


    Inside the shader:

    The vertex A value will be between 0 and 1.
    It is telling you what percentage of the available sunlight to use. So you multiply A * SUN_LEVEL and you have how much sunlight is affecting this vertex; this value is somewhere between 0 and 1 and we will call it FINAL_SUN. Use this value to generate a color for your lighting. If you have an R,G,B field for the sunlight level you would multiply it by FINAL_SUN to get your sunlight color; otherwise multiply FINAL_SUN by (1,1,1) to get a "white" light.

    The R,G,B channels of the vertex color already contain the values for the lighting from the blocks, as we did all that work before we got to the shader.

    From here it is a matter of how you combine/mix/etc. your sunlight color and your block light colors as to what your outcome will look like. Different ways make it look different.
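    Written out, the per-vertex math described above looks roughly like the sketch below (in C# just as a reference for the formula; the final "mix" step shown is only one possible choice, since the combining is up to you):

    Code (csharp):

    using UnityEngine;

    // Reference-only version of the per-vertex math described above.
    public static class VoxelLightMath
    {
        // vertexColor: RGB = baked block light, A = % of sunlight that can reach the vertex.
        // sunLevel: the shader's SUN_LEVEL float (0-1). sunColor: (1,1,1) for a plain white sun.
        public static Color Combine(Color32 vertexColor, float sunLevel, Color sunColor)
        {
            float finalSun = (vertexColor.a / 255f) * sunLevel;   // FINAL_SUN
            Color sun = sunColor * finalSun;                      // sunlight contribution
            Color block = new Color(vertexColor.r / 255f,         // baked block light
                                    vertexColor.g / 255f,
                                    vertexColor.b / 255f);

            // One simple way to mix: take the brighter of the two per channel.
            return new Color(Mathf.Max(block.r, sun.r),
                             Mathf.Max(block.g, sun.g),
                             Mathf.Max(block.b, sun.b));
        }
    }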


    Hope that makes sense and helped.
     
  29. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    @RuinsOfFeyrin Thank you so much for your help, you completely answered my questions.
    I think I got it in theory but still need to go through the whole process bit by bit to be sure I got it right, then I'll probably post a few screenshots.....or ask for further help if possible :)
     
  30. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    @RuinsOfFeyrin

    Thanks to your last reply everything is clear; RGB lights are done, and color mixing is so cool :)
    I don't know exactly why you were assuming no ambient occlusion though, as it came out pretty easily; here are a couple of screenshots.

    Pure RGB Lights:
    rgb.png

    Mixed:
    rgb mixed.png

    In case anyone is interested, my world is composed of 16x128x16 chunks with infinite x/z, I use flat circular arrays for blocks and light channels, but still use a dictionary for chunks.
    The chunk generation distance is 15x1x15 chunks, although the visible world is a bit larger, and everything is still running on a single thread.
    I need to develop a multithreaded system and optimize chunk/mesh generation times; these are my current approximate timings:

    - Chunk generation with simplex noise: 15ms.
    - Mesh generation with unity mesh collider (I think I already know how to optimize this): 10ms.
    - Lighting (all channels): ~1ms

    I'm still not using greedy meshing nor any noise interpolation for chunk generation (I'm really interested in this if anyone cares to elaborate how it's done).

    @Everyone What are your timings and how is your world composed? :)
     
  31. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @OMGWare

    Looks very nice. I only mentioned being without ambient occlusion in my guide because there were an extra step or two for ambient occlusion that I wasn't going over (I was describing the process without it); glad you figured it all out though, the screenshots look good.

    Greedy meshing - So I have this in my engine and here are some things to think about with it.

    1) Greedy meshing and AO (ambient occlusion): These are a bit tricky to get to work together. It's not impossible, just not as easy as it seems at first. The reason being, you now need to compare the AO values for the 4 corners of every face you are drawing to the 4 corners of the neighbor's face to see if they can be drawn together. It's a pain, but not impossible.

    2) Greedy meshing & texture atlas: I have yet to get a texture atlas to look proper with greedy meshing. The problem comes in with having to wrap the UVs, and mip mapping, and texel offsets, etc etc etc. So far the "best" look I got is with no mipmaps and point filtering, which inevitably makes the textures look like total crap the farther away you are.

    I have been tempted to remove greedy meshing SEVERAL times, as other people have told me it wasn't a performance issue for them. However, for me, I do a lot of work on an old-ass laptop (though it does have a Radeon video card) and greedy meshing makes the difference between smooth and playable, and completely jerky garbage, so it stays for now and I just hope I can figure out this greedy/texture atlas/mipmap problem.


    Noise Interpolation - This one is actually pretty good and easy.

    Here is the idea behind it. When you generate Perlin noise, you are actually using interpolation on a table of data to get a value for a specific point, and you do this for each octave you generate. That's a lot of interpolation going on when you are generating X octaves for EVERY block in a chunk.

    So what if, instead of generating a multi-octave noise value for every block, you only generate it for a few specific blocks (9 are needed, if I am remembering correctly) within a chunk, and then interpolate every other block's value in the chunk from those? In theory the interpolated values for the rest of the blocks should be "kinda" close to what the sampled noise values would look like (though not exactly), and this reduces the behind-the-scenes math dramatically for a nice little speed boost.

    So let's put some "real" numbers to this to make sense of it; then I'll explain it some more.

    Let's say you generate 4 octaves of noise and each performs 1 interpolation to get its value, so that's 4 interpolations per block.

    If you have a 16x16x16 area, that is 4096 blocks x 4 interpolations per block = 16384 interpolations performed.

    Now if instead you only sample the 9 blocks for their real noise values and then interpolate every other block's values from those, you have
    (9 sample blocks x 4 interpolations) + (4087 remaining blocks x 1 interpolation) = 4123 interpolations performed. That is a little more than 1/4 of the interpolations.

    So what are these 9 magic blocks? You need to sample the blocks at the 8 corners, and one at the center of the chunk.

    From there you use tri-linear interpolation along with the values from those sampled blocks (as well as their locations) and the location of the block whose value you want to get, and it spits out the value for said block.

    With that said, I sample in 16x16x16 areas within a chunk because I found sampling areas that do not have equal width and height seems to "stretch" the features some. Also, sampling like this has the downside of "smoothing" over some of the smaller details that would be generated by using purely sampled noise values.

    Hope that helps.
     
  32. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    @RuinsOfFeyrin

    Thanks for the explanation; the interpolation bit seems interesting, I might give it a try and see how performance changes.
    About greedy meshing though, to tell you the truth I still can't see the need for it, maybe because I'm not targeting mobile devices yet and my computer is on the high end.
    What I found really vital in terms of mesh performance was to increase the chunk size from 16x16x16 to 16x128x16. Obviously that's related to the fact that my world height is fixed at 128 blocks; if I'd wanted infinite Y I'd have most likely chosen 16x16x16 chunks.

    By the way I was hoping you would share your timings for chunk generation, especially considering the trilinear noise interpolation you have implemented :)
     
    Last edited: Mar 23, 2015
  33. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    @RuinsOfFeyrin

    I think I got the trilinear interpolation wrong, I'm using this snippet for trilinear interpolation:

    Code (CSharp):

    public static float lerp(float a, float b, float t) {
        return a + t * (b - a);
    }

    public static float trilerp(float q000, float q100, float q010, float q110, float q001, float q101, float q011, float q111, float tx, float ty, float tz) {
        float x00 = lerp(q000, q100, tx);
        float x10 = lerp(q010, q110, tx);
        float x01 = lerp(q001, q101, tx);
        float x11 = lerp(q011, q111, tx);
        float y0 = lerp(x00, x10, ty);
        float y1 = lerp(x01, x11, ty);
        return lerp(y0, y1, tz);
    }
    Then I calculate the noise values at the 8 corners of the chunk, and pass in the coordinates of the block for which I want the interpolated noise as tx,ty,tz together with the qxxx values from the corners.
    But the result is wrong...how do you interpolate considering also the noise value of the 9th block (the chunk central one)?


    EDIT:

    Tried also with something like this:

    Code (CSharp):

    public static float lerp(float t, float x1, float x2, float q00, float q01) {
        return ((x2 - t) / (x2 - x1)) * q00 + ((t - x1) / (x2 - x1)) * q01;
    }
    public static float trilerp(float x, float y, float z, float q000, float q001, float q010, float q011, float q100, float q101, float q110, float q111, float x1, float x2, float y1, float y2, float z1, float z2) {
        float x00 = lerp(x, x1, x2, q000, q100);
        float x10 = lerp(x, x1, x2, q010, q110);
        float x01 = lerp(x, x1, x2, q001, q101);
        float x11 = lerp(x, x1, x2, q011, q111);
        float r0 = lerp(y, y1, y2, x00, x01);
        float r1 = lerp(y, y1, y2, x10, x11);
        return lerp(z, z1, z2, r0, r1);
    }
    passing the chunk bounds coordinates as (x1,x2) (y1,y2) (z1,z2), the result is not completely garbage but is still wrong.
    Again for clarity I'm using Simplex Noise and not Perlin, but I don't think it matters in terms of interpolating values across a grid.
     
  34. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    @OMGWare

    Hey, sorry it took so long to reply; it was a long morning and I didn't get a chance to jump on here.

    So it's been forever since I added the interpolation (over a year), so I had to go back and look at the code. I was mistaken; it is 8 values instead of 9 (I remember thinking now that 9 would have made more sense, as it gives you a middle point as well).

    The first example of trilinear interpolation assumes the points you are sampling are in a 0-1 range.

    The second example of triLerp (and lerp) you showed is correct (the one like below, it's the same one I use), as it asks for the minimum and maximum values for the points in the "area" you are interpolating on (which in this case is 0-15, 0-15, 0-15).

    Code (CSharp):

    public static float triLerp(float x, float y, float z, float q000, float q001, float q010, float q011, float q100, float q101, float q110, float q111, float x1, float x2, float y1, float y2, float z1, float z2) {
        float x00 = lerp(x, x1, x2, q000, q100);
        float x10 = lerp(x, x1, x2, q010, q110);
        float x01 = lerp(x, x1, x2, q001, q101);
        float x11 = lerp(x, x1, x2, q011, q111);
        float r0 = lerp(y, y1, y2, x00, x01);
        float r1 = lerp(y, y1, y2, x10, x11);

        return lerp(z, z1, z2, r0, r1);
    }
    x,y,z is the point you want to sample.
    q000-q111 are your pre-sampled points. They actually have to go in a specific order in here or it messes things up.
    x1,y1,z1 - These are the lower bounds of the area you are sampling.
    x2,y2,z2 - These are the upper bounds of the area you are sampling.



    When I said I interpolate in 16x16x16 blocks I didn't mean my chunks are that size (my chunks are 16x128x16).
    The reason for this is that the bigger the distance between sample points, the less the interpolated values will look like the actual noise, because a smaller percentage of the area is backed by real samples. 16x16x16 interpolation areas cause a good amount of smoothing, but still left very nice features in my generator (after a little tweaking). If you lower it to 8x8x8 it starts to look even more like the raw sample data, etc.

    The outcome will also depend entirely on your sampling of the original 8 points. If the noise values you sample create a high diversity of values in an area that is smaller than the interpolation box, you will possibly lose those details with interpolation.

    Interpolation will NOT give you an exact representation of the values you would get from sampling your noise function for every block. It is a trade-off between how "cool" you want the terrain to look vs how fast you want it to generate.
    My terrain generation function without interpolation created some really AWESOME looking landscapes (I also use 8 octaves of noise). The first time I got interpolation in and working I was VERY displeased with the outcome, as it was barely recognizable compared to the terrain I was previously generating; the reason being that, with the samples I was taking, 4 out of the 8 octaves were at a resolution that would get "lost" in the interpolation. I had to play with my values all over again to get some very good looking (but not as awesome) landscape.

    Perlin or simplex noise makes no difference either.

    It really is a trade-off between look and speed, as you will not get an exact representation of pure sampled data.
    Hope that helps. Also, if you want, send me a message (as I can't send you one) and I can give you some messenger contact info, as it's quicker and easier to get hold of me if you need more help.

    Right now some of the code in my project is torn apart, so I'll have to fix that up before I can get you some recent numbers for how long my generation, lighting, and mesh building times are. Should be later tonight though.
     
  35. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    For the sake of anyone following this thread and having a similar issue to OMGWare's: we figured out the issues. The main one was that the triLerp we are using expects Z instead of Y to be the vertical axis; simply swap your Z and Y values when assigning your qXXX values. You also need to oversample on the x, y, z positives so that the positive edge of an area blends with the negative edge of the next area.
     
  36. OMGWare

    OMGWare

    Joined:
    Mar 4, 2014
    Posts:
    24
    Exactly right, following up on this I just wanted to elaborate more on the last bit.
    Assuming a chunk of 16x16x16 blocks, say you are interpolating a chunk at position 16,16,16, just sample the values at the corners of the cube 16,16,16 - 32,32,32 (instead of 31,31,31).
    When you then sample the chunk at position 32,32,32, the values at its negative bound will overlap the samples from the positive bound of the previous chunk, resulting in more blended interpolation across chunks.
     
  37. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    So, in the attached picture, I implemented what this guy did to make a greedy mesh in his game by following the article he wrote on greedy meshes: https://github.com/mikolalysenko/mikolalysenko.github.com/blob/gh-pages/MinecraftMeshes/js/greedy.js, except in C#. However, as you can see, two things came out wrong: the collider is raised one unit high, and the vertical colliders in the x- and z-axis face directions are reversed. That means any raycasts or collisions in general don't work on those faces. It can still take collisions against the x+ and z+ faces of a cube, along with from the top. So that means the algorithm is drawing them in reverse. For the unit-high problem, I initially just put -1 in the vertex setting for the y axis. It fixed it, but it seems naive, and I don't want to take that solution unless I know it's correct.

    I can use some help with this.
     

    Attached Files:

  38. kenlem

    kenlem

    Joined:
    Oct 16, 2008
    Posts:
    1,630

    Here is what I'm using. For your purposes, offset can be Vector3.zero. You can probably remove it if you don't need it. Size should be the size of your chunk. I just updated it to call your GetVoxel method. You'll note I updated the syntax of the conditional comparing faces to be a little bit easier (at least for me) to read.

    Code (CSharp):

    /////////////////////////////////////////////////////////////////////////
    //
    // PORTIONS OF THIS CODE:
    //
    // The MIT License (MIT)
    //
    // Copyright (c) 2012-2013 Mikola Lysenko
    //
    // Permission is hereby granted, free of charge, to any person obtaining a copy
    // of this software and associated documentation files (the "Software"), to deal
    // in the Software without restriction, including without limitation the rights
    // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    // copies of the Software, and to permit persons to whom the Software is
    // furnished to do so, subject to the following conditions:
    //
    // The above copyright notice and this permission notice shall be included in
    // all copies or substantial portions of the Software.
    //
    // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    // OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    // THE SOFTWARE.

    public void BuildGreedyCollider (ref Mesh mesh, Vector3 offset, int size)
    {
        Vector3[] normalDir = {
            -Vector3.left,
            Vector3.up,
            Vector3.forward
        };

        List<Vector3> vertices = new List<Vector3> ();
        List<Vector3> normals = new List<Vector3> ();
        List<int> elements = new List<int> ();

        int[] mask = new int[(size + 1) * (size + 1)];

        int index = 0;

        // Sweep over 3-axes
        for (int d = 0; d < 3; d++) {
            int i, j, k, l, w, h, u = (d + 1) % 3, v = (d + 2) % 3;

            int[] x = new int[3];
            int[] q = new int[3];

            q [d] = 1;

            for (x[d] = -1; x[d] < size;) {

                // Compute the mask
                int n = 0;
                for (x[v] = 0; x[v] < size; ++x[v]) {
                    for (x[u] = 0; x[u] < size; ++x[u], ++n) {
                        //int a = (0 <= x[d] ? isSolid(x[0], x[1], x[2]) : 0);

                        Voxel aVoxel = GetVoxel (x [0] + (int)offset.x, x [1] + (int)offset.y, x [2] + (int)offset.z);
                        int a = 0;
                        if (0 <= x [d]) {
                            //a = isSolid(x[0], x[1], x[2]);
                            if (aVoxel.voxelType == Voxel.VoxelType.Solid) {
                                a = 1;
                            }
                        }

                        //int b = (x[d] < size - 1 ? isSolid(x[0] + q[0], x[1] + q[1], x[2] + q[2]) : 0);

                        int b = 0;
                        Voxel bVoxel = GetVoxel (x [0] + q [0] + (int)offset.x, x [1] + q [1] + (int)offset.y, x [2] + q [2] + (int)offset.z);
                        if (x [d] < size - 1) {
                            //b = isSolid(x[0] + q[0], x[1] + q[1], x[2] + q[2]);
                            if (bVoxel.voxelType == Voxel.VoxelType.Solid) {
                                b = 1;
                            }
                        }

                        // KLL: a and b can never be -1
                        //if (a !=-1 && b !=-1 && a == b) {
                        if (a == b) {
                            // KLL: I believe this means no face
                            mask [n] = 0;
                        } else if (a > 0) {
                            mask [n] = a;
                        } else {
                            mask [n] = -b;
                        }
                    }
                }

                // Increment x[d]
                ++x [d];

                // Generate mesh for mask using lexicographic ordering
                n = 0;
                for (j = 0; j < size; ++j) {
                    for (i = 0; i < size;) {

                        var c = mask [n];

                        if (c > -2) {
                            // Compute width
                            for (w = 1; c == mask[n + w] && i + w < size; ++w) {
                            }

                            // Compute height (this is slightly awkward
                            bool done = false;
                            for (h = 1; j + h < size; ++h) {
                                for (k = 0; k < w; ++k) {
                                    if (c != mask [n + k + h * size]) {
                                        done = true;
                                        break;
                                    }
                                }

                                if (done)
                                    break;
                            }

                            // Add quad
                            bool flip = false;

                            x [u] = i;
                            x [v] = j;
                            int[] du = new int[3];
                            int[] dv = new int[3];

                            if (c > -1) {
                                du [u] = w;
                                dv [v] = h;
                            } else {
                                flip = true;
                                c = -c;
                                du [u] = w;
                                dv [v] = h;
                            }

                            Vector3 v1 = new Vector3 (x [0], x [1], x [2]);
                            Vector3 v2 = new Vector3 (x [0] + du [0], x [1] + du [1], x [2] + du [2]);
                            Vector3 v3 = new Vector3 (x [0] + du [0] + dv [0], x [1] + du [1] + dv [1], x [2] + du [2] + dv [2]);
                            Vector3 v4 = new Vector3 (x [0] + dv [0], x [1] + dv [1], x [2] + dv [2]);

                            if (c > 0 && !flip) {
                                //AddFace(v1, v2, v3, v4, vertices, elements, offset);

                                vertices.Add (v1 + offset);
                                vertices.Add (v2 + offset);
                                vertices.Add (v3 + offset);
                                vertices.Add (v4 + offset);

                                elements.Add (index);
                                elements.Add (index + 1);
                                elements.Add (index + 2);
                                elements.Add (index + 2);
                                elements.Add (index + 3);
                                elements.Add (index);

                                Vector3 normal = normalDir [d];
                                normals.Add (normal);
                                normals.Add (normal);
                                normals.Add (normal);
                                normals.Add (normal);

                                index += 4;
                            } else if (flip) {
                                //AddFace(v4, v3, v2, v1, vertices, elements, offset);

                                vertices.Add (v4 + offset);
                                vertices.Add (v3 + offset);
                                vertices.Add (v2 + offset);
                                vertices.Add (v1 + offset);

                                elements.Add (index);
                                elements.Add (index + 1);
                                elements.Add (index + 2);
                                elements.Add (index + 2);
                                elements.Add (index + 3);
                                elements.Add (index);

                                Vector3 normal = -normalDir [d];
                                normals.Add (normal);
                                normals.Add (normal);
                                normals.Add (normal);
                                normals.Add (normal);

                                index += 4;
                            }

                            // Zero-out mask
                            for (l = 0; l < h; ++l) {
                                for (k = 0; k < w; ++k) {
                                    mask [n + k + l * size] = 0;
                                }
                            }

                            // Increment counters and continue
                            i += w;
                            n += w;
                        } else {
                            ++i;
                            ++n;
                        }
                    }
                }
            }
        }

        mesh.Clear ();
        mesh.vertices = vertices.ToArray ();
        mesh.triangles = elements.ToArray ();
        mesh.normals = normals.ToArray ();

        mesh.RecalculateBounds ();
        //mesh.RecalculateNormals();
    }
     
    Seromu and GibTreaty like this.
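
    Since the collider questions below refer to this mesh, a minimal sketch of handing the result to a MeshCollider might look like the following. This is not part of the code above; `chunkObject` is a hypothetical reference to the chunk's GameObject.

        // Sketch only: re-cook the chunk's collider after the render mesh has been rebuilt.
        MeshCollider meshCollider = chunkObject.GetComponent<MeshCollider> ();
        meshCollider.sharedMesh = null;   // clear first so the old cooked mesh is discarded
        meshCollider.sharedMesh = mesh;   // assign the freshly generated mesh
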
  39. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    I tried your code, even setting the offset to -1, but the collider still ends up one unit above the terrain. I think this is happening because my chunks sit at a 0.5f y offset from the origin at (0,0,0), and at chunksize + 0.5 for each vertical chunk above that, so the collider gets rounded up to the next whole unit instead of matching the chunk mesh's y position. If there's a way to shift the collider mesh down one unit, however, I believe that would be a good fix.
     
    Last edited: Apr 3, 2015
  40. magic9cube

    magic9cube

    Joined:
    Jan 30, 2014
    Posts:
    58
    Doesn't that method only deal with whole numbers? If you need a 0.5f offset, is that method capable of that? Otherwise, you may be missing a Mathf.Floor somewhere prior to this?

    [Edit] Ahh, yeah that vertex offset should move it I guess.
     
    Last edited: Apr 4, 2015
  41. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    Apparently I got it working by doing the following:

    Vector3 v1 = new Vector3(x[0], x[1]-1, x[2]);
    Vector3 v2 = new Vector3(x[0] + du[0], x[1] + du[1]-1, x[2] + du[2]);
    Vector3 v3 = new Vector3(x[0] + du[0] + dv[0], x[1] + du[1] + dv[1]-1, x[2] + du[2] + dv[2]);
    Vector3 v4 = new Vector3(x[0] + dv[0], x[1] + dv[1]-1, x[2] + dv[2]);

    Subtracting 1 from the y coordinates of the vertices above shifted the colliders down. With this fix everything works, including collisions on all sides. I'm a bit skeptical because the -1 fix seems too convenient, but I'll keep an eye out for any errors. Thanks, kenlem!
     
  42. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    Hey,

    So I have a question for anyone else using greedy meshing, since that is what is being discussed right now.

    How are you handling UV wrapping in the shader?
    I have a working solution, but it has a problem: because of the way I wrap the UVs in the shader, I can't flip or rotate a texture on the face of a block. So I can't rotate an image 90 degrees or mirror it on the vertical axis.

    Hoping someone has a better solution.

    Also, I was wondering if anyone has done run-length encoding for the columns of data in the block array.
    I've thought about it a few times, but I'm not sure of the best approach. Simply use a string for each column? Building that up seems like a lot of overhead and would probably defeat the purpose, but maybe I'm wrong. An array of bytes pre-allocated to the maximum potential length of a column (Height * 2)? That risks allocating memory that never gets used. Or perhaps build the RLE data for a column in a single maximum-size array, then use Buffer.BlockCopy to copy it into an appropriately sized array?

    I can think of plenty of ways to do this, and I could test them all for the fastest one, but I'm hoping someone has already tackled the issue and is willing to share some advice.

    Thanks.
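
    One possible shape for the RLE idea above, sketched under the assumption that a column is a contiguous run of byte-sized block ids in the flat array (the `Height` constant and the `columnStart + y` indexing are illustrative, not from any engine in this thread): build the (run length, block id) pairs into a single reusable scratch buffer of size Height * 2 (the worst case of every block differing), then Buffer.BlockCopy only the used portion into a right-sized array per column.

        // Sketch only: run-length encode one vertical column of block ids from a flat array.
        const int Height = 128;                                   // assumed column height
        static readonly byte[] scratch = new byte[Height * 2];    // worst case: every block differs

        static byte[] EncodeColumn (byte[] blocks, int columnStart)
        {
            // blocks[columnStart + y] is assumed to be the block id at height y in this column.
            int used = 0;
            int y = 0;
            while (y < Height) {
                byte id = blocks [columnStart + y];
                int run = 1;
                while (y + run < Height && run < 255 && blocks [columnStart + y + run] == id)
                    run++;
                scratch [used++] = (byte)run;   // run length
                scratch [used++] = id;          // block id
                y += run;
            }
            byte[] column = new byte[used];
            System.Buffer.BlockCopy (scratch, 0, column, 0, used);  // copy only the bytes actually written
            return column;
        }

    Compared with a string per column, this keeps the per-column cost to one exact-size allocation, and the scratch buffer can be reused for every column.
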
     
  43. Tempetek

    Tempetek

    Joined:
    Jul 28, 2014
    Posts:
    6
    Hey,

    I've been reading every last one of those posts and have learned a lot, truly a lot. I've been coding my own engine and it seems to work well, but I find it too slow. So, as a solution, I wanted to use the ANTS performance profiler (as alexzzzz mentioned on page 9), but it gives me no results. Any idea how to make it work? I found nothing on the internet.

    - Tempetek
     
  44. RuinsOfFeyrin

    RuinsOfFeyrin

    Joined:
    Feb 22, 2014
    Posts:
    785
    Hey @Tempetek ,

    I'm fairly certain (I could be wrong) that you cannot use ANTS directly with Unity or a Unity-compiled project.
    If I remember the posts you are talking about correctly (and I might not), I think what alexzzzz did was create a .NET project containing the code from his Unity project that he wanted to profile.
     
  45. Cherno

    Cherno

    Joined:
    Apr 7, 2013
    Posts:
    515
    I am again in need of guidance :) I'm currently trying to implement my lava terrain. So far it's just a normally generated mesh with a lava texture, but I want it to faintly glow and illuminate the surrounding cave walls, etc. Since global illumination can't be used, I was thinking about creating point lights at random positions on the lava surface. I'd like to find out where connected areas are (similar to a greedy mesh algorithm) and then create a light for, say, every 3x3 surface area that is connected. Anyone got experience with computing something like this?
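
    One way to find those connected areas is a plain flood fill over the top-down surface grid. This is only a sketch: `lavaSurface[x, z]` is assumed to be precomputed (true if the topmost exposed block in that column is lava), cells are encoded as `x * sizeZ + z`, and the actual Light creation is omitted.

        // Sketch only: flood-fill connected lava surface cells (4-way connectivity).
        static List<List<int>> FindLavaRegions (bool[,] lavaSurface)
        {
            int sizeX = lavaSurface.GetLength (0);
            int sizeZ = lavaSurface.GetLength (1);
            bool[,] visited = new bool[sizeX, sizeZ];
            var regions = new List<List<int>> ();
            var stack = new Stack<int> ();

            for (int x = 0; x < sizeX; x++) {
                for (int z = 0; z < sizeZ; z++) {
                    if (visited [x, z] || !lavaSurface [x, z])
                        continue;

                    var region = new List<int> ();
                    visited [x, z] = true;
                    stack.Push (x * sizeZ + z);

                    while (stack.Count > 0) {
                        int cell = stack.Pop ();
                        region.Add (cell);
                        int cx = cell / sizeZ, cz = cell % sizeZ;

                        // Visit the four neighbours on the surface grid.
                        for (int n = 0; n < 4; n++) {
                            int nx = cx + (n == 0 ? 1 : n == 1 ? -1 : 0);
                            int nz = cz + (n == 2 ? 1 : n == 3 ? -1 : 0);
                            if (nx < 0 || nz < 0 || nx >= sizeX || nz >= sizeZ)
                                continue;
                            if (visited [nx, nz] || !lavaSurface [nx, nz])
                                continue;
                            visited [nx, nz] = true;
                            stack.Push (nx * sizeZ + nz);
                        }
                    }
                    regions.Add (region);
                }
            }
            return regions;
        }

    A light could then be placed at the average position of each region, or one per roughly nine cells for large pools, which keeps the number of real-time point lights bounded.
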
     
  46. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    For those who have made a mobile-optimized version of your game, how did you deal with reducing rendered vertices and draw calls? I reduced the camera draw distance, which cut a good chunk of vertices, but there are still a good 400K vertices when I'm looking at the ground, or a large number in general. I'm wondering why that many are being drawn at my draw distance, even when I'm only staring at the ground. The vertex count is low when looking at the landscape from head height, but it climbs fast the closer to the terrain the camera looks. Is this perhaps because of the caves below? Which is odd, because I'd assume the camera automatically culls them, so it shouldn't be drawing them when an opaque object is obscuring them.
     
    Last edited: Apr 17, 2015
  47. jpthek9

    jpthek9

    Joined:
    Nov 28, 2013
    Posts:
    944
    Given the requirements of Minecraft (many, many, many physics objects), one would need a really efficient physics engine. Luckily, because Minecraft's blocks have no orientation, a kick-ass engine can be written for the game: instead of SAT edge checks, simple inequality checks can be used to detect collisions. That's probably why Minecraft can simulate so many blocks (that, and culling).

    @CorWilson Ever notice how Minecraft often has large patches of nothing in the distance? I'm guessing they manage to render so many blocks by not rendering most of them.
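
    To make the "simple inequality checks" concrete: with axis-aligned, non-rotating blocks, two boxes collide exactly when their extents overlap on every axis, so the whole test is six comparisons. A minimal sketch (the Aabb struct is illustrative, not from any engine in this thread):

        // Sketch only: axis-aligned bounding box overlap test; no SAT needed because
        // nothing in a block world ever rotates.
        struct Aabb
        {
            public float minX, minY, minZ;
            public float maxX, maxY, maxZ;
        }

        static bool Overlaps (Aabb a, Aabb b)
        {
            return a.minX < b.maxX && a.maxX > b.minX
                && a.minY < b.maxY && a.maxY > b.minY
                && a.minZ < b.maxZ && a.maxZ > b.minZ;
        }
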
     
  48. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    Another voxel-related question: how do you solve the problem of placing a block and then having your player fall through the terrain? It's an annoying bug of mine that shows up whenever a block is placed very close to the player, or while the player is moving, or when the player places a block below themselves while jumping. Ideally, I only want to allow placing a block in a voxel-space cell that my character's collider does not occupy.
     
    Last edited: Apr 23, 2015
  49. Vanamerax

    Vanamerax

    Joined:
    Jan 12, 2012
    Posts:
    938
    Just check the distance between the location of the voxel to be placed and the player position against some minimum distance? If the distance is shorter than the minimum required, don't place it.
     
  50. CorWilson

    CorWilson

    Joined:
    Oct 3, 2012
    Posts:
    95
    Yeah, I ended up using a bounds check on the location where the block is to be placed: if the player is more than half a voxel away, they're outside the block's cell and the block can be placed.

    I forgot that Unity had those features.
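
    For reference, a check like the one described above might look roughly like this in Unity; the one-unit voxel size and the player collider reference are assumptions:

        // Sketch only: refuse to place a block whose cell overlaps the player's collider.
        static bool CanPlaceBlock (Vector3 cellCenter, Collider playerCollider)
        {
            Bounds cell = new Bounds (cellCenter, Vector3.one);   // assumes 1-unit voxels
            return !cell.Intersects (playerCollider.bounds);      // any overlap means "don't place"
        }
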

    So, I have one more little question: good caves. Apparently mine are too wide. I was wondering how people are making interesting caves with their simplex and Perlin noise; I'm having trouble making ones that players will actually want to explore.
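
    One common approach to the cave-shape problem, sketched here only as a starting point: sample two independent 3D noise fields and carve only where both are close to zero, which yields winding tunnels rather than wide blobs. `Noise3D` below is a stand-in for whatever simplex/Perlin implementation is already in use (assumed to return roughly [-1, 1]), and the frequency and thresholds are guesses to tune, not known-good values.

        // Sketch only: carve tunnels where two independent noise fields are both near zero.
        bool IsCaveAt (int x, int y, int z)
        {
            float a = Noise3D (x * 0.05f, y * 0.05f, z * 0.05f);
            float b = Noise3D (x * 0.05f + 1000f, y * 0.05f, z * 0.05f + 1000f);
            return Mathf.Abs (a) < 0.08f && Mathf.Abs (b) < 0.08f;   // tighten the thresholds to narrow the caves
        }
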
     