A public service announcement about modeling mesh LODs. You are:

- Usually making too many LODs
- Usually introducing pop-in with them
- Usually wasting tons of memory with them
- Usually making batching and other optimizations less efficient with them

A lot of the lore around "polygon counts" is based on software rendering in 1995.

A modern 1080p screen has about 2 million pixels. Rendering deferred, you fill four buffers of that size, then resolve them to the main screen. If you're doing post-processing, expect several more passes over that data, and that's not counting overdraw, transparency, etc. Any reasonably modern game easily processes tens of millions of pixels per frame, and the computation done per pixel is usually far more complex than the computation done per vertex. In most cases, all a vertex shader does is transform the vertex from object space to screen space, perhaps computing a few values to pass along to later shader stages.

So: pixels are expensive, vertices are cheap, and "polygon counts" are largely irrelevant, since they were never the real cost anyway.

Often you'll see a rock with LODs stepping from 1000 vertices to 750, 500, and 250. In this case, the only LOD likely to help is the last one. Why? Transforming an extra 750 vertices is a microscopic amount of work when you're drawing 20 million pixels per frame. If your scene has thousands of rocks, most will be at the last LOD and very few will sit in the middle LODs, so the last LOD is the only one that reduces vertex counts by any meaningful amount. The mid LODs might save a few thousand vertices in total, while the last one saves you 900 × 750 vertices (750 vertices on each of, say, 900 distant rocks). Meanwhile, your two mid-range LODs consume more memory than the original mesh and the final LOD combined.

MicroTriangles are the real killer

What is a MicroTriangle? On most modern hardware, when a triangle gets small enough on screen, it starts to incur a disproportionately large cost.
This is because GPUs rasterize in 2x2-pixel quads, which lets them share texture lookups between those pixels and compute proper mip levels. But when one of those pixels falls on a different triangle, its data can't be shared: the GPU still shades the whole 2x2 quad and throws the unused results away. As triangles get smaller on screen, you get more and more edges, and thus more and more wasted work; once triangles shrink below a pixel, entire 2x2 quads may be thrown out.

Some timings on MicroTriangle throughput can be found here. Generally speaking, the critical point is around 10x10 pixels; below that size, the work required per pixel grows sharply. This is all view dependent, of course, so you're always going to have some of these cases, but when a mesh gets small enough on screen that most of its triangles are this small, that is the time to consider an LOD.

A good way to find that point is to turn on shaded wireframe mode and zoom out until the mesh is mostly wireframe, with little of the shaded surface showing between the lines. That is the distance you should design your LOD for. Switching any sooner just introduces extra pops, extra mesh memory, and so on. And often at that distance, especially for things like rocks, an impostor that can be a single triangle is just fine.
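The wireframe heuristic above can also be approximated numerically. The following is a rough sketch under loose, stated assumptions: the mesh's bounding-sphere surface area stands in for its real surface area, the projection is a simple symmetric perspective, and the 10x10-pixel critical size is used as the target triangle size. Treat the output as a ballpark starting point for placing an LOD switch, not a rule.

```python
import math

# Estimate the distance at which a mesh's average triangle shrinks to a
# target on-screen size. All constants below are illustrative assumptions.

def lod_distance(mesh_radius, triangle_count,
                 target_tri_pixels=100.0,      # ~10x10 px critical size
                 screen_height=1080, fov_y=math.radians(60)):
    # Crude stand-in for mesh surface area: the bounding sphere's area.
    surface_area = 4.0 * math.pi * mesh_radius ** 2
    avg_tri_area = surface_area / triangle_count   # world units^2

    # For a symmetric perspective projection, pixels per world unit at
    # distance d is screen_height / (2 * d * tan(fov_y / 2)), and projected
    # triangle area scales with the square of that. Solve for the distance
    # where the average triangle covers target_tri_pixels pixels.
    ppu_needed = math.sqrt(target_tri_pixels / avg_tri_area)
    return screen_height / (2.0 * ppu_needed * math.tan(fov_y / 2.0))

# A rock with a 1-unit bounding radius and 2,000 triangles
# (roughly the 1,000-vertex rock from the example above):
d = lod_distance(mesh_radius=1.0, triangle_count=2_000)
print(f"consider switching LOD beyond ~{d:.1f} world units")
```

Note the direction of the relationship: a denser mesh (more triangles over the same area) hits the micro-triangle regime sooner, so it needs its LOD at a shorter distance; a coarse mesh may never need one before an impostor takes over.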