Does anybody know, and can explain, the specifics of how the "pixel error" setting for terrain is implemented? The documentation simply describes it as "Amount of allowable errors in the display of Terrain Geometry. This is essentially a LOD setting." By setting the scene view to wireframe mode and adjusting the slider, the general approach is easy to see: the vertex density of the terrain mesh is reduced in patches depending on distance from the camera. But it is not at all obvious how that relates mathematically to the pixel error value that was set.

What I'm trying to do is align the vertices along the edge of a procedural mesh so that it lies exactly on the ground. When pixel error = 1, I can achieve this exactly using Terrain.SampleHeight, or by reading the height values returned by TerrainData.GetHeights. However, with higher pixel error values, these methods still return the "true" height of the terrain, rather than the approximated height at which the terrain is actually drawn for the game camera. As a result, my mesh sometimes cuts through the terrain, and sometimes floats slightly above it.

If I knew the specifics of how pixel error was implemented, I could reproduce it in my own procedural function that generates the mesh, to ensure the vertices match perfectly, but I haven't quite been able to achieve this yet. Any pointers?
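For reference, this is roughly how I snap the edge vertices today (a minimal sketch; `edgeVertices` and `meshTransform` are placeholders for my actual mesh data):

```csharp
using UnityEngine;

// Sketch of my current approach: snap each edge vertex of a procedural
// mesh to the height returned by Terrain.SampleHeight. This matches the
// rendered terrain only when Pixel Error = 1; at higher values the drawn
// surface diverges from the sampled heightmap.
public static class TerrainSnap
{
    public static void SnapEdgeToTerrain(Terrain terrain, Transform meshTransform, Vector3[] edgeVertices)
    {
        for (int i = 0; i < edgeVertices.Length; i++)
        {
            // Convert the vertex to world space before sampling.
            Vector3 world = meshTransform.TransformPoint(edgeVertices[i]);

            // SampleHeight returns the interpolated heightmap height
            // relative to the terrain origin, i.e. the "true" height,
            // ignoring the LOD simplification that Pixel Error applies.
            world.y = terrain.SampleHeight(world) + terrain.transform.position.y;

            edgeVertices[i] = meshTransform.InverseTransformPoint(world);
        }
    }
}
```

This works perfectly at pixel error = 1, but produces the gaps and intersections described above as soon as the setting is raised.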