The default LOD calculation is based on screen percentage, which means that the screen edges and the FOV directly affect the current LOD. You can line everything up in a certain way to limit popping and such, but the LODs will behave completely differently in different headsets. This is far from ideal.

There are some half-measure solutions which account for the difference in FOV by multiplying the global LOD bias. This makes everything match closer to what you would expect. It's better than nothing, but it doesn't account for everything.
https://forum.unity.com/threads/lodgroup-in-vr.455394/

Due to rectilinear projection, objects near the edges of the screen take up a much larger proportion of the screen than the same objects in the center. This means that anything you look at directly in the middle of your HMD will get a lower LOD than anything at the edge of your vision, which is exactly the opposite of what you want happening.
https://physics.stackexchange.com/q...near-image-projection-in-image-stitching?rq=1

Ultimately, what this all comes down to is that these methods work okay on a flat screen but are really bad for an HMD. This is a big issue that needs a real built-in solution, perhaps something based on distance that doesn't care what the shape of the screen is, since our perception of a virtual object isn't directly based on the screen. There are some third-party plug-ins which are based on, or have options for, distance or cells, but will every VR project require such plug-ins? I certainly hope not.
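To make the two approaches concrete, here is a minimal sketch in Python rather than Unity C# (all function names and threshold values are hypothetical, not any engine's actual API). It shows the FOV-compensated bias multiplier idea as well as a simple distance-based selection that never looks at the screen at all:

```python
import math

def fov_bias_multiplier(reference_fov_deg, hmd_fov_deg):
    """Half-measure: scale the global LOD bias so screen-percentage LOD
    roughly matches a reference FOV. A wider FOV makes an object cover a
    smaller fraction of the screen, so the bias is boosted to compensate."""
    return (math.tan(math.radians(hmd_fov_deg) / 2)
            / math.tan(math.radians(reference_fov_deg) / 2))

def select_lod_by_distance(distance, thresholds=(5.0, 15.0, 40.0)):
    """Distance-based LOD: pick the first level whose threshold exceeds
    the camera-to-object distance; past the last threshold, cull (-1).
    Screen shape and FOV never enter the calculation."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return -1  # culled

# A 110-degree HMD needs a larger bias than a 60-degree monitor view.
print(round(fov_bias_multiplier(60.0, 110.0), 2))  # ~2.47
print(select_lod_by_distance(3.0))    # close -> LOD 0
print(select_lod_by_distance(25.0))   # mid-range -> LOD 2
print(select_lod_by_distance(100.0))  # beyond last threshold -> culled
```

Note that the distance version produces the same result for an object at the center of the view and one at the edge, which is the property the screen-percentage method lacks in an HMD.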