I know this isn't the correct forum, but you guys generally seem to really know your stuff and I can't get an answer anywhere else. I'm trying to better understand the performance considerations of Unity's deferred lighting system. From what I've read, most deferred renderers cost something on the order of O(number of lights × pixels covered by each light), but in my experience Unity's system doesn't seem to follow that, even though the manual states: "The rendering overhead of realtime lights in deferred lighting is proportional to the number of pixels illuminated by the light and not dependent on scene complexity."

I'm working on a desktop deployment with what I think is fairly simple on-screen geometry (200-300k tris/verts). The camera is most often top-down, so that number usually stays pretty steady. I've been getting better performance with forward rendering, using a surface shader with fullforwardshadows on the receivers so that point/spot lights still cast shadows.

I'm trying to understand why forward rendering has been faster under these circumstances, and under what circumstances that might change. Am I getting faster performance with forward rendering because my on-screen geometry is generally simple? Is it because I'm only using a handful of dynamic pixel lights? Is it because those dynamic lights don't generally overlap much? If I cap the number of dynamic lights affecting a given area at 2-3, will deferred lighting ever be faster? Or am I just missing the point?
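For reference, here's roughly the kind of receiver shader I mean (a minimal sketch; the shader name and texture property are placeholders, not my actual material setup). The key part is the fullforwardshadows directive, which enables shadows from point and spot lights in forward rendering:

```
Shader "Custom/ForwardShadowReceiver" {
    Properties {
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        LOD 200

        CGPROGRAM
        // fullforwardshadows makes point/spot lights cast shadows in forward
        // rendering; without it, only directional light shadows are supported.
        #pragma surface surf Lambert fullforwardshadows

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    // The fallback supplies the shadow caster pass
    FallBack "Diffuse"
}
```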