A question about using meshes to build levels

Discussion in 'General Discussion' started by splattenburgers, Jun 3, 2019.

  1. splattenburgers

    splattenburgers

    Joined:
    Aug 4, 2017
    Posts:
    117
    I see some guys on youtube use meshes/boxes to build levels. One thing that I always wonder when I see this is: Doesn't this affect performance?

    What I mean is that they use 3D meshes for things like walls, floors etc but only a portion of the entire mesh is actually seen inside the level. Doesn't this destroy performance by making the game render polygons that aren't actually seen inside the level?
     
  2. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,175
    No, because game engines and the graphics cards that do the actual drawing don't render objects and polygons that aren't visible unless you tell them to.

    https://en.wikipedia.org/wiki/Z-buffering (specifically Z-culling)
    https://en.wikipedia.org/wiki/Hidden-surface_determination (aka Occlusion Culling)

    By the way this is standard practice. Just about every game out there does this. Both indie and AAA.
     
    Last edited: Jun 3, 2019
    splattenburgers likes this.
  3. splattenburgers

    splattenburgers

    Joined:
    Aug 4, 2017
    Posts:
    117
    Ah that makes sense! Nice to know.
     
  4. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    Last edited: Jun 3, 2019
    xVergilx likes this.
  5. Deleted User

    Deleted User

    Guest

    Honestly I was on a team at one point where the devs were so new to game dev, they had no concept of performance or what might affect it.

    They had this scene with a number of major problems that could have been avoided if they understood that concept and best practices.

    The two biggest ones were: every object had its own arbitrary scale, and there were many objects deeply parented together. Even in the editor it was a struggle to manipulate the scene.

    The team posted a video recently (last week) of the progress they had made (almost nada) a year on, still the same scene but with improved performance. I was the one who had advocated 1. rewriting the whole game and 2. scaling everything properly but nooooo they had to kick me off the team when I was having a rough time in life. Guess I was right about those things, huh?

    Anyway, being kicked out was really a blessing in disguise, as I had a lot of tough stuff ahead of me that fall.

    :)
     
    Ryiah likes this.
  6. CityGen3D

    CityGen3D

    Joined:
    Nov 23, 2012
    Posts:
    681
    Any polygon in the frustum is rendered regardless of whether the camera can see it or not.
    It's called overdraw when your GPU is rendering stuff you can't see.
    (I think there's a Scene view setting to show you where your scene may be losing performance from overdraw.)

    So techniques like Occlusion Culling are used to solve this problem. I just wanted to make clear that it doesn't happen by default.
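    To make the overdraw idea concrete, here's a toy sketch in plain Python (not Unity code, and nothing like a real GPU): two screen-covering "quads" drawn back-to-front into a tiny 4x4 framebuffer. Without occlusion culling, every pixel of the far quad is shaded first, then completely overwritten by the near one.

```python
# Toy illustration of overdraw: two full-screen quads drawn back-to-front.
# Every pixel of the far quad is shaded even though none of it survives.

WIDTH, HEIGHT = 4, 4

def draw_quad(framebuffer, color):
    """Shade every pixel of a screen-covering quad; return fragments shaded."""
    for y in range(HEIGHT):
        for x in range(WIDTH):
            framebuffer[y][x] = color  # the per-pixel shading cost lands here
    return WIDTH * HEIGHT

framebuffer = [[None] * WIDTH for _ in range(HEIGHT)]
shaded = draw_quad(framebuffer, "far_wall")    # never seen by the player
shaded += draw_quad(framebuffer, "near_wall")  # what the player actually sees

visible = WIDTH * HEIGHT
print(f"fragments shaded: {shaded}, pixels visible: {visible}")
# Twice as many fragments shaded as pixels on screen: 2x overdraw.
```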
     
  7. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,637
    I can see how these two problems could lead to performance concerns, but these seem like things that should be particularly easy to fix with some automated utility. I'm sure there are a few on the Asset Store already. You could just let everyone build their levels carelessly and then fix it with two clicks.
     
    Ryiah likes this.
  8. Deleted User

    Deleted User

    Guest

    Omgosh I didn't know that xD well not my problem anymore, so they just wasted almost a year rescaling everything hahaha
     
  9. splattenburgers

    splattenburgers

    Joined:
    Aug 4, 2017
    Posts:
    117
    So Ryiah was wrong and all polygons are rendered after all?
     
  10. CityGen3D

    CityGen3D

    Joined:
    Nov 23, 2012
    Posts:
    681
    The implication that it's done automatically is wrong.

    Unity does Frustum Culling for you automatically.
    That is, it won't render things outside your field of view.
    It won't do Occlusion Culling automatically though, so polygons behind other polygons will still be rendered and take processing time.

    You can test this fairly easily by creating a terrain with loads of trees on it and putting a wall between the camera and the trees.
    All the trees will still be rendered, and your frame rate won't go up just because you are behind the wall.
    It will go up if you look into the sky and the trees aren't in your FOV anymore.

    You can find out more about baking Occlusion Culling here (and there are other third-party solutions too).
    https://docs.unity3d.com/Manual/OcclusionCulling.html

    "Occlusion Culling is a feature that disables rendering of objects when they are not currently seen by the camera
    because they are obscured (occluded) by other objects. This does not happen automatically in 3D computer graphics since most of the time objects farthest away from the camera are drawn first and closer objects are drawn over the top of them (this is called “overdraw”)."

    That aside, boxing out levels in the way you describe is fine, really. Back-face culling will help, and modern hardware can handle really huge numbers of polys, so unless you are being really wasteful it's nothing to be overly concerned about.
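    For anyone curious what frustum culling actually computes, here's a hedged sketch in plain Python (not Unity's API): an object's bounding sphere is kept only if it is on the inner side of every frustum plane; the moment it lies fully behind any one plane, it's culled before the GPU ever sees it. A real frustum has six planes; this toy uses only near and far.

```python
# Sphere-vs-plane test behind frustum culling. Each plane is (normal, d)
# with the normal pointing into the frustum; a point p is inside the
# plane's half-space when dot(normal, p) + d >= 0.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sphere_in_frustum(center, radius, planes):
    """A sphere survives only if no plane rejects it entirely."""
    return all(dot(n, center) + d >= -radius for n, d in planes)

# Toy frustum: just a near plane (z >= 1) and a far plane (z <= 100).
planes = [((0, 0, 1), -1.0),    # near: z - 1 >= 0
          ((0, 0, -1), 100.0)]  # far: 100 - z >= 0

print(sphere_in_frustum((0, 0, 50), 1.0, planes))   # between the planes
print(sphere_in_frustum((0, 0, 200), 1.0, planes))  # past the far plane
```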
     
    Last edited: Jun 3, 2019
    Ryiah likes this.
  11. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    How much overdraw affects performance depends on material management. If it's a few extra SetPass calls, there's no need to worry. But when you start to get hundreds of extra SetPass calls, performance starts to drop.
     
    CityGen3D likes this.
  12. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    It's more complicated than that...

    Rendering happens in a pipeline, and one of the early stages of that pipeline is a "depth check". When a pixel is written to the screen, its depth value is also written to a different buffer. Then, when later pixels are being drawn, their depth is first calculated, and the rest of the pipeline is only executed if the depth of the new pixel is less than* the depth of the old pixel.

    So every polygon is iterated over, but the expensive parts of rendering most occluded pixels are skipped. I say "most" because it's entirely possible for a far away thing to be rendered early on in a frame, and then completely written over by closer objects which overlap it.

    There are rendering techniques which take this same principle further: do a depth pre-pass of all objects, then use it to construct a new render queue containing only the objects that are visible. I don't think this ever became standard practice, though, probably because there are all sorts of complexities when it comes to mixing depth buffers with transparent objects and post effects. (So the details of the implementation would need to be tweaked depending on exactly what's going on.)

    * This is configurable, so it's not always "less than".
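    A minimal sketch of that early depth check in plain Python (nothing like a real GPU pipeline): a fragment only pays for the expensive shading stage if its depth beats what's already in the depth buffer. The comparison here is "closer wins", which, per the footnote, is just one configurable choice.

```python
# Early depth test: reject occluded fragments before the expensive work.

depth_buffer = {}  # pixel -> closest depth written so far
color_buffer = {}  # pixel -> color that ended up on screen
shader_runs = 0    # how many times the "expensive" stage actually ran

def draw_fragment(pixel, depth, color):
    global shader_runs
    # Depth test happens first; only a passing fragment continues.
    if pixel in depth_buffer and depth >= depth_buffer[pixel]:
        return  # occluded: the rest of the pipeline is skipped
    shader_runs += 1  # stand-in for the expensive shading stage
    depth_buffer[pixel] = depth
    color_buffer[pixel] = color

# Near-to-far order: the far fragment is rejected by the depth test.
draw_fragment((0, 0), depth=1.0, color="near")
draw_fragment((0, 0), depth=5.0, color="far")  # skipped

# Far-to-near order: both fragments get shaded (the "most" caveat above).
draw_fragment((1, 0), depth=5.0, color="far")
draw_fragment((1, 0), depth=1.0, color="near")

print(shader_runs)  # 3 shader runs for only 2 visible pixels
```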
     
    Ryiah likes this.