
Large 100x100 ground or 100 small 10x10 grounds?

Discussion in 'General Graphics' started by slake_it, Mar 27, 2019.

  1. slake_it

    slake_it

    Joined:
    Aug 2, 2015
    Posts:
    71
    Hi,

    I have 2 questions:
    1- Can you recommend a good resource for understanding Unity graphics (so I can answer this question myself)?
    2- I have a 3D mobile game and I want to know: is it better to split the ground into 100 small parts or keep it as a single large mesh? (Does Unity render the pixels of the entire ground mesh, even the parts that are not visible?)

    - ground 1 = flat mesh with a texture
    - ground 2 = transparent mesh with fish and buildings visible beneath it
    - ground texture size = 1 mega (1024 x 1024)
    - ground size = 100 x 100 units
    - target platform = mobile


    Thanks
     
  2. Sh-Shahrabi

    Sh-Shahrabi

    Joined:
    Sep 28, 2018
    Posts:
    53
    This really depends on the type of performance issue you might be facing, so here is a breakdown:
    If you have one giant mesh, the advantage is that you only have one material and one draw call. The disadvantage is that if the mesh is high-poly, you lose the opportunity to reduce the number of vertices you need to process through techniques such as frustum culling or baked occlusion culling.

    That was the vertex shader side. On the fragment shader side, if the geometry you are rendering is opaque (non-transparent), Unity uses the center of the mesh's bounding box (the midpoint between the two vertices furthest from each other) to sort the objects to be rendered by their distance to the camera. It then typically renders the objects from the closest to the camera to the farthest. For each pixel, it saves the depth value of the rendered fragment to the Z buffer. This has the advantage that, since the objects closest to the camera are rendered first, as you start rendering objects behind them you can discard fragments that sit behind already-rendered pixels (so you don't shade things hidden by other objects). Depending on the hardware and the complexity of your fragment shader, this can save you performance. However, if you send one giant mesh to the GPU, no sorting will happen, and every time you have fragments overlapping in view space, you will have overdraw.
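    The effect of front-to-back sorting plus the depth test can be sketched with a tiny simulation (plain Python, not Unity code; the objects, depths, and pixel counts are made-up numbers):

```python
# Conceptual sketch: count fragment-shader invocations for a row of pixels,
# with and without front-to-back sorting. Smaller depth = closer to camera.

def render(objects, sort_front_to_back):
    """Each object covers a span of pixels at a fixed depth. With a depth
    test, a fragment is only shaded if it is closer than what is already
    in the depth buffer."""
    if sort_front_to_back:
        objects = sorted(objects, key=lambda o: o["depth"])
    depth_buffer = {}          # pixel -> closest depth seen so far
    shaded = 0                 # fragment shader invocations
    for obj in objects:
        for px in range(obj["start"], obj["end"]):
            if px not in depth_buffer or obj["depth"] < depth_buffer[px]:
                depth_buffer[px] = obj["depth"]
                shaded += 1    # passes the depth test, gets shaded
            # else: rejected by the depth test, no shading
    return shaded

# Three overlapping surfaces covering the same 100 pixels.
scene = [
    {"start": 0, "end": 100, "depth": 30.0},  # farthest, submitted first
    {"start": 0, "end": 100, "depth": 20.0},
    {"start": 0, "end": 100, "depth": 10.0},  # closest, submitted last
]

print(render(scene, sort_front_to_back=False))  # 300: every layer shaded
print(render(scene, sort_front_to_back=True))   # 100: only the closest shaded
```

    With sorting, only the closest surface's fragments are actually shaded; submitted as one unsorted blob, every overlapping layer gets shaded.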

    If you take the same mesh and split it into smaller chunks, assigning each chunk a different material, you will end up with more than one draw call. Again, depending on the situation, this can be bad. Another disadvantage is the time the CPU needs to spend sorting and culling these chunks (this is not much, so it is mostly irrelevant).

    The best way for most cases is to have a texture atlas, one material for all chunks, and a segmented mesh. Then, using static batching, you can batch these segments into one draw call.
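    A rough sketch of why the segmented mesh helps (conceptual Python, not the Unity API; the 10x10 chunk size and the camera's view rectangle are illustrative numbers): each chunk can be culled individually against the view, while a single 100x100 mesh is always submitted whole.

```python
# Conceptual sketch: split a 100x100 ground into 10x10 chunks and cull
# chunks against a simple axis-aligned 2D "view rectangle".

GROUND, CHUNK = 100, 10

def visible_chunks(view_min, view_max):
    """Return the (row, col) chunks whose bounds overlap the view rect."""
    chunks = []
    for row in range(GROUND // CHUNK):
        for col in range(GROUND // CHUNK):
            x0, z0 = col * CHUNK, row * CHUNK
            x1, z1 = x0 + CHUNK, z0 + CHUNK
            # Overlap test between the chunk's bounds and the view rect.
            if x1 > view_min[0] and x0 < view_max[0] and \
               z1 > view_min[1] and z0 < view_max[1]:
                chunks.append((row, col))
    return chunks

# Camera sees a 25x25 area: only 9 of the 100 chunks survive culling.
print(len(visible_chunks((40, 40), (65, 65))))  # 9
```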

    This was for opaque geometry. Transparent geometry is different only in that it doesn't write to the depth buffer, so fragment overdraw will happen with it anyway (unless it is behind opaque geometry). It would still benefit from splitting the mesh with regard to frustum culling and occlusion culling (though transparent, it can still be occluded).
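    The transparent case can be sketched the same way (conceptual Python, not Unity code; the depth values are made up): transparent layers are blended back-to-front without depth writes, so every layer in view gets shaded, unless opaque geometry in front of it wins the depth test.

```python
# Conceptual sketch: count shaded transparent fragments on one pixel.
# Transparent surfaces are drawn back-to-front and do not write depth,
# so overdraw among them is unavoidable -- but a layer hidden behind
# already-rendered opaque geometry is still rejected by the depth test.

def shade_transparent(layers, opaque_depth):
    """layers: depths of transparent surfaces covering the pixel.
    opaque_depth: depth of the closest opaque surface already rendered."""
    shaded = 0
    for depth in sorted(layers, reverse=True):  # back-to-front for blending
        if depth < opaque_depth:   # depth test against opaque geometry
            shaded += 1            # blended over what's below; no depth write
    return shaded

# Three transparent layers; the one at depth 25 is behind an opaque wall
# at depth 20, so only two layers are shaded and blended.
print(shade_transparent([5.0, 15.0, 25.0], opaque_depth=20.0))  # 2
```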

    As for resources, that is a hard one, especially if you want to learn specifically about Unity's rendering.
     
  3. slake_it

    Thanks a lot for the thorough explanation.
    Just for clarity:
    1- If the entire ground is a single mesh, will the whole mesh be rendered by the fragment shader (the pixels calculated and drawn), even when parts of it are not visible to the camera?
    2- Does static batching differ from having a single mesh (if it combines the meshes and generates a single mesh anyway)?

    Thanks again ;)
     
  4. Sh-Shahrabi

    To your first question: the fragment shader won't render anything off screen, so if 80 percent of your mesh is off screen, only the remaining twenty percent is rendered. What I was talking about before, regarding overdraw, is when two surfaces are both within the camera's frustum but one is occluded by the other. The general pipeline goes like this: after a fragment is done, its depth value is written to the depth buffer. When the next fragment is about to render, the GPU first checks whether it is closer to the camera than the depth currently in the depth buffer. If it is not, it won't bother running the fragment shader. If your mesh is split into chunks, the CPU will sort the objects for you based on their distance. This sorting ensures that you render the things closest to the camera first, so the fragments of the stuff in the back don't have to be shaded. However, if it is just one mesh, there won't be any sorting.

    Static batching is different because you still get things like frustum culling: it combines everything into one SetPass call, but the chunks which are not within the view frustum are still culled and won't be rendered.
     
  5. slake_it


    Thanks for the help