
Question Are micropolygons bad?

Discussion in 'Shaders' started by flogelz, Feb 3, 2023.

  1. flogelz


    Joined:
    Aug 10, 2018
    Posts:
    141
    Ok, this is more of a general question that I stumbled across while looking at Unity's internal billboard grass shader. On a terrain, you can set a radius that controls how far out grass/detail meshes are rendered. I assumed they would cull the meshes beforehand, so that what you see on screen is all the pipeline works with.

    But after looking inside the shader, a lot more grass actually gets rendered, and grass planes outside the above-mentioned range just get scaled down to 0 in the vertex shader. Which, yeah, "culls" them, but aren't they still eating up performance? It would be news to me that graphics cards don't render polygons whose vertices are all scaled to zero-

    Which brings me to my question: are the resulting micropolygons bad performance-wise, or is this a usable technique to "cull out" polygons cheaply?
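    The technique I'm describing looks roughly like this (a minimal sketch, not Unity's actual terrain shader; `_CullDistance` is a made-up property name):

    ```hlsl
    // Minimal sketch of the scale-to-zero "cull" described above -- not
    // Unity's actual terrain shader. _CullDistance is a made-up property.
    float4 vert(float4 positionOS : POSITION) : SV_POSITION
    {
        float3 positionWS = mul(unity_ObjectToWorld, positionOS).xyz;
        float distToCamera = distance(positionWS, _WorldSpaceCameraPos);

        // Outside the detail radius, collapse the vertex onto the object
        // origin, turning its triangles into zero-area (degenerate) ones.
        if (distToCamera > _CullDistance)
            positionOS.xyz = 0;

        return UnityObjectToClipPos(positionOS);
    }
    ```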
     
  2. burningmime


    Joined:
    Jan 25, 2014
    Posts:
    845
    If a polygon is back-facing or any of the verts have an invalid position, then it's culled by the GPU and never reaches the rasterizer. An easy way to cull a vertex (and all triangles it touches) in the vertex shader is just to output NaN for position.
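    A sketch of that NaN trick (the per-vertex test here is hypothetical):

    ```hlsl
    // Sketch: cull a vertex by outputting NaN for its clip-space position.
    // Any triangle touching a NaN vertex is rejected before rasterization.
    float4 vert(float4 positionOS : POSITION) : SV_POSITION
    {
        float4 positionCS = UnityObjectToClipPos(positionOS);

        if (ShouldCullVertex(positionOS))     // hypothetical per-vertex test
            positionCS = asfloat(0x7FC00000); // quiet NaN in all components

        return positionCS;
    }
    ```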

    As for whether small polygons are actually bad for performance, it gets more complex. I like this (slightly outdated, but still quite relevant) series which discusses how the rasterizer actually works: https://fgiesen.wordpress.com/2011/07/06/a-trip-through-the-graphics-pipeline-2011-part-6/ .

    In general, a long, thin triangle is probably worse than a tiny one, but lots of tiny ones can end up generating some overdraw. Remember the smallest thing that gets shaded is a 2x2 quad of samples. So if a triangle is smaller than that, it still generates that 2x2 quad. If there are tons of triangles that are small on screen, there could be many more of these quads than really needed, especially if MSAA is on. As to whether it's your bottleneck or not, you'll need a profiler to figure that out. But, hey, that's one argument for some of the modern GPU-driven rendering techniques like visibility buffers or deferred texturing (which require bindless so aren't supported by Unity).
     
    flogelz likes this.
  3. bgolus


    Joined:
    Dec 7, 2012
    Posts:
    12,238
    It should be noted that any triangle with two vertices at the same position, or for which all 3 vertices are on the same line, will also be culled and never rasterized. These aren't micro polygons, these are degenerate triangles. The GPU culls these the same as off screen triangles, frustum culled triangles, back facing triangles, or triangles with NaN position vertices. There's functionally no performance difference between any of these: a culled triangle is a culled triangle that gets rejected before it's rasterized.

    Micro polygons are just very small triangles relative to the screen size. The exact definition is somewhat vague; more strictly, it's either a triangle smaller than a 2x2 quad of pixels, or one smaller than a single pixel.

    Beyond the reasons @burningmime mentioned above, a more straightforward reason why micro triangles are bad is the ratio of vertices you're calculating to pixels that triangle shades. If you have 4 triangles each covering only a single pixel, that's potentially 12 vertices being calculated for 4 pixels. With a larger triangle, you're calculating 3 vertices and getting many more pixels than that.

    A lot of "micro triangle" renderers skip the triangle aspect entirely, or at least the triangle level vertex data interpolation, as there's no performance benefit to doing that when only 1-2 pixels are being shaded. Nanite for example does do interpolation, but entirely in the compute shader.
     
    Last edited: Feb 3, 2023
    flogelz and burningmime like this.
  4. burningmime


    Joined:
    Jan 25, 2014
    Posts:
    845
    I should add there's one more issue with small triangles I've run into. I had a 300k poly mesh of a sweet anime mecha. It had a lot of facets, complex normal maps, etc. When viewed from far away during animation or camera movement, it was "flickery", since you had many polygons facing different directions fighting for a single pixel. Some were bright, some were dark, some were emissive, etc... whichever one "won" got that pixel, and neighboring pixels often had normals facing completely different directions. The next frame, if the camera moved slightly, a completely different set of triangles would be chosen.

    Which is another good argument for LODs.
     
    flogelz likes this.
  5. flogelz


    Joined:
    Aug 10, 2018
    Posts:
    141
    Thank you two for shining more light on this topic! This was super helpful, and I didn't know you could cull triangles like this, which has some nice use cases I can think of already!