
Performance friendly grid renderer

Discussion in 'General Graphics' started by drew55, Mar 11, 2020.

  1. drew55

    drew55

    Joined:
    Dec 13, 2017
    Posts:
    44
    Hi y'all,

    I'm making a grid background drawer (similar to the Unity editor's grid shader), except with dots instead of lines, and I'm currently looking at a non-shader approach for platform compatibility.

    I've looked at a number of different approaches, and I think the best choice in terms of platform compatibility and performance is to scale a lattice (a mesh) each frame, with a single scale factor applied to the whole group. The idea is that it's lightweight:
    (a) the mesh points are static (with only the scale factor changing), and
    (b) the texture is static and reused for the entire mesh.

    I was astonished that a quality grid asset isn't available in the Unity Asset Store, as it's such a useful item, so I'm pledging that any help here will go into a polished asset shared on the store. I'm an experienced 3D dev, but I've only been learning Unity for the last few months, so I'm still new to many Unity design idioms.

    So if you know Unity well and can suggest 1 or 2 approaches that preserve performance and platform compatibility, I'm all ears. I'm expecting the best approach will be a runtime-generated mesh (of grid point positions), so that when it's set to a new scale, all that happens is the parent object is scaled.

    The other area of operation is billboarding (vs. xz-plane 2D texturing) for each dot. It will be consistent for all points, but it'd be nice to offer a billboard vs. xz-plane option. Last but not least, each "sprite" point needs to be a constant screen size. This means that for each point on the mesh, a scaling factor needs to be computed so that the point's size remains independent of world and camera positioning. Sure, this can be done with something like `Camera.WorldToViewportPoint`, but that's costly and requires per-point edits (which will kill performance if it's not smartly batched).

    I was considering that DOTS might be a suitable choice, but I have concerns over its platform compatibility, and I don't know of anything slick that generates pixel-constant sizes efficiently.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Why not use a shader?

    Seriously though, there's no reason to avoid at least partially shader-based approaches for this kind of thing. Every platform out there is going to support just about everything you'd need for this.

    My suggestion would be to construct a mesh of multiple quads all facing along one world axis (probably +z), and store each quad's center pivot in an unused UV set. The pivot can just be the world xy position, reusing the vertex position's z, at which point you could use the first UV set to store both the texture UVs in the xy components and the pivot position in the zw of the mesh's Vector4 UVs. Then use a billboard shader to aim each quad at the camera, and scale it based on distance & screen resolution.
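
    Something along these lines, as a rough, untested sketch. The shader name, the _SizePx property, and the assumption that the grid object sits at identity (so the stored world xy doubles as object space) are all mine, and it assumes unit quads with corners at ±0.5 around the pivot:

    ```hlsl
    // Untested sketch: billboarded grid dots at a constant pixel size.
    // Assumes TEXCOORD0 packs texture UVs in xy and the quad's pivot xy in zw,
    // with the pivot z taken from the vertex position, per the post above.
    Shader "Unlit/GridDots"
    {
        Properties
        {
            _MainTex ("Dot Texture", 2D) = "white" {}
            _SizePx ("Dot Size (pixels)", Float) = 8
        }
        SubShader
        {
            Tags { "Queue"="Transparent" "RenderType"="Transparent" }
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            Cull Off

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                float _SizePx;

                struct appdata
                {
                    float4 vertex : POSITION;  // quad corner; quads authored facing +z
                    float4 uv     : TEXCOORD0; // xy = texture UV, zw = pivot xy
                };

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float2 uv  : TEXCOORD0;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    // reconstruct the quad's pivot and this corner's offset from it
                    float3 pivot  = float3(v.uv.zw, v.vertex.z);
                    float2 corner = v.vertex.xy - v.uv.zw; // ±0.5 for unit quads

                    // pivot into view space; its depth drives the pixel scale
                    float3 viewPivot = mul(UNITY_MATRIX_V,
                        mul(unity_ObjectToWorld, float4(pivot, 1))).xyz;

                    // world units covered by one pixel at this depth:
                    // unity_CameraProjection._m11 = cot(fovY/2), _ScreenParams.y = height in px
                    float worldPerPixel = (2.0 * -viewPivot.z) /
                        (unity_CameraProjection._m11 * _ScreenParams.y);

                    // rebuild the corner in view space: the quad faces the camera
                    // and spans _SizePx pixels regardless of distance or grid scale
                    float3 viewPos = viewPivot + float3(corner * _SizePx * worldPerPixel, 0);
                    o.pos = mul(UNITY_MATRIX_P, float4(viewPos, 1));
                    o.uv  = v.uv.xy;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }
    ```

    The nice part of rebuilding the corner in view space is that scaling the parent object moves the pivots (so the grid spacing changes) without touching the dot size.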
     
    drew55 likes this.
  3. drew55

    drew55

    Joined:
    Dec 13, 2017
    Posts:
    44
    Solid point, especially these days... I think I'm still traumatized from the days of getting shaders to work on early Dell budget laptops 10 years ago.

    Ok, many thanks for the suggestion, friend! Will put that in the hopper and share when there's progress...
     
    Last edited: Mar 13, 2020
  4. drew55

    drew55

    Joined:
    Dec 13, 2017
    Posts:
    44
    Isn't a downside of this approach that the quad scaling is redundantly computed? I'm assuming the answer is yes, and that the redundant 2 (of 3) mul() ops per triangle are negligible.

    A geo shader addresses this, since there that work is only computed once per grid quad. But as discussed, how does mobile hardware hold up under geo shaders?
     
    Last edited: Mar 13, 2020
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Here's the thing. GPUs are fast. Really, really fast. Even sh**ty slow ones on mobile are actually really, really fast compared to the CPU they're paired with for doing this kind of math.

    So yes, you're doing the work four times per quad (once per vertex) instead of once on the CPU, but since the GPU is likely tens if not hundreds of times faster it doesn't really matter. For example, an i7-9700K is around 425 GFLOPs out of the box. That's a really fast consumer CPU. When overclocked, I believe it's still the CPU with the fastest single-core performance you can buy.

    The nearly 4 year old, $100 GTX 1050 is 1.86 TFLOPs. That's 1,860 GFLOPs. Compared to all 8 cores of the i7 CPU it's "merely" ~4.4 times faster, but no sane person is going to be using a GTX 1050 with an i7-9700k. They're hopefully going to be using at least an RTX 2070 Super or better, which is 9.5 TFLOPs, or >22 times faster.

    So the redundant work of transforming all 4 vertices individually vs doing it on the CPU ... the GPU wins every time.

    A geometry shader would indeed let you do it "just once" per quad, assuming you have your mesh set up as point data and generate the quad in the geometry shader. But geometry shaders have some inherent cost just from being used, which may or may not be more than the cost of the redundant per-vertex calculations.
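
    For reference, an untested sketch of what that geometry stage might look like. It assumes the mesh is submitted as points (MeshTopology.Points), a pass with #pragma geometry geom and #pragma target 4.0, and the same _SizePx / projection math as the sketch earlier in the thread:

    ```hlsl
    // Untested sketch: expand each grid point into a camera-facing quad,
    // doing the scale math exactly once per quad instead of once per corner.
    struct v2g { float4 pivot : TEXCOORD0; };  // object-space point, passed through
    struct g2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    float _SizePx;

    v2g vert (float4 vertex : POSITION)
    {
        v2g o;
        o.pivot = vertex;
        return o;
    }

    [maxvertexcount(4)]
    void geom (point v2g p[1], inout TriangleStream<g2f> stream)
    {
        // per-quad work happens once here, not per corner
        float3 viewPivot = mul(UNITY_MATRIX_V,
            mul(unity_ObjectToWorld, p[0].pivot)).xyz;
        float worldPerPixel = (2.0 * -viewPivot.z) /
            (unity_CameraProjection._m11 * _ScreenParams.y);
        float halfSize = 0.5 * _SizePx * worldPerPixel;

        // emit a triangle strip; with Cull Off the winding doesn't matter
        float2 corners[4] = { float2(-1,-1), float2(1,-1), float2(-1,1), float2(1,1) };
        [unroll]
        for (int i = 0; i < 4; i++)
        {
            g2f o;
            o.pos = mul(UNITY_MATRIX_P,
                float4(viewPivot + float3(corners[i] * halfSize, 0), 1));
            o.uv  = corners[i] * 0.5 + 0.5;
            stream.Append(o);
        }
    }
    ```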
     
    elenzil likes this.
  6. drew55

    drew55

    Joined:
    Dec 13, 2017
    Posts:
    44
    This was the focus of my question; no need to catch me up on CPU vs. GPU, but I appreciate the support, friend.

    So I had this working with a geometry shader, but I was astonished to find that my trash-can Mac Pro (dual AMD FirePro D300) doesn't support it. Point being, geometry shaders seem to need another few years before they're considered broadly available, imo.

    Almost done w/ this thing and it's pretty cool!
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    That has nothing to do with the GPUs; they do support geometry shaders. The problem is Apple decided to stop supporting geometry shaders entirely with their new Metal graphics API. If you force Unity to use OpenGL on Mac, geometry shaders will work fine, and they have worked on almost all new desktop GPUs for the last 12 years. Geometry shader support is part of the requirements for Direct3D 10 and beyond, the first GPUs for which were released in 2007. OpenGL added core support in version 3.2 in 2009, but it was available through extensions as soon as the Direct3D 10 GPUs came out. They're even supported on most modern Android devices, as they're part of OpenGL ES 3.2 and OpenGL ES 3.1 AEP, the latter of which has been the standard for mid to top tier Android devices since 2015.

    But, again, Apple decided to not support them in Metal, so they don't work any more if you try to use them there.

    To be fair, hardware makers are generally against supporting geometry shaders. They were pushed into the Direct3D 10 spec by Intel and software designers over the protests of pretty much everyone else. Using compute shaders is considered the "real" way to do it, which is why Metal supports compute but not geometry shaders, and the newer mesh shaders (introduced in Nvidia's RTX GPUs, and recently announced to be supported in AMD's RDNA 2.0 GPUs) also expand upon the functionality of geometry shaders while being significantly more efficient.


    I should also note the FirePro D300s in the Mac Pro were ancient technology when they came out in 2014. They were based on a desktop GPU architecture that had been released in 2011 and by 2013 was mostly relegated to entry-level GPUs. Apple used them because they fit within the thermal limits they had, but a mid-range PC GPU from the same year easily bested them in basic rendering performance.
     
    Last edited: Mar 17, 2020
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Also, your first post talked about using DOTS, so I was hedging a little bit there even though the rest of the post talked about geometry shaders. ;)
     
  9. drew55

    drew55

    Joined:
    Dec 13, 2017
    Posts:
    44
    Sad times indeed. Wow, and I thought I was triggered enough as it is w/ Metal, but that's rough news (and it makes way more sense now!)

    Thanks for the info drop!