
Virtual Geometry Images. Can someone explain potential benefits?

Discussion in 'General Graphics' started by Passeridae, Jan 26, 2021.

  1. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    First of all, I'm a digital artist, not a programmer, so my understanding of the technical side of this topic is quite limited, but it seems interesting nevertheless. I've come across several papers and blog posts on virtual geometry images over the past few months. They seem to have become a bit more popular after the UE5 hype demo. Here are some of them:

    http://graphicrants.blogspot.com/2009/01/virtual-geometry-images.html (this seems to be the author of the UE5 innovations himself)
    http://publications.lib.chalmers.se/records/fulltext/220583/220583.pdf (using virtual geometry images in WebGL?)
    http://www.cs.harvard.edu/~sjg/papers/gim.pdf and so on...

    They are all talking about more or less the same pipeline:
    1) Unwrap the mesh with as few seams as possible, up to the borders of the UV square until it's completely filled (and try to avoid stretching as much as possible).
    2) Map the original XYZ vertex coords to RGB colors for this UV layout.

    According to these sources the main benefit comes from the fact that the random vertex order of a usual mesh is gone, and all the techniques available for textures, like compression and mip-mapping, can now be applied to the geometry as well, since all of the geometric detail is stored inside a texture. Plus, this texture can be streamed quite efficiently, and the whole topic is often referred to as a continuation of virtual texture streaming.
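
    To illustrate step 2, here is a minimal sketch of how such a bake could work on the GPU (this is just my illustration, not something taken from the papers above, and strictly speaking it's closer to an object-space position / vector displacement bake than a true resampled geometry image): the mesh is rasterized in UV space instead of screen space, and the fragment shader writes the interpolated object-space position, remapped into the 0-1 range by the mesh bounds, into the render target. The property names (_BoundsMin, _BoundsSize) are made up for the example and would be set from script.

    Code (HLSL):
    // Bake-pass sketch: render the mesh once into a float render target.
    // The vertex shader places each vertex at its UV coordinate in clip space,
    // so the rasterizer fills in the UV layout; the fragment shader stores the
    // interpolated object-space position remapped to 0..1 by the mesh bounds.
    float3 _BoundsMin;   // mesh bounds minimum (assumed to be set from script)
    float3 _BoundsSize;  // mesh bounds size

    struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
    struct v2f     { float4 pos : SV_POSITION; float3 objPos : TEXCOORD0; };

    v2f vert (appdata v)
    {
        v2f o;
        // Rasterize in UV space: map the 0..1 UV square to -1..1 clip space.
        // (Depending on the graphics API the V axis may need to be flipped.)
        o.pos = float4(v.uv * 2.0 - 1.0, 0.0, 1.0);
        o.objPos = v.vertex.xyz;
        return o;
    }

    float4 frag (v2f i) : SV_Target
    {
        // XYZ -> RGB: the normalized object-space position becomes the texel color.
        float3 rgb = (i.objPos - _BoundsMin) / _BoundsSize;
        return float4(rgb, 1.0);
    }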

    The part about the textures seems logical and more or less clear. Now a question to the tech guys: is the random vertex order of some photoscanned rock so much worse than the well-organized vertex order of a primitive plane with the same amount of vertices? Performance-wise, I mean.

    The thing is, the whole pipeline doesn't look that hard.
    1) The unwrapping (which according to all of the sources is the hardest part) can be done via a number of specialized tools that support this kind of texture parametrization. It can also be done in advanced UV software like RizomUV by locking the border to the sides of the UV square and unwrapping everything inside (which gives really good results and very little stretching).
    2) The XYZ to RGB step looks like regular Vector Displacement Mapping.

    So, just out of curiosity, I downloaded a rock asset from Quixel Megascans, unwrapped it in the way I described above, saved this mesh, morphed it to UV (UV morph in ZBrush or via a script in 3ds Max/Blender, etc.), which resulted in a square plane, and saved that as well. I imported both meshes into Mudbox and baked a vector displacement map which stores the geometric difference between them. So now I can apply this displacement map to virtually any square plane and it takes the form of the original rock asset. The plane can also be subdivided/tessellated in any way.

    The result looks close to this: [attached images not shown]

    And according to the second paper:

    "Nothing is stated in [GGH02a] about how vertices for the geometry images are sent to the graphics hardware for rendering. But the implementation of the library OpenGI (Open Geomertry Images), created by Christian Rau [Rau11], which was investigated for this thesis, creates a geometry mesh patch of vertices that matches the size of the geometry image. It is sent to the GPU and the geometry image is rendered to its vertices using vertex texture fetch in a simple vertex shader [GGH02a]."

    Which implies that at least some of the virtual geometry image approaches actually do create a patch (a square plane?) and then displace its vertices in a regular shader. Is that really it?
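
    For what it's worth, here is my rough sketch of what such a vertex shader might look like in Unity, based only on the description above (assuming the patch UVs cover the 0-1 square; _GeometryImage, _BoundsMin and _BoundsSize are placeholder names, and the decode has to match however the texture was baked):

    Code (HLSL):
    #include "UnityCG.cginc"

    // Sketch of the "vertex texture fetch" idea from the quoted paper:
    // a flat grid patch is drawn, and each vertex looks up its real position
    // in the geometry image, so the patch takes the shape of the stored mesh.
    sampler2D _GeometryImage;   // geometry image (placeholder name)
    float3 _BoundsMin;          // decode bounds, matching whatever the bake used
    float3 _BoundsSize;

    struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
    struct v2f     { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    v2f vert (appdata v)
    {
        v2f o;
        // Vertex texture fetch: sample the geometry image at this vertex's UV.
        // tex2Dlod is used because vertex shaders can't compute mip derivatives.
        float3 encoded = tex2Dlod(_GeometryImage, float4(v.uv, 0, 0)).rgb;
        float3 objPos  = encoded * _BoundsSize + _BoundsMin;   // RGB -> XYZ
        o.pos = UnityObjectToClipPos(objPos);
        o.uv  = v.uv;
        return o;
    }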

    There's also the possibility to stream a gigantic 8/16k vector displacement map baked from a high-poly source mesh via virtual texture streaming to get a good detail level. The downside that is mentioned in all the sources as well is quad overdraw, which makes up to 3/4 of the GPU's pixel work useless when polygons become as small as or smaller than pixels, because pixels are shaded in 2x2 quads, if I understand it correctly. Probably an additional grayscale map can be provided to guide the tessellation density and make it denser in the most detailed regions and sparser in less detailed/flat ones, to remedy this problem to a certain degree. Something close to the patch tessellation technique.
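
    As a rough illustration of that density-map idea (my own sketch, not from any of the papers; _DensityMap and _MaxTessellation are made-up names, and a real version would also factor in camera distance), the patch constant function of a tessellation shader could scale its factors from such a grayscale map:

    Code (HLSL):
    // Sketch: scale tessellation factors per triangle patch from a grayscale
    // density map. This is only the patch constant function of a hull shader;
    // the rest of the hull/domain shaders and the displacement are omitted.
    sampler2D _DensityMap;
    float _MaxTessellation;

    struct ControlPoint { float4 vertex : INTERNALTESSPOS; float2 uv : TEXCOORD0; };

    struct TessFactors
    {
        float edge[3] : SV_TessFactor;
        float inside  : SV_InsideTessFactor;
    };

    TessFactors PatchConstants (InputPatch<ControlPoint, 3> patch)
    {
        // Average the density over the three corners of the patch.
        float density = 0.0;
        [unroll]
        for (int i = 0; i < 3; i++)
            density += tex2Dlod(_DensityMap, float4(patch[i].uv, 0, 0)).r;
        density /= 3.0;

        // Flat areas keep a factor near 1, detailed areas approach the maximum.
        float factor = lerp(1.0, _MaxTessellation, density);

        TessFactors f;
        f.edge[0] = factor;
        f.edge[1] = factor;
        f.edge[2] = factor;
        f.inside  = factor;
        return f;
    }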

    But the question remains: is this approach with a mesh created from a quad plane primitive with ordered vertices and geometrical detail stored in a virtual texture beneficial for performance in the end?

    Thanks!
     
    Last edited: Jan 26, 2021
    Lex4art likes this.
  2. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Bump, anyone?
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    If the question is:
    "What's more efficient, a normal mesh, or a mesh with similar poly count using a vertex displacement texture."
    The answer is a normal mesh. Way, way more efficient. Even worse if you consider the displaced quad won't look anywhere near as good as the regular mesh in this use case, because the triangle density will be all over the place. It only gets worse if you have to increase the tessellation of the displaced quad to match the visual quality of the normal mesh, for many of the reasons you listed above. Plus, any argument about vertex order efficiencies can be thrown out, since any mesh imported into Unity will by default get put through a mesh optimization pass that will make it way more efficient than the 3D scan polygon soup the above examples would be comparing against. And to make something into a game-ready asset, it would likely already need to go through some step of modifying the topology.

    But that's not really the reason to use this kind of technique. The advantage is continuous LOD on very high detail geometry with minimal work. Things like billion polygon models that you couldn't reasonably render normally can be quickly made into something plausible to render with some distance based tessellation and displacement texture mipmapping. At least that's the idea. There are other issues to consider though, like how you deal with culling, or micro triangles, or triangle density in areas of high distortion, etc. When you're dealing with non-real time rendering or architectural / CAD previews, a lot of these problems are less of a concern, and the ease of LOD alone might make it worthwhile. But for gaming it's probably not super useful.

    That said, similar ideas exist and have been actively used in the past. See this paper on mesh clusters:
    http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf
    The idea is to reuse a single 64-vertex mesh with vertex positions, normals, and textures all stored in virtual textures, so that everything in a scene can be rendered with a single instanced draw call. The extra trick is culling beforehand based on each mesh cluster's known bounds.
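
    Very roughly, the culling side of that looks something like the sketch below (not code from the paper, just an illustration with made-up names): a compute shader tests each cluster's bounding sphere against the frustum planes and appends the survivors to a buffer that drives the single instanced/indirect draw.

    Code (HLSL):
    #pragma kernel CullClusters

    // Sketch: per-cluster frustum culling in a compute shader. Visible cluster
    // IDs are appended to a buffer that feeds one indirect instanced draw.
    // The struct layout and buffer names are made up for the example.
    struct ClusterBounds
    {
        float3 center;
        float  radius;
    };

    StructuredBuffer<ClusterBounds> _ClusterBounds;
    AppendStructuredBuffer<uint>    _VisibleClusters;
    float4 _FrustumPlanes[6];   // xyz = plane normal, w = distance
    uint   _ClusterCount;

    [numthreads(64, 1, 1)]
    void CullClusters (uint3 id : SV_DispatchThreadID)
    {
        if (id.x >= _ClusterCount)
            return;

        ClusterBounds b = _ClusterBounds[id.x];

        // Sphere vs. frustum: reject the cluster if it's fully outside any plane.
        for (int i = 0; i < 6; i++)
        {
            float d = dot(_FrustumPlanes[i].xyz, b.center) + _FrustumPlanes[i].w;
            if (d < -b.radius)
                return;
        }

        _VisibleClusters.Append(id.x);
    }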

    Moving to rendering via compute shaders also makes a lot more of these kinds of techniques plausible. Unreal Engine's Nanite may be something similar, though there's still not a ton of public information on it.
     
    hippocoder likes this.
  4. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Thanks for your reply and the detailed explanation!

    To me this looks somewhat close to DX12 mesh shaders with meshlets.

    So, if one of the main issues is the extremely high polycount a displaced quad plane needs in order to be comparable with a regular mesh, would it be reasonable to tessellate this plane adaptively? Probably through a compute shader (like in this paper: https://www.google.com/url?sa=t&sou...FjAAegQIAhAB&usg=AOvVaw3dfUotPq7JXc4DKZgnUODz)

    Aside from the continuous LOD chain, are there any other benefits to this approach? For example, isn't it easier for Unity to render a scene if a large number of objects in it have the same mesh (a quad) in the mesh filter and share the same shader? Only the set of textures is different for each object, with the displacement map defining each object's actual shape.

    Also, do you know how Virtual Geometry Images are actually rendered? Is it done, like the paper says, just by displacing the vertices of a geometry mesh patch (I wonder what they mean by "patch" in this case...) through a regular shader? Can a model whose shape is defined by a displacement texture streamed like a virtual texture actually be called virtual geometry? Or is there more to it? And is vertex displacement perhaps better left to compute shaders as well?

    Sorry for all the questions, I'm just trying to grasp the idea of virtual geometry images. Not talking about UE5 now - it's clear they have a lot of custom stuff on top of it. Our team tackled the idea of continuous LOD by baking the difference between the lowest LOD and the high-poly source into vector displacement maps, so we could always start with the lowest LOD and then scale the detail level up based on the camera distance. Now I'm wondering if it's possible to go even further by starting with a simple quad, because hundreds of even the most primitive LOD 8 meshes from sources like Quixel still mean tons of geometry.

    Thanks!
     
    Last edited: Jan 29, 2021
  5. Undertaker-Infinity

    Undertaker-Infinity

    Joined:
    May 2, 2014
    Posts:
    112
    Kind of a necro, but it's an interesting topic.
    Yes, it's slower than a regular mesh, BUT there are so many benefits that offset that cost.
    I've never implemented any of this, so I'll be hand-waving away a lot of important details; this is just what I know:
    Any kind of state switching is slow, and each switch is done separately, so the more switching you avoid, the better. With geometry images on top of virtual texturing:
    The texture is never switched. All geometry resides in the texture, so geometry is never switched either. If all of these meshes share the same GGX shader (with the added geometry image stuff), you would get ALL your meshes in ONE draw call. The ideal case, all the time, with continuous LOD.
    You can also stream geometry in and out based on usage (so that factors in culling), since this is virtual texturing, and bring in only the appropriate mips.
    With all the info already available on these techniques, this should be perfectly doable in Unity.
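
    To make the "one draw call" part concrete, here's a sketch of what the vertex shader side could look like (entirely my own illustration, with invented names; a real virtual-texturing setup would add page table indirection and mip selection): every instance is the same grid patch, and a per-instance rect tells it which tile of a shared geometry atlas to fetch its positions from.

    Code (HLSL):
    #include "UnityCG.cginc"

    // Sketch: one instanced draw of the same grid patch, where each instance
    // reads its positions from its own tile of a shared geometry atlas.
    // _GeometryAtlas, _InstanceRects and the bounds buffers are made-up names.
    sampler2D _GeometryAtlas;
    StructuredBuffer<float4> _InstanceRects;      // xy = tile offset, zw = tile scale
    StructuredBuffer<float4> _InstanceBoundsMin;  // xyz = decode bounds minimum
    StructuredBuffer<float4> _InstanceBoundsSize; // xyz = decode bounds size

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv     : TEXCOORD0;
        uint   id     : SV_InstanceID;
    };
    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    v2f vert (appdata v)
    {
        v2f o;
        // Remap the patch UV into this instance's tile of the atlas.
        float4 rect    = _InstanceRects[v.id];
        float2 atlasUV = rect.xy + v.uv * rect.zw;

        // Fetch and decode the position, exactly like the single-mesh case.
        // (A real version would also apply a per-instance object-to-world transform.)
        float3 encoded = tex2Dlod(_GeometryAtlas, float4(atlasUV, 0, 0)).rgb;
        float3 objPos  = encoded * _InstanceBoundsSize[v.id].xyz
                       + _InstanceBoundsMin[v.id].xyz;

        o.pos = UnityObjectToClipPos(objPos);
        o.uv  = v.uv;
        return o;
    }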
     
    blueivy and skattology like this.
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    I think Epic's Nanite has proven out their technique, which has many of the same advantages as virtual geometry images without the issues around seams and inconsistent triangle density that geometry images have. And since then they've talked a bit more about how the internals of Nanite work, and it's surprisingly similar to the system I described above, but using 128-triangle clusters.

    As noted by @passeridaepc, the first article he linked to on Virtual Geometry Images was written by Brian Karis, one of the people responsible for Nanite. While I don't think he's directly referenced that article recently, Brian has mentioned that Nanite is the culmination of over a decade of work on the topic. Considering he was fully aware of geometry images, the fact that they ultimately did not use them should tell you it ended up as a dead end.
     
    AcidArrow likes this.
  7. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    In a recent stream, he said the team spent almost half of the development time making the model import process as fast as possible, because they understood that nobody would use their tech, no matter how brilliant it is, if you need to wait hours for a model to be processed for Nanite. I wonder if they opted for clusters because it's way faster in terms of mesh preparation compared to baking an extremely detailed geometry image...
     
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    Nah. Geometry images are super fast to generate, especially if the mesh is already uniquely UV'd; otherwise you just need to run some auto UV on it, just like most engines already do if you need to use lightmaps on it. You could probably generate a geometry image in a few extra tenths of a millisecond over "normal" import times.
     
  9. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Is there a software that can do it? Just want to test this approach.
     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    Nothing that I'm aware of, no.
     
  11. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Thanks! Do you know of any sources where I can find info on how to generate them?
    I don't really understand how they are different from vector displacement maps, which also store vertex positions in 3D space.
     
  12. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,238
    The difference between geometry images and vector displacement seems to be in the "UV generation". Otherwise I can't tell any difference either.
     
  13. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Interesting. I thought this special UV generation was separate from the process of baking the geometry image itself. Required, but not necessarily tied to it. The resulting texture looks visually different from a vector displacement map, though.
     
  14. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I did notice some UV seams on Nanite when doodling around in UE5, on some low-res close-up bits. Perhaps my system was under load at that time.
     
  15. Undertaker-Infinity

    Undertaker-Infinity

    Joined:
    May 2, 2014
    Posts:
    112
    You should see seams at the LOD discontinuities, which could be in the middle of an object, judging by what I've read. Maybe they rely on the LODs not being different enough and the polygons themselves being small enough for anyone to notice?

    Edit: I'm assuming there's LOD switching to simpler meshes (less tessellation), which would be a discrete jump but would reuse the same position data.
     
  16. Undertaker-Infinity

    Undertaker-Infinity

    Joined:
    May 2, 2014
    Posts:
    112
    Update: Here's the SIGGRAPH 21 video on Nanite. Turns out it's something else and way more involved than I had imagined (which is no surprise!)