
So, Nanite

Discussion in 'General Discussion' started by Win3xploder, May 13, 2020.

Thread Status:
Not open for further replies.
  1. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    There are a lot of free statue models used in research online etc., quite detailed.
     
  2. Scoth_

    Scoth_

    Joined:
    May 25, 2020
    Posts:
    9
  3. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,117


    UE5 with Nanite and Lumen on PS5.

    This hurts...
     
  4. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It's funny because I recognize some Fortnite behavior; having a strong basis does help a demo be actually impressive. It's F***ing interactive. The quality of the face capture also seems to be on the low end of what Epic can do (look at the mouth), which hints at something tossed together quickly - which is impressive. Also, the playable character is almost a vanilla MetaHuman and does look out of place next to the scanned face.

    That's like a big flex.
     
    Deleted User likes this.
  5. Scoth_

    Scoth_

    Joined:
    May 25, 2020
    Posts:
    9
  6. Deleted User

    Deleted User

    Guest

  7. Lex4art

    Lex4art

    Joined:
    Nov 17, 2012
    Posts:
    445
    Well, there is already DOTS to stream meshes, LODs, sounds, etc. for large-scale dynamic worlds (remember the Megacity demo from May 2019!). And there is already Virtual Texturing to stream an almost infinite amount of 8K textures (since Nov 2020 or even earlier).

    *artist whining starts*
    For me as an artist there are some barriers to using all this in Unity, though:
    - These features were created "for programmers, by programmers" - super cool for programmers, but for an artist there is no checkbox to enable virtual texturing: write your own shaders for that and deal with everything else the same way. No checkbox to render my level using some kind of DOTS to overcome the CPU draw-call bottleneck: write your own spawning, LOD, and other systems for each type of object, or something like that (correct me if I'm wrong; it looks too scary to even try diving into DOTS).
    - These features don't work / don't fully work / aren't tested at all running together in one project, plus ray tracing. Virtual Texturing seems to work but still can't handle transparent surfaces as far as I can remember, and doesn't work with Addressables. In general: is anybody at Unity really testing all these high-end features working together in one Fortnite-like project? It feels like there is literally one guy working on Virtual Texturing, slowly pushing one feature / one incompatibility fix at a time.
    - High-end HDRP features are low priority for Unity management (or not low, just not prioritized). My own bug-report experience (scroll down for the list of reported bugs) makes me feel there are no big game developers or Unity teams currently using HDRP at full capacity and thus reporting bugs on essential stuff and obvious problems with HDRP's high-end features and ray tracing; it all rests on 3-4 enthusiasts, and on Unity staff themselves when they can spot a problem in simple scene setups. Only one official RT-based project - LEGO - comes to mind; probably that is why Unity management keeps the current slow pace for HDRP. A vicious circle.
    - The Asset Store is still oriented toward mobile content: you can't upload an asset over 6GB there, and even those 6GB will be painful to push (upload speed limits and a general state of infrastructure that is simply not ready for heavy high-end content; my experience with that is a year old, though - maybe it's faster now...).

    So, for a few months once a year an internal Unity team uses HDRP, gives rich feedback, and pushes out a decent demo with custom hacks to overcome the limitations of the standard HDRP feature set and to show a couple of new features; once a year the web team carefully strips all this content down to fit under Asset Store limits, and that's it - all set for next year's struggle... It works, but is it good enough? Each demo project is just another shard, often not fully connected with all the other features or networking - just a showcase fragment.

    Glad that good old texture streaming was fixed on dynamically spawned objects - now I can use 6GB of VRAM instead of 11GB in my half-empty desert scene. It took about a year for some kind soul to fix that in his/her spare time - now we have this basic feature working for dynamic-world creators (if there is anyone besides myself?).

    No idea how all this high-end stuff will work (or not work at all) with *DOTS* multiplayer. But at least such a basic thing as terrain is now working with ray tracing... or still not, oO?

    *artist whining ends*

    So, the cool stuff is kinda already there (Unity's ray tracing is even better than Lumen, with its over-diffused lighting and tons of rather poor screen-space hacks), but kinda not, really. Things get better and better (love the 2022.1b editor improvements), but relatively slowly and in a fragmented way, at least compared to UE4/UE5. I hope 2022 brings more coherency in HDRP features and more focus on HDRP in general... but maybe Unity as a company is happy with its current way of doing things - pushing limits is more Unreal's credo; Unity has its own way...
     
    Last edited: Dec 13, 2021
  8. Wawruch2

    Wawruch2

    Joined:
    Oct 6, 2016
    Posts:
    68
    I personally think Unity is aware that something like Nanite and Lumen is going to be the standard for console/PC projects down the line. It's not even about the quality of the graphics (which is revolutionary) but about time. As an artist you can save so much time making a model for UE5 compared to UE4 or Unity; the whole process of optimising a model and making sure the polycount is alright takes more time than anything else. You can just forget about polycount altogether, and it looks much better. Currently what HDRP offers seems irrelevant compared to UE5's tech; if Unity is not aware of that, it's doomed to fail in the long run.
     
    Scoth_ and Deleted User like this.
  9. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    FWIW most of those systems don't work for programmers without just as much hassle. This will hopefully change when DOTS gets released.
     
  10. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Fairly sure Unity is always aware. A customer sees some tech? Unity saw it a year or more ago, is on the board for said tech, or is somehow part of it. Some of the stuff in Lumen can be traced back to Unity's graphics research staff. It's a fairly small world out there when it comes to the actual people making high-end tech, and they all know each other.

    The big problem with tech is the investment behind it. Some want investment one way, some another. Bigger businesses like Unity have a surprisingly non-unified direction, and that's common too.

    But let's say they invested in Nanite-like tech and meshlet DX12+ tech? This still isn't going to end well for the customer, because an arguably bigger problem is the tooling, which is absolutely necessary. Someone recently did a Nanite-like system in Unity, on a mobile phone - but admitted the mesh virtualisation would have to go through Unreal's tools, or take all night. The tooling that surrounds these technologies, and the long-term support, make it a decision not to be taken lightly.

    Epic probably spent a couple of years just letting their tech bubble along until it was much clearer to them it could even be invested in, plus their company structure (it's quite traditional vs some tech places) allowed it to move forward clearly.
     
    Wawruch2, AcidArrow, Scoth_ and 4 others like this.
  11. raincole

    raincole

    Joined:
    Sep 14, 2012
    Posts:
    62
    It's funny that Unity folks freak out about UE5, while people who actually use Unreal all know it's not that special. Lumen is basically just SSGI + SVOGI, both of which have been used intensively in the industry forever. Don't be scared by Unreal's PR.
     
  12. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    Scared? Why would anyone be scared?

    And yeah, it may be less big news for Unreal users, who already had a pretty good HLOD system and various dynamic pseudo-GI solutions. On the flip side, Unity has none of that.
     
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Yeah, no, it's not "just" that; there are a number of new ideas and a completely different tracing method.
     
    Bioman75, SMHall, Energy0124 and 2 others like this.
  14. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    792
    Unity wanted to develop its own realtime GI some time ago to replace Enlighten. Unfortunately, nothing has been heard in that area for a long time, and Enlighten has been reactivated.
     
  15. Lymdun

    Lymdun

    Joined:
    Jan 1, 2017
    Posts:
    46
    And it's not even an updated version of Enlighten
     
  16. bb8_1

    bb8_1

    Joined:
    Jan 20, 2019
    Posts:
    100
    Mesh shaders: Mesh Shaders Release the Intrinsic Power of a GPU - ACM SIGGRAPH Blog. Primitive vs mesh shaders: Primitive Shader: AMD's Patent Deep Dive | ResetEra. Also, a mesh shader vs Nanite comparison I found in a YouTube comment: "Nanite also still uses hyper-optimised compute shaders to programme the GPU with work, which Epic themselves have stated, and these still rely heavily on CPU/RAM. Mesh and Primitive Shaders take this compute shader functionality and integrate it into the GPU, so it relies significantly less on CPU/RAM. Mesh Shaders on Series X and PC, and Primitive Shaders on PS5, are going to revolutionise graphics in the next generation, and a lot of developers will embrace it beyond Unreal Engine."
     
    NotaNaN and Ruchir like this.
  17. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Nanite can actually use mesh shaders when available. If you read how both work you'll see they are not mutually exclusive, and that mesh shaders alone do not "replace" Nanite: they just make things like GPU-based geometry culling, GPU-based LOD management, and tessellation much less cumbersome to code, without the need to juggle multiple shader stages - you use a single compute-like stage instead.
     
    Bioman75, Energy0124, NotaNaN and 2 others like this.
  18. Lymdun

    Lymdun

    Joined:
    Jan 1, 2017
    Posts:
    46
  19. Ruchir

    Ruchir

    Joined:
    May 26, 2015
    Posts:
    934
  20. Deleted User

    Deleted User

    Guest

  21. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    My somewhat simplistic understanding is that it means the GPU decides what should be rendered, so it does the culling and the draw-call setup on its own instead of waiting for the CPU to prepare those.

    It more or less means CPU usage will drop a lot.
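
    A minimal sketch of that idea in Unity C#: all instance transforms live on the GPU, a compute kernel culls them, and the draw call consumes a GPU-written count. The compute shader and buffer names here are hypothetical; only the Unity API calls themselves are real.

```csharp
using UnityEngine;

// GPU-driven rendering sketch: the CPU uploads all instance transforms once;
// each frame a compute kernel (hypothetical "FrustumCull.compute", assumed
// numthreads(64,1,1)) appends visible instances to a buffer, and the draw
// call reads the GPU-written count, so the CPU never touches per-instance
// visibility or draw-call setup.
public class GpuCulledRenderer : MonoBehaviour
{
    public Mesh mesh;
    public Material material;          // its vertex shader reads _VisibleInstances
    public ComputeShader cullShader;   // hypothetical frustum-culling kernel
    public Matrix4x4[] instances;      // every potential instance

    ComputeBuffer allInstances, visibleInstances, drawArgs;

    void Start()
    {
        allInstances = new ComputeBuffer(instances.Length, 64); // 4x4 floats
        allInstances.SetData(instances);
        visibleInstances = new ComputeBuffer(instances.Length, 64,
            ComputeBufferType.Append);
        // Indirect args: index count, instance count, start index,
        // base vertex, start instance.
        drawArgs = new ComputeBuffer(1, 5 * sizeof(uint),
            ComputeBufferType.IndirectArguments);
        drawArgs.SetData(new uint[] { mesh.GetIndexCount(0), 0, 0, 0, 0 });
    }

    void Update()
    {
        visibleInstances.SetCounterValue(0);
        int kernel = cullShader.FindKernel("FrustumCull");
        cullShader.SetBuffer(kernel, "_AllInstances", allInstances);
        cullShader.SetBuffer(kernel, "_VisibleInstances", visibleInstances);
        cullShader.Dispatch(kernel, Mathf.CeilToInt(instances.Length / 64f), 1, 1);

        // Copy the GPU-side visible count into the args buffer's
        // instance-count slot; the CPU never reads it back.
        ComputeBuffer.CopyCount(visibleInstances, drawArgs, sizeof(uint));
        material.SetBuffer("_VisibleInstances", visibleInstances);
        Graphics.DrawMeshInstancedIndirect(mesh, 0, material,
            new Bounds(Vector3.zero, Vector3.one * 1000f), drawArgs);
    }

    void OnDestroy()
    {
        allInstances.Release();
        visibleInstances.Release();
        drawArgs.Release();
    }
}
```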
     
    Gooren, bb8_1 and Deleted User like this.
  22. Lymdun

    Lymdun

    Joined:
    Jan 1, 2017
    Posts:
    46
    MaximKom and bb8_1 like this.
  23. UnityLighting

    UnityLighting

    Joined:
    Mar 31, 2015
    Posts:
    3,874
    Unity was forced to reactivate Enlighten because completing and releasing the new dynamic GI solution took too long, and because we have no realtime GI solution in HDRP for now. So Enlighten had to be reactivated.
     
  24. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    MaximKom, SMHall and hippocoder like this.
  25. Lymdun

    Lymdun

    Joined:
    Jan 1, 2017
    Posts:
    46
    Well, considering the PR + the word "yet", we're clearly going to get a Nanite equivalent, unless I'm missing something.
     
    Deleted User and bb8_1 like this.
  26. SebLazyWizard

    SebLazyWizard

    Joined:
    Jun 15, 2018
    Posts:
    233
    It might become industry standard anyway, so it's very likely that we'll see something similar soon.
     
    bb8_1 likes this.
  27. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    GPU Driven Renderer != Nanite
     
    Bioman75 and hippocoder like this.
  28. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The less-talked-about thing is how well that Matrix demo worked in HD on the Series S, which has fewer Tflops than the One X. That is HUGE - it means it's not next-gen bound. I wonder how well it could scale down against the old state of the art on each platform.
     
    Deleted User likes this.
  29. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    GPU-driven is HDRP finally catching up to 2016-2018 rendering techniques, beyond just fancy shaders and post-processing.

    You simply cannot reach the sheer complexity of AAA game scenes with old-school CPU draw calls and CPU culling like Unity has always done. SRP batcher? That just removes redundant commands between draw calls using the same shader - ten years overdue.

    The pull request mentions deferred materials too, which is good, and is part of what Nanite does. It saves pixel-shading performance by decoupling material rendering from the geometry, eliminating redundant shader processing at triangle edges. It also opens the door to other interesting possibilities, like using compute shaders to do the actual rendering of the materials.
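
    A rough sketch of what that decoupling could look like from Unity C#, assuming a hypothetical ID-writing material and "ShadeMaterials.compute" kernel; this illustrates the visibility-buffer idea, not Unity's or Nanite's actual implementation.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Deferred-material sketch: pass 1 rasterizes only instance/triangle IDs
// (cheap pixel work), pass 2 shades materials in a compute pass by reading
// the ID buffer, so heavy material code runs once per pixel regardless of
// triangle density. "idOnlyMaterial" and "ShadeMaterials.compute" are
// hypothetical stand-ins.
public class DeferredMaterialSketch : MonoBehaviour
{
    public Material idOnlyMaterial;    // writes a packed ID per pixel
    public ComputeShader shadeShader;  // hypothetical ShadeMaterials.compute
    public Renderer[] sceneRenderers;

    RenderTexture idBuffer, colorBuffer;

    void Start()
    {
        idBuffer = new RenderTexture(Screen.width, Screen.height, 24,
            RenderTextureFormat.RInt);           // one uint ID per pixel
        colorBuffer = new RenderTexture(Screen.width, Screen.height, 0,
            RenderTextureFormat.ARGBHalf) { enableRandomWrite = true };
        idBuffer.Create();
        colorBuffer.Create();
    }

    void LateUpdate()
    {
        var cmd = new CommandBuffer { name = "DeferredMaterials" };

        // Pass 1: visibility buffer - geometry only, no material evaluation.
        cmd.SetRenderTarget(idBuffer);
        cmd.ClearRenderTarget(true, true, Color.clear);
        foreach (var r in sceneRenderers)
            cmd.DrawRenderer(r, idOnlyMaterial);

        // Pass 2: material shading in compute, one thread per pixel.
        int kernel = shadeShader.FindKernel("ShadePixels");
        cmd.SetComputeTextureParam(shadeShader, kernel, "_IdBuffer", idBuffer);
        cmd.SetComputeTextureParam(shadeShader, kernel, "_Output", colorBuffer);
        cmd.DispatchCompute(shadeShader, kernel,
            Mathf.CeilToInt(Screen.width / 8f),
            Mathf.CeilToInt(Screen.height / 8f), 1);

        Graphics.ExecuteCommandBuffer(cmd);
        cmd.Release();
    }
}
```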
     
    Bioman75, NotaNaN, ontrigger and 4 others like this.
  30. Wawruch2

    Wawruch2

    Joined:
    Oct 6, 2016
    Posts:
    68
    @Neto_Kokku thank you for the clarification. Can you elaborate on GPU-driven rendering? Is it just a way to skip the CPU draw-call bottleneck? Is it a performance revolution?
     
    bb8_1 likes this.
  31. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    @Wawruch2 GPU-driven rendering uses compute shaders to handle a bigger part of the rendering process, compared to the traditional way where most mesh and material preparation was done on the CPU. This can include GPU instancing, occlusion culling, geometry virtualization like Nanite does, or other things. (You could even count Lumen toward that, since it's compute-shader-based global illumination.)

    @AcidArrow Why is Nanite not a GPU-driven renderer? It surely uses the GPU to first virtualize some geometry and then render it, so..? There is more to it too, of course, like the data format and its preparation, which maybe makes it more than that, but the GPU-driven renderer surely is a big part of it, right?
     
  32. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    Unity can move some parts of the rendering they are doing on the CPU now to the GPU (say, just the culling) and voila, Unity now has GPU Driven Rendering. But that's not exactly a Nanite equivalent, right?

    Nanite is a super fancy virtualized geometry / LOD / mesh compression system, that happens to be GPU Driven.
     
  33. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    Nope, that's not an equivalent. But GPU-driven rendering is a broader term; Nanite is one approach to it, and so are other systems.
    Yes, it is. And since you mention compression, that is the biggest reason why it is not yet that interesting to me: data size. I am an indie developer using Unity atm, and I kind of care about my entertainment/GB ratio. :D But this is not about a battle of the engines; I care about neither hype nor loyalty in that regard. Epic has developed Nanite and Lumen? It's great that they try out new techniques in graphics! And afaik Unreal Engine is open source, so people can port those techniques to whatever engine they want - and they will, if it is worth it.
     
    bb8_1 likes this.
  34. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Yes, it's entirely about moving tasks like issuing draw calls, selecting which LODs to display, and camera/occlusion culling to the GPU.

    These are things developers have been doing on their own for years, since Unity notoriously uses far too much CPU time for those tasks. It's even worse in HDRP, which is why it's almost unusable on PS4/X1 (which have much lower single-thread performance than PCs) if you try to do open/large/dense worlds with it. I'm personally knee-deep in writing a custom clustered mesh renderer to optimize a game with large streaming urban environments, which is simply far too much for Unity on consoles.

    Keep in mind that the kind of GPU-driven rendering they are going for seems to involve producing draw calls on the GPU, which is only supported by DX12 and Vulkan, but I need more time to check their pull requests. This will be interesting to see, because the API needed for that (ExecuteIndirect) isn't currently exposed to C#.
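
    For reference, what C# does expose today is a single indirect draw whose argument buffer the GPU may fill in; a small sketch of just that piece, using Unity's documented five-uint argument layout:

```csharp
using UnityEngine;

// What C# exposes today: per-draw indirect arguments that the GPU can write
// (e.g. via ComputeBuffer.CopyCount), consumed by one indirect draw call.
// What it does not expose is D3D12 ExecuteIndirect-style submission, where
// the GPU emits a whole *list* of heterogeneous draws: from C# you still
// issue one DrawMeshInstancedIndirect per mesh/material pair.
public static class IndirectDrawArgs
{
    // Layout Unity expects for DrawMeshInstancedIndirect:
    // [0] index count per instance, [1] instance count,
    // [2] start index, [3] base vertex, [4] start instance.
    public static ComputeBuffer Create(Mesh mesh, int submesh = 0)
    {
        var args = new ComputeBuffer(1, 5 * sizeof(uint),
            ComputeBufferType.IndirectArguments);
        args.SetData(new uint[] {
            mesh.GetIndexCount(submesh),
            0,                           // instance count: GPU fills this in
            mesh.GetIndexStart(submesh),
            mesh.GetBaseVertex(submesh),
            0
        });
        return args;
    }
}
```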
     
    Wawruch2, ElevenGame and bb8_1 like this.
  35. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    And I don't care about any of this. I am just saying that a Unity dev posting "No. We don't have a GPU-driven renderer yet at Unity" does not mean "we'll get our own Nanite".

    That is all.
    Compression helps make sizes smaller, not bigger. You don't have to feed a ton of data to Nanite for it to be useful.
     
    Last edited: Jan 13, 2022
    Gooren likes this.
  36. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    Yup, you are right then. We will still get our "own Nanite" though, especially if it turns out to be a great way of rendering on the hardware of the time. If the benefits are big enough and the source is available, it WILL be ported; no one needs to state that, it is just a question of time.
    It is my understanding that the LOD precalculations (and the overall data structure) make a Nanite mesh much bigger than a plain mesh with some LOD variations. It is nice that they have included some compression too, but I don't think it is enough. Lighting these days moves toward fewer precalculations (e.g. fewer lightmaps) to achieve more dynamic game worlds, so I don't think precalculating the heck out of static meshes will stick around for too long either; it is huge in data and not dynamic at all. It is good, though, that they show how geometry virtualization can be useful for high-quality LODs, less overdraw, etc. There are use cases for that, but it still does not apply to gamey things such as swaying trees or dynamic objects. :D
     
    Last edited: Jan 13, 2022
    bb8_1 likes this.
  37. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,749
    It is my understanding that your understanding is wrong.

    But you know what part of Nanite I really miss? The part where a game engine releases a major feature that then works, as opposed to hyping up technologies with ego-fueling conferences that then arrive too late, if at all, and don't quite work, if at all.

    But the icing on the cake is that in the meantime, employees of this underperforming company are making nitpicky tweets about how Nanite is not really optimal and Epic is doing it wrong, while the community of said company constantly misrepresents what Nanite is and does and what its problems are.

    https://en.wikipedia.org/wiki/The_Fox_and_the_Grapes

    In the meantime, if Unity got its own Nanite right now, absolutely nothing would be solved.
     
    Last edited: Jan 13, 2022
  38. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I recommend people just make their games right now. It doesn't matter what fancy tech is coming if you haven't got a game to throw at it. So let's get cracking and make stuff in Unity. Forget the tech; let that problem be Unity's problem.

    Having a wicked game is the number one thing, and nobody has needed Nanite for that so far.
     
  39. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    No, watch their technical video.
    Also, Nanite has nothing to do with lighting or light precomputation.
     
  40. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    I did, but it was a while ago. So you are saying a mesh prepared for Nanite is smaller than a plain one? I understood that differently, but I'm eager to learn. Did you try it in Unreal 5? What's the data size difference?
    I know. I only mentioned lighting as an example of another part of rendering that is moving away from precomputation and static worlds. Don't get me wrong, Nanite might be just the perfect solution for rendering game backgrounds. But as long as you have to render skinned meshes on top in the traditional way anyway, you again face the problems which Nanite solves for static things (regular LODs, culling, polys smaller than a pixel, etc.). So it might be perfect for rendering movie backgrounds and things like that, and it is useful for games, but you can't make a game using "Nanite only".
     
  41. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Nanite uses quantization to compress vertex components like position, normal, and tangent (not sure if they quantize UVs). I don't know what exactly their vertex size is, but just by using 48 bits for position (instead of 96) and 32 bits each for normal and tangent (instead of 96 each), I got my custom meshes to use 40% less memory than standard Unity meshes.
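
    As a rough illustration of the same trick with Unity's mesh API (the exact formats here are my assumption, not Nanite's encoding): declaring a compact vertex layout shrinks roughly 320 bits of full-float position/normal/tangent data per vertex to 128.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Vertex quantization sketch: declare a compact layout so Unity stores
// positions as 16-bit floats and normals/tangents as 8-bit signed-normalized
// values. Full-float layout: 96 (pos) + 96 (normal) + 128 (tangent)
// = 320 bits; this layout: 64 + 32 + 32 = 128 bits. Formats are
// illustrative; Nanite's internal encoding differs.
public static class MeshQuantizer
{
    public static void Compact(Mesh mesh)
    {
        mesh.SetVertexBufferParams(mesh.vertexCount,
            // Float16 attributes come in groups of 2 or 4 components,
            // so position uses 4 halves (64 bits) with w unused.
            new VertexAttributeDescriptor(VertexAttribute.Position,
                VertexAttributeFormat.Float16, 4),
            new VertexAttributeDescriptor(VertexAttribute.Normal,
                VertexAttributeFormat.SNorm8, 4),   // 32 bits
            new VertexAttributeDescriptor(VertexAttribute.Tangent,
                VertexAttributeFormat.SNorm8, 4),   // 32 bits
            new VertexAttributeDescriptor(VertexAttribute.TexCoord0,
                VertexAttributeFormat.Float16, 2)); // 32-bit UV
        // After changing the layout you would re-upload the vertices in the
        // new format via mesh.SetVertexBufferData(...).
    }
}
```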

    Also, you seem to be assuming Nanite "bakes" the meshes into the scene, kind of like Unity's static mesh merging. That's not how it works: it processes the source mesh, converting it to a format Nanite can use. It can then be used for anything you could use a non-skinned mesh for, even procedurally generated games. For example, the cars in the Matrix demo are rendered using Nanite while they are undamaged, at which point they are replaced by standard meshes.

    And just because you can't do a game "entirely" with Nanite (since it doesn't work with vertex deformation) doesn't diminish its benefits.

    Of course, if your game takes place entirely in jungles full of swaying trees, or has the screen filled with mostly characters on top of a barebones background, then it won't do much, as those are different domain problems with different solutions. But most games have tons of static meshes, and are pushing larger and denser worlds.
     
    Last edited: Jan 14, 2022
    mariandev and bb8_1 like this.
  42. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    All right. But it also precalculates data about the mesh, which enables the virtualization. You'd have to add the size of that data to your now-compressed mesh size. If it didn't do that, the whole "enable Nanite for a mesh" step would not be necessary, and any static mesh could be rendered by it by default. So if you can, please check how the size of the whole rendering object changes, not just the mesh, because Nanite needs more than that.
    I see how you could assume that from what I said. :) But I did know that it is about any kind of static mesh, which can move and be thrown around by physics, etc. The term I used, "movie background", was exaggerated, I admit that. ;-)
    Yup, agreed.
     
    bb8_1 likes this.
  43. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    The precalculation is basically a tree of polygon groups, and it actually helps the compression: the various mipmap-like levels share similar data and thus compress well.

    I mean, Nanite basically just groups adjacent triangles into clusters of 128 - that's the main processing - then decimates groups of them and stores the decimated version as a parent for selection (see the sketch below).

    And given the density of the mesh, you can forgo global textures, since textures exist to supply detail to sparse meshes. That's the key: textures are always power-of-two and grow quadratically in size (an 8K texture has four times the texels of a 4K one), while Nanite geometry is arbitrary in size. Not having textures with their mipmaps (for the same level of detail) is the big saving you see with Nanite. And you can probably run an optimized decimation that removes the vertices contributing least to the shape before going into Nanite anyway.

    Technology ten years in the making; I can see how the 128-triangle group limit might come from geometry images.
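
    A toy sketch of that parent/child selection, assuming a prebuilt cluster tree where each parent stores a decimated stand-in for its children; the error metric, types, and thresholds are made up for illustration (Nanite itself uses a DAG and runs this on the GPU):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Toy Nanite-style cluster hierarchy: leaves are groups of ~128 source
// triangles, each parent is a decimated merge of its children. At runtime
// we walk the tree and draw a parent whenever its simplification error is
// too small to see at the current distance; otherwise we recurse.
public class Cluster
{
    public Bounds bounds;
    public float simplifyError;     // world-space error added by decimation
    public List<Cluster> children = new List<Cluster>();
    public int meshletId;           // handle to this cluster's triangle batch
}

public static class ClusterLodSelector
{
    // Collect the set of clusters to draw for a camera at 'cameraPos'.
    public static void Select(Cluster node, Vector3 cameraPos,
                              float errorBudget, List<int> outMeshlets)
    {
        float distance = Mathf.Max(0.01f,
            Vector3.Distance(cameraPos, node.bounds.center));
        // Rough projected error: world-space error shrinks with distance.
        float projectedError = node.simplifyError / distance;

        if (node.children.Count == 0 || projectedError < errorBudget)
        {
            // Coarse version is indistinguishable at this distance: draw it.
            outMeshlets.Add(node.meshletId);
        }
        else
        {
            foreach (var child in node.children)
                Select(child, cameraPos, errorBudget, outMeshlets);
        }
    }
}
```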
     
    bb8_1 likes this.
  44. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    The group size is probably chosen to align with the GPU warp/wavefront thread count (32/64); 128 is a multiple of both.
     
    bb8_1 likes this.
  45. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Obviously
     
  46. Scoth_

    Scoth_

    Joined:
    May 25, 2020
    Posts:
    9
  47. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Very interesting - extra info for people too lazy to dig through the comments but who are interested nonetheless.

     
    blueivy and Deleted User like this.
  48. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    792
    Unreal being "open source" is a common misconception; it isn't, in the usual sense. You get limited source access: you are not allowed to post large code snippets publicly, you may change the code only for personal use, no redistribution of the code...
     
  49. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    Okay, I didn't know much about the license details, tbh. Still, the source is available for people to see how something is implemented, so what I mentioned above is possible. It couldn't be a simple copy-and-paste of code anyway, which is what the restrictions you mention are about.
     
  50. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    This thread is now just general discussion, due to people turning up and making things up, with other people replying to that made-up stuff. It's so far away from HDRP that it is now just general babble.

    The best part of the thread is the progress of the Unity-based virtual geometry rendering, and its author is welcome to make a dedicated thread for it wherever they'd like.
     