
Nano Tech - Something similar to Unreal Nanite

Discussion in 'General Discussion' started by Antypodish, Nov 17, 2021.

  1. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
    I have no idea about the tech behind it, but it looks very similar to Unreal Nanite.
    Any thoughts?



    Also, the author (Chris Kahler) replied to one of the commenters:
    Q: Would this be on the Asset Store?
    A: Maybe next year, it also depends on how many Unity users want it. I was more thinking about a Patreon campaign with GitHub access.

    Edit:

     
    Last edited: Nov 22, 2021
  2. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,967
    Yes, this does look incredibly similar to Nanite. No idea what differences there are under the hood, but the overall approach seems the same.

    Which begs the question: when does Unity's internal version of the same thing come out? :p If one guy is making this, Unity must be at least looking into it after all the buzz Nanite has created, right?

    Nanite might be UE-specific, but the overall approach will become a standard across all engines, and I would be excited to hear/see something about what that may look like in Unity.
     
    Lapsapnow, Mlackey, Rewaken and 3 others like this.
  3. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,850
    My uninformed guess is this can be extracted from the WETA pipeline.
     
    MadeFromPolygons likes this.
  4. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    No. Real-time and cinematic render-farm-based rendering are two different beasts altogether; the culture around assets is different. In render-farm culture you are not looking to optimize performance (money does that), you are looking to optimize visuals and workflow speed. Which is why the paradigm shift of real-time cinematography is happening.
     
  5. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,850
    Apparently you are not taking into account that they have a real-time renderer to preview with prior to putting it through their cinematic render pipe. You are trying to educate someone who has been studying or building SFX since the manual days of the '80s and has been involved with game engine tech since 2009. I made my first stop-motion 16mm film in 1971. To believe that what Unity purchased from WETA for 1.6B USD will never make it into the real-time pipeline is naive, to say the least.
     
    MadeFromPolygons likes this.
  6. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,631
    For the answer to this question, just remember when SEGI was all the rage and everyone was wondering when Unity's internal version of fully dynamic GI would come out.
     
    MadeFromPolygons and NotaNaN like this.
  7. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    I'm not trying to educate you, I'm pointing at something you should have known given your experience, since you also didn't consider that real time in movies is different from real time in games.
     
  8. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,967
    That is a good point! I think this goes above the level of hype that SEGI generated though, as this time it's specifically their competitor's tech (and currently only commercially available to the masses over there).

    I am hoping that gives Unity a good kick up the behind to get into gear on this issue, but yes, I suppose it's best not to hold my breath :D
     
  9. CodeKiwi

    CodeKiwi

    Joined:
    Oct 27, 2016
    Posts:
    119
    Interesting, those Nanite examples are amazing. It would be pretty cool in VR where normal maps don’t work. I think if I was going to use something similar in Unity it would really need to be officially created by Unity.

    I feel like creating something that looks similar wouldn’t be too difficult. Break a high resolution mesh into LOD clusters with instanced materials. Then maybe add my own LOD level to swap these objects so the cluster size changes.
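    A minimal sketch of that idea, assuming the clusters and their LOD meshes were already generated offline (hypothetical ClusterLod component, not Nanite's actual algorithm):

    Code (CSharp):
    using UnityEngine;

    // Sketch: pick one LOD mesh per cluster by camera distance each
    // frame and draw the survivors in instanced batches.
    public class ClusterLod : MonoBehaviour
    {
        [System.Serializable]
        public class Cluster
        {
            public Mesh[] lods;           // lods[0] = full detail
            public Matrix4x4[] instances; // object-to-world, max 1023 per draw call
        }

        public Cluster[] clusters;
        public Material sharedMaterial;   // needs "Enable GPU Instancing" ticked
        public float lodDistance = 25f;   // assumed tuning value

        void Update()
        {
            Vector3 cam = Camera.main.transform.position;
            foreach (var c in clusters)
            {
                // Crude metric: distance to the first instance's translation column.
                float d = Vector3.Distance(cam, c.instances[0].GetColumn(3));
                int lod = Mathf.Min((int)(d / lodDistance), c.lods.Length - 1);
                Graphics.DrawMeshInstanced(c.lods[lod], 0, sharedMaterial, c.instances);
            }
        }
    }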

    Obviously getting something that performs at the level of Nanite is a different matter. For example, Nanite only loads the data required to render the scene. It also compresses the data, e.g. 1 million triangles compressed to 14 MB. In Unity a similar mesh would probably be around 75 MB, and that's before creating clusters and LODs. I'm sure there are also a bunch of details I'm missing, like how it prevents gaps when transitioning between a high and a low resolution cluster. I wonder if the source code for Nanite will be included in UE5.
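    As a rough back-of-envelope on those numbers (assumed vertex layout, not measured):

    Code (CSharp):
    // Typical uncompressed layout: position float3 (12 B) + normal float3 (12 B)
    // + tangent float4 (16 B) + UV float2 (8 B) = 48 B per vertex. With roughly
    // one vertex per triangle after UV/normal seam splits, plus three 4-byte
    // indices per triangle, a 1M-triangle mesh lands in the ballpark quoted above.
    static class MeshSizeEstimate
    {
        static void Main()
        {
            const long triangles = 1_000_000;
            long standardBytes = triangles * (48 + 3 * 4);        // 60,000,000 B, ~60 MB
            double naniteBytesPerTri = 14_000_000.0 / triangles;  // ~14 B (~112 bits) per triangle
            System.Console.WriteLine($"{standardBytes} B vs {naniteBytesPerTri} B/tri");
        }
    }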

    Personally I don’t think Unity will implement this. Unity seems to focus more on mobile games and fast prototyping. Although I guess this might change since Unity purchased Weta like ippdev mentioned.
     
    Deleted User likes this.
  10. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    You can look at it here:
     
    Antypodish and CodeKiwi like this.
  11. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    It's the toolchain, not the technique, that's the problem. You want to be able to generate these clusters really quickly, and I'm fairly sure there will be a ton of edge cases to deal with and research needed there. The brute-force way would take too long.
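    The naive end of that toolchain is simple enough to sketch; the hard part is clustering by spatial locality and building the hierarchy of simplified cluster groups on top. A hedged illustration, chopping an index buffer into fixed-size clusters (128 triangles being the size Nanite reportedly uses):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Naive cluster builder: split a mesh's index buffer into fixed-size
    // chunks. Real tools group triangles by locality and then build LOD
    // hierarchies per group, which is where the edge cases live.
    static class ClusterBuilder
    {
        public static List<int[]> BuildClusters(Mesh mesh, int trianglesPerCluster = 128)
        {
            int[] indices = mesh.triangles;        // 3 indices per triangle
            var clusters = new List<int[]>();
            int stride = trianglesPerCluster * 3;
            for (int i = 0; i < indices.Length; i += stride)
            {
                int count = Mathf.Min(stride, indices.Length - i);
                var cluster = new int[count];
                System.Array.Copy(indices, i, cluster, 0, count);
                clusters.Add(cluster);
            }
            return clusters;
        }
    }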
     
    CodeKiwi and NotaNaN like this.
  12. CodeKiwi

    CodeKiwi

    Joined:
    Oct 27, 2016
    Posts:
    119
    Thanks for the video GimmyDev. Looks like it was way more complex than I thought. If that Nano Tech is really using a similar technique then I’m really impressed. Generating the required data sounds really involved like hippocoder mentioned. I wonder if the person that made the Nano Tech demo used the UE5 toolchain, exported the data and then imported it into Unity. Then they’d only need to implement the rendering techniques.
     
  13. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    There is always the option of simplifying the problem by enforcing modeling guidelines, to make the problem domain easier. Nanite is a kind of "optimized" brute force that decouples the concern from artist to tech, which leads to an over-engineered, very generic solution. Over-engineered, overly generic solutions are what big companies do, because at their scale of resources it's the most competitive thing to do: it means less training for artists, and it also helps with sources like photogrammetry or film meshes (i.e. big polygon-soup messes) by essentially automating the conversion workflow. It's also clever because it can be seen as a form of potential lossy compression (you could simply cull the small leaves), and its byte size is favorable to streaming.

    But the same ideas over a less generic version, say one that requires strict quad-mesh modeling, could be an option too. The consistency at modeling time would simplify the algorithm (clear boundaries) and make it more predictable.

    But this is a world where Nanite already exists and the method is documented... Another, less generic implementation that pushes issues down the workflow isn't competitive, unless we are talking about scrappy, specific small projects that want to take a risk tailored to the nature of their work.

    I can see Unity evolving this into their own Nanite solution:

    which kinda looks like the old ROAM algorithm

    and this is similar to the less generic solution I was talking about, even though it's about decimation:
     
    Last edited: Nov 22, 2021
    DragonCoder and hippocoder like this.
  14. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    The thing is the fidelity being solved here is not within typical reach of AAA. When you listen to what the Coalition said about assets, you know they're already forced to reduce polys in order to sustain dev times.

    So given you need to reduce polys and yet still have enough to qualify using resolution independent tech, you're looking at the real problem being your own budget to author enough high quality source art to make it worthwhile.

    Right now, every 'pushing the envelope tech' requires more, not less work in order to make the most of it, and we can't even come close to saturating this tech as we can't source the assets for it.

    And if we did source the assets for it, we would still need to source everything else that sustains this level of detail. I can't help but think this is an interim polygon-chasing fancy, and that for indies at least, some form of deep-trained image enhancement is more effective. And eventually for AAA too, once the quality is sufficient.

    All this polygon hunting (which is essentially what it really is) isn't the future.
     
    angrypenguin and GimmyDev like this.
  15. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    Who thought traveling around the world with an expensive DSLR and color cards was easier than sculpting in ZBrush in mom's basement?
    /joke :p

    Nanite also ain't getting you The Lion King movie; they haven't solved fur yet :D
     
    hippocoder likes this.
  16. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    New tech always means more work. It increases productivity so then the boss expects more productivity.

    Like in the army: all the modern gear is way lighter than anything ever before, but soldiers now carry more weight than at any point in history. That's just the stupidity of humankind. We keep creating more problems. All the actual problems were solved ages ago; now every problem there is is our own doing.

    Anyway, I don't care too much about new tech coming out, but I'm thinking I'll probably start my next project in around another 6-12 months, and if UE5 is production-ready around then, I might be able to really save some time using Nanite.

    I want to make a game that takes place in a city, because there is tons of art for generic cities already available. I don't plan on hiring any environment artists to help, so the big task of creating tons of LODs for every model would be a real headache. It is looking like Nanite will pretty much negate that mountain of work: I can more or less just drop models in, and LODs are completely automated.

    I haven't actually looked into it besides watching that one promo video, but it looks like this may be a case where I can use new tech to actually save myself work. We'll see, of course.
     
  17. giraffe1

    giraffe1

    Joined:
    Nov 1, 2014
    Posts:
    300
    Is that video confirmed to be real?
     
  18. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
    This guy definitely made some Unity stuff work



    Also, his previous vids show his GPU-based dynamic cloth system.
    So this definitely isn't something pulled from thin air.
     
    Last edited: Nov 22, 2021
    C-T-Y and RoughSpaghetti3211 like this.
  19. Deleted User

    Deleted User

    Guest

    Fax.png

    Source:



    R.I.P. Nanite, you'll be remembered. (2018-2021)
     

    DungDajHjep and bb8_1 like this.
  20. TieSKey

    TieSKey

    Joined:
    Apr 14, 2011
    Posts:
    223
    Fixed that for u with the year Unity will add it in a workable state (?) :p
     
    bb8_1 and Deleted User like this.
  21. Deleted User

    Deleted User

    Guest

  22. Deleted User

    Deleted User

    Guest

  23. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Not only does this have nothing to do with this topic, you also lack a basic understanding of how both mesh shaders and Nanite work. Mesh shaders are not a Nanite replacement, since all mesh shaders do is unify the vertex/geometry/tessellation shader stages into a compute-like stage. Nanite can actually even use mesh shaders when available.
     
    C-T-Y, Deleted User and DragonCoder like this.
  24. Deleted User

    Deleted User

    Guest

    I know, guys, this has nothing to do with the original topic, but I've seen some users mention Nanite in this thread and I wanted to chime in :)
     
  25. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Those users are also misguided because DOTS and Nanite are even more unrelated. It's like comparing cars with houses: both can have people inside of them but serve entirely different purposes.
     
  26. WAYNGames

    WAYNGames

    Joined:
    Mar 16, 2019
    Posts:
    988
    What about camper vans?

    Joking apart, DOTS is many things, but not Nanite nor mesh shaders. However, maybe the "DOTS Renderer" can take advantage of them or draw inspiration from them.

    Is that something Unity is looking at, or are they completely ignoring it "for now"?
     
    Deleted User and bb8_1 like this.
  27. Deleted User

    Deleted User

    Guest

    Yeah, absolutely! Imagine Unity with an alternative solution like Nanite on top of DOTS: it would be an unstoppable beast!
     
  28. TieSKey

    TieSKey

    Joined:
    Apr 14, 2011
    Posts:
    223
    As Unity itself said, DOTS rendering is about rendering thousands of smallish things, not a hundred extremely big/complex ones.
     
  29. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,754
    Last edited: Apr 10, 2022
    Goularou and MadeFromPolygons like this.
  30. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Offtopic nonsense moved, thanks Anty for the hint.
     
  31. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,957
    Yes and no. In my opinion, Epic made a mistake by focusing on extremely high-polygon assets, when the real benefit is being able to use a single mesh across the entire scene rather than multiple copies at various degrees of quality, as well as having your draw calls minimized for you.
     
    Last edited: Jan 9, 2022
  32. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yeah, Epic has said as much: things don't necessarily need to be high definition. They seem to be taking pains to mention it these days.
     
  33. kite3h

    kite3h

    Joined:
    Aug 27, 2012
    Posts:
    192
    Nanite was created by implementing Wolfgang's visibility buffer rendering.
    So it can be said that Nanite is mainly composed of two parts:
    the visibility buffer and cluster pre-culling.
    The visibility buffer is a way to reduce the GPU cache misses caused by quad overdraw.
    Cluster culling was later called meshlet culling, and it is effective in reducing the CPU cache cost of PSO switching.

    Cluster culling or meshlet culling is a very effective method when rendering very large objects. Of course, there is a way to use the visibility buffer, but there is also a way to use tessellation.

    The visibility buffer is actually a bit vague. A 30% performance improvement is certain, but is it valuable to overhaul the entire render pipeline because of it?

    But nevertheless, if you are preparing a game for PlayStation, use of the visibility buffer will be essential.
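    For what it's worth, the core trick is small enough to sketch: each visibility-buffer pixel stores which triangle of which cluster won the depth test instead of a shaded color, and materials are resolved later from those IDs, so heavy shading never runs on quads that lose. A hedged illustration (the 25/7 bit split is an assumption sized for 128-triangle clusters; real layouts vary and may pack depth as well):

    Code (CSharp):
    static class VisibilityBufferIds
    {
        // Pack cluster + triangle IDs into one 32-bit visibility-buffer value.
        public static uint Pack(uint clusterId, uint triangleId)
            => (clusterId << 7) | (triangleId & 0x7Fu);

        // The deferred material pass decodes the IDs to refetch vertex data.
        public static void Unpack(uint packed, out uint clusterId, out uint triangleId)
        {
            clusterId = packed >> 7;
            triangleId = packed & 0x7Fu;
        }
    }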
     
    sammtan and Deleted User like this.
  34. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,957
    Wolfgang's blog mentions it's "a research project at our company since September 2015". Meanwhile, Brian Karis started in 2009. While it's possible that he may have taken ideas from Wolfgang's implementation, I suspect it's much more a case of multiple discovery, where different people come up with the same idea independently.

    http://diaryofagraphicsprogrammer.blogspot.com/2018/03/triangle-visibility-buffer.html
    https://en.wikipedia.org/wiki/Multiple_discovery

    Below is an article mentioning how Nanite varies from visibility buffer rendering, so even if the idea originated with him, they clearly took it one step further.

    http://filmicworlds.com/blog/visibility-buffer-rendering-with-material-graphs/

    Regardless of who invented it first, though, I'm happy to see something resembling a whitepaper on the topic.
     
    Last edited: Apr 28, 2022
    sammtan and Deleted User like this.
  35. kite3h

    kite3h

    Joined:
    Aug 27, 2012
    Posts:
    192
    Meanwhile Brian Karis started in 2009.

    Anyone can say that. Would you believe me if I said I thought of the same thing in the early 2000s? Before the Unreal Engine 5 preview, there were games that already used the visibility buffer, and there were more detailed papers on how to use it. Game technologies influence each other, and the best ones are often borrowed.
     
  36. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,083
    Ah yes, so he lied. Of course.
     
    Ryiah likes this.
  37. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    He has an old blog post somewhere that I recall reading, and I have no doubt he has done work since then. Can we not do this? We are all adults.
     
  38. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,957
    He provides links to his own blog posts to back his statements up.

    https://twitter.com/briankaris/status/1260591486250266624

    Yes, I would trust you until I had a reason not to trust you. Of course I would ask questions to try to learn more, but that's not a sign of distrust, that's just wanting to further my own knowledge on the subject.
     
    Last edited: Apr 29, 2022
  39. imDanOush

    imDanOush

    Joined:
    Oct 12, 2013
    Posts:
    368
    This looks promising. I'm looking forward to Unity officially supporting this asset or adding such a feature - alongside something like Lumen, though SEGI seems a decent temporary option.
    Kudos!
     
  40. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
  41. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,677
    Quite the impressive work!
    Am looking forward to the full campaign~

    That it does not require a new rendering pipeline is certainly impressive and will definitely improve the chance of success!

    I'm very curious about the workflow Nano Tech will involve, though:

    Is there a baking step or does it work even with procedural generation?

    What conditions do the meshes have to fulfill and how hard is it to use a random mesh and texture designed for "normal" usage?

    Does one profit from the performance boost even with a reasonable number of dynamic meshes that need to move independently? (Like a detailed interior where objects have rigid bodies for interactions etc. - the opposite of the landscape/terrain use case.)

    Do you plan to provide tools with solid usability to actually make use of the powerful features (like that large number of textures, which likely requires some form of dedicated editor)? As a tools dev (in a non-game field) myself, I hope you have alpha testers for those, because as devs ourselves we simply cannot assess the usability of what we have created.
     
  42. Nexusmaster

    Nexusmaster

    Joined:
    Jun 13, 2015
    Posts:
    365
    The import system will be very similar to Nanite's import process. Procedurally generated meshes are not on the agenda yet, but I created a tool similar to the "Dreams" pipeline, which might be added to Nano Tech in the future ("Dreams" meshes are based on SDFs).
    Currently there are no conditions/limitations on the mesh import, but I'm still testing different meshes; the plan is to have a QA team helping me with these kinds of "tests".
    The workflow is very simple: just import any random mesh as a "Nano Mesh" and the Nano Tech system will render it automatically.
    Currently the import time is too long, which I'm working on; right now it can take 1-2 minutes until the mesh is ready. This will be improved to run in a few seconds, using GPU power and the job system.
    The performance boost will be there even with low-poly models, because it's just one draw call for everything. You also get a dynamic LOD system under the hood automatically, which can adapt to the performance of the device it's running on. All meshes can move around dynamically in realtime (of course this will have a small performance impact if you have thousands of moving objects, but I use the job system for transform updates, which can update thousands of transforms in realtime).
    (Seen in the last part of this video:)
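    (Not Nano Tech's actual code, but the job-system pattern mentioned above looks roughly like this in Unity; MoveJob and its velocity array are hypothetical:)

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Jobs;

    // Sketch: update thousands of transforms off the main thread.
    // IJobParallelForTransform runs Execute once per registered transform.
    [BurstCompile]
    struct MoveJob : IJobParallelForTransform
    {
        public float DeltaTime;
        [ReadOnly] public NativeArray<Vector3> Velocities;

        public void Execute(int index, TransformAccess transform)
        {
            transform.position += Velocities[index] * DeltaTime;
        }
    }

    public class MassMover : MonoBehaviour
    {
        TransformAccessArray _transforms;  // filled elsewhere with thousands of transforms
        NativeArray<Vector3> _velocities;  // one velocity per transform

        void Update()
        {
            var job = new MoveJob { DeltaTime = Time.deltaTime, Velocities = _velocities };
            job.Schedule(_transforms).Complete();
        }

        void OnDestroy()
        {
            if (_transforms.isCreated) _transforms.Dispose();
            if (_velocities.IsCreated) _velocities.Dispose();
        }
    }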

    For more questions and answers please follow this link:
    https://forum.unity.com/threads/nan...ndering-for-hdrp-urp-and-built-in-rp.1292223/
     
  43. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    So many amazing things being made for the Unity engine by "third party" contributors! I really wonder what Unity has been doing these last 5 or 6 years with their massive budget and employee base. (shakes head in dismay)
     
  44. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,050
    Marketing.
     
    Rewaken and Deleted User like this.
  45. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    No need to be sour; there are actually good reasons for Unity not doing this right now. And they do have various experiments for all kinds of rendering. Imagine the community backlash if they brought out yet another experimental X while still having work to do on Y.

    For something bleeding edge, what Nexus is doing is absolutely a great thing. And if there is anything Unity can do to make it easier - like removing a blocker - then speak up, as Unity is much more inclined to enable doing a thing vs. doing the thing itself.
     
    MadeFromPolygons likes this.
  46. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,677
    To be fair, the Burst compiler is quite innovative, the HDRP render pipeline allows things you could not do before even with external assets (albeit some things are missing), and DOTS is well on its way to becoming great too (it already makes some things possible that previously weren't without source code access).
    The profiling features have also advanced a lot in the last 5 years.
    But yeah, much of their budget was not spent in a way that smaller devs profit from.
    A lot of it comes from investors, and those invest for growth; unfortunately, both parties know that growth cannot come from indies.

    @Nexusmaster
    Thank you for the answer! Sounds great and nice that you have a thread of your own for this project now.
     
    Rewaken and FernandoMK like this.
  47. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,083
    Personally, I'm glad this is a third-party thing in development and not first-party. If anything, third-party things like this existing are great, not just because it means we have more fine-grained control over what goes into our projects, but because it's also a great showcase of one of Unity's core strengths: its extensibility.

    If this were being developed in-house, it'd likely start eating up lots of the graphics team's time, time that could be better spent improving core functionality.
     
  48. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,957
    Building the render pipelines that act as the base for these contributions. While these contributions are great, it's easy to forget that they won't just function on their own. You need everything else there to support them: things that people don't want to, or can't, build themselves. Certainly not affordably.
     
  49. Enable people to do amazing things. That's their job.
     
  50. Rastapastor

    Rastapastor

    Joined:
    Jan 12, 2013
    Posts:
    589
    And then at some point the asset dev decides that he doesn't have the will or time to support the asset and leaves you with the DLL.
     
    Jakub_Machowski likes this.