
Question: How to render a large amount (millions) of meshes with decent FPS and memory?

Discussion in 'General Discussion' started by Aravind_B_SST, Dec 13, 2022.

  1. Aravind_B_SST

    Aravind_B_SST

    Joined:
    May 16, 2022
    Posts:
    5
    Hey guys,

    We are trying to develop a cross-platform application that renders objects created dynamically at runtime.
    Case 1: 1 million unique geometries (meshes).
    Case 2: 300 million triangles in a single mesh.

    Both of these cases result in very poor FPS and memory usage when rendered using MeshRenderer or the Graphics.DrawMesh API.

    Graphics.DrawMeshInstanced, Graphics.DrawMeshInstancedIndirect and the other instancing APIs will not work in our case, as we have unique geometries.

    Even though the above instancing APIs don't fulfill our requirements, we tried them out of curiosity. Graphics.DrawMeshInstanced produced below-par FPS, and Graphics.DrawMeshInstancedIndirect is not supported on WebGL.

    What is the maximum triangle and mesh count that can be rendered with a target FPS of 30?

    Any other suggestions that could fulfill the above requirements?
     
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    It does not work like that. You can drop FPS to single digits with one triangle if you use the right shader.

    In this scenario you're likely aiming too high.

    I can't find recent data on the number of triangles per second the latest GPUs can process, but there's a suggestion that an RTX 3xxx can reach up to 20 billion triangles per second. With your 300 million triangles that SHOULD produce 60 fps, but in practice it won't, because for that you'd need to render the mesh only once, and in a real scene the mesh will be rendered several times. For example, in forward rendering mode it may be rendered an extra time for additional lights, it WILL be rendered again for each shadow caster, and so on.

    When lighting is involved, you're increasing the number of triangles effectively being drawn.

    So in your scenario you'll need to start cutting corners. First see if you can even render it with an unlit shader, no lights, and shadows disabled.

    If you can't, that's it - end of the line.

    Most likely you'd need to introduce impostors in scenario #1, and try to split the mesh in scenario #2 and calculate occlusion data for it.

    Also, you'd need to spend time in the Frame Debugger to see what is eating your framerate the most.
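    For a stripped-down test like that, a minimal sketch could look like the following, assuming an unlit material and a list of already-built meshes (the class and field names are just illustrative, not from this thread):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Draws every mesh once per frame with an unlit material and shadows off,
    // so the measured cost is as close as possible to raw triangle throughput.
    public class UnlitThroughputTest : MonoBehaviour
    {
        public Material unlitMaterial;               // e.g. a material using the "Unlit/Color" shader
        public List<Mesh> meshes = new List<Mesh>();

        void Update()
        {
            foreach (var mesh in meshes)
            {
                // Last two arguments: castShadows = false, receiveShadows = false.
                Graphics.DrawMesh(mesh, Matrix4x4.identity, unlitMaterial, 0, null, 0, null, false, false);
            }
        }
    }

    If that still tanks, the bottleneck is raw geometry throughput rather than lighting or shadows.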
     
    Rewaken likes this.
  3. Max-om

    Max-om

    Joined:
    Aug 9, 2017
    Posts:
    486
    You need to trick the player into thinking he sees a highly detailed world when in reality he only sees a small part of it: LODs, impostors, streaming, etc. The upcoming Nanotech asset will probably help too.
     
  4. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,124
    Unreal Engine 5.
     
    Neto_Kokku, Rewaken, shikhrr and 4 others like this.
  5. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,355
    I know this is a Unity forum, but you might also want to research Unreal Engine 5's Nanite feature. Apparently it's a software system that automatically bakes multiple optimizations for large, densely-triangulated environments. I don't know whether the meshes need to be static or not.
     
  6. algio_

    algio_

    Joined:
    Jul 18, 2019
    Posts:
    85
    I would test the performance of case 2 (a 300-million-triangle mesh) with a non-dynamically-generated mesh; if that is good, make it procedural, and if not, lower the requirements.
     
    Last edited: Dec 13, 2022
  7. CodeSmile

    CodeSmile

    Joined:
    Apr 10, 2014
    Posts:
    4,019
    You mean the "wrong" shader, right? :D

    Anyway, I was going to say that 1 million unique (!) meshes is a terrifying amount! Consider investing in a server farm that handles the rendering. ;)

    Seriously, it would help to know where these (insane) requirements come from, what these meshes are, what they represent, and what the goal of the app is. Maybe what you're really looking for is a breakthrough idea that makes the whole thing a lot easier, rather than the brute-force "everything is a mesh" approach.

    For example, point cloud rendering is one way to achieve highly detailed, photo-realistic real-world scans in realtime 3D, but at the cost of extremely high memory usage and some visual oddities. Voxelization is another alternative, but with similar drawbacks. Or maybe you simply don't need every mesh to be unique. Maybe it could be done with textures or shaders instead?
     
  8. unitedone3D

    unitedone3D

    Joined:
    Jul 29, 2017
    Posts:
    151
    Dear Aravind_B_SST, just my 2 cents.

    https://forum.unity.com/threads/nan...endering-for-hdrp-urp-and-built-in-rp.1292223
    https://advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf

    In this forum thread, the creator is making a Nanite equivalent for Unity; it answers most of your questions. That said, as Invertex explained after I asked (having heard that Nanite is disk-space hungry), it can increase the size of your game drastically, depending on whether the content is organic or hard-surface. A UE5 dev had said his game was roughly 10x bigger with Nanite on; Invertex said it is 1.5x-100x bigger, so that dev's 10 GB Nanite-using game would be roughly 100 GB. I don't think hard drive space is such a problem today, with 10-15 TB drives available - though those are old mechanical SATA platter drives, not fast SSDs/NVMes; SSDs/NVMes will soon reach several TB too, so in that sense the problem is moot (gamers lacking drive space to install many 'huge' games requiring 150-300 GB or more, like COD; of course that size often means extremely high quality assets, little or no compression or per-asset optimization, and hundreds of thousands of those assets). What I'm saying is that if your game is not huge, Nanotech is a great tool to have (it does what Nanite does, but in Unity) and will save you a ton of time by not having to do all these optimizations by hand to make your game run at a playable FPS on lower-end hardware too.

    If your game is huge, then it's more problematic, because Nanotech will eat hard drive space. You would have to design your game around more 'organic' shapes, which cost little Nanite disk space, instead of hard-surface shapes, which are costly and wasteful in terms of Nanite disk demand. If you don't use Nanite/Nanotech at all, you can optimize your assets yourself and they will take less space than after converting them to be Nanotech-ready, because from what I understood it is not an on/off switch that applies the effect to everything at runtime: you must pre-convert the 3D models to the Nanotech format for the Nanotech script to read the assets and work.
    So that is extra work.

    It's the same as using impostors (as others said) and/or LODs. (Also check that your impostors have shadows; this is crucial. I found many impostors with no shadow support, and especially no self-shadowing, which is very important - otherwise your imposterized objects receive no shadows on themselves.)

    One reason I recommend Nanotech or impostors over LODs is the distance: with LODs it is clearly visible that some models lose detail in the distance. The model is far away, so it is small as a percentage of the total rendered image, but the reduction still shows on some models.

    It means you can't reduce the polygon count too quickly, or it shows: the model becomes blocky with distance. The best is a gradual reduction of polys with distance, so it isn't evident. I was watching scenery with very low-poly LODs in the mid-to-far distance and it looked very low-res, like PS2 graphics, not kidding. So if you reduce the poly count too quickly over the LOD distance, the game will look much more 'last gen', because you either can't see far in the distance and/or the models in the distance are too low-res and chunky.

    This is where impostors shine, but it's a long, tedious process; done in combination with LODs it's the best. You can combine that with GPU instancing (as you said), or use mesh combining - but be careful with this: combining a ton of models into one big model is great, except it can explode memory demand. I've read from devs who combined 100,000 little models into one model, and that single combined model ate gigabytes of RAM.
    So there are tools that do 'regional/local/sector/cell combining', where objects are combined per cell or chunk in a 3D grid. It's not 100% one mesh; it's, say, 5 parts, and those 5 parts represent the 100,000 objects combined (a rough sketch of the idea is below).
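    A rough sketch of that per-cell combining, assuming the small objects already exist as MeshFilters in the scene and using an arbitrary cell size (nothing here is from the thread, just one way to do it):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Groups MeshFilters into grid cells and bakes one combined mesh per cell,
    // so culling works per region instead of per tiny object and no single
    // combined mesh grows unbounded.
    public static class CellCombiner
    {
        public static List<Mesh> CombineByCell(MeshFilter[] filters, float cellSize)
        {
            var cells = new Dictionary<Vector3Int, List<CombineInstance>>();

            foreach (var filter in filters)
            {
                Vector3 p = filter.transform.position;
                var key = new Vector3Int(
                    Mathf.FloorToInt(p.x / cellSize),
                    Mathf.FloorToInt(p.y / cellSize),
                    Mathf.FloorToInt(p.z / cellSize));

                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<CombineInstance>();

                list.Add(new CombineInstance
                {
                    mesh = filter.sharedMesh,
                    transform = filter.transform.localToWorldMatrix
                });
            }

            var combined = new List<Mesh>();
            foreach (var instances in cells.Values)
            {
                var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
                mesh.CombineMeshes(instances.ToArray());   // merges the cell's geometry into one mesh
                combined.Add(mesh);
            }
            return combined;
        }
    }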

    Impostors are not a 100% solution either: close up, it shows. You need at least 4K resolution for close-up impostors, or they look like flat 2D billboards facing the camera no matter its angle, like in old 2.5D Build-engine games (Duke Nukem 3D, etc.).

    At 8K they are quite sharp and detailed, nearly the same as the regular 3D model, but rendering 8K impostors takes a long time. You only have to do it once, but it takes substantial hard drive space (8K impostor textures are not small), so you must optimize them, e.g. with crunch compression on the image itself and texture formats like DXT5 or BC7 that reduce their disk size. Still, as said, an impostor is not as good as the 3D model itself; but in the distance it is better, because it keeps the exact same detail whether the model is close to or far from the camera, while LODs always reduce poly count and thus make chunky, low-res models in the distance.

    My take: a combination of these techniques is best. It depends on your needs and what you want:
    do you need to preserve model detail in the distance, or do you not care, because low-res in the distance doesn't matter - it's far, so who cares?

    (In my case, I care, because I notice it, and AAA games rarely have this low-res-in-the-distance look; they do use LODs, but the reduction isn't drastic either.)

    As others said, you have no choice but to show only a parcel of the world at any time. By showing only a small chunk, you reduce resource use: hide/unhide, occlusion culling (GPU occlusion - CPU occlusion is not cheap), frustum culling, etc. - tricks that hide everything out of view. You can also go further and not just hide but deactivate:
    remove the asset from memory and reload it when it is seen again. That way you reduce the memory need, because hiding an object does not remove it from memory, it only hides it; you have to unload the asset and then reload it, depending on whether it is in view or needed.
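    To illustrate the hide-versus-unload difference, here is a minimal sketch, assuming runtime-generated chunk meshes that can be rebuilt on demand (the rebuild delegate is a hypothetical placeholder, not a real API):

    Code (CSharp):
    using UnityEngine;

    // Hiding a renderer saves draw calls but not memory; destroying a
    // runtime-created mesh and rebuilding it when the camera comes back
    // actually frees memory, at the cost of a reload.
    public class ChunkStreamer : MonoBehaviour
    {
        public MeshFilter chunkFilter;
        public MeshRenderer chunkRenderer;
        public float hideDistance = 200f;
        public float unloadDistance = 500f;
        public System.Func<Mesh> rebuildChunk;   // hypothetical callback that regenerates this chunk's mesh

        void Update()
        {
            float d = Vector3.Distance(Camera.main.transform.position, transform.position);

            chunkRenderer.enabled = d < hideDistance;            // hide: mesh stays resident in memory

            if (d > unloadDistance && chunkFilter.sharedMesh != null)
            {
                Destroy(chunkFilter.sharedMesh);                 // unload: frees the runtime mesh
                chunkFilter.sharedMesh = null;
            }
            else if (d <= unloadDistance && chunkFilter.sharedMesh == null && rebuildChunk != null)
            {
                chunkFilter.sharedMesh = rebuildChunk();         // reload when it may be needed again
            }
        }
    }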

    1 million objects will likely need serious optimizations (if not the DOTS/ECS tech): mesh combining, LODs, texture compression, occlusion, culling, load/unload depending on distance and view, hide/unhide, and keeping only a small periphery of assets active and in view.

    I wish to add: AAA games, I've noticed, have a far view distance. You can often see that they use tricks like fog in the distance and camera distance clipping: after a certain distance, objects disappear and there is nothing else.

    And often, I've noticed, in smaller games that cut-off is way too close (along with low-poly models in the distance), which can feel claustrophobic or is a sign the game is not AAA, because AAA games push the distance you can see, and that means using LODs and impostors to preserve the models in the far distance. Look at a game like Assassin's Creed Origins or Red Dead Redemption 2: they have a very far LOD distance and the games run well despite that. 'Seeing far' is a sign of a AAA game; even Escape from Tarkov (a Unity game) has a very far LOD distance and you can see for miles. That's not free in resources; it's very taxing to have far-away objects be seen clearly in the distance.

    Hence the trick of using fog to hide things in the distance and thus pull the camera clipping plane much closer. But as said, games that do that are either going for a claustrophobic look or doing it on purpose because viewing too far into the distance is too demanding on hardware, or the engine can't handle it. The other trick is to make a more claustrophobic game by design, mostly hallways and corridors ('corridor shooters') with no outdoors or wide spaces:
    everything happens inside, in hallways, etc.; you don't need to see far, walls block the view.

    Just 2 cents.

    PS: There is more good than bad in using Nanotech. Your players will need hard drive space: they can buy a new multi-TB (SATA) drive, and if they buy an ultra-fast NVMe/SSD, your game will fit on it but will eat most of the space (since those are often only 1-2 TB), so they will end up playing install/uninstall roulette, picking and choosing which big games to keep installed.

    PPS: One more thing: it was said that Nanotech/Nanite is still hard to use for non-static things (great for buildings and other non-moving objects, but still not great for vegetation or anything that moves). This 'virtualized geometry' tech is essentially model streaming, so it more or less requires an NVMe/SSD: your drive must have very fast reads (not an old SATA disk). Otherwise you get momentary catch-up and low-res detail while the scene updates and the data streams in, just like virtual texturing catching up as it reads the texture stream from disk. The great thing about it is the memory (RAM/VRAM) cost: it is much lower with virtual texturing than with regular textures loaded into GPU VRAM. Still, if you use compression like BC7 or DXT5 plus lossy crunching, and stick to 4K instead of 8K textures, you save a lot of VRAM anyway, though not as much as with virtual texturing or, for 3D models, Nanotech. That's what Invertex explained: you are trading memory for disk space. So the big caveat for Nanotech is disk space and read speed.

    PPPS: Others mentioned shadow casters - very important, shadow casters and per-pixel lighting can tank the frame rate - and also shader complexity. Deferred rendering is better for that (more pixel lights possible). Check the shadow caster count, the batch count and the draw call count; these increase very rapidly and overwhelm the engine. Still, there's no magic button to make them lower; several AAA games have 50,000 draw calls and they made it work somehow (probably using ECS/DOTS tech etc.). Also use dynamic resolution scaling: reduce the complexity of everything depending on the hardware and/or reduce the rendered resolution.
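    For the shadow and lighting side of that, a small sketch of the kind of global knobs meant here, using built-in QualitySettings (the specific values are only examples):

    Code (CSharp):
    using UnityEngine;

    // Cheap global switches to measure how much shadows, per-pixel lights and
    // LOD bias contribute to the frame time before doing anything more invasive.
    public class QualityTweaks : MonoBehaviour
    {
        void Start()
        {
            QualitySettings.shadows = ShadowQuality.Disable;   // drop all shadow casters
            QualitySettings.pixelLightCount = 1;               // limit forward per-pixel lights
            QualitySettings.shadowDistance = 50f;              // if shadows stay on, shrink their range
            QualitySettings.lodBias = 0.7f;                    // switch to lower LODs sooner
        }
    }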
     
    Kaohmin and ippdev like this.
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    That's only if their geometry can even be processed by nanite. The whole thing smells like something weird or a special use case, like polygonizing a CT scan or displaying some sort of external data.

    Because a video game normally would not need numbers like these.
     
  10. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,745
    What's your target hardware? Is the geometry being generated procedurally or is it being imported at runtime? Do you actually need that high poly density? If you're trying to push 300,000,000 polygons to render, that's the equivalent of rendering a polygon for every pixel of a 1080p display nearly 150 times over.

    You're not giving nearly enough information here.
     
  11. lmbarns

    lmbarns

    Joined:
    Jul 14, 2011
    Posts:
    1,628
  12. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    The OP's issue is that he wants unique geometry for those triangles. 10k dragons would use instancing and will be cheaper to render.
     
    BrandyStarbrite and QJaxun like this.
  13. Deleted User

    Deleted User

    Guest

    Exactly, it sounds like visualizing some infrastructure project with raw data coming from CAD engineers. There it's no surprise to go above 100M triangles per model.

    [Attached image: upload_2022-12-14_11-4-21.png]

    I took this from a presentation by a company that developed its own "rendering codec" and a full toolset on top of Unreal to handle such datasets. They're presenting a visualization of the ITER reactor construction.

    And it seems like a reasonable solution. Even with Nanite, while the engine's rendering can handle huge meshes, the tooling might not be prepared for such an insane triangle count. Just importing a few-million-triangle mesh into the engine can take a lot of time.

    (Ignore stupid presentation name, it actually has nothing to do with metaverse blablabla, pure engineering talk)

     
  14. Max-om

    Max-om

    Joined:
    Aug 9, 2017
    Posts:
    486
    Nanite/Nanotech clusters meshes. I wonder if it wouldn't actually be able to swallow the biggest of CAD files.

    I have my home in SketchUp with a lot of detail; when Nanotech arrives I will try it to see how well it runs :)
     
  15. Aravind_B_SST

    Aravind_B_SST

    Joined:
    May 16, 2022
    Posts:
    5
    Hi Everyone,

    Thanks for all the replies.

    I am a rookie in Unity; apologies for the incomplete data in the OP. I will try to provide a detailed explanation of the requirement and why we are doing this.

    We are not building a game here, but a proof of concept in Unity that renders 2D CAD data.
    Mostly these data are lines, but the amount depends on the size of the building.

    A very, very small example region of a floor plan that needs to be rendered is attached below as a screenshot.

    As the data source is an external application, we have no control over the maximum number of meshes or lines that need to be rendered. We are not expecting to render whatever input is thrown at the application (the amount of triangles/lines rendered will certainly depend on how good the hardware is), but at least we need to know the correct approach to render this in Unity.

    Initially we tried rendering these data using LineRenderer and MeshRenderer (with lines topology), but we faced a lot of memory overhead due to the GameObjects with Renderer components.

    Test results (GPU: AMD Radeon R7 450, 4 GB):
    Case 1: 100,000 triangles, 1 MeshRenderer, 1 FPS
    Case 2: 50,000 triangles, 1 MeshRenderer, 2 FPS
    Case 3: 50,000 triangles, 5 MeshRenderers, 15 FPS

    Then, to simplify things, we rendered a chunk of line geometry (100K lines, lines mesh topology) inside a single UnityEngine.Mesh with the Graphics.DrawMesh API. Our memory issues were solved, but we still had issues with FPS.

    We got better results with Graphics.DrawMeshInstancedIndirect, but unfortunately the data we get doesn't allow instancing to work, as the geometries are unique. Based on your feedback we understand that 300 million is too much for any GPU and that it's not realistic to render 300 million lines; we agree with that. We are only looking for the best approach to render the maximum amount of unique lines.
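    For reference, a minimal sketch of the single-mesh line approach described above, assuming the segments arrive as pairs of endpoints (the 32-bit index format is only needed once a chunk exceeds 65k vertices; names are illustrative):

    Code (CSharp):
    using UnityEngine;

    // Builds one mesh with MeshTopology.Lines from a flat array of segment
    // endpoints (two consecutive points per line) and draws it each frame.
    public class LineChunk : MonoBehaviour
    {
        public Material lineMaterial;   // ideally an unlit shader
        Mesh _mesh;

        public void Build(Vector3[] segmentPoints)
        {
            _mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
            _mesh.vertices = segmentPoints;

            var indices = new int[segmentPoints.Length];
            for (int i = 0; i < indices.Length; i++) indices[i] = i;

            _mesh.SetIndices(indices, MeshTopology.Lines, 0);
            _mesh.RecalculateBounds();
        }

        void Update()
        {
            if (_mesh != null)
                Graphics.DrawMesh(_mesh, Matrix4x4.identity, lineMaterial, 0);
        }
    }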
     

    Attached Files:

  16. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,988
  17. Aravind_B_SST

    Aravind_B_SST

    Joined:
    May 16, 2022
    Posts:
    5
  18. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,875
    If you need CAD, you need PIXYZ to use Unity even moderately well. Otherwise look at Unreal 5. (And yes, PIXYZ is an extra, expensive cost.)
     
    Deleted User likes this.
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    You're doing something strange here. With only 100k triangles, Unity should run circles around it.

    Profile the application and see what is happening. LineRenderer is not really good for drawing plain lines (it is meant for drawing WIDE lines), plus there's a chance that it is updating every frame. For wireframe rendering you'd really need to be using line primitives.

    https://docs.unity3d.com/Manual/Profiler.html

    You'll also want to segment your line data, most likely, for faster performance. This isn't hard and can be done with a script (a rough sketch follows below).
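    A rough sketch of that segmentation, assuming the floor plan is 2D and segments can simply be bucketed by their midpoints into a grid of chunks (the chunk size is arbitrary):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Buckets 2D line segments into square chunks so each chunk can become its
    // own small lines-topology mesh and be frustum-culled independently.
    public static class LineSegmenter
    {
        public static Dictionary<Vector2Int, List<Vector3>> Segment(
            IList<Vector3> segmentPoints, float chunkSize)
        {
            var chunks = new Dictionary<Vector2Int, List<Vector3>>();

            // Two consecutive points form one segment; assign it by its midpoint.
            for (int i = 0; i < segmentPoints.Count; i += 2)
            {
                Vector3 mid = (segmentPoints[i] + segmentPoints[i + 1]) * 0.5f;
                var key = new Vector2Int(
                    Mathf.FloorToInt(mid.x / chunkSize),
                    Mathf.FloorToInt(mid.y / chunkSize));

                if (!chunks.TryGetValue(key, out var list))
                    chunks[key] = list = new List<Vector3>();

                list.Add(segmentPoints[i]);
                list.Add(segmentPoints[i + 1]);
            }
            return chunks;
        }
    }

    Each bucket can then be turned into its own lines-topology mesh, so only the chunks inside the camera frustum get drawn.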
     
  20. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,792
    Unity has CAD solutions if yer willing to pay for Pro and a few add-ons from the engineering bundle.
     
  21. Andy-Touch

    Andy-Touch

    A Moon Shaped Bool Unity Legend

    Joined:
    May 5, 2014
    Posts:
    1,445
    Honestly, you would probably have an easier time handling that data with UE5. Nanite is built to handle this kind of scale, and it's super easy to enable and debug.
    However, you would then be restricted to very specific platforms and outputs.
     
    MadeFromPolygons likes this.
  22. algio_

    algio_

    Joined:
    Jul 18, 2019
    Posts:
    85
    The FPS is too low and could be greatly improved, but some conditions are not clear:
    How do you pass the data from the external application to Unity?
    How do you convert this data to a Mesh?
    Does this happen in realtime?
     
  23. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,988
    How about using virtual/mega texturing? Get a really high-res image of that layout; you can zoom in and out.
    (Not sure if it gets blurry at some distances?)

    For drawing verts,
    here's 432 million points on screen in the editor,
    without really any optimizations (just a regular geometry shader brute-force drawing quad points from an array):
    https://forum.unity.com/threads/released-point-cloud-viewer-tools.240536/page-8#post-5115404

    Same scene with "LOD" (using a 3D grid, fewer points per cell, based on distance):
    https://forum.unity.com/threads/released-point-cloud-viewer-tools.240536/page-8#post-5152982

    Surely it's doable, and since it's 2D only, other optimization techniques could be available.
     
  24. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,574
    Too much speculation here, too few details.
    Question to the OP: what exactly are you doing?

    What is the current source of the models?

    Is it CAD, a point cloud, a highly detailed mesh?

    What is the end goal? A game, visualization, a static image, real-time dynamic mesh loading?

    Can the models be pre-optimized before loading into Unreal or Unity?
     
  25. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    8,988
    Voronoi likes this.
  26. kinorotpirsum

    kinorotpirsum

    Joined:
    Dec 14, 2022
    Posts:
    10
    In my opinion this means you can't reduce the polygon count too quickly, or it shows: the model becomes blocky with distance. The best is a gradual reduction of polys with distance, so it isn't obvious. I was viewing a landscape with very low-poly LODs in the mid-to-far distance and it looked very low-res, like PS2 graphics, no kidding. So if you reduce the poly count too quickly over the LOD distance, the game will look much more 'last gen', because you can't see far and/or the distant models are too low-res and chunky.
     
  27. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    That floor plan can definitely be simplified. Since it's 2D and all the user can do is zoom in and out, you can take advantage of that by doing the following:

    1) Slice the mesh into chunks so that the camera can cull the parts that are outside of view when zoomed in.

    2) Generate LODs for each chunk and set up a LODGroup component on each one to transition based on screen size. This will reduce the triangle count when zooming out and more chunks become visible (a minimal setup sketch follows at the end of this post).

    300 million triangles at 60 fps means 18 billion triangles per second. That's RTX 3xxx territory. Also, if said triangles are thin/small and all clumped together, performance gets even worse.
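    A minimal sketch of step 2, assuming each chunk already has a full-detail and a simplified renderer available (the simplification itself is not shown, and the thresholds are just examples):

    Code (CSharp):
    using UnityEngine;

    // Attaches a LODGroup to a chunk so the detailed mesh is only used when the
    // chunk covers a large portion of the screen (i.e. the user has zoomed in).
    public static class ChunkLodSetup
    {
        public static void Setup(GameObject chunk, Renderer detailed, Renderer simplified)
        {
            var lodGroup = chunk.AddComponent<LODGroup>();

            var lods = new LOD[]
            {
                new LOD(0.25f, new[] { detailed }),    // above 25% of screen height: full detail
                new LOD(0.01f, new[] { simplified })   // 1%-25%: simplified; below 1%: culled
            };

            lodGroup.SetLODs(lods);
            lodGroup.RecalculateBounds();
        }
    }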
     
    Last edited: Dec 15, 2022
    MadeFromPolygons and Voronoi like this.
  28. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    571
    Based on that image, and the idea that this is some kind of large blueprint to be panned and viewed, I would also consider approaching it more like a geo map and converting it to a raster format. This assumes it's not generated in real time and things don't need to be selected.

    I did something similar where I needed to let the viewer scan an image that was about 180,000 pixels wide. I broke the image up into 8192 px squares, with two slices per square, and stitched them together in the scene. Using mip-map streaming and, I think, virtual texturing, the viewer could zip from one end of the image to the other with no tiling or lag (like you get with other gigapixel viewing techniques, where it starts out blurry and then gradually loads the tiles). They could zoom in or out as well with no obvious lag.

    Depending on your use case, the nice thing about an image-based approach is that it's scalable. I was able to run this on a 4K touch screen and on mobile devices (limiting the tiles to 4096) and performance was great.
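    A small sketch of that kind of tiled layout, assuming the slices have already been imported as individual textures (the tile size, naming and unlit shader are assumptions, not the exact setup described above):

    Code (CSharp):
    using UnityEngine;

    // Lays out pre-sliced image tiles as a row of quads so a very wide image
    // can be panned and zoomed without ever binding one enormous texture.
    public class TiledImageStrip : MonoBehaviour
    {
        public Texture2D[] tiles;         // slices of the source image, left to right
        public float tileWorldSize = 10f;
        public Shader unlitShader;        // e.g. Shader.Find("Unlit/Texture")

        void Start()
        {
            for (int i = 0; i < tiles.Length; i++)
            {
                var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
                quad.transform.SetParent(transform, false);
                quad.transform.localPosition = new Vector3(i * tileWorldSize, 0f, 0f);
                quad.transform.localScale = Vector3.one * tileWorldSize;

                var mat = new Material(unlitShader);
                mat.mainTexture = tiles[i];
                quad.GetComponent<MeshRenderer>().material = mat;
            }
        }
    }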
     
    mgear likes this.
  29. GradientOGames

    GradientOGames

    Joined:
    Feb 8, 2021
    Posts:
    28
    I know this thread might be a bit old, but for anyone new here looking for answers: there is a third-party Unity version of Nanite currently being developed by Chris-K.

    Really, go check him out (demo video of 1 billion triangles with good FPS):


    Check out his channel and join his Discord. I am not affiliated with him in any way; his stuff is just absolutely incredible. He is working on GPU-based physics simulation, and even a version of Nanite and Lumen in Unity.
     
  30. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,574
    It is off topic, as the proposed solution is not production ready. It does not address the OP's question. The OP wanted a currently viable solution, not some possible future tech.
     
  31. GradientOGames

    GradientOGames

    Joined:
    Feb 8, 2021
    Posts:
    28
    I was assuming that people would see it in a few months (roughly when it is planned for release). I guess it was dumb of me to post an unreleased project.
     
  32. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 20, 2015
    Posts:
    9,903
    Well, I donated over $100; it got me access to a private repo that has been saying the same thing for over a year.
    This is what you get. And again, this was last updated last summer; since then, radio silence. To be honest, I don't think this project is going anywhere at this point.
     
    neginfinity likes this.
  33. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,459
    It is definitely still WIP, but I wouldn't say radio silence. The dev has posted 3 new videos on Nanotech in the last few weeks: https://www.youtube.com/@ChristianKahler
     
    GradientOGames likes this.
  34. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    Kinda reminds me of atomontage development speed.