
Unreal Engine 5 = Game Changer

Discussion in 'General Discussion' started by DigitalAdam, May 13, 2020.

Thread Status:
Not open for further replies.
  1. kburkhart84

    kburkhart84

    Joined:
    Apr 28, 2012
    Posts:
    910
    I'm hoping my current tower lasts those few more years until DDR5 is out. I'd much rather purchase at the right time.
     
  2. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    Btw, this kind of texture hack isn't new; for example, baked physics like destruction can be stored this way and then replayed directly on the GPU with close to zero overhead.
     
  3. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,678
    Well, you can store just about anything in a texture. From object positions/velocities to mesh data. Geometry images are a quite unique idea though. Finding a good way to automatically unwrap any mesh into a square shape without terrible precision loss in some areas looks far from trivial.
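As an aside, the simplest case of "storing data in a texture" is easy to sketch. The toy below (purely illustrative, not engine code) packs a 3D position into an 8-bit RGB texel; it assumes the data lives in a known bounding box so it can be normalized before quantizing, which is also exactly where the precision loss mentioned above comes from:

```python
# Toy sketch: baking a 3D position into one 8-bit-per-channel texel.
# LO/HI are an assumed bounding box for the data, chosen for illustration.

LO, HI = -10.0, 10.0

def encode_texel(pos):
    """Quantize a 3D position into an 8-bit RGB texel."""
    return tuple(round((p - LO) / (HI - LO) * 255) for p in pos)

def decode_texel(texel):
    """Recover an approximate position from a texel."""
    return tuple(c / 255 * (HI - LO) + LO for c in texel)

texel = encode_texel((1.25, -3.5, 7.0))
approx = decode_texel(texel)
# With 8 bits over a 20-unit range, each axis quantizes in steps of
# 20/255 ≈ 0.078, so the round-trip error is at most ~0.04 per axis.
```

Real pipelines would use float or packed formats rather than 8 bits, but the normalize-quantize-reconstruct shape is the same.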
     
  4. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,150
    Just keep in mind memory prices may not be as reasonable for next generation memory as they are right now. A normal consumer desktop currently caps out at 128GB and you can purchase a 128GB DDR4-3600 CAS 18 kit for just under $600.

    https://pcpartpicker.com/product/qx...-x-32-gb-ddr4-3600-memory-f4-3600c18q-128gtzr

    Furthermore it's important to remember any and all problems companies had with new memory standards. AMD's first generation of Ryzen CPUs struggled to achieve good performance. My X370 motherboard for example requires stepping down to 2666 from its highest speed of 3200 to be able to use more than two memory sticks.
     
    Last edited: May 16, 2020
  5. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    Zen+ too; it was only with Zen 2 that it got really stable. I have built systems with the 1800X, the 2700X, and the 3950X, and it's only with the 3950X that I can run my memory at its rated 3200 MHz CL14.

    Edit: pretty awesome, though, that I could use my old X370 mobo for all 3 builds
     
  6. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    This tech, though, uses textures to store data, which means it will utilize VRAM, which is much faster than DDR4.
     
  7. kburkhart84

    kburkhart84

    Joined:
    Apr 28, 2012
    Posts:
    910
    Yup, and I'm really hoping this tower will last long enough for those types of issues to be discovered and fixed (or worked around).
     
    schmosef likes this.
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    We know that part for sure, no speculation; I have already shared the exact quote from the technical director from the Eurogamer article earlier in this thread, so I'll repost:
    My speculation is that Lumen and Nanite, while kinda different, use the same structure, because having two would be redundant, and by tightly coupling them you solve many problems. Think about it: light needs access to geometry data for GI, and frustum and occlusion culling need visibility computation, which is done by lighting too (GI is a visibility problem). It makes sense to unify the two.

    The missing link is the jump from an SDF to the geometry image. My guess is that they trace the large voxels down to the SDF brick, then use the SDF to trace within the brick, and then, when they hit a leaf, they dispatch a rasterization. The missing part is how they deal with partial occlusion of a leaf, so that the ray can continue if there's no hit. It would make sense to separate each batch of compute to avoid cache misses.

    I'm currently watching the Claybook talk to get some insight on SDF tracing. Claybook is also a great example of the SDF/voxel combo scaling down to lower-end tech such as the Switch.

    It's worth noting that the creator of Claybook now works at Unity... so half the requirements to implement Lumen, and maybe Nanite, are there. Though the edge cases of the implementation are probably the real hard part, not the overall idea.
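For anyone unfamiliar with the SDF tracing being discussed, the core loop is just sphere tracing: step along the ray by whatever distance the SDF reports until you get close enough to a surface. A minimal toy version (a sphere SDF plus a ray march, purely illustrative and nothing to do with Epic's actual implementation):

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to an assumed example sphere."""
    return math.dist(p, center) - radius

def march(origin, direction, sdf, max_steps=128, eps=1e-4):
    """Sphere tracing: advance by the SDF distance until a hit or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe step: nothing is closer than d
    return None               # miss

hit = march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere)
# a ray straight down +z hits the unit sphere at z=5 at t ≈ 4.0
```

The brick/leaf dispatch speculated about above would wrap a loop like this per brick; the sketch only shows the inner marching step.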
     
  9. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,879
    A bit late to that party but I actually agree.

    I think trying to shoehorn the current engine into a different shape is messing Unity up, when it should have just made a new Unity altogether, given that they have learnt SO MUCH since they started their "refactor".

    I actually think that is probably already happening in the pipelines but not announced, given how things have gone, but that is complete CONJECTURE and should not be taken by anyone reading this as substantiated.
     
  10. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,764
    So what?
     
    hippocoder likes this.
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    The first thing I thought about was this: the idea in the geometry image paper looks terrible to me; the main culprit is flattening curvature onto a flat surface, which leads to stretching. It also looks incredibly complicated to understand and implement. My first idea was to break the problem into smaller patches: the finer the sampling, the closer the curvature gets to a plane, and with smaller patches you minimize stretching at the patch scale because global curvature gets smaller.

    Then I remembered this paper:
    http://vcg.isti.cnr.it/polycubemaps/resources/sigg04.pdf
    upload_2020-5-16_15-51-11.png
    Probably not what Nanite does, but that would be perfect with geometry images. And it would "mesh" nicely with the grid-like structure we have with voxels and SDFs; it's also close to an experiment Alex Evans did for Dreams in his talk, where he used parallax occlusion mapping on a blocky approximation to reconstruct the surfaces.



    EDIT:
    the engine is said to take billions of triangles down to 20,000,000 drawn triangles on screen ...
    I realized that's the Mali-400 MP1's peak triangle throughput! ... per second lol
    I think that can scale
     
    Last edited: May 16, 2020
  12. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    "Please note: The digital character in this package is provided under a restricted license for educational and non-commercial use only and attribution to Unity is required. Please read the license terms. The technology stack is under Unity’s standard Asset Store EULA."

    ...
     
    Tanner555 likes this.
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    this is why nobody cared
     
  14. shredingskin

    shredingskin

    Joined:
    Nov 7, 2012
    Posts:
    242
    The whole "you need an SSD that travelled through time to run this" thing seems more like an ad than the actual requirements. We know that it will also run on the newer Xbox, and I highly doubt they aren't going to target lower specs.
     
  15. EternalAmbiguity

    EternalAmbiguity

    Joined:
    Dec 27, 2014
    Posts:
    3,144
    https://www.reddit.com/r/XboxSeries...ech_demo_on_a_rtx_2080_970_evo_plus_notebook/

    Twitter comment referenced there: https://twitter.com/wangxingyu1999/status/1261694867622596608

    From resetera, about the same video:

    Another resetera post about the video
     
    Last edited: May 16, 2020
    Kirsche likes this.
  16. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,764
    The Xbox Series X also has a massively overhauled IO pipeline.
     
  17. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    BTW, just a simple observation: they brag about bringing billions of triangles seamlessly down to 20 million drawn triangles on screen, but the demo is 1440p, which is only 3.6 million pixels... that's only about 18% of the drawn triangles, i.e. more than 5 triangles per pixel o_O. Is this the overdraw?
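For reference, the arithmetic behind those numbers (2560×1440 versus the claimed 20 million triangles):

```python
# Back-of-the-envelope check of the figures in the post above.
pixels = 2560 * 1440      # 1440p resolution
triangles = 20_000_000    # drawn triangles claimed for the demo

print(pixels)               # 3686400 -- about 3.7 million pixels
print(pixels / triangles)   # ~0.18  -- pixels are ~18% of the triangles
print(triangles / pixels)   # ~5.4   -- over five triangles per pixel
```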
     
  18. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,764
    Don't forget that GI calculations are going to have to include off-screen points if you want to get any degree of accuracy.
     
    neoshaman likes this.
  19. Vagabond_

    Vagabond_

    Joined:
    Aug 26, 2014
    Posts:
    1,145
    Go and also watch The Cherno (an ex-Frostbite programmer at EA), who guesses that this is all streamed from disk and also analyzes the whole video, etc... It's interesting...

     
    tcmeric, SunnySunshine, OCASM and 2 others like this.
  20. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    cherno is pretty cool, i watch his videos
     
    Vagabond_ likes this.
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    That's still only about 30% of the speed of the 448 GB/s GDDR6 RAM in the PS5.

    Will we need next-gen PCs with GDDR6 RAM as standard?
     
  22. splattenburgers

    splattenburgers

    Joined:
    Aug 4, 2017
    Posts:
    117
    So in theory the idea of "unlimited detail" should be pretty cool, right? But the thing is, you can't actually use sculpts or ultra-high-polygon meshes outside of Unreal 5/ZBrush/whatever modeler, which means you can't load them into 3rd-party applications like Substance Painter, Maya, Modo, or Blender for texturing or rigging/animating. Doesn't that make Nanite kinda useless? Unless the idea is to use ZBrush's internal texturing tools for texturing and then some as-of-yet-unannounced rigging and animation tools within Unreal 5 itself, I just don't see how this supposed Nanite feature is supposed to be useful. Maybe if 3rd-party applications start supporting Nanite meshes in the future, but in the meantime this Nanite stuff strikes me as a gimmick, tbh.
     
  23. cfree

    cfree

    Joined:
    Sep 30, 2014
    Posts:
    72
    After 8 years developing projects with both Unity and UE4 (I've also tried a little CryEngine and even Lumberyard), and a few shipped projects, TBH:

    I firmly believe UE5 will be a game changer for game development... Even if they deliver 50% of what they are promising, it will already be a game changer, because there will not be any other solution even close to it. Epic has the resources to do it with mastery (money, knowledge, management, partnerships with great studios). Time will tell, but it is a really safe bet. Just look at how the industry reacted. Just look at how the gamers reacted. Just look at how Epic conducts its business.
     
  24. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    that is true from a graphical/artistic point of view; UE5 is in 1st place by a significant margin.
    but from a programmer's perspective, unity is by and large superior

    also, talking about resources, you're saying that as if unity is not also a multi-billion-dollar company
     
    Last edited: May 17, 2020
    hard_code likes this.
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    The point of Nanite is for Unreal to serve as an end-point or final destination. You feed it high-poly assets and it converts them into a state the game can handle. So you no longer need to deal with retopology, poly reduction, normal-map baking, and so on. The engine editor handles that. So instead of taking a high-detail asset and wrestling with it for a few hours to make it engine-compatible, you just feed it into the engine directly.

    3rd party applications don't really matter here.
     
  26. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,150
    Keep in mind both of the next generation consoles only have 16GB RAM. Streaming content from the SSD will be the real bottleneck even though they're going with fast SSDs. Meanwhile the PC can very easily surpass 16GB RAM.
     
  27. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Remember why Z-buffer is a thing.
     
    AlanMattano likes this.
  28. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
     

    Attached Files:

    • mfw.jpg
    • ywreq.jpg
    Last edited: May 17, 2020
    schmosef likes this.
  29. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,150
    What are you basing this idea that they can't handle high polygon counts on? A quick search for high-polygon meshes turned up the following for Blender: a sculpting demo consisting of 45 million polygons, running on hardware that is far behind us now. UE5's tech demo has the statue at 33 million.


    Below is a link to a thread where someone asked what the upper limit is for Substance Painter; the response was that you can easily import and smoothly paint 15 million triangles on a mid-range GPU, which, based on the era of the post, is a card somewhere between a GTX 1050 and 1060 with only 4GB to 6GB of VRAM.

    https://polycount.com/discussion/17...r-highpoly-models-with-high-end-graphics-card
     
    Last edited: May 17, 2020
    AlanMattano and OCASM like this.
  30. cfree

    cfree

    Joined:
    Sep 30, 2014
    Posts:
    72
    My intention was not to compare Unity with Unreal. This thread's title affirms UE5 will be a game changer, and I mentioned some factors agreeing with it, IMHO. I called it a safe bet, but it is a bet right now, for sure.

    To compare, we would need to compare Unity 2020 with UE4, and Unity Technologies with Epic Games. All really awesome. But that is another thread. And I agree with your points ;)
     
    hippocoder likes this.
  31. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    ^^^ This is also true.

    Modern hardware handles ZBrush sculpts without significant difficulty. It's just that you can't NORMALLY plaster a ZBrush model 1,000 times around the scene without LODs and manual optimization.
     
    AlanMattano likes this.
  32. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    honestly, what's up with the hype for UE's Lumen?
    hasn't Enlighten been in every Unity version since forever?
    Unity has said they have a replacement realtime GI solution in the works ever since they split
     
  33. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,150
    Enlighten is being removed.

    https://blogs.unity3d.com/2019/07/0...-for-baked-and-real-time-giobal-illumination/

    Yes, but that's literally all they've said about it. Lumen's biggest advantage, though, is that it doesn't require baking, which can easily take hours to days with every solution out there. No baking is a massive productivity boost.
     
  34. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,764
    As Ryiah said, Enlighten is being removed, but also, Enlighten is absolute garbage as far as the version Unity implemented is concerned, and it requires a lot of work and horsepower to really use as an effective GI solution.
     
  35. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Enlighten has been deprecated, pretty much; it requires very long baking times and cannot utilize the GPU. It will be removed in future releases.

    Unity promised a replacement GI solution, but as of now it does not exist and hasn't been released in any form.

    The "alternative" to Enlighten at the moment is the Progressive Lightmapper, which does not really perform that great and is not GI; that's why 3rd-party assets like Bakery exist.

    So in practice, currently, to work with lightmaps in Unity your best bet is buying a 3rd-party asset. This state of affairs is not something that normally gives people confidence.

    The reason people are interested in Lumen is that it does not require baking, and it is reminiscent of NVIDIA's VXGI technology, which could also work on any scene geometry without a baking process.

    Your earlier picture also includes factually incorrect information, as Unreal Engine has been used for many genres, and as long as you're dealing with 3D, it is very usable: fighting games, racing games, RPGs, 3rd person, 1st person, RTS. It is rare to see Unreal used for 2D, but it would be best for Unity not to be reduced to a "2D framework".

    The usefulness of DOTS is dubious, and while it is Unity's favorite marketing term right now, it will be a long time till it is finished, and many games do not benefit from it at all.
     
    Last edited: May 17, 2020
  36. knr_

    knr_

    Joined:
    Nov 17, 2012
    Posts:
    258
    Having been with Unity since the days when I could get a free version while you had to pay upwards of US $1 million for an Unreal license, I never thought that Epic could win me over.

    They just have.

    Free up to $1 million in revenue, plus free online services, while Unity is upping their monthly fees and charging for online services; not to mention the financial benefits of the Epic store from a developer's perspective, etc...

    Even though I have invested heavily in Unity over the years, I am now officially out. Epic has won me and my teams over, and we will be porting all of our stuff over.

    We're experienced in C/C++ so that is not an issue, but we are aware that there is a C# layer out there that you can put on top of it. In our case, we'll just go back to dealing with memory management and multi-threading on our own; the value proposition is now just too great not to make the transition.
     
    Last edited: May 17, 2020
    jcarpay and hippocoder like this.
  37. tatoforever

    tatoforever

    Joined:
    Apr 16, 2009
    Posts:
    4,337
    It's not a gimmick, and it's not fake. It's truly hi-poly geometry, virtualized. To put it simply, the basic idea is: the engine encodes those hi-poly meshes into textures at import and smartly streams them to the GPU to be reconstructed there = profit.
     
    jcarpay likes this.
  38. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    huh, I hadn't heard all that much about Bakery. what are the pitfalls of the Progressive Lightmapper? i had the impression that it was fairly advanced and new, with their Russian roulette and denoising stuff, and that you would get unparalleled results if you really pushed up the parameters.

    is Bakery strictly a replacement for realtime GI?
     
  39. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    a DOTS-based game would absolutely be better in every way than a non-DOTS-based game, but it's not at all a requirement if the scripting in your game doesn't need more than one thread, vectorization, or fast object destruction/construction (or new DOTS-only exclusive features like netcode, low-level audio, animation blending, etc.)
     
  40. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    ...I'd suggest getting more experience with both engines, to be frank.

    The pitfalls are that it is still slow.
    https://assetstore.unity.com/packages/tools/level-design/bakery-gpu-lightmapper-122218#reviews

    The fun part, as I said before, is that before Enlighten, Unity used Beast, then scrapped its support, and people weren't happy about it. Now they're scrapping Enlighten too.
     
  41. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    why would Progressive be any slower than Bakery? wouldn't Bakery also just produce lightmap textures, albeit in a different way, and add them to LightmapSettings.lightMaps?
     
  42. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    That's nonsense. And you've been given examples earlier. Many times.

    The thing is, modern hardware is INSANELY powerful. To the point where you shouldn't need DOTS in the first place. In another thread some time ago I ran the numbers, and it turned out that a modern desktop can perform, in one second, an amount of calculation that would take a human 15 thousand years to finish. Likewise, since Atari/Commodore times, for example, computing power has risen by a factor of at least one million.
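As a sanity check of the "15 thousand years" figure, under one assumed rate (a human performing roughly one calculation per second, non-stop; that rate is my assumption for illustration, not the poster's):

```python
# Order-of-magnitude check, assuming ~1 human calculation per second.
seconds_per_year = 365 * 24 * 3600            # 31,536,000
human_ops = 15_000 * seconds_per_year         # operations in 15,000 years

print(human_ops)                              # 473040000000 (~4.7e11)
# A desktop CPU+GPU sustaining ~0.5 TFLOPS performs ~5e11 operations
# per second, so the claim is the right order of magnitude for that
# assumed human rate.
```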

    Instead of trying to build a brave new world with DOTS, Unity could provide a C++ API, and that would likely result in a comparable if not greater performance boost, without the extra effort required to convert to their unique approach. Yet here we are now...

    Either way, I hope you finish the DOTS honeymoon phase soon.

    Have fun.
     
  43. shredingskin

    shredingskin

    Joined:
    Nov 7, 2012
    Posts:
    242
    Mainly by not defaulting to the CPU every time you try to bake something, is my guess.
    About DOTS, even after almost 4 years I do have some hope for it.
    Imagine just clicking a checkbox in the Animator, "DOTS animation", and having all the animation run in a single job.
    Or just clicking a button and having all the static meshes used by the hybrid renderer.
    That would be non-intrusive and would help everybody.
    But right now it is not there yet (or anywhere, IMO).
     
    xVergilx and hippocoder like this.
  44. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Because Bakery uses a more efficient approach.

    Also, you'd need to ask Unity why their built-in lightmapper is slower than a 3rd-party asset. Or why both of their built-in lightmappers are slower than a 3rd-party asset.

    Things like this are the reason people get excited over Unreal tech demos. And why I wouldn't put any faith in the unreleased global illumination solution. And why I'm skeptical about DOTS.
     
    Last edited: May 17, 2020
    xVergilx likes this.
  45. shredingskin

    shredingskin

    Joined:
    Nov 7, 2012
    Posts:
    242
    I guess with GPU baking they'd have the excuse of staying brand agnostic.
    Though there are a lot of 3rd-party assets that blow Unity's execution out of the water.
     
  46. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    by *absolutely better* I mean that the game will be the same apart from the scripting backend. and of course, DOTS is faster and capable of more than Mono.

    anyway, IL2CPP already exists... and that's what you should really be using for release builds if you want to achieve C++-like performance without having to do anything at all.

    DOTS is performant due to multi-threading, avoiding cache misses, and vectorization, none of which is a default feature of using C++ either. it's not about the syntax or the language; DOTS actually uses LLVM, so it's just like IL2CPP.

    you would need to use a DOTS-like approach in C++ to achieve DOTS-like performance in C++.
     
    Last edited: May 17, 2020
    T0rp3d0 and OCASM like this.
  47. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    Unity supports compute shaders, so they absolutely can run GPU calculations while remaining mostly brand agnostic.
    https://docs.unity3d.com/Manual/class-ComputeShader.html
     
    Tanner555 and hippocoder like this.
  48. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,764
    Cool, but the scripting backend is a significant factor. The hybrid mode may offer some benefits, but the reality is that if you want performant code out of DOTS, you have to be able to deal with DOTS' still-high barrier to entry. DOTS is not simple. The boilerplate code issues, while improved from a year ago, are still plentiful. The visual scripting solution being built alongside it is years away from being a viable option at Unity's current rate of progress.

    DOTS will be "absolutely better" ages from now, when it's as easy to use as MonoBehaviours.
     
    Gekigengar, TeagansDad and hippocoder like this.
  49. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,328
    You can't achieve C++ performance by cross-compiling through C++, because the source language comes with its own baggage, and that baggage often requires helper structures and additional mechanisms, which produce extra cost.

    I would suggest learning more about C++, as multithreading support made it in quite some time ago.

    I doubt it. I think the main reason DOTS is even a thing is the overhead you get from over-reliance on C#.

    The main issue I see here is that the DOTS showcase, Megacity, has only 100k objects in the scene. To really get my attention you'd need to up the number to ten million or a hundred million entities. Then I'd be much more likely to be sold.

    Because on a modern computer, 100k is in the ballpark of something I believe should be achievable by default. Of course, we have so much software bloat these days that we're flushing most of our computing power down the drain. 10 million or 100 million entities, however, would be in the range where an actual significant improvement would be required. And that would be interesting.

    Never mind that almost nobody will ever produce a game that gets to that number in the first place. A standard shooter these days has up to about 12 enemies per screen, and I can't remember the last game that would let you control a thousand units. There's still Factorio, but that's pretty much it.
     
    Last edited: May 17, 2020
    Ryiah, AcidArrow and hippocoder like this.
  50. Ommicron

    Ommicron

    Joined:
    May 15, 2020
    Posts:
    47
    yes, C++ is faster than IL2CPP. but IL2CPP is faster than no IL2CPP. the difference between C++ and IL2CPP would practically be 0, especially if you used C# as if it were C++: using pointers, passing values into functions by const reference, stack allocation instead of heap allocation, no GC, etc.

    C# has a very simple and much easier-to-use multi-threading library. instead of #include <future>, pushing and popping things off a stack, and locking data, you can just very simply use
    Code (CSharp):
    Parallel.ForEach(arrayOfData, functionToApplyToEachData);
     
    Last edited: May 17, 2020