
Unreal Engine 5 = Game Changer

Discussion in 'General Discussion' started by adamz, May 13, 2020.

Thread Status:
Not open for further replies.
  1. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yep, I glazed over a little, sorry <3
     
    ugur likes this.
  2. jcarpay

    jcarpay

    Joined:
    Aug 15, 2008
    Posts:
    558
    Wrong, a big company doesn't operate without a CEO (for good reason).
     
    TalkieTalkie likes this.
  3. IgnisIncendio

    IgnisIncendio

    Joined:
    Aug 16, 2017
    Posts:
    223
    https://forums.unrealengine.com/com...ming-~-what-do-you-want?p=1760249#post1760249

    While we're asking for Unity to sort out their package manager and SRP, and people here are arguing over ECS...

    People over at UE forums are asking for Unity features, such as a package manager, SRP, ECS, compute shaders and Vulkan. This kinda makes me appreciate what I already have more, and I guess both engines have their flaws.

    Edit: Let's also not forget that UE5 still doesn't have proper 2D and FixedUpdate support, and they are still waiting for an intermediate language. I'm interested in seeing what the SkookumScript team does for UE5, though.
     
    Last edited: May 15, 2020
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    I don't believe in the absolute power of a CEO, and I see a company as a machine with more than one possible fault point.

    I'll leave it at that.
     
    Ryiah and xVergilx like this.
  5. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    A CEO does not have absolute power; the board will fire him if they deem it necessary.
     
  6. ugur

    ugur

    Joined:
    Jun 3, 2008
    Posts:
    692
    I don't think "blaming" the CEO or CTO or anyone personally is useful or warranted in this case.
    because i feel like no one did things with bad intentions.
    The contrary, these things were done with the best intentions.

    I can totally see how up front one wants faster progress on more ends, and so then it may have sounded like a good idea to have more and more teams working on more and more features or engine systems or other things and then it was seen these progress at different pace, so hey, why hold back that progress by monolytic core engine updates?
    Why bundle all this stuff, why not make it all separately downloadable in own packages just shipping when they have a nice update ready?

    The MASSIVE problems coming from that like that nothing fits together, nothing is well testable against other things and the entire rest, there is never a really even just halfway complete, and really long term reliable stable state etc etc, well, such many downsides are more and more visible over time.

    Then, as an engineering team would do, if one generally thought the idea itself was good and sound up front, one tries to improve things and throws more testing, more automated testing, more public testing and what not all at it.

    But at some point, yes, one should accept that no matter how good the intentions were, it does not lead to viable result with that setup.
    It has to feel like a coherent complete engine where all things, features, systems etc which many users would use frequently should ship with the engine and be preinstalled so all users of version x of the engine by default have this state of all packages which is properly integrated and well tested to work well together.


    Likewise, i can totally see how ecs and dots were envisoned, performance of codeside/cpu/memory usage etc was not good, users asked for better, so a long hard look was done at what all causes those issues and then a whole mantra was decided.

    The problem is, yeah, that's not something that should be force pushed to user land code and workflow, great to offer it as option, but yeah, it becomes misguided as soon as one expects most or even just a high percentage of users would be into changing their whole coding ways to that way and on top be into throwing away their entire scene objects workflows and throw away all existing unity systems for from scratch started replacements which are incomplete, buggy and will have many own issues for years to come.


    Likewise, i can also totally see how it came to multiple render pipelines due to the best of intentions.
    One of Unity's biggest strengths is far reaching platform support, both across generations and also across many current (and future) platforms.
    At the same time many users often asked for better graphics features, some (few who would actually do it) scriptable render pipelines, and many for better graphics performance and access to using the latest high end graphics features.

    So then scriptable render pipelines made and then, initially probably just as demo intended, lightweight and high end render pipelines made.

    But then marketing or user feedback or someone decided those should not just be kept as example, rather going forward, why not have these two additional render pipelines and have one which supports most devices and then one for the extra kickass highest end stuff and treat the old one just everyone uses as legacy.

    But sorry, that's where it went totally off the rails despite the best of intentions, because for many it is one of the biggest selling points of Unity to be able to deploy from lowest end to highest end from mainly a single base for a project, exactly not having to redo many things over nor having to decide up front set in stone to what one wants to deploy to and how low and how high end one wants to support/go.
    And to many it is also completely unacceptable to have to redo all shaders, post processing and most if not all assets and workflows and then still end up with something with less platform reach and in each option some things missing which exist in the other.

    So yeah, i don't blame any single person as i can totally see how we got here exactly because of the best of intentions.

    But yeah, now time to course correct massively.
     
    Noisecrime, protopop and hippocoder like this.
  7. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Let's leave Unity staff out of this. As my signature says, you can criticise any idea and pour a bucket of piss over it if you want. That is being intellectual and playing with concepts; it's a beautiful interchange to attack an idea or stretch and mould it.

    But we don't lower ourselves on these forums to attacking people, and that includes the CEO :)
     
    xVergilx, ShilohGames and pcg like this.
  8. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    In how many projects is the game logic a performance problem, though? Our game logic, including network code, has almost zero impact on CPU time.
     
    UndeadButterKnife likes this.
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    In this scenario, I don't see a point in trying to blame the CEO for anything if he wasn't fired by the board. The board is a self-regulatory mechanism; if the mechanism hasn't been triggered and the CEO wasn't fired, then all is well. Or mostly well.
     
  10. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    I haven't blamed the CEO of Unity for a single thing
     
  11. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    That wasn't directed at you; it mostly voiced my opinion regarding an earlier argument by another poster.
     
    MDADigital likes this.
  12. sxnorthrop

    sxnorthrop

    Joined:
    Sep 29, 2014
    Posts:
    43
    That being said, we still don't know what mobile support looks like for virtual GIM. It could very well just fall back to the existing automatic LOD in UE. I've seen this argument a lot, and I don't understand how you can get anything more out of a mobile GPU, using software, than what already exists.

    Also, both Xbox Series X and PS5 will use AMD's RDNA 2, which will support DX12, and NVIDIA already has internal demos with mesh shading running on the Series X. It's also confirmed that the PS5 demo was utilizing RDNA 2 for the tech shown.

    You're right though, we may not be seeing mobile support for mesh shaders for a while.
     
    Last edited: May 15, 2020
  13. Necromantic

    Necromantic

    Joined:
    Feb 11, 2013
    Posts:
    116
    At this point I just want a combination of UE and Unity. :p Not that that's really a new wish.
    • The ease of use (C#) of Unity
    • Unity ECS and the DOTS efforts
    • A good 2D feature set
    • The Unity community
    • A proper networking solution that makes use of DOTS. I guess I'll have to check out the free Epic Online Services now as well.
    • Graphics out of the box from UE (not that I care that much about graphics, though)
    • Open source and royalty-free for low revenue
    And more features from both.

    Unity does put a lot of effort into new tech for programmers and gameplay, while Epic often seems to favour graphical features more. With Unity, though, features often end up like prototypes and may be scrapped soon after. Unity Networking, anyone? Which is always interesting as an early adopter but might bite you in the ass in production. And UE seems to have the more interesting business model, not to forget being open source.

    While I've used both, I have preferred Unity for the interesting new features and ease of use, but I'm becoming torn once again.

    It's going to be interesting to see how Unity reacts to all this.
     
    hippocoder and IgnisIncendio like this.
  14. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    Take Half-Life: Alyx as an example; its asset management is key to why it looks so good. Engine-tech-wise they use something very similar to shadowmask mode with mixed lighting, so I'm certain you could reach the same level of detail in Unity.

    But reaching that level of asset management, so you can output that fidelity and not miss the 90 or even 144 Hz mark on the Valve Index, is a feat most of us indies can't pull off.

    If Unreal has solved that, then I'm impressed. Actually, SRP has somewhat solved parts of that by making it easier to batch meshes even if they don't share materials.
     
  15. MrPaparoz

    MrPaparoz

    Joined:
    Apr 14, 2018
    Posts:
    156
    In self-defense regarding the CTO blaming: I just asked whether new tech wasn't the CTO's responsibility. That's all.

    I hope Unity can plot its trajectory better from here on. Judging by the roadmap, it doesn't seem like it's going to happen in the 2020 cycle. Better luck next time.
     
  16. BTStone

    BTStone

    Joined:
    Mar 10, 2012
    Posts:
    1,418
    To be honest, I couldn't care less about shiny new graphics. Pretty things are pretty. Not really breaking news. This is something I expect from new tech and new console generations. That being said, what does impress me is the underlying change of workflow. That is a real game changer, IF this is truly the way Unreal 5 will work.
    We won't find that out until 2021.

    Now. That's the point: workflow.
    I've been working with Unity for almost a decade now and have gone through lots of engine iterations, starting at Unity 3. Like many of you I've seen the good, the bad and the ugly.
    The last couple of months I asked myself why I'm still using the engine. The obvious answer is: its ease of use.
    It's easy to use Unity. The concepts are easy to grasp.
    A scene has a GameObject, which has a component, which has data and logic. Write your script, attach it to a GameObject, run the scene to test it. It's not only simple, it's easy.
    And the ease of use reaches further. The engine lacks a specific feature? Well, you can extend the editor yourself if you want. Your game needs special functionality we don't offer out of the box, and you don't want to implement it yourself? Well, have a look in our Asset Store, maybe you'll find something.
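    To make that concrete, a trivial, made-up example of the whole loop (write a script, attach it to a GameObject, press Play):

    Code (CSharp):
    using UnityEngine;

    // Attach this to any GameObject: the data (speed) and the logic (Update) live together on the object.
    public class Spinner : MonoBehaviour
    {
        public float degreesPerSecond = 90f; // tweak it in the Inspector

        void Update()
        {
            transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
        }
    }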

    This ease of use is the whole spiel of Unity, isn't it?
    The problem, though: over the last engine iterations things changed. The ease of use doesn't scale.

    The introduced Package Manager is a blessing and a curse. It's great to see Unity working on different things, and Unity becoming more modular should be a good thing, but holy moly, everything related to packages is sooo unreliable:

    - The quality:

    Using packages in preview feels like a high-wire act, but that is something I do expect. At least verified packages provided a somewhat stable experience.
    Sometimes I also don't get why specific features are not targeted as a whole.
    Like camera stacking for the URP. Available and works... but not for the 2D Renderer. (Apparently it's coming and it'll be backported to 2019.3, but you get my point.)

    - The documentation:

    Well. To be fair, to get started quickly they're fine, but if you really want to work with your packages and need specific information about what the API does here and there and how to use it effectively, most of the time the docs do not provide the desired information.

    - The examples:

    Some packages provide examples to directly download/install. Some packages don't have any examples. Other packages have examples, but you won't be able to download/install them through the package manager. You have to go to the github repo of that package.

    - The bug reports:
    Something is wrong with package X? File a bug report. But for package Y there is a GitHub repository; create an issue there.


    Working, and I mean really working, with packages (not quickly installing a package to drag-drop a component and see how it works) feels like a cryptic mess; things aren't consistent. Ironically, Unity is lacking unity.

    My point is: workflow.
    With the last engine iterations and the introduction of packages, Unity did not lighten my workload; quite the contrary, sadly. I have to study docs because things are not quite self-explanatory. I jump into the forums, since the docs aren't that great. I update a package version and my stuff breaks, because I previously had to create specific workarounds for old issues and now I have to fix that broken code.
    It happens too often lately that Unity prevents me from working on my games; instead I have to figure out how things work, and the help we get is quite limited.
    And if I see or hear how another engine takes workloads away from me, I do wonder: why can't we get the same with Unity?
    Personally I don't need Nanite or Lumen or whatever fancy tech, but a stable and reliable workflow inside the engine would be a great start.

    And all that talk about DOTS being the silver bullet for Unity is nonsense. Sorry, but DOTS hitting 1.0 won't solve the issues with how everything else is handled.
    First it will take years until it comes out, then it will take years until a significant number of devs actually use it. Just consider the resources out there:
    EVERYTHING on the internet related to Unity is MonoBehaviours: documentation, public repositories, Asset Store tools, tutorials, online classes, degree courses.
    I don't see the Unity dev world migrating to DOTS, not without way more resources online: way more and better documentation, way more tutorials, way more examples.
    But in its current state I don't see how Unity could provide all that, since it can't even provide it for all the other packages :/


    That being said, I've understood the real reason I work with Unity: it's not because it's easy to use, because it isn't if you plan to do something more than prototyping.
    I use the engine because I'm experienced in using it, and I like the multiplatform approach and C#. It's my sunk cost fallacy :)
    And since I'm focused on creating 2D games there is no real point in switching to Unreal, although that updated revenue-share policy is quite something. But I sure will bite the bullet and learn some C++ once Unreal revamps their 2D tools.
     
  17. IgnisIncendio

    IgnisIncendio

    Joined:
    Aug 16, 2017
    Posts:
    223
    I feel that Unity will never be able to match UE's licensing model. Epic has Fortnite, and AAA companies pay for Unreal, so indies don't need to pay. Meanwhile, Unity only has us. They need to charge indies money to survive.

    It's unfortunate, really, but it also explains why Unity has been chasing the AAA market and other markets for the past few years. Maybe Unity hoped that if they hit it big in the AAA market, they could make Unity more free for indies? But that didn't happen, so here we are.
     
    pm007 and pcg like this.
  18. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    It is not just mobiles. There are AMD GPUs, and I'm personally using an NVIDIA GPU that is not Turing and can't do RTX/mesh shaders. This kind of hardware is going to be around for quite some time. A couple of years, maybe.
     
  19. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    5,984
    This is just my opinion, but I think component-based architecture is more intuitive for anybody. If you think of a random object in reality, its properties and functions are generally on it, not somewhere else or spread across many other places anywhere on the planet.

    The idea of splitting up an object's functionality into scripts that live elsewhere in a project, and of separating data from functions, requires just the sort of logical gymnastics that programmers love and non-programmers hate.

    I could be wrong, but reality is the context from which most people operate.
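    A toy illustration of what I mean by "on it" (names made up): the door's state and the things it can do live on the door itself:

    Code (CSharp):
    using UnityEngine;

    // Everything about the door lives on the door: its state and its behaviour.
    public class Door : MonoBehaviour
    {
        public bool isOpen;
        public float openAngle = 90f;

        public void Toggle()
        {
            isOpen = !isOpen;
            transform.localRotation = Quaternion.Euler(0f, isOpen ? openAngle : 0f, 0f);
        }
    }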
     
    pcg and Shizola like this.
  20. sxnorthrop

    sxnorthrop

    Joined:
    Sep 29, 2014
    Posts:
    43
    Hmm, that's true I guess. But your GPU is likely 2-3 years old at best. I'm skeptical of any claim that VGIM will perform well on non-current/next-gen hardware. I don't believe that claim was ever made, though, only that UE5 would support old and new platforms. People seem to be piecing that together themselves.
     
  21. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    The geometry-image software rasterizer is kind of genius once you start thinking about implementations:
    - store the geometry in image patches,
    - for each 2×2 pixel block on the patch, cull by position and backface,
    - if 2 of the pixels are culled, there are no triangles, so move on to the next block,
    - else output the triangles to a list, i.e. at most 2 triangles,
    - or rasterize them for early-Z rejection.

    You could probably store patches with a structure that makes them spatially coherent, so you know the extent they will cover on screen, which allows for parallel tile-based rendering. We could probably easily sort all the patches front to back per tile if we store them in a 3D grid index. Once a tile is full we don't even visit it further; a bitfield coverage mask for the tile could help. So 32×32 would be a nice size per tile. That would obviously play very well with the GI structure I think they have. It's probably efficient enough to run on compute-capable mobile phones.

    The textures themselves could be packed in Morton order to get a nice, coherent layout.
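    A very rough, purely speculative CPU-side sketch of that 2×2-block pass (made-up names; it assumes one object-space position per texel and a mask for texels an earlier pass already culled):

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class GeometryImagePass
    {
        // positions: one object-space vertex per texel of the geometry-image patch.
        // culled:    true where a texel was rejected (frustum/backface) by an earlier pass.
        public static List<Vector3> EmitTriangles(Vector3[,] positions, bool[,] culled)
        {
            var tris = new List<Vector3>();
            int w = positions.GetLength(0), h = positions.GetLength(1);

            // Walk the patch in 2x2 blocks; each block emits at most two triangles.
            for (int y = 0; y + 1 < h; y++)
            for (int x = 0; x + 1 < w; x++)
            {
                int rejected = (culled[x, y] ? 1 : 0) + (culled[x + 1, y] ? 1 : 0)
                             + (culled[x, y + 1] ? 1 : 0) + (culled[x + 1, y + 1] ? 1 : 0);
                if (rejected >= 2) continue; // no complete triangle survives, skip the block

                Vector3 a = positions[x, y],     b = positions[x + 1, y];
                Vector3 c = positions[x, y + 1], d = positions[x + 1, y + 1];
                tris.Add(a); tris.Add(c); tris.Add(b); // triangle 1
                tris.Add(b); tris.Add(c); tris.Add(d); // triangle 2
            }
            return tris;
        }

        // Interleave the bits of x and y (Morton / Z-order) for a spatially coherent texel layout.
        public static uint Morton2D(ushort x, ushort y)
        {
            uint Spread(uint v)
            {
                v = (v | (v << 8)) & 0x00FF00FFu;
                v = (v | (v << 4)) & 0x0F0F0F0Fu;
                v = (v | (v << 2)) & 0x33333333u;
                v = (v | (v << 1)) & 0x55555555u;
                return v;
            }
            return Spread(x) | (Spread(y) << 1);
        }
    }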
     
  22. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,662
    Back to the Unreal/Nanite discussion: I don't think it's that big of a game changer for the gaming industry. Sure, you can throw a 100-million-polygon asset into the game and have it render beautifully in real time. For film/VFX/previz this is absolutely killer, as all you care about is time-to-screen. Heck, at this quality level you could probably get away with real-time rendering for the final shot in many cases.

    The problem is, you can't do that for every asset in a game or you'll end up with a 20-terabyte game. A bottom-to-top approach like tessellation/displacement maps is great because you can up-res your meshes at runtime and save space at the same time. A top-to-bottom approach, in which you take a huge input mesh and ultra-HQ textures and then reduce their quality to fit their on-screen size, wastes a lot of space imho.

    Unless this new pipeline scales nicely to less powerful devices, and involves some amazing data compression technology, it does not look that hot to me. Hope to be proved wrong!
     
    tango209 likes this.
  23. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    It's closer to the displacement you mention: there is NO traditional mesh, it's stored as a texture.
     
  24. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,321
    It is unclear what VGIM refers to in this context, as it is not a common abbreviation at the moment.

    If you meant Nanite mesh handling, then it can possibly be done on the GPU anyway, even without mesh shaders being available. Older hardware still has access to geometry shaders, domain shaders and tessellators, and compute shaders are capable of dealing with arbitrary raw data and producing geometry. It will be more awkward than mesh shaders, of course.

    And as I said earlier, what I'm interested in the most is the GI solution.
     
  25. unit_dev123

    unit_dev123

    Joined:
    Feb 10, 2020
    Posts:
    989
    If you are using an old operating system and card, you should consider upgrading. My friend said she is using something called '2 NVIDIA Titans in SLI', although I'm not sure what she means, as I am not a graphics person.

    She sent me some images of what she is doing here.

    If this is your only concern, you could try something called 'CryEngine'; I believe it would help you.
     
  26. pcg

    pcg

    Joined:
    Nov 7, 2010
    Posts:
    292
    I don't want to detract too much from the original topic but...

    Chances are you started off coding in OOP (maybe?) and hence have that mindset.

    Before learning OOP I played around with assembler on the Amiga and always thought in terms of data. When I started to learn OOP I didn't get it. Why would I need to have some move code on my rocket AND on my spaceship when they both did exactly the same thing (look at the velocity and move)? I understood why each needed data, but why did each need its own code? Also, was this code in memory multiple times for every object? :eek:
    Yeah, naive I know :)

    My point was, though, that if you don't have the concept of OOP then DOD isn't too much of a stretch.
    But from the direction change DOTS VS seems to be taking based on feedback from focus groups, perhaps that's not the case.
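    Roughly what I had in mind, as a toy sketch (made-up names): one move routine over plain data instead of a move method on every class:

    Code (CSharp):
    using UnityEngine;

    // One "move" routine for anything that has a position and a velocity,
    // whether the data belongs to a rocket, a spaceship, or something else.
    public static class Mover
    {
        public static void Step(Vector3[] positions, Vector3[] velocities, float dt)
        {
            for (int i = 0; i < positions.Length; i++)
                positions[i] += velocities[i] * dt;
        }
    }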
     
    xVergilx and Billy4184 like this.
  27. unit_dev123

    unit_dev123

    Joined:
    Feb 10, 2020
    Posts:
    989
    Does anyone know if the same process applies to animated models, or is the high-triangle mesh only suitable for static, non-movable models?
     
  28. OCASM

    OCASM

    Joined:
    Jan 12, 2011
    Posts:
    326
    The approach failed and URP has become the new Frankenstein's monster.
     
    Last edited: May 15, 2020
  29. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    5,984
    With this and the recent acquisition of Bolt I wouldn't be surprised if something was going to change yet again.
     
    hippocoder and pcg like this.
  30. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    I wonder if Unity made the classic mistake of going public with SRP too early. I've done it myself, so I don't blame them. Imagine if they had kept both DOTS and SRP as internal betas only, and released them when they were stable instead. I think we would have seen a totally different outcome.
     
    arkano22 likes this.
  31. sxnorthrop

    sxnorthrop

    Joined:
    Sep 29, 2014
    Posts:
    43
    Virtualized GIM is (I assume) just an abbreviation for Virtualized Geometric Image Mapping. I just called it VGIM (shortening the "virtualized" part).

    Twitter post where the term is being used: https://twitter.com/BrianKaris/status/1260734555532611584
    Research paper regarding geometry images: http://hhoppe.com/gim.pdf

    I believe GIM just stands for Geometric Image Mapping (not a common abbreviation) and I'm just adding the V for virtualized.
    I could start calling it "virtualized geometric imaging" or whatever, but that's a lot -- semantics. (I guess I could just call it Nanite :/ )

    And let's say that Nanite is just a version of GIM that automatically converts a polygonal mesh into a texture and transforms it back into 3 dimensions. What's the difference between that and VT? Unity is working on a VT implementation as we speak.

    I'm also interested in their GI solution. I'm curious how different it will be from the DDGI solutions already out there.
     
  32. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    The Eurogamer interview mentioned it's a software rasterizer, which allows more custom stuff; they bypass vertices altogether, which is probably why they avoid the micropolygon inefficiency.

    Occam's razor: there is enough information to fill in some blanks on how it works.
     
    sxnorthrop likes this.
  33. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    It probably is just that; everything points to it, at least. But it's not only the runtime tech: if they have also figured out a good workflow for it, then it's a game changer.
     
    sxnorthrop likes this.
  34. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,745
    The amount of repeating assets in the scene won't have a meaningful impact (or really any impact at all) on storage size.
     
  35. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    In the Eurogamer article the dev says it's voxels for large scale, distance fields for medium and screen space for detail. It's a hybrid of all the techniques they've used so far.
     
  36. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    Since they have real-time GI :) In Unity you would need gigabytes of lightmaps :)
     
  37. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,662
    URP is much simpler than the built-in pipeline, and writing shaders for it is considerably easier. Now, afaik they want to glue a deferred renderer onto it... I would just keep that as a separate pipeline. That's when things start to get overly complex.
     
  38. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,745
    A deferred pipeline is really important for a lot of use cases, but also you don't have to target it if you don't want to.
     
  39. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    How did it go with the new pipeline that was supposed to be perfect for VR? There was lots of talk that they would marry forward and deferred and make the perfect VR pipeline. Never heard much about it again.
     
    arkano22 likes this.
  40. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,662
    I think it makes sense to wait until a certain feature is stable enough (maybe do a beta with a small subset of your userbase), then release it and mark whatever it is meant to replace as deprecated. This way everyone easily understands what they should be using, and user pain is alleviated. I don't think public "preview" features are a good idea.
     
    T0rp3d0 likes this.
  41. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,662
    Agreed. But I'm not sure having full deferred/forward paths in the same pipeline is a good idea, unless you can wrap an elegant abstraction around both.
     
  42. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    It's much easier to reference the talk: http://hhoppe.com/gim.ppt
    It makes things much clearer than the wall of jargon in the PDF, which mostly focuses on the "unwrapping", as it shows what the deal is with mesh-to-texture encoding.
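    The core idea in a nutshell, as a toy decode sketch (made-up names, not how UE5 actually does it): each texel stores a vertex position, and the grid connectivity is implicit, so the mesh is rebuilt straight from the image:

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class GeometryImageDecoder
    {
        // Rebuild a mesh from a geometry image: texel (x, y) stores a vertex position,
        // and the grid connectivity is implicit, so no index data needs to be stored.
        public static Mesh Decode(Vector3[,] image)
        {
            int w = image.GetLength(0), h = image.GetLength(1);
            var vertices = new Vector3[w * h];
            var triangles = new List<int>();

            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    vertices[y * w + x] = image[x, y];

            // Two triangles per 2x2 block of neighbouring texels.
            for (int y = 0; y + 1 < h; y++)
                for (int x = 0; x + 1 < w; x++)
                {
                    int i = y * w + x;
                    triangles.Add(i); triangles.Add(i + w); triangles.Add(i + 1);
                    triangles.Add(i + 1); triangles.Add(i + w); triangles.Add(i + w + 1);
                }

            var mesh = new Mesh { vertices = vertices, triangles = triangles.ToArray() };
            mesh.RecalculateNormals();
            return mesh;
        }
    }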
     
  43. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    I guess it would be a new pipeline altogether, like Unreal's Forward+, unless they could abstract the pipelines altogether. But that's hard, since certain features like anti-aliasing don't go well with performant pixel lights.
     
  44. sxnorthrop

    sxnorthrop

    Joined:
    Sep 29, 2014
    Posts:
    43
    I still hesitate to call it a game changer (yet), though. These kinds of tech discussions about upcoming Unreal tech are always happening, and more often than not it's not as good as advertised.


    That's great, but hardware rasterization is so much faster in a lot of cases. How will this scale to less powerful hardware? They might as well make it a hybrid between mesh shading and their custom rasterization method; otherwise you have a black box when it comes to different hardware architectures. Am I way off, or?

    I referenced the PDF merely as an example of the reason I use my abbreviation of GIM. It just so happened to be the first result when I googled. A PowerPoint is great, but there are reference links in the tweet I posted too...
     
    Last edited: May 15, 2020
  45. MDADigital

    MDADigital

    Joined:
    Apr 18, 2020
    Posts:
    2,198
    I agree. At the same time, our studio has gotten a lot of nice feedback because of early access. Our game would have been completely different without it. It's a balance.
     
  46. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    CONTEXT FOR GAME DEVELOPERS:

    It's worth bearing in mind that Unity has different goals now than when most of you started with Unity. Let's take a shallow look at it, as it does factor into all our interests:

    Unity focus
    • Part of Unity merged with an advertising company. This means that for Unity to maximise income, they kind of have to prioritise mobile. This means URP becomes the built-in renderer.

    • Unity earns money from many places, but from game developers Unity earns money from subscriptions and ad revenue. They are in no rush to add tooling, because the kind of games that make Unity money are simpler games. They also don't need to hurry to finish anything, really. They do it when they want to, in each department.

    • Unity has subscriptions for income. There's no real reason to rush out tools to help developers deploy complex titles. HLOD and many other supporting technologies are abandoned.
    Epic focus
    • Epic only earns money when you earn one million dollars. Think about that carefully. Epic can't get a dime from you unless your game SHIPS. So the tooling is all about LOD/HLOD/BP/a unified scaling architecture etc.; it's all about getting you to deploy a complex title.

    • Fortnite is a huge driver of working tech, along with AAA studios. Right now, more games than I can count (edited) run on PS4 and look stunning. That's how battle-tested and far ahead Epic is for game devs.
    Summary: Unity's core focus in game development is mobile with ad revenue. Epic's focus in game development is a scalable game from mobile to film, in one pipeline. You do not need two projects. That's all about making YOU more money, because if you don't make one million, Epic is not going to get paid.

    Unreal is easier to make games in because you know it's proven. How many 3D titles for PlayStation 4 are on Unity vs Epic, from indie studios? (And those would only be on built-in.) I could only find two for Unity, but more than I can count from Epic. This is the real question here. Does Unity scale? It currently does not, thanks to the choice of two separate official pipelines and virtually no love for optimisation of assets.

    This is all my opinion as a Unity customer. I'm entitled to that. None of this is moderator opinion. None of it is based on gossip or friendly chats with staff. If it involved ANY Unity employee, I would never write about it, ever. Because respect is everything in this industry.

    With that in mind, I'm still, as a paying customer, entitled to my observations. I stand by those personal thoughts of mine. I think Unity's focus is not really on games at all (unless mobile). This is backed up by CEO comments online in interviews stating that Unity doesn't expect games to be the majority of Unity's revenue stream.

    Finally...

    Unity's got DOTS coming, and so many industry-changing features. But even the smartest people at Unity will read my comments and agree that it will be a long time before these features can be properly supported and hardened. It will take many years for that to mature.

    I, for one, at least now know that Unity's direction maybe isn't going to align as well as I thought with my goals and team size, and that's fair. But I am done with "the next best thing". I do not feel that, as a non-mobile game developer, I matter much.

    I hope Unity has the foresight to understand how much of a risk I take in giving a truthful and honest view, and how much money they've cost me over the years with promises that have not arrived. It won't change the love I have for the company, but they cannot expect all my games to be made in Unity if they aren't working toward the things I am.
     
  47. BTStone

    BTStone

    Joined:
    Mar 10, 2012
    Posts:
    1,418
    Completely agree with your post, just one thing I'd like to know: I did a quick Google and couldn't find any sources on whether Ghost of Tsushima already uses Unreal Engine 5. Would appreciate it if you'd throw a source on this? :)
     
    hippocoder likes this.
  48. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,745
    I've seen no reputable source say this. I'm pretty sure Ghost of Tsushima is running on an in-house engine.
     
  49. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    OK I'll edit it out since that source was taken down from the internet, but it changes nothing. At all.
     
  50. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    They do mention the hardware rasterizer is better in some cases, and it's probably a hybrid method, as it seems Nanite is highly specialized for static environments. My guess is that it helps unify the paging, geometry and lighting, making it "one pass". They mention 16k shadow mapping only being achievable through this technique (vs 2k traditionally). So the benefit might be simple logistics: it's costlier in isolation, but merging it all makes a probable case for optimization.

    Here is my speculative guess about the algorithm so far:
    It's probably versatile enough to opportunistically fall back on a vertex path after the compute pass, when that applies. So while it's not meshlet-based, it probably falls back to meshlets when it needs to; that would be a hardware-specific implementation. Notice I have an output of triangle lists; it really just flows nicely without a hitch.

    Anyway, we will know more about it sooner rather than later. It's also a great exercise in thinking outside the box.
     
    sxnorthrop likes this.