
No More "Cool" Features Please Until Crippling Stuff Is Addressed

Discussion in 'General Discussion' started by Games-Foundry, Nov 15, 2012.

  1. darkhog

    darkhog

    Joined:
    Dec 4, 2012
    Posts:
    2,219
The question is WHY they don't want to update to the latest Mono. I've heard there were some licensing problems, but since this is the Internet and the person I heard it from wasn't a UT employee, I need details to confirm or disprove that theory.
     
  2. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    What license cost? Isn't the full mono stack free?
     
  3. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,250
    Nope.
     
  4. Lucas-Meijer

    Lucas-Meijer

    Unity Technologies

    Joined:
    Nov 26, 2012
    Posts:
    157
    Hello,

    We are working on VM improvements, including GC work. I can't share that much about this yet, other than that we know that for some people
    this is a big problem, and that we're on a path towards making that better. I have no release date to share, other than it's not in the very near future.

    I just scrubbed our bug database for specific API points people have trouble with because they allocate memory, and the only one submitted was by Games Foundry, about Terrain. Our terrain support has more problems than just allocating managed memory, and we decided that the best course of action is to move the entire Terrain implementation to C++ for performance (it's currently in C#). Off the top of my head, I know that we were needlessly allocating some memory every frame because internally we called Camera.allCameras somewhere. We introduced Camera.GetAllCameras() (I _think_ that's in 4.3, otherwise the next release, and I _think_ it's public). Public or not, it will make that specific "allocation I have no control over" go away.

    On a completely separate note, I very often read people on the internet saying "oh, you shouldn't use foreach, because that allocates memory". This is not completely true. foreach calls .GetEnumerator() on whatever you are foreaching over.

    If that returns a class, it will cause an allocation, but if it returns a struct, it will not. So it completely depends on what it is that you are foreaching over. System.Collections.Generic.List<T>, for instance, has a struct enumerator, so foreaching over that does not cause allocations; neither does Dictionary<TKey,TValue>. A normal array actually returns an IEnumerator, but the C# compiler has special optimizations for foreaching over an array, i.e., it does this:

    Code (csharp):
    static class user
    {
        static void Main()
        {
            var a = new int[123123];
            foreach(var i in a)
                Console.WriteLine(a);
        }
    }

    Code (csharp):
    // method line 1
        .method private static hidebysig
               default void Main ()  cil managed
        {
            // Method begins at RVA 0x2050
        .entrypoint
        // Code size 44 (0x2c)
        .maxstack 2
        .locals init (
            int32[] V_0,
            int32   V_1,
            int32[] V_2,
            int32   V_3)
        IL_0000:  ldc.i4 123123
        IL_0005:  newarr [mscorlib]System.Int32
        IL_000a:  stloc.0
        IL_000b:  ldloc.0
        IL_000c:  stloc.2
        IL_000d:  ldc.i4.0
        IL_000e:  stloc.3
        IL_000f:  br IL_0022

        IL_0014:  ldloc.2
        IL_0015:  ldloc.3
        IL_0016:  ldelem.i4
        IL_0017:  stloc.1
        IL_0018:  ldloc.0
        IL_0019:  call void class [mscorlib]System.Console::WriteLine(object)
        IL_001e:  ldloc.3
        IL_001f:  ldc.i4.1
        IL_0020:  add
        IL_0021:  stloc.3
        IL_0022:  ldloc.3
        IL_0023:  ldloc.2
        IL_0024:  ldlen
        IL_0025:  conv.i4
        IL_0026:  blt IL_0014

        IL_002b:  ret
        } // end of method user::Main
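    To see which case you are in, you can check the enumerator's type directly. This is a standalone sketch (plain C#, no Unity assumed) of the class-vs-struct distinction described above; EnumeratorCheck is a made-up helper name:

    ```csharp
    using System.Collections.Generic;

    // Plain C# (no Unity assumed): how to tell whether a foreach over some
    // collection will allocate an enumerator on the heap.
    public static class EnumeratorCheck
    {
        // List<T>.GetEnumerator() returns the nested struct List<T>.Enumerator,
        // so "foreach (var x in list)" over a List<T> allocates nothing.
        public static bool ListEnumeratorIsStruct()
        {
            return typeof(List<int>.Enumerator).IsValueType;   // true
        }

        // Calling through the interface boxes that same struct: one heap
        // allocation per foreach, which is the cost people complain about.
        public static bool InterfacePathBoxes(List<int> list)
        {
            IEnumerable<int> viaInterface = list;
            IEnumerator<int> boxed = viaInterface.GetEnumerator();  // heap box
            return boxed.GetType().IsValueType;                     // true: a boxed struct
        }
    }
    ```

    The practical rule this suggests: foreach over a concrete List<T> or Dictionary<TKey,TValue> is allocation-free, but foreach over the same collection typed as IEnumerable<T> is not.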
    Bye, Lucas
     
  5. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,250
    What about #502436, #502648, #502675?
     
  6. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    But can't unity use the lgpl version? It's not GPL so shouldn't be a problem no?
     
  7. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    You're right, didn't think of mobile / consoles.
     
  8. Lucas-Meijer

    Lucas-Meijer

    Unity Technologies

    Joined:
    Nov 26, 2012
    Posts:
    157
    @superpig: thanks, looks like my search string was too specific.
    (Background info for others: all but one of those bug reports are about functions that return arrays.)

    I think it makes a lot of sense, when we next do a breaking API update, to change/amend all functions that return arrays.
    I've been going back and forth a bit on how to do it, because merely passing in an existing array is not really enough, as you also want
    to know how many elements we wrote into your array. Maybe this:

    Code (csharp):
    List<Camera> mycameras = new List<Camera>();

    void Update()
    {
        Camera.GetAllCameras(mycameras);
    }
    then again, that requires changing the current simple getters to methods. We could maybe implement both. Or perhaps alternatively change the
    rules of our API and always return the same list for a certain method, expecting the user to know that, and that if he wants to actually keep the current list intact, he needs to duplicate it, because on the next API invocation we'll be writing to the same one again.

    I go back and forth on what I think makes the most sense, but currently lean towards the third approach. Not sure how much existing code that would break, though, and obviously, done like that, it can only go into a major release.
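    That third approach could look something like this; a minimal standalone sketch, where CameraRegistry and the string "cameras" are made-up stand-ins rather than real Unity API:

    ```csharp
    using System.Collections.Generic;

    // Sketch of the "always return the same list" rule: hypothetical names,
    // not real Unity API.
    public static class CameraRegistry
    {
        static readonly List<string> shared = new List<string>();

        public static List<string> GetAllCameras()
        {
            shared.Clear();               // reuse one backing list; no per-call allocation
            shared.Add("MainCamera");     // stand-ins for engine-side data
            shared.Add("UICamera");
            return shared;                // caller must duplicate it to keep a copy
        }
    }
    ```

    Every call hands back the very same List instance, so ReferenceEquals(GetAllCameras(), GetAllCameras()) is true, and a caller who wants to keep a result across calls must snapshot it with new List<string>(CameraRegistry.GetAllCameras()).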
     
  9. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Getters > methods: no reason to sacrifice ease of use, and it's better if those are Unity-managed, reused lists anyway.
    Maybe indicate that any list you retrieve from the Unity API is only valid for the current frame unless you clone it?
     
  10. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    I strongly believe that the GetAllCameras(mycameras) approach is far better than what you propose.
    We get all the control that is needed, and this kind of API also tells us we are responsible for the list even without reading the documentation. On the other hand, a getter may be pretty nice for ease of use, but it's a horrible design choice in my opinion. Getting results that become obsolete within one frame creates an unnecessary source of potentially hard-to-find bugs that can easily be avoided.
     
  11. TowerOfBricks

    TowerOfBricks

    Joined:
    Oct 20, 2007
    Posts:
    963
    It may be valid for much less than a frame. Consider
    Code (csharp):
    Component[] arr = GetComponents<Component>();
    gameObject.AddComponent<BoxCollider>();
    Component[] arr1 = GetComponents<Component>();
    gameObject.AddComponent<BoxCollider>();
    Component[] arr2 = GetComponents<Component>();
    In fact, you can only guarantee that it is valid until the method ends (the method Unity calls, like Update, Awake, Start, etc.) or until something similar to AddComponent is called in the running method.
     
  12. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Actually, the best of both worlds would be for Unity to manage things internally but return neither a list nor an array: instead, a custom struct enumerator that (depending on how far you want to go) either simply throws if enumerated when obsolete, or points to a snapshottable container.
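    A minimal sketch of such a version-checked struct enumerator (all names here, VersionedList included, are hypothetical, not Unity API):

    ```csharp
    using System;
    using System.Collections.Generic;

    // A container whose struct enumerator remembers the version it was created
    // against and throws if the data changed underneath it.
    public class VersionedList
    {
        readonly List<int> items = new List<int>();
        internal int version;

        public void Add(int value) { items.Add(value); version++; }

        public Enumerator GetEnumerator() { return new Enumerator(this); }

        // A struct, so "foreach (var v in versionedList)" allocates nothing.
        public struct Enumerator
        {
            readonly VersionedList owner;
            readonly int version;
            int index;

            internal Enumerator(VersionedList owner)
            {
                this.owner = owner;
                version = owner.version;
                index = -1;
            }

            public int Current { get { return owner.items[index]; } }

            public bool MoveNext()
            {
                if (version != owner.version)
                    throw new InvalidOperationException("List changed; re-get or snapshot it.");
                return ++index < owner.items.Count;
            }
        }
    }
    ```

    The compiler's foreach pattern-matching only needs GetEnumerator/MoveNext/Current, so no interface (and thus no boxing) is involved; the version check turns the "stale data" bug class into an immediate exception.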
     
  13. TowerOfBricks

    TowerOfBricks

    Joined:
    Oct 20, 2007
    Posts:
    963
    For some getters I definitely think the API should return the same list every time (unless the length is explicitly modified). For example Mesh.vertices: it is very logical that if some script updates the vertices of the mesh, all scripts that use the same backing array see the update.
    Combining this approach with Lists could be problematic, however, because users would expect that appending something to the vertices List updates the mesh, and that would require constant checking by the mesh class to see whether its backing list has changed. I guess this could be circumvented by using an Apply method similar to what the Texture2D class uses.
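    The Apply idea could be sketched like this; MeshLike and CommittedCount are hypothetical stand-ins, not the real Mesh class:

    ```csharp
    using System.Collections.Generic;

    // Texture2D-style Apply() for a List-backed vertex getter: the caller edits
    // the shared list freely, and changes only reach the "engine" on Apply().
    public class MeshLike
    {
        readonly List<float> vertices = new List<float>();

        // Number of vertices the "engine" actually knows about.
        public int CommittedCount { get; private set; }

        // Hand out the backing list itself: no copy, no per-call allocation.
        public List<float> Vertices { get { return vertices; } }

        // Commit pending edits explicitly, so the mesh never has to poll
        // the list for changes.
        public void Apply() { CommittedCount = vertices.Count; }
    }
    ```

    A caller would append to mesh.Vertices and then call mesh.Apply(), mirroring the Texture2D.SetPixels + Apply pattern the post refers to.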
     
  14. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,250
    That's pretty much what I said to Kim back at Unite Nordic :) Bear in mind that it need not be a breaking change - you can add functions (or overloads of functions) that are allocation-free without removing the existing ones, at least not yet.

    Yep, List<T>-based seems the way to go to me. That's actually partly why I wrote MonoListWrapper - I thought you guys might find it useful for exactly this problem.

    I'm not enthusiastic about that, to be honest:
    • Who 'owns' the list in that situation, the engine or user code? I think you're proposing the former, but in that case, what's the lifetime/GC situation for it?
    • The lists would presumably not be 'live' (wouldn't be aliased to actual unmanaged memory), so if I'm retrieving something on a regular basis (e.g. mesh.vertices) and intend to be reusing the same list each time, I still have to hit the property getter to have the list be updated. So, I'm doing "vlist = mesh.vertices" repeatedly, when vlist and mesh.vertices are actually ReferenceEqual, in order to invoke a side-effect of the getter. That's not a nice use of properties, and might not be optimization-safe.
    • Seems like it will cause additional headaches if you want to multithread things later on, as it turns read access into writing-shared-state access.
     
  15. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    But if it changes to engine-managed and non-allocating on get, then you can change your pattern to not "store" it but simply get it every time.
    So instead of:
    var bla = getsomething();
    store bla
    do bla.1
    wait
    do bla.2
    you'd do getsomething().1 and getsomething().2 directly, only assigning it locally in the scope of your function instead of keeping hold of it.
     
  16. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,250
    For some situations, I guess that's true, though in other situations you may not want to be re-querying the engine every time you access it (either to avoid managed->native transition, or because you don't want the data to have changed). It also doesn't resolve the GC/lifetime or multithreading issues.
     
  17. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    It sure does solve the GC/lifetime issue: lifetime is no longer your concern if you don't store it, so Unity can keep reusing and changing it (it can't change during your own method). However, yes, it breaks if you try to use it from another thread (it could change then). I do think some form of snapshotting enumerator would be best, but it's probably nontrivial to specify.
     
  18. TowerOfBricks

    TowerOfBricks

    Joined:
    Oct 20, 2007
    Posts:
    963

    Basically nothing in Unity's API is threadsafe anyway, so I don't think that is a big concern.

    I agree that relying on side-effects from a property getter isn't really a good programming practice.
     
  19. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    It sure is fine to depend on the expected side effects of a property; properties would serve absolutely no purpose if they had no effect beyond giving you the field, we'd just use fields then.
     
  20. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    I don't think Unreal's scripting language uses Mono, which is the GC problem area, and Unreal 4 has moved to C++ as the scripting language alongside its graphical programming toolset.
     
  21. actuallystarky

    actuallystarky

    Joined:
    Jul 12, 2010
    Posts:
    183
    Unreal 3 uses unrealscript - a custom language running on its own virtual machine. I haven't heard of any GC issues with Unrealscript but it's a lot harder to get info about Unreal than Unity in general. Just because I haven't heard, doesn't mean it isn't an issue.

    The decision to move to C++ in Unreal 4 is frankly what prompted my question. If Unreal have decided that managed environments aren't suitable for games, what are the implications for Unity and mono?
     
  22. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    You're making assumptions there; moving to C++ doesn't mean "deciding that managed environments aren't suitable for games". There are plenty of other reasons why they could pick C++ over UnrealScript or even .NET; it's a matter of tradeoffs. I assume the big plus of C++ for them was maintaining only one dev environment (vs C++ plus another layer), making scripting able to consume any native code.
     
  24. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    That's why I raised this thread

    http://forum.unity3d.com/threads/207254-How-much-faster-would-Unity-games-apps-be-if-it-used-C

    Unfortunately there's no real info on how much faster Unity would be if its scripts were C++; although in general C++ is twice as fast as C#, this depends on the benchmark you run.

    Personally, I think we could bypass the whole Mono thing by allowing direct C++ access to Unity for the C++ programmers, while still providing C# and Mono for everyone else.

    Which I'm pretty sure Unity do anyway for anyone with the money to purchase a source code license, but we don't need that; we just need Unity as a DLL and the relevant header files.
     
  25. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    C++ is not "twice as fast" as C#; that's just false information that keeps getting spread since the early Java days. C# is FASTER in some areas and SLOWER in some areas than C++, and usually the difference is fairly minor. The only exception is that C# doesn't allow explicit use of vectorisation, so you can write faster code in C++ (and then it's much more than 2x faster) when you need vectorisation. For the rest it's a matter of tradeoffs even when you consider only performance, and they're mostly in the same league.

    Actually, there was a back-and-forth blog post challenge between a well-known C++ coder and a well-known C# coder, and the C# version was substantially (many times) faster than the one produced by the C++ guy. The C++ version only ended up beating the C# version after something like six iterations and rewriting the allocator!
     
  26. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    Do you have a link I would love to read more about this?
     
  27. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Sure; there are links to all the parts at the start of the blog post. It was Rico Mariani vs Raymond Chen; Google them if you don't know who they are. It's interesting because it's not some random dude without a clue writing a 5-line benchmark and going "woah it's 10 megatons faster!!!", but actual people with a clue who wrote something decent-sized with an actual functional goal.

    Here's a blog post giving an overview of the back-and-forth between the two, with links to both. As you can see, the first C# version was 10 (TEN!) times faster than the C++ version; 5 iterations were required to MATCH the C# speed, and another pretty crazy one to beat C#, where it won only because of the difference in runtime startup time.

    I maintain my point of view: anyone who isn't a flat-out mad guru of coding will ALWAYS write MUCH MUCH MUCH faster code in C#; it's much harder to write crap that makes it slow, and if you're really good in C++, you'll match it, maybe!
     
  28. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
  29. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    12,360
    Any generalized statement either way isn't telling the whole story either, though. Truth is that they're different tools useful for different things.
     
  30. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Not really, no. Aside from specific subdomains (real-time systems programming), they're both high-level languages perfectly suitable for most tasks on the platform on which they run, be it server-side code (or writing actual servers), batch jobs, rich clients, large projects, small projects and yes, even games. I mean, we're not comparing ASP.NET to DarkBASIC here, but two general-purpose languages. They're different tools, but they're useful for the same things, and when doing those things you get to pick based on the tradeoffs.

    Currently (I don't know about Mono; I really just come from .NET, so I'm not sure how Mono's performance profile fares) C++ has a few big upsides you can't have in C#, but raw everyday performance isn't one of them. The ones I can think of are:
    - Specific manual or semi-automatic performance tuning for some critical areas (access to vectorisation)
    - Faster when you must work with native libraries (no need to cross the managed <=> unmanaged barrier and eat the huge cost of that)

    If you're writing C# and don't need to interop with unmanaged code, nor plan to resort to manual vectorisation, I'd say you're very unlikely to end up with faster C++ code; it will likely be in the same league or massively slower depending on how you write it.

    The reason the generalized statement that C++ is faster than C# doesn't hold is that those two perfectly competent people I named (and the C++ guy has more C++ experience than the C# guy, because he probably had >10 years of C++ experience before C# even existed!) actually wrote the same project, the same "thing", with those different tools, and came up with these results.
     
  31. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    12,360
    So... what you quoted is wrong, except for in the cases where it's right? Ok, in that case I'll agree to disagree.
     
  32. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    No, what I quoted is right, with no exceptions. When I say "except for those subdomains", it's because the comparison just doesn't apply there, since C# isn't available. So except for places where you CANNOT use C# anyway, they're different tools for the SAME purpose, which is quite the topic here, unless what you wanted to say is that we should compare C++ speed to C# speed on platforms where C# is unavailable?

    So yes: everywhere you could use C#, I feel they are different tools for the same problem, where you can use one or the other as a high-level general programming language; same league, similar abstraction level, and apt for the same tasks.
     
  33. RvBGames

    RvBGames

    Joined:
    Oct 22, 2013
    Posts:
    141
    It appears that in the Microsoft article the C++ author used the STL, which is inherently slow. Secondly, when the STL was stripped out, it appears C++ outperformed C# two to one. More importantly, I can control the execution of the machine much better with C/C++/assembly than with C#. When dealing with a lot of graphics (geometry, textures, lighting, collisions, etc.) it makes a difference.
     
  34. superpig

    superpig

    Quis aedificabit ipsos aedificatores? Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,250
    "Lifetime is no longer your concern" is pretty much the trigger phrase for "this will be a GC problem" :D

    Unity might own the list but there's still a GC problem: if the list is kept so it can be reused for the next call, when, if ever, will it get deallocated? The engine can't know that e.g. I'm doing "GetComponentsInChildren<Transform>" on the root object in my scene at startup, but then never again, yet it'd keep a big list of all scene transforms in-memory regardless (until I next do GetComponentsInChildren, but even then the capacity of the list wouldn't decrease so there'd still be a chunk of wasted memory).

    Also, another thing I realised is significant is the number of copies. The current methods are effectively one-copy operations - data is copied from native storage into managed storage (the returned array) and can then be held and used indefinitely. The reusing-lists approach needs one copy to populate the list, but then another copy if you want to hold and use the results indefinitely. So it's a two-copy operation. Giving the engine a list to populate is a one-copy approach still.

    It isn't thread safe now but I suspect it's going to need to be in the future. Given that this is a core API design rule we're talking about here, best to adopt something that won't make it harder to do that later on.

    I don't think you've grokked what I meant by 'side effects.' I'm talking about things that change state or have other externally observable behaviour. Having a property like mesh.vertices return what is effectively a constant value (because it'd be the same list every time), but as a side effect it modifies the list, is a very bad idea in my eyes - it actively destroys referential transparency for properties, which is a very fast path to total madness.
     
  35. Ocid

    Ocid

    Joined:
    Feb 9, 2011
    Posts:
    476
    You want to write an engine in Assembly? This isn't the dark ages anymore.
     
  36. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    7,234
    LOL, Unity is already written in C++, but what if our game mechanics could be written in C++ and compiled into a Unity game? How much more could we get out of Unity just by bypassing Mono's JIT compiler and GC?
     
    Last edited: Nov 8, 2013
  37. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,411
    I was also thinking it didn't, but actually it does. There's a library called Mono.Simd.dll located inside "\Unity\Editor\Data\Mono\lib\mono\2.0\". Copy it to your asset folder and use it like an ordinary managed library. It works.
     
  38. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    It's a Mono feature, not a C# feature; this is not doable on .NET, sadly. (The library doesn't do any of the work, really; it just provides a stub if SIMD isn't available. If it is, like on Mono, it uses it, but if you link to this on .NET you won't get SIMD, as the runtime doesn't support it. A shame, really, as it's the only big point where C# can't match lower-level languages in performance.)
     
  39. Lucas-Meijer

    Lucas-Meijer

    Unity Technologies

    Joined:
    Nov 26, 2012
    Posts:
    157
  40. RvBGames

    RvBGames

    Joined:
    Oct 22, 2013
    Posts:
    141
    @Ocid, dark ages, no, that was before my time :) If you want cycle-counting performance then assembly is the way to go. I rendered entire cities, from San Francisco to Las Vegas, and areas such as Washington D.C., etc., using nothing more than frustum culling with per-pixel lighting, and achieved over 40 fps at 1920x1080 resolution. I'm not sure I could get the same result with C#, though.
     
  41. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,411
    In my experience, which of course may differ from anyone else's, the places where I really need a performance boost are the places where large amounts of data are processed. In that case, if I'm not happy with the performance that C# plus multi-threading can offer, there's always the option to call a standalone C function that uses asm/SSE/whatever to do the job, because the interop transition time would be negligible. I've never gone that far, though; I haven't even needed to consider the option.

    It would be nice, of course, if the whole .NET/Mono ecosystem had a standard way to use vectorization, but even here on Mono I don't see many people actively using it.
     
    Last edited: Nov 9, 2013
  42. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Well, there aren't many people using it, but it's definitely one of the reasons all major engines aren't simply built in C# (probably including Unity): you can't afford not to have SIMD when writing CPU-side code that applies the same operation over very large sets of data.
    It's also a very requested feature for .NET (it's not a C# feature but a runtime feature, really; if the runtime can't generate SIMD opcodes, nothing on the C# side can help, not even having the source to the compiler), but so far it has gone completely ignored by Microsoft.
     
  43. Mwsc

    Mwsc

    Joined:
    Aug 26, 2012
    Posts:
    189
    If all you are doing is rendering cities, isn't the GPU doing all the work? The CPU just feeds the vertex and texture data to the GPU once, and then you change nothing but the viewing matrix each frame. C# should be fine. Or am I missing something?
     
  44. Ocid

    Ocid

    Joined:
    Feb 9, 2011
    Posts:
    476
    Haha was before my time as well. Only messed about with it a little though.

    Fair enough. Was that for that assembly jam thing? I forget what its called.

    I wasn't talking about that kind of thing though.

    I was meaning: do you really want to code something like Unity, Unreal or CryEngine in assembly, or do the scripting/gameplay logic in it? At that point you're at serious diminishing returns, where any speed increase gained is irrelevant.
     
  45. RvBGames

    RvBGames

    Joined:
    Oct 22, 2013
    Posts:
    141
    @Mwsc, actually I did both, GPU, and CPU when the GPU wasn't present. Worked with IMG hardware for a few years now, and of course NVidia on low powered devices. Or as they say now, mobile platforms.
     
  46. RvBGames

    RvBGames

    Joined:
    Oct 22, 2013
    Posts:
    141
    @Ocid, it would pay dividends many times over to have the engine written in extremely tight assembly for both RISC and CISC architectures. As others have noted, we don't have access to SSE instructions from scripting languages. At least not that I'm aware of.
     
  47. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Well, you do have some access from Mono, but from (I think) much more recent versions of the runtime than Unity's version.
     
  48. Ocid

    Ocid

    Joined:
    Feb 9, 2011
    Posts:
    476
    I'm not debating the performance benefits from it. What I'm really getting at is who would want to code an engine as big as Unity/UDK/CryEngine in assembly these days? Probably not very many. Tim Sweeney mentioned Unreal 4 has taken something like 500 man years to develop, that would only increase exponentially if done in assembly.

    Or even if people had access to assembly to code in within the engine how many people are going to want to do that beyond just having a look? Not enough to justify the dev cost to implement it.

    Just look at Boo.
     
  49. ronan-thibaudau

    ronan-thibaudau

    Joined:
    Jun 29, 2012
    Posts:
    1,722
    Well, I "am" debating the performance benefits. The reason why only tight, specific parts of engines are done in asm or similar low-level tools is that, for large programs, compilers fare way better than humans at low-level optimisations. So it would probably be substantially slower, aside from being hell to maintain: a huge code base with nearly no one able to maintain it, as large-scope assembly programmers competent on multiple architectures aren't exactly commonplace.

    You'd have to write different code paths not just for all architectures but also within a single one (for PC you'd want different assembly for AMD vs Intel, and also depending on the CPU feature set, etc.).

    Basically, beating compilers on a whole large-scale project (unlike on a small area) is not going to happen.
     
  50. Ocid

    Ocid

    Joined:
    Feb 9, 2011
    Posts:
    476
    I hear what you're saying. I should have made myself clearer. I'm coming from the best case scenario where what you mentioned is feasible.
     