
Unity Future .NET Development Status

Discussion in 'Experimental Scripting Previews' started by JoshPeterson, Apr 13, 2021.

  1. xoofx

    xoofx

    Unity Technologies

    Joined:
    Nov 5, 2016
    Posts:
    417
There is no religion about Mono, and it's not that Unity cares more about Mono than about what our users want. The post above from @JoshPeterson shows that we do care about moving to a more modern .NET runtime and ecosystem in general, but we will concede that it is still not a main priority, despite several teams at Unity pushing for this.

The first reason this is not yet a main priority is that we have bigger priorities, and no, it's not about maintaining the status quo with Mono; these bigger priorities are directly related to what our users are reporting, e.g. iteration time, improving the quality of the editor (no crashes), new platforms that we need to support, etc. Unity today supports 20+ platforms, while CoreCLR supports only 3. So even if we were moving to CoreCLR, we would still have a lot more to manage than just the move itself.

The second reason, as I explained in my blog post, is that migrating to CoreCLR is a multi-year effort, because Unity was originally mostly a C++ engine with a thin scripting layer in C#. A big part of the code is still in C++, and this code accesses managed objects in ways completely incompatible with what CoreCLR expects. We have tens of thousands of places to fix in the code to make sure we can fully leverage CoreCLR - and run without crashing...

A third reason is the complexity of the migration: you can't expect an overnight switch from Mono to CoreCLR for the reasons above. That means we will most likely have to maintain a transition period during which Unity can run on either Mono or CoreCLR at the same time. It's like changing the engine on a car that is driving and that you can't stop; you can't expect this to be "easy". For example, this has a huge impact on testing infrastructure: Unity CI today takes several hours to complete, and this is already a struggle for the infrastructure team. When we start supporting CoreCLR, it will be another backend alongside Mono, and we will have to roughly double our CI time. Those kinds of changes don't happen overnight. You need to bring many teams and infrastructure challenges along on this journey, which adds a lot more complexity to the migration picture (it's not just changing some code here and there).

That also explains why we need to proceed in steps to get there, so that each step is practical engineering progress and mitigates the risk of failure along the way. Among these steps we will have to:

• First, migrate to a newer version of Mono, which helps us narrow the gap of this migration. This version of Mono already uses many parts of the .NET 5 BCL, though not entirely the same internals. But it is able to expose an API experience similar to what our users will get with .NET 5.
• Then we will need to migrate to Mono .NET 5, the same Mono (developed in the dotnet/runtime repo) that is entirely based on the same BCL used by CoreCLR .NET 5. Mono .NET 5 will most likely apply only to the standalone player, because it has removed support for AppDomains (which are still required in the editor), but this part of the journey alone will let us validate many things, including that we can rely entirely on the same BCL used by CoreCLR .NET 5 - that alone is huge (and we might encounter regressions just by doing this).
• Migrating to this version will also require migrating IL2CPP to rely on the same runtime. Remember that CoreCLR covers only 3 platforms while we cover 20+.
• In between, we will start to migrate to the CoreCLR runtime, and probably restrict it at the beginning to Unity standalone players (not the Editor). The reason is that the number of internal fixes/APIs/problems we will have to address is much lower than fixing all the usages in the Editor. It will also validate a good chunk of the migration challenges.
• Then we will start integrating CoreCLR within the Editor, evaluating how we can mitigate our usage of AppDomains to efficiently reload user code, etc. It will require migrating/fixing lots of code in the editor (the C++ part) that conflicts with how CoreCLR works.

If you look at Microsoft, they still haven't made CoreCLR the sole runtime for all the .NET platforms they manage; after all these years, Mono is still the main runtime for all mobile platforms and is even used for the WASM platform. If it were that easy to migrate to CoreCLR, Microsoft would have done it for its own product line. Things are even more complicated for Unity, because Unity is the most complicated usage/integration of Mono on this planet.

So to wrap it up: Unity is engaged in this transition and we are working on it, but the reality of this transition is complex. It will require some patience, despite all the passion and effort we will put into it. ;)
     
    Last edited: Apr 14, 2021
    CodeSmile, Si1ver, nik_d and 70 others like this.
  2. optimise

    optimise

    Joined:
    Jan 22, 2014
    Posts:
    2,129
Is there any plan to move more and more C++ code to C# to make this migration journey smoother, now that Unity has the amazing Burst tech? I think a long time ago @Lucas-Meijer mentioned there was internal discussion about whether to move more and more C++ code to C# land. One of the big issues I've found is Unity's graphics stack. It seems like every new graphics feature needs huge engine changes on the C++ side, which means it usually cannot be backported to previous Unity versions. What I hope to see is more and more of the C++-side graphics code moving into the SRP C# package, eventually making it a truly independent graphics module not tied to any engine version. This also applies to other Unity features that are bound tightly to the engine version.
     
    Last edited: Apr 14, 2021
    Qbit86 likes this.
  3. xoofx

    xoofx

    Unity Technologies

    Joined:
    Nov 5, 2016
    Posts:
    417
Yes, that's exactly what is happening with the development done in packages. Most new code is written in C#. But we also have challenges along the way: for example, Burst is not an integrated package within the Unity editor that legacy code in Unity could rely on. So before going there, we are going to have to transform the editor into built-in packages in order to rely on Burst. That's on our roadmap, but it's also a big project in itself (involving many teams, infrastructure, packaging, etc.).
     
  4. sebas77

    sebas77

    Joined:
    Nov 4, 2011
    Posts:
    1,642
To me, at this level, it's just curiosity more than anything else. I am also curious about the relation to Project Tiny since, as far as I understood, Project Tiny is not using Mono (again, I could be wrong; I'm not totally invested in knowing these aspects yet, so it's just curiosity).

However, I do understand that this is a struggle for Unity. Probably the majority of products use IL2CPP, but this is only true for mobile/console. I have never really understood how much the company values mobile platforms over PC platforms. However, studios like ours have only ever worked on PC, so it's nice to see that they care.
     
  5. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
Project Tiny was designed as a managed application that uses P/Invoke to call into native code. So it is kind of the opposite of Unity (which is a native application that hosts the .NET VM, as @xoofx mentioned above). From the beginning, Tiny ran on .NET Framework, .NET Core, and IL2CPP (albeit with a very stripped-down runtime and class library, mainly to avoid the need for reflection support).
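A minimal sketch of the "managed application calling into native code" pattern described above, using strlen from the C runtime as a stand-in for engine code (the library name assumes a glibc-based Linux system; Project Tiny's own native bindings are of course different):

```csharp
using System;
using System.Runtime.InteropServices;

static class PInvokeSketch
{
    // On glibc-based Linux the C runtime is libc.so.6;
    // the library name differs on Windows/macOS.
    [DllImport("libc.so.6", EntryPoint = "strlen")]
    private static extern UIntPtr strlen(string s);

    static void Main()
    {
        // Managed code drives; native code does the work.
        Console.WriteLine(strlen("hello")); // 5
    }
}
```

This is the inverse of classic Unity, where the native C++ engine embeds and drives the managed VM.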
     
    Walter_Hulsebos and phobos2077 like this.
  6. PassivePicasso

    PassivePicasso

    Joined:
    Sep 17, 2012
    Posts:
    100
    This is all very exciting news, but I have one question. I have a build system which has ScriptableObjects that the user populates with data. One of the objects contains a collection of AssemblyDefinition assets.
    If these are going away, will there be a new way to reference assemblies defined in the project as assets?

    The project is MIT licensed, if you want to take a look at it for referencing let me know.

The loss of the ability to reference these assemblies would mean that my tool could only work in Unity 2018 through 2021.1.
     
  7. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    I don't expect any of this will be impacted by .NET updates. So you should be fine!
     
  8. sebas77

    sebas77

    Joined:
    Nov 4, 2011
    Posts:
    1,642
    Thanks Josh,

There are MANY things that confuse me about Unity's intentions, and Project Tiny is one of them. Will it and Unity always be different projects? Will Unity and Project Tiny converge one day? Is Project Tiny a platform to experiment with .NET Core (.NET 5 now) so that you know what to expect for Unity?
From your words it seems that no C++ code is used in Project Tiny. Unfortunately I haven't tried it yet, but I thought that it was using the Unity Editor, so perhaps you are talking only about the built client, which would make sense.
     
  9. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    I'm not sure about these details either, to be honest.

    It was more about small code size than .NET experiments. I don't think you should draw any conclusions from Tiny about future .NET support in Unity. Tiny happened to run on .NET Core, but it did not use the .NET 5 BCL - it used the stripped-down BCL that Unity defined, expressly to try to keep the code size small.

Tiny does have a good bit of C++ code. The difference from Unity is how the code is organized. Unity is a native application that hosts managed code. Tiny is a managed application that calls into native code.

Yes, Tiny is using the Unity Editor. The player build is a managed application.
     
  10. DinostabOMG

    DinostabOMG

    Joined:
    Jan 4, 2014
    Posts:
    26
    I was just thinking about this some more and I wonder if it might be good for the API to expose something like
    Code (CSharp):
    1. public static bool IsDead(Object obj)
    as a static member of UnityEngine.Object.

That way, old code would be upgraded from:

    Code (CSharp):
    1. if (obj == null)
    to

    Code (CSharp):
    1. if (Object.IsDead(obj))
    instead of

    Code (CSharp):
    1. if (obj == null || obj.IsDestroyed)
    the latter of which could get long-winded, especially in complex conditionals.
     
  11. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    This is what I would personally prefer, and I actually do something similar to this currently with an "IsNullOrDestroyed" method (mimicking string.IsNullOrEmpty).
    Code (CSharp):
    1. #nullable enable
    2. using System.Diagnostics.CodeAnalysis;
    3. using UnityEngine;
    4.  
    5. public static class UnityObject
    6. {
    7.     /// <summary>Indicates whether the specified object is <see langword="null"/> or destroyed.</summary>
    8.     public static bool IsNullOrDestroyed([NotNullWhen(false)] Object? obj)
    9.     {
    10.         return obj == null;
    11.     }
    12. }
     
  12. NotaNaN

    NotaNaN

    Joined:
    Dec 14, 2018
    Posts:
    325
    Personally, I would much rather prefer having an 'IsDestroyed' property within GameObject:
    Code (CSharp):
    1. if (gameObject.IsDestroyed) { // Clean! }
    Instead of forcing us all to use a static method and make our if-statements look all weird like this:
    Code (CSharp):
    1. if (UnityEngine.Object.IsDestroyed(gameObject)) { // Bleh... }
    Because really, do we not have properties for this exact reason?

    Obviously Unity can provide both solutions (since the 'IsDestroyed' property would just call the 'IsDestroyed()' method internally) and everybody can just use their favorite of the two.

    I just want to make sure that Unity's automatic API updater upgrades to the syntax I want when the upgrade happens. :p
     
13. The static method can do both checks at the same time (a null check on the variable and a destroyed check on the object when the variable isn't null). When you use the property, you need an additional null check before it. (Yeah, I know, the ? operator can be used in that case, but still.)
     
    phobos2077, cxode and NotaNaN like this.
  14. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
    Actually, you can have the best of both worlds:
    Code (CSharp):
    1. public static class UnityObjectExtensions
    2. {
    3.       public static bool IsNullOrDestroyed(this UnityEngine.Object obj)
    4.             => obj is null || InternalAPIIsObjectDead(obj);
    5.  
    6. }
By using an extension method, you get the syntax of an instance method, yet it doesn't throw on null references.

    That function can be used as either:

    Code (CSharp):
    1. if (!UnityObjectExtensions.IsNullOrDestroyed(obj))
    2.    // ....
    3.  
    4. // Or
    5.  
    6. if (!obj.IsNullOrDestroyed)
    7.    // ....
Though, if that method were added, I would also suggest adding
    obj.IsAlive
    as an alternative to
    !obj.IsNullOrDestroyed
because a lot of the time you actually want to check whether the object still exists.
     
  15. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
    Note that I accidentally forgot to add the parens in the code above.

    The instance version would actually be

    Code (CSharp):
    1. if (!obj.IsNullOrDestroyed())
    2.    // ....
Since it's an extension method, not a property.

    Surprisingly, the spam filter doesn't allow me to edit my own message...
     
    JesOb, Thaina and NotaNaN like this.
  16. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,166
There's still a good chance that by then we might already have extension properties and indexers.
     
  17. Ramobo

    Ramobo

    Joined:
    Dec 26, 2018
    Posts:
    212
    Yeah, good luck. Extension everything has been dead for a while and I can't find any active continuations.
     
    Last edited: Apr 15, 2021
  18. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,166
  19. Ramobo

    Ramobo

    Joined:
    Dec 26, 2018
    Posts:
    212
  20. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
Actually, the quote doesn't mean anything. The "Language Version Planning" GitHub project has been deprecated and they no longer use it. That is why he removed the issue from it. Now they use GitHub milestones to track language features: https://github.com/dotnet/csharplang/milestones
     
  21. triple_why

    triple_why

    Joined:
    Jun 9, 2018
    Posts:
    47
Related to support for .NET Standard 2.1: I'm building a .NET Standard 2.0 DLL which performs heavy calculations on float data and is consumed by my Unity app. The System.Math class in .NET Standard 2.0 only provides double versions of the math functions. The System.MathF class in .NET Standard 2.1, on the other hand, provides float versions of those functions.

I presume that support for .NET Standard 2.1 includes the System.MathF class, so I will be able to move the DLL's code from the Math class to the MathF class.
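The difference is easy to show in a small standalone snippet (plain .NET, no Unity dependency): with only Math available, float work has to round-trip through double, while MathF stays in single precision.

```csharp
using System;

class MathFDemo
{
    static void Main()
    {
        float x = 2.0f;

        // .NET Standard 2.0 style: only double overloads exist,
        // so the value is widened to double and narrowed back.
        float viaDouble = (float)Math.Sqrt(x);

        // .NET Standard 2.1 style: float in, float out.
        float viaMathF = MathF.Sqrt(x);

        Console.WriteLine(viaDouble == viaMathF); // True for sqrt(2)
    }
}
```

For most inputs the two agree after rounding, but MathF avoids the widen/narrow conversions and can be faster on float-heavy workloads.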
     
  22. Ramobo

    Ramobo

    Joined:
    Dec 26, 2018
    Posts:
    212
  23. Qbit86

    Qbit86

    Joined:
    Sep 2, 2013
    Posts:
    487
You can still benefit even from Slow Span - with the new Span-based APIs. They let you reduce memory allocations in common scenarios, e.g. stackalloc, TryFormat(), slicing, and so on.
But Slow Span is not worse than the conventional ArraySegment, is it?
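The allocation-saving pattern mentioned above (stackalloc + TryFormat + slicing) looks like this in plain C#; all of these APIs are part of .NET Standard 2.1:

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        int value = 12345;
        Span<char> buffer = stackalloc char[32];

        // TryFormat writes the digits straight into the stack buffer:
        // no intermediate string allocation, unlike value.ToString().
        if (value.TryFormat(buffer, out int written))
        {
            ReadOnlySpan<char> digits = buffer.Slice(0, written);
            Console.WriteLine(written);                                // 5
            Console.WriteLine(digits.SequenceEqual("12345".AsSpan())); // True
        }
    }
}
```

Whether the Span is "fast" (ref-field based, on Core) or "slow" (portable, on Mono/net472) changes the per-access cost, not this API surface.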
     
    Last edited: Apr 16, 2021
  24. sebas77

    sebas77

    Joined:
    Nov 4, 2011
    Posts:
    1,642
According to this article, the difference between fast Span and slow Span is not too big:

    https://adamsitnik.com/Span/#slow-vs-fast-span

I did some tests myself, and they told another story when compared with the Mono implementation. As you can see, the slow Span implementation on net472 is not super bad, but the Mono one is basically unusable when performance is critical. However, my tests are old and my memory incredibly short (and that's why I write articles). I do stuff but then I forget about it because I don't care much (in my case I eventually implemented a custom solution instead of going with Span).

    https://www.sebaslab.com/fastest-way-to-iterate-an-array-in-csharp/

    Code (CSharp):
    1.  
    2. |      StandardManagedSpanInsert |     Clr |                       net472 |  6.713 ms | 0.0174 ms | 0.0145 ms |  6.707 ms |
    3. |      StandardManagedSpanInsert |    Core |                netcoreapp3.0 |  4.861 ms | 0.0208 ms | 0.0195 ms |  4.861 ms |
    4. |      StandardManagedSpanInsert |  CoreRT | Core RT 1.0.0-alpha-27515-01 |  4.891 ms | 0.0199 ms | 0.0186 ms |  4.886 ms |
    5. |      StandardManagedSpanInsert |    Mono |                      Default | 13.833 ms | 0.0294 ms | 0.0245 ms | 13.827 ms |
    6. |                                |         |                              |           |           |           |           |
    7. |    StandardUnmanagedSpanInsert |     Clr |                       net472 |  6.717 ms | 0.0161 ms | 0.0150 ms |  6.719 ms |
    8. |    StandardUnmanagedSpanInsert |    Core |                netcoreapp3.0 |  4.849 ms | 0.0171 ms | 0.0143 ms |  4.847 ms |
    9. |    StandardUnmanagedSpanInsert |  CoreRT | Core RT 1.0.0-alpha-27515-01 |  4.861 ms | 0.0283 ms | 0.0265 ms |  4.854 ms |
    10. |    StandardUnmanagedSpanInsert |    Mono |                      Default | 12.620 ms | 0.0376 ms | 0.0352 ms | 12.620 ms |
    11.  
     
    Last edited: Apr 16, 2021
  25. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    Yes. .NET Standard 2.1 will have support for MathF APIs.
     
    Thaina, Tanner555 and triple_why like this.
  26. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    802
One question regarding that, as this affects how everything (packages, Asset Store, ...) is glued together: in the SDK-style csproj format, is there an equivalent to asmrefs? I'd assume that asmdefs can be fully replaced, but asmrefs are kind of "the other way around" - maybe I'm underestimating how those csprojs would work / what Unity would build on top.
     
  27. SugoiDev

    SugoiDev

    Joined:
    Mar 27, 2013
    Posts:
    395
Asmrefs are one of the greatest features for me, since they let me avoid fighting against 3rd-party directory structures. For example, many developers prefer to have multiple "Editor" folders, like one for each "module" in their asset. With asmrefs, I can just toss one into each "Editor" folder with zero need to reorganize the directory structure itself.

Without asmrefs, I would need to reorganize the structure, and every time we updated those assets I would have to be very careful to re-create it. Also, if the asset itself had any dependency on paths, you would have to modify those too. It was incredibly time consuming.

I don't think I'm the only one who benefits from asmrefs existing, so I'm counting on them still being around after all these migrations.
     
  28. VolodymyrBS

    VolodymyrBS

    Joined:
    May 15, 2019
    Posts:
    150
BTW, does the Unity team plan to add nullable annotations in the future?
     
    MarekLg and Ramobo like this.
  29. Rickmc3280

    Rickmc3280

    Joined:
    Jun 28, 2014
    Posts:
    189
Will Unity expose access to Bluetooth communications with this release? It's kind of frustrating only being able to work with Android/iOS Bluetooth... Sometimes I don't want to make an app for the mobile stores :)
     
  30. iliaJelly

    iliaJelly

    Joined:
    Jan 15, 2017
    Posts:
    10
Thank you Unity for doing a good job thus far. I am very confused by what standards are/will be/were being used. It just goes to show that Unity really did hide all these nuances from us, nuances which are probably very difficult to manage.

In our company most of the developers did not have a previous background with .NET features, and so we did not know what we were missing in the new versions of C#. Even today we haven't fully transitioned to using the async/await that Unity provided us and that @neuecc perfected with UniTask.

In truth, we don't care whether under the hood the games run on .NET/C#/Mono/C++/Java. Whatever works, we ship it. We would prefer Unity to prioritize providing us with more utilities and optimizations for our games and our development environment.
     
  31. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,166
    I care

The point here is that "whatever works" sometimes requires a lot of boilerplate that wouldn't be necessary if Unity used the right core under the hood. Some problems have already been solved, but Unity cannot reuse the solutions because it forked itself off; things conflict or are outdated and have to wait for Unity to catch up. That puts more work on them, and they can't keep up fast enough.

From the third-party service provider standpoint, they also need to support Unity specifically, because it has a unique build process and hardly works with most of them. Sometimes Unity changes something internally and breaks the unitypackage they provided. The .NET Core ecosystem, meanwhile, is more likely to push us to do interop directly with native libraries, and that has made it more mature than Unity on the interop front.

There are also many areas where Unity is lacking. Many things were solved in C# with NuGet, but Unity has so many conflicts with it, and we then need to work around problems unnecessarily. This wastes so much time and effort whenever we need to do something that people have already solved in .NET but that is almost always incompatible with Unity.

All in all, the point is: if Unity just switched the underlying system to .NET Core, there would be some hard times ahead. But then Unity could shove everything not related to game development (performance, utility, and editor features) onto .NET's plate, such as interop and the underlying build process. Unity could then focus more on what you care about, instead of chasing every specific platform or feature that should just be an easy wrapper written in C#.
     
  32. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    You can use nullable annotations in individual C# source files now. I know that we have discussed adding nullable annotations to .asmdef files as well, but that is not implemented yet. I expect that there will be some way to do that in the future, but I can't say for sure yet.
     
    Walter_Hulsebos, phobos2077 and cxode like this.
  33. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    I'm not familiar with the .NET Bluetooth API. Can you point me to the Microsoft documentation for it?
     
  34. VolodymyrBS

    VolodymyrBS

    Joined:
    May 15, 2019
    Posts:
    150
    That's great news!

But my question is more about annotating the Unity Engine codebase itself, not users' code.
     
  35. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    I'm not sure about this. I've not heard about plans to adopt nullable reference types on the Unity API, but since they are now supported, it may happen. Are there specific APIs where you would find nullable reference types useful?
     
    Walter_Hulsebos likes this.
  36. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
Annotating the Unity API with nullable reference types may raise some problems.
For example, in Unity there are two kinds of null: real null (`obj is null`) and fake null (`obj == null`, used by `UnityEngine.Object` and derived types). Would the annotation of the Unity API use real nulls or fake nulls?
Another issue is methods like `Destroy(someObject)`: since the object was destroyed, it now becomes a fake null. However, AFAIK, C#'s nullable analysis doesn't have attributes to handle that (in C# an object can't just become null spontaneously, so I guess they didn't prepare an attribute like `[ThisParamWillBecomeNullAfterCallingThisMethod]`).
I wonder how both issues would be solved.
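The "two kinds of null" can be reproduced without Unity at all: any plain C# class with a Unity-style `==` overload behaves the same way. The type and names below are hypothetical stand-ins for `UnityEngine.Object`, not Unity's actual implementation.

```csharp
using System;

class FakeNullObject
{
    public bool Destroyed;

    // Unity-style overload: a destroyed instance compares equal to null.
    public static bool operator ==(FakeNullObject a, FakeNullObject b)
        => (a is null || a.Destroyed) ? (b is null || b.Destroyed)
                                      : ReferenceEquals(a, b);
    public static bool operator !=(FakeNullObject a, FakeNullObject b) => !(a == b);
    public override bool Equals(object o) => o is FakeNullObject f && this == f;
    public override int GetHashCode() => 0;
}

class Program
{
    static void Main()
    {
        var obj = new FakeNullObject { Destroyed = true };

        Console.WriteLine(obj == null); // True  -- "fake null": the overload says null
        Console.WriteLine(obj is null); // False -- "real null": the reference still exists
    }
}
```

Nullable annotations only reason about real null, which is exactly why annotating an API built around fake null is awkward.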
     
    JoNax97 and Ramobo like this.
  37. Mindstyler

    Mindstyler

    Joined:
    Aug 29, 2017
    Posts:
    248
I'd really like to see Unity finally moving away from its own 'obj == null' overload. I think the framework upgrade would be a very good time to also change this behavior. Null checking with == or != is a really old artifact that has long been superseded by the 'is' keyword and, in C# 9, the 'is not' pattern. Moving away from the custom == overload would also finally empower developers to use null coalescing etc. to write easier-to-read code and adopt the new nullability easily in all Unity projects. There was already a blog post about moving away from == on May 16th, 2014, and lots of people would have loved to see this change come true, but there hasn't really been any update on it since.
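The null-coalescing problem is that `??` and `?.` test the raw reference and bypass operator overloads entirely. A self-contained sketch with a hypothetical stand-in type (not `UnityEngine.Object` itself) shows the mismatch:

```csharp
using System;

class Obj
{
    public bool Destroyed;
    public string Name = "thing";

    // Unity-style == overload treating destroyed objects as null.
    public static bool operator ==(Obj a, Obj b)
        => (a is null || a.Destroyed) ? (b is null || b.Destroyed)
                                      : ReferenceEquals(a, b);
    public static bool operator !=(Obj a, Obj b) => !(a == b);
    public override bool Equals(object o) => o is Obj b && this == b;
    public override int GetHashCode() => 0;
}

class Program
{
    static void Main()
    {
        Obj destroyed = new Obj { Destroyed = true };

        Console.WriteLine(destroyed == null);             // True: the overload says "null"
        Console.WriteLine((destroyed ?? new Obj()).Name); // thing: ?? ignored the overload
        Console.WriteLine(destroyed?.Name);               // thing: ?. ignored it too
    }
}
```

So as long as the custom `==` exists, `??` and `?.` silently give you "destroyed but not null" objects, which is the inconsistency being discussed here.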
     
    Ziflin, phobos2077, Qbit86 and 7 others like this.
  38. Ramobo

    Ramobo

    Joined:
    Dec 26, 2018
    Posts:
    212
I too questioned the usefulness of nullable reference types when Unity added C# 8.0 support, for this reason. I think the best fix is to just get rid of the custom `==` operator. @JoshPeterson, any chance? CoreCLR brings major breaking changes anyway (no AppDomains), so that's the best time.
     
    Last edited: Apr 19, 2021
    Ziflin, phobos2077 and bdovaz like this.
  39. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
You are confusing two things. One is the custom `==` operator for null comparison. The other is the `GameObject.IsDestroyed` property.
At the moment, Unity hides the `.IsDestroyed` check inside the custom `==` operator. What users want is to remove the custom `==`, and to do that it is necessary to create a `GameObject.IsDestroyed` (or equivalent) property.

Otherwise, if Unity objects had neither a custom `==` operator nor an `.IsDestroyed` property, what would happen when you call `Destroy(myGameObject)`? You can't just turn all references to that object into real null (at least not without forcing a GC run per destroyed object to traverse the heap and rewrite all references to null, which is not only super complex but also insanely expensive). And no, rewriting the Unity engine from C++ to C# won't fix this either.
     
    Ziplock9000 and phobos2077 like this.
  40. Ramobo

    Ramobo

    Joined:
    Dec 26, 2018
    Posts:
    212
    Fair; I've edited my message to reflect this. But why would I think that rewriting the engine in C# would fix this?
     
  41. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    52
Oh, a lot of people think that rewriting the Unity engine in C# will fix all their problems. That is why I added the clarification XD. Sorry.
     
    Ramobo likes this.
  42. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,166
Or maybe add another property or extension method, `Living`, that returns the GameObject if it is not destroyed and null if it is, and change `gobj == null` in most places into `gobj.Living == null`.
     
  43. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,936
    At the moment this is not something we are considering as part of the .NET upgrade work. I don't know if it will happen separately or not.
     
  44. Knil

    Knil

    Joined:
    Nov 13, 2013
    Posts:
    66
Please take time on this issue; it's a major problem and this is the perfect time to fix it. If it isn't fixed, we will once again be stuck with all the weirdness of the magic equality operator on Unity Objects. An extension method seems like the best fix because it won't throw on null.

    https://blogs.unity3d.com/2014/05/16/custom-operator-should-we-keep-it/
     
  45. Knil

    Knil

    Joined:
    Nov 13, 2013
    Posts:
    66
There are 5 different ways we could check whether a Unity.Object is destroyed or null. Having a property do it makes it too error-prone, because then you would also need to check whether the reference is null as well as destroyed. That's why the member function calls below are extension methods, so we don't throw on null. My vote is A, the implicit bool conversion, because it's already implemented, it's the cleanest, and I think it makes a lot of sense for a beginner.

    Code (CSharp):
    1. A: if(obj)
    2.  
    3. B: if(obj.IsAlive())
    4.  
    5. C: if(!obj.IsNullOrDestroyed())
    6.  
    7. D: if(IsAlive(obj))
    8.  
    9. E: if(!IsNullOrDestroyed(obj))
     
  46. JesOb

    JesOb

    Joined:
    Sep 3, 2012
    Posts:
    1,109
Highly disagree :)
Option A is the least informative: what is bool obj? Why is obj a bool? My car is true or false, or maybe my iPhone 12 is false or true?

This option contrasts sharply with the way people think.
Most people think like this: is the car alive? Or maybe: is my iPhone 12 dead (!IsAlive)?

So my opinion is that option B, the IsAlive extension method, is best.
     
  47. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
    Option A
    • This syntax implies that the value is a bool when it is in fact not.
    • It is not obvious that this is both an "is null" and "is destroyed" check.
    • By convention, null checks are explicit in C# unlike in C or C++. The implicit bool conversion operator on UnityEngine.Object creates additional methods of performing this check. Code then ends up riddled with "if (obj)", "if (obj != null)" and the various other forms, leading to inconsistency throughout the codebase.
    • Leads developers to mistakenly believe that a value is null when it is actually a reference to a destroyed object.
    Options B & C
    • Looks like a regular instance method call, which can trip you up while reading the code. You cannot immediately tell from the call site that this is an extension method and will not throw when the value is null.
    • (Option B) Due to the previous point, it is especially unclear that this is also checking for null as the method name does not imply it.
    Options D & E
    • This is an established pattern (string.IsNullOrEmpty, string.IsNullOrWhiteSpace, etc.)
    • There is no ambiguity here. It is extremely clear what the method is checking for.
    • Can still be exposed as an extension method for developers that happen to prefer that form.
    Option E gets my vote. I'm not a fan of the extension method approach for the reasons I've outlined in the relevant section above.
     
  48. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,166
    I vote for C

Almost the same reasons as @TheZombieKiller, except I don't mind that the method call is actually an extension method that also checks for null. In fact, this pattern should be made more common.
     
    TheZombieKiller likes this.
  49. JoNax97

    JoNax97

    Joined:
    Feb 4, 2016
    Posts:
    611
I vote for B. The 'alive' concept includes not being null; that check is all you need to be sure the object is usable.

    I'd remove the implicit bool conversion.

    Edit: what about something like
    if (obj?.isAlive)
     
  50. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    266
The result of the expression "obj?.isAlive" would be bool? (Nullable<bool>) and not bool, so you would have to write "if (obj?.isAlive == true)".
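The lifting behavior is easy to see with any nullable receiver in plain C# (no Unity types needed; `isAlive` above is the hypothetical property under discussion):

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = null;

        // s?.Length has type int?, not int: ?. lifts the result to nullable.
        int? len = s?.Length;
        Console.WriteLine(len.HasValue);       // False

        // Likewise obj?.isAlive would have type bool?, which cannot be
        // used directly as an if condition; it needs an explicit comparison.
        bool? maybeAlive = null;
        Console.WriteLine(maybeAlive == true); // False
    }
}
```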