
Unity Future .NET Development Status

Discussion in 'Experimental Scripting Previews' started by JoshPeterson, Apr 13, 2021.

  1. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    44
    Why would Unity need to replace `float` with `double` and `int` with `long`?
    They may have lower ranges, but they consume less memory and (afaik) CPUs compute them faster, especially when you do vectorized math with SIMD.

    Though apart from that, I also agree Unity should make more breaking changes in favor of upgrading their engine.
     
    Mindstyler likes this.
  2. TangerineDev

    TangerineDev

    Joined:
    Sep 28, 2020
    Posts:
    122
    1. We don't care about memory anymore nowadays XD
    2. Look at the Rust language: its default floating-point type is 64 bits (double).
    • The reason: 32 bits isn't much of an optimization over 64 on modern CPUs
    From the docs:
    "Rust also has two primitive types for floating-point numbers, which are numbers with decimal points. Rust’s floating-point types are f32 and f64, which are 32 bits and 64 bits in size, respectively. The default type is f64 because on modern CPUs, it’s roughly the same speed as f32 but is capable of more precision."

    Source: Data Types - The Rust Programming Language (rust-lang.org)

    Hardware is evolving, so languages should too
     
    yu_yang and cxode like this.
  3. Neonage

    Neonage

    Joined:
    May 22, 2020
    Posts:
    238
    >DOTS stack and ECS architecture: XD
    Memory performance hasn't evolved enough for us to stop caring about it.
     
    Saniell and TangerineDev like this.
  4. Neonage

    Neonage

    Joined:
    May 22, 2020
    Posts:
    238
  5. CaseyHofland

    CaseyHofland

    Joined:
    Mar 18, 2016
    Posts:
    551
    Unless they come up with a roughly same-speed decimal value that is 100% accurate, I don’t care about getting 32 more decimals of precision.

    I know that’s never gonna happen, but it’s to drive home a point: once you have 6 decimals of precision and an error margin, it starts getting really inconsequential, and if you do need it then just use it in the APIs that need it, which Unity’s really don’t.

    What I’m saying is: such an update won’t improve your life, so why bother?
     
  6. cxode

    cxode

    Joined:
    Jun 7, 2017
    Posts:
    268
    There are plenty of use cases where switching from 32 bits to 64 bits DOES make a massive difference. Space games with a large scale come to mind.

    Just because you don't make games that would benefit from 64-bit precision doesn't mean that nobody does.
     
    TangerineDev likes this.
  7. Armitage1982

    Armitage1982

    Joined:
    Jul 26, 2012
    Posts:
    37
    This is exactly why I am looking forward to CoreCLR, just for bepuphysics v2.
    I had excellent results with Havok, but learning DOTS in the process was a pain, and apart from the physics engine I don't need everything that technology offers. I had to stop for a few months, and now I'm dreading all the updates (Entities 1.0.0-pre.47, Unity 2022~2023, packages, ...). brrrrrr :eek:
     
  8. Neonage

    Neonage

    Joined:
    May 22, 2020
    Posts:
    238
    Looking into their source code right now, I see that it's probably possible to replace System.Runtime.Intrinsics with Unity.Burst.Intrinsics; all the AVX functions used can be matched.
    Havok was great; sadly it's no longer available to free users.
     
  9. Iron-Warrior

    Iron-Warrior

    Joined:
    Nov 3, 2009
    Posts:
    836
    I'm not sure about Havok, but with Unity Physics it's not necessarily required to use ECS, as the engine itself is decoupled from entities. Of course, in that case you are on the hook to write your own editor tools and whatnot to hook up to the engine, but that's not really that difficult overall.
     
  10. Neonage

    Neonage

    Joined:
    May 22, 2020
    Posts:
    238
    Oh, I forgot about that! Could try to use it with my BRG-based rendering :)
    Edit: seems like it requires A LOT of changes to actually decouple it from ECS package

    Havok integration is based on Unity Physics, so it should be the same - as simple as switching a toggle.
     
    Last edited: Mar 5, 2023
    TangerineDev likes this.
  11. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Increased precision for big worlds. As for performance, you'd be surprised how well double does in benchmarks; sometimes it's even better than float, as some CPUs are optimized for it.
     
  12. PetrisPeper

    PetrisPeper

    Joined:
    Nov 10, 2017
    Posts:
    62
    The performance issue is mostly with SIMD code; on 64-bit platforms, scalar perf is about the same (other than bigger types taking more of the CPU cache). With SIMD, though, you can process twice as many floats and ints as doubles and longs, so the wider types are noticeably slower there.
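    To illustrate the footprint half of that argument (a sketch, not Unity code): a 128-bit SIMD register or a 64-byte cache line holds exactly twice as many 32-bit floats as 64-bit doubles, which Python's `struct` module can show directly.

    ```python
    import struct

    # A 128-bit (16-byte) SIMD register holds 4 floats but only 2 doubles.
    assert struct.calcsize('4f') == 16   # four 32-bit floats
    assert struct.calcsize('2d') == 16   # two 64-bit doubles

    # Likewise, a 64-byte cache line fits 16 floats but only 8 doubles,
    # so a double-based data stream moves twice as much memory per element.
    assert struct.calcsize('16f') == 64
    assert struct.calcsize('8d') == 64
    ```

    Half the lanes per operation plus twice the bytes per element is where the "roughly 2x slower" figure for vectorized double work comes from.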
     
  13. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    > Increased precision for big worlds

    This is more a case of people working around floating-point without understanding its limitations. Game worlds need to be "chunked", and in general you need a floating origin. There are many ways you can do this, and not necessarily one "right" way. But switching to double doesn't "fix" the issue; it just pushes it out a bit more, so you don't have to think about it "as soon".

    Ideally Unity would have a way to make this a lot simpler and have general guidance on how to "properly" set up your game world to avoid issues with jitter and large distances.
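    The floating-origin idea described above can be sketched in a few lines. This is illustrative pseudocode in Python; the names `REBASE_THRESHOLD` and `maybe_rebase` are invented for the example and are not any engine's API.

    ```python
    # Hypothetical floating-origin sketch: once the camera drifts too far from
    # the origin, translate the whole world so the camera is back at (0, 0, 0)
    # and all coordinates near the player stay small and precise.
    REBASE_THRESHOLD = 1024.0  # assumed tuning value

    def maybe_rebase(camera_pos, object_positions):
        """Return (camera_pos, object_positions) after an optional world shift."""
        if max(abs(c) for c in camera_pos) < REBASE_THRESHOLD:
            return camera_pos, object_positions  # still near the origin
        offset = tuple(-c for c in camera_pos)
        new_objects = [tuple(p + o for p, o in zip(pos, offset))
                       for pos in object_positions]
        return (0.0, 0.0, 0.0), new_objects

    # Relative distances are preserved, but absolute coordinates stay small:
    cam, objs = maybe_rebase((2000.0, 0.0, 0.0), [(2010.0, 0.0, 0.0)])
    ```

    The point is that float precision is spent where it matters, near the player, instead of being wasted encoding a huge absolute offset.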

    > About performance you'd be surprised how good double does in benchmarks, sometimes even better than floats as some CPUs are optimized for them.

    As Petris said, the raw performance of scalar operations for `float` vs `double` is "about the same". This doesn't hold up in practice, though, and `float` ends up approximately twice as fast. This is true whether considering "scalar" (1x op at a time) or "vector" (4x float ops at a time vs 2x double ops at a time for Vector128). -- It can even be more than twice as fast for vector code, due to halving both the number of operations and the number of memory accesses.

    The main reason is that accessing memory is incredibly slow compared to the operation and since double takes twice as much memory, you'll be accessing twice as much memory and effectively halving your throughput for large workloads.

    This is even a consideration for 64-bit vs 32-bit processes, since pointers now take twice as much space. While the additional registers and more optimal calling convention on 64-bit typically offset the difference, there are workloads where they don't, and where pointer-heavy code is faster on 32-bit.
     
  14. XJDHDR

    XJDHDR

    Joined:
    Mar 31, 2020
    Posts:
    20
    To play devil's advocate here: from what I can see in Godot's docs, Godot appears to have implemented C# as an extension. They currently release two different binaries for the editor, with and without C# support, which already indicates that it's not that integrated into the engine. Furthermore, according to their roadmap, they want to condense this down to one binary: a binary without C# support, plus an addon you download that adds C# support. So effectively, the C# API in Godot is equivalent to one of the packages you would install in Unity's package manager, unlike Unity, where the C# API is tightly integrated into the engine.

    With this in mind, it makes sense that Godot would be able to adopt .NET updates faster than Unity.

    Also, keep in mind that the first official cross-platform version of C# (.NET Core) was only released in June 2016. Which is why both Unity and Godot have been using Mono until recently.
    Edit: And arguably, it took another 2 or 3 years until Core was a suitable replacement for Mono or Framework.
     
    Last edited: Mar 6, 2023
    OCASM, Luxxuor, OndrejP and 2 others like this.
  15. Neonage

    Neonage

    Joined:
    May 22, 2020
    Posts:
    238
    Godot 4 moved to CoreCLR runtime:
    What's new in C# for Godot 4.0 (godotengine.org)
     
    Alvarden, Thaina and TangerineDev like this.
  16. XJDHDR

    XJDHDR

    Joined:
    Mar 31, 2020
    Posts:
    20
  17. oscarAbraham

    oscarAbraham

    Joined:
    Jan 7, 2013
    Posts:
    431
    I agree with many of the ideas in your post, but, to be fair, doubles don't just push the problem a "little bit"; they push it a lot. See this StackOverflow answer for the math: from zero, 32-bit floats can represent 16,777,217 consecutive integers, while 64-bit floats can represent 9,007,199,254,740,993 consecutive integers from zero. That's a lot more.

    I get that there are real performance challenges with them. Also, 64 bit floats present a lot of issues when fitting them into GPU-related problems. I agree that there are real reasons for preferring 32 bit floats. But I think you're underestimating the benefits of 64bit floats; sometimes they are worth it. Most games that have 32 bit float problems wouldn't have them with 64 bit floats.
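    Those limits are easy to verify. A quick sketch using Python's `struct` to round-trip values through IEEE-754 binary32 (Python's own `float` is already binary64):

    ```python
    import struct

    def to_f32(x):
        """Round-trip a Python float through IEEE-754 binary32."""
        return struct.unpack('f', struct.pack('f', x))[0]

    # 2**24 = 16,777,216 is the last point at which every integer is exactly
    # representable in a 32-bit float; 2**24 + 1 silently rounds back down.
    assert to_f32(16_777_216.0) == 16_777_216.0
    assert to_f32(16_777_217.0) == 16_777_216.0

    # A 64-bit float keeps exact integers all the way up to 2**53.
    assert 9_007_199_254_740_991.0 + 1.0 == 9_007_199_254_740_992.0  # exact below 2**53
    assert 9_007_199_254_740_992.0 + 1.0 == 9_007_199_254_740_992.0  # 2**53 + 1 rounds
    ```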
     
    Last edited: Mar 6, 2023
    cxode and TangerineDev like this.
  18. CaseyHofland

    CaseyHofland

    Joined:
    Mar 18, 2016
    Posts:
    551
    So we’re at a real standoff then, where some games that don’t have 32 bit float problems would have them with 64 bit floats.

    I think this discussion needs its own thread now, it’s clearly a deserving topic.
     
  19. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,773
    For those who did not click through this link - I'll summarize here:

    Inside of Unity there is a recognition that the current policy is not serving the needs of all users, so lots of discussions are happening now. I do expect some changes, but don't yet know what form they will take. We will share details as soon as we can.
     
  20. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,773
    No, right now we don't plan to move to a newer C# version before releasing CoreCLR. That could change, of course, but it is our current approach.
     
  21. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    Yup, I'm very familiar with the underlying limits, given it's my job to be responsible for those types in .NET ;)

    It's also notably quite a bit more involved than simply that. The distribution of binary floating-point values lies on an exponential curve, so approximately half of all representable values exist between `-1` and `+1`. The other half exist between `MinValue` and `-1` and between `+1` and `MaxValue`. A small portion of the encoding space is then reserved for +/-Infinity and the various NaNs.

    This also means that while you can represent roughly `2^24` integers before losing integral precision, you've already started losing significant fractional precision well before that, and various numbers are still exactly representable above this range (such as all powers of two and a larger sequence of powers of 10).
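    The fractional-precision point is easy to see directly; a sketch round-tripping through binary32 via Python's `struct`:

    ```python
    import struct

    def to_f32(x):
        """Round-trip a Python float through IEEE-754 binary32."""
        return struct.unpack('f', struct.pack('f', x))[0]

    # Near 1.0, a 32-bit float resolves differences of roughly 1e-7...
    assert to_f32(1.0 + 1e-6) != 1.0

    # ...but near one million (still far below 2**24) the spacing between
    # adjacent floats is already 0.0625, so centimeter-scale offsets vanish.
    assert to_f32(1_000_000.0 + 0.01) == 1_000_000.0

    # Powers of two stay exactly representable well above 2**24.
    assert to_f32(2.0 ** 30) == 2.0 ** 30
    ```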

    There is also a distinct difference between using `double` as your storage and using `double` for intermediate computations. Selectively upcasting to `double`, doing your extended precision computations, and then downcasting back to `float` is generally fine/good particularly when you're in the known edge cases. -- Machine Learning does similar things, where it may store values as normalized `half-precision` (16-bit values) but does all its compute work via `single-precision` (32-bit values).
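    The store-narrow/compute-wide idea can be sketched like this (illustrative only: many binary32 values are accumulated in a 64-bit intermediate, then downcast once at the end):

    ```python
    import struct

    def to_f32(x):
        """Round-trip through IEEE-754 binary32 to emulate float storage/math."""
        return struct.unpack('f', struct.pack('f', x))[0]

    values = [to_f32(0.1)] * 100_000  # data stored as 32-bit floats

    # Naive: accumulate in 32-bit precision, rounding after every add.
    acc32 = 0.0
    for v in values:
        acc32 = to_f32(acc32 + v)

    # Compute-wide: accumulate in 64-bit, downcast to 32-bit once at the end.
    acc64 = to_f32(sum(values))  # Python floats are binary64

    assert abs(acc64 - 10_000.0) < 0.01   # wide accumulator stays accurate
    assert abs(acc32 - 10_000.0) > 0.1    # narrow accumulator visibly drifts
    ```

    The stored data is identical in both paths; only the precision of the intermediate differs.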

    You ultimately have to take multiple factors into consideration, including where the compute work will happen, whether the underlying hardware supports compute at the given size (many GPUs don't support `double`), how you've set up your world, the scales you need to account for, etc. What ends up working best across a range of games/devices is typically using `float`, ensuring you've normalized your local space (so it's between `-1` and `+1`), and ensuring your world is "chunked" or otherwise divided so that you can address anything without loss of precision.

    Doing all of this can also help with resource/memory management, making distinct what is considered "near" vs "far" (to handle level of detail, which animations are used, how important live AI compute is, etc), allowing users to move farther apart in multiplayer without jitter or other issues, allowing moving at high speeds without as many physics problems, and gives you well-defined boundaries through which you can do cross-chunk processing and handle the fixups between the localized coordinates so you can minimize the amount of processing that has to be done on the CPU or with the wider precision.
     
    hessel_mm, Mindstyler, OCASM and 11 others like this.
  22. oscarAbraham

    oscarAbraham

    Joined:
    Jan 7, 2013
    Posts:
    431
    Perhaps I should have clarified, sorry; I didn't mean to say that's a fact you in particular don't know, or that it directly impacts how doubles can solve big-world coordinate problems. I meant it as a demonstration of how big of an impact 64-bit floats can make precision-wise. In other words, IMO, when talking about big-world coordinates, saying that using doubles just pushes the problem "a little bit more" misrepresents the difference in precision between 32-bit and 64-bit floats. You can have worlds many orders of magnitude bigger with doubles without using a floating origin.

    There are many games that could benefit from a big-world coordinate system, like the one they added in UE5. As an example, I believe that Minecraft versions that use 64-bit floats for positions have a lot fewer problems with glitches when traveling far. It's not a panacea, but it's useful in many cases; at the very least, it's a quick and easy way to solve some problems. One could also solve those problems with 32-bit floats, but that solution has its own challenges.
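    A concrete sketch of that scale difference (again round-tripping through binary32 via Python's `struct`; the 50,000 km figure is just an illustrative world coordinate):

    ```python
    import struct

    def to_f32(x):
        """Round-trip a Python float through IEEE-754 binary32."""
        return struct.unpack('f', struct.pack('f', x))[0]

    origin = 50_000_000.0  # a position ~50,000 km from the world origin
    step = 0.001           # a 1 mm movement

    # In 32-bit floats the millimeter step vanishes entirely: adjacent floats
    # at this magnitude are 4 units apart, so the object snaps/jitters.
    assert to_f32(origin + step) - to_f32(origin) == 0.0

    # In 64-bit floats the same step survives with precision to spare.
    assert abs((origin + step) - origin - step) < 1e-6
    ```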

    On the rest of the post, I agree with a lot of what you said; some of it would depend on the context you're referring to. I hesitate to comment further, because I fear I've already contributed too much to driving this thread off topic. I think you can and should chunk your world whether you're using 32-bit positions with floating origins or 64-bit floats. I also think there are things you'd probably want to keep as singles with a floating origin even when using doubles for the rest, like Unreal 5 does with particles.
     
    Last edited: Mar 6, 2023
    cxode and TangerineDev like this.
  23. marce155

    marce155

    Joined:
    Jun 8, 2014
    Posts:
    8
    I was under the impression that CoreCLR for the player (not the editor) was planned for a 2023 version.

    Just went through the 2023.2.0a6 release notes and didn't find any mention of that - there is an improvement wrt NativeAOT & IL2CPP, but no mention of the CoreCLR runtime.

    If I remember correctly, only two tech streams are currently planned, so does that mean that player CoreCLR will be pushed back to 2024? I suppose such a fundamental change would already be in at an alpha 6 stage.
    No rush, take the time it needs to get done properly, but if you can already confirm "2024.1 at the earliest" or something like that it could help to manage expectations.
     
  24. burningmime

    burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    Sorry to dredge up this old post, but does that mean I should *prefer* using Vector3 to float3 outside of Burst? I've converted most of my code over to using float3 because it's easier to just have a single vector type. I really don't want to have to think about what vector type to use at every juncture.
     
  25. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,773
    We don't plan to release the CoreCLR player with a Unity 2023 version. I'm sorry if I mentioned something like that before; I try to avoid mentioning specific release targets until we are sure we can meet them, to avoid misleading anyone.

    I can definitely say that nothing is planned for release in the 2023 series now. I'm not ready to talk about releases after that, as plans are still up in the air. I can say that the team is completely focused on delivering CoreCLR support, though, so it is just a matter of time before we have firm release plans.
     
  26. JoshPeterson

    JoshPeterson

    Unity Technologies

    Joined:
    Jul 21, 2014
    Posts:
    6,773
    As always with performance, I'll advise you to measure the scenarios you care about. It is really difficult to reason in general about performance. Of course, I'll now proceed to reason in general about it!

    I don't expect Vector3 to be significantly faster than float3 for Mono and IL2CPP. I believe I was intending to point out that Unity.Mathematics is really optimized to work with Burst (and Burst is optimized to work with Unity.Mathematics). If there is code where performance is critical, the combination of Burst and Unity.Mathematics is what you want to use.
     
  27. marce155

    marce155

    Joined:
    Jun 8, 2014
    Posts:
    8
    Thanks for the quick response!

    I got the information from the blog post linked in the OP, where it says "we aim to release this new runtime during 2023 [...] hope to release this new Editor during 2024", and the post was "last updated in February 2023", so I thought that was the latest state.

    Good to hear you are at it, best of luck :)
     
  28. Iron-Warrior

    Iron-Warrior

    Joined:
    Nov 3, 2009
    Posts:
    836
    Does Unity.Mathematics do any kind of optimization with Burst beyond substituting System.Math calls with SLEEF ones? I know that it can auto-vectorize some stuff, but I figured that once the code has entered the Unity.Mathematics methods, the main optimization is just subbing in the SIMD functions.

    In any case @burningmime, you can still use the Bursted SIMD calls by doing goofy hacks like this, though as noted in the post I'm not sure what kind of overhead there is to calling Bursted functions from IL2CPP (none, maybe? Since nothing is marshalled?)
     
  29. Thaina

    Thaina

    Joined:
    Jul 13, 2012
    Posts:
    1,050
    Not on topic, but I want to say I like float3 and int3 because they're easier to write and have many shortcut utility functions and operators. We can scale easily with float3 * float, or swizzle zyx to yzx. All this functionality should be included in every vector type, or float3 should be made the main vector type of the engine.
     
  30. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    760
    If it released in autumn/winter with the 2024.1 alpha, the year 2023 would still be correct :D
     
    kvfreedom and CaseyHofland like this.
  31. CaseyHofland

    CaseyHofland

    Joined:
    Mar 18, 2016
    Posts:
    551
    It is included, through the power of conversion operators.

    There’s 10+ years of Vector3 API making it a hard sell to switch over. Even if you could do it, I wouldn’t want it to happen until, say, 80% of the community has adopted this as the new standard. But since Unity is still using Vector3 themselves, and since there doesn’t seem to be any discernible benefit to switching (apart from a nicer API), I don’t see us getting near that level of comfort anytime soon.

    The fact is that this singular thread is probably home to some of the smartest people I’ll ever meet, but Unity.Mathematics for the community at large seems too hopeful right now.

    Also: System.Numerics.Vector3 called, they want to make this a ménage à trois
     
  32. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    44
    What do you mean by "It is included, through the power of conversion operators."? That I can convert Vector3 to float3?

    Can they just apply the same SIMD optimization that they put in Unity.Mathematics to Vector3 and friends, rather than having me cast between them?

    It may not have all the same convenience methods that Mathematics provides... but it's clearly more used, and I personally don't much like having to change math libraries between Burst and Unity's main code. I'm probably not the only one who finds it weird having to switch APIs for that, right?

    And if they do replace UnityEngine.Vector3 and friends at some point, it should be with System.Numerics.Vector3, which is also widely used outside Unity, reducing the barrier between Unity and .NET code.
    Unity has a nice AutoUpdater, so it shouldn't be so difficult to do at some point.
     
    Qbit86 likes this.
  33. tim_jones

    tim_jones

    Unity Technologies

    Joined:
    May 2, 2019
    Posts:
    282
    Yes, Burst does more optimizations for Unity.Mathematics beyond substituting System.Math calls with SLEEF ones.

    Specifically, Burst substitutes all the vector types in Unity.Mathematics with LLVM vector types, which allows LLVM to perform more optimizations than it might otherwise be able to do on plain struct types.
     
    Iron-Warrior and tmonestudio like this.
  34. Macro

    Macro

    Joined:
    Jul 24, 2012
    Posts:
    45
    I have been enjoying watching this thread, and while I haven't got much to add about the wider stuff, I just wanted to pipe up on the discussion around `Unity.Mathematics` and `System.Numerics`, as they seem to have a lot of overlap at the conceptual level.

    I appreciate that Unity's version seems to have more optimisations for their own types, so it may not be feasible to migrate towards `System.Numerics` as it stands now, but if there is any chance of that in the long run, I feel it would make it far easier to write/consume NuGet packages that want to support Unity.

    Currently, as the Unity DLL isn't available anywhere, you are unable to hook into the Unity-specific vectors etc., so you have to write translation logic somewhere to go from engine-agnostic code (System.Numerics) to Unity-specific code. It would be great if we could streamline the whole thing, as other than the optimisations/SIMD and a few helper methods, I don't see what Unity's specific maths types offer over the normal .NET ones.

    I know this is a bit niche to some extent, but given the more standard .NET focus going forward, so we can use NuGet and the rest of the common .NET tooling/ecosystem, this feels like it would mean less overhead for people who want to work in both headless game-logic scenarios outside of Unity and open-source scenarios.
     
    KuraiAndras, cxode, Qbit86 and 6 others like this.
  35. YegorStepanov

    YegorStepanov

    Joined:
    Oct 10, 2017
    Posts:
    12
    Are there any reasons why there is no implicit conversion operator from System.Numerics.Vector3 to UnityEngine.Vector3 and vice versa? We can't write it ourselves.

    The safest version would easily fix the problem above (safest = via constructor, though I'd be happy with a fast unsafe struct cast).

    Rider correctly suggests such types in IntelliSense.
     
  36. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    +1

    I've noticed in the Numerics source that everything has the [Intrinsic] attribute. I wonder if it could make Mono performance better for both math.float3 and Unity's Vector. https://source.dot.net/#System.Priv...rivate.CoreLib/src/System/Numerics/Vector3.cs

    Speaking of, a question for @tannergooding: why is [MethodImpl(MethodImplOptions.AggressiveInlining)] (and Intrinsic) required everywhere to get better performance? Couldn't the compiler be smarter, or have a more aggressive mode?
     
  37. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    > I wonder if it could make Mono performance better for both math.float3 and Unity's Vector.

    It would not. The `[Intrinsic]` attribute is internal only and indicates that the method has special handling in the runtime. RyuJIT uses it to identify methods it has built-in knowledge about, and then to import them directly as the relevant AST nodes rather than relying on standard IL processing or pattern recognition.

    Unity could of course have its own similar functionality, but it would be specific to Unity and would likewise need "built-in" knowledge in their own Runtime and/or Compilers (e.g. Burst).

    > Why [MethodImpl(MethodImplOptions.AggressiveInlining)] (and Intrinsic) is required everywhere to get better performance?

    As with everything, there is a balance; the JIT can, and normally will, do the "right thing". But, as with any compiler, there are ultimately limitations and resource constraints on what it can do. In the case of the JIT, there is a general time/memory constraint, since it runs live, side by side with your application, in the same process. No one wants to wait 3 minutes for compilation to finish before they can start debugging or testing out their changes, especially for simple changes when actively working on and trying out some new feature in their code.

    `AggressiveInlining` serves as a hint, much as the equivalent features that exist in MSVC, GCC, and Clang for C/C++. It tells the compiler that it is worth spending extra time to try and inline the method because the author "knows better" and is asserting that it will be used or compile down to something that ends up being beneficial to perf.

    `Intrinsic` simply serves to indicate that the method may have special handling in the runtime. For the vector types we don't track them as regular "user-defined types", we instead track them specially like we would any other primitive. We have a special `TYP_SIMD8/12/16/32` and in .NET are introducing a `TYP_SIMD64` (for `Vector512<T>`). We have a special node kind `GT_HWINTRINSIC` rather than representing it as a standard `GT_CALL`. We then have various special constant folding, transforms, and other optimizations that take advantage of this so that we can do all the operations more cheaply and deterministically. It doesn't require complex pattern recognition, faulty/error-prone auto-vectorization, or other considerations that can ultimately hurt codegen.

    C/C++, Rust, and other AOT languages have all the same considerations in practice and where perf is considered "critical", you're often expected to do such manual optimizations and providing compiler directives or other hints to ensure the right things happen. -- This can most prevalently be seen in the low-level internals of the C/C++ runtimes, math libraries, or other frameworks/engines.
     
  38. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Thing is, in my experience (at least with Unity) it never inlines even methods with a single line of code. Half of Unity.Mathematics was missing AggressiveInlining; it was added after my benchmarks (5x-15x slower), but I wish it weren't required.

    Everyone is fine with waiting a bit longer for free performance in release builds; that's why I mentioned a more aggressive mode.
     
    cxode likes this.
  39. PetrisPeper

    PetrisPeper

    Joined:
    Nov 10, 2017
    Posts:
    62
    Are you talking about IL2CPP or Mono here? In the case of IL2CPP, it doesn't inline anything from other files unless you set the IL2CPP configuration to Master in the properties or put AggressiveInlining on it.

    And the bigger concern with waiting is on Mono (and CoreCLR), where the method needs to be compiled every time you launch the game, not when you build it.
     
  40. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    For AOT, "it depends". For JIT, no, and we have over 20 years of customer reports and telemetry backing that up. Startup performance is a massive consideration for many scenarios and is one of the reasons Tiered Compilation was such a big feature to introduce.

    I'll reiterate that RyuJIT typically does the right thing and is actively competitive for performance with other AOT languages for real-world apps/deployments. It's possible that Unity has a different set of limitations or considerations, but it would be up to the Unity team to improve those or answer what they may be.
     
    marce155 likes this.
  41. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Both were slower without AggressiveInlining. It was Math.Sin vs Unity's Mathf.Sin vs math.sin.
    I see, hopefully that will be improved in the future with ReadyToRun in Unity.
     
  42. YegorStepanov

    YegorStepanov

    Joined:
    Oct 10, 2017
    Posts:
    12
    This is not the first time I have encountered the word "time" in this context.

    Are you using this word for simplicity, or can RyuJIT generate different asm code for different CPUs? (In terms of speed; the CPUs have the same instruction set.)
     
  43. Enderlook

    Enderlook

    Joined:
    Dec 4, 2018
    Posts:
    44
    By "extra time", I think he means the JIT will do deeper and more complex analysis to check whether the method can be inlined, and then inline it.

    Normally, when a method must be JITted, you don't want to run those complex analyses, because they're CPU-expensive and can take too much time, increasing startup time. So the JIT runs a simpler analysis instead, and if the method is used enough times, tiering will rerun the JIT, this time using the complex analyses.

    That attribute tells the JIT it's okay to run the complex analyses the first time, rather than starting with the simple ones.
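    That tiering flow can be caricatured in a few lines. This is a toy sketch only: the names and the threshold are invented, and real runtimes swap in optimized machine code, not Python callables.

    ```python
    # Toy model of tiered compilation: dispatch to a cheap "tier 0" version
    # first, and once the function has been called often enough ("hot"),
    # swap in the expensively optimized "tier 1" version.
    HOT_THRESHOLD = 30  # assumed; real thresholds are runtime-tuned

    def tiered(tier0, tier1):
        state = {"calls": 0, "impl": tier0}
        def wrapper(*args):
            state["calls"] += 1
            if state["calls"] == HOT_THRESHOLD:
                state["impl"] = tier1  # "re-JIT" at the higher tier
            return state["impl"](*args)
        return wrapper

    # Both tiers compute the same result; only the (pretend) compile cost differs.
    add = tiered(lambda a, b: a + b, lambda a, b: a + b)
    assert all(add(i, 1) == i + 1 for i in range(100))
    ```

    AggressiveInlining, in this picture, is a request to pay the tier-1 cost up front for one specific method instead of waiting for it to become hot.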
     
    YegorStepanov likes this.
  44. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Seems like you are right. I tested .NET 7 vs Unity (Mono), calling a small method:

    int Method(int a, int b) => a + b;

    Net7:
    No attr time: 2186786 ticks
    AggressiveInlining time: 2215397 ticks

    NoInlining time: 10860448 ticks
    NoOptimization | NoInlining time: 13647052 ticks

    Unity (Mono):
    No attr time: 14493936 ticks
    AggressiveInlining time: 14484785 ticks
    NoInlining time: 13151725 ticks
    NoOptimization | NoInlining time: 13353220 ticks

    Interestingly enough, Unity can't optimize this at all.
    -----

    int Method(int a) => (int)Math.Sin(a);

    Net7:
    No attr time: 30406371 ticks
    AggressiveInlining time: 29801296 ticks

    NoInlining time: 38488637 ticks
    NoOptimization | NoInlining time: 39022894 ticks

    Unity (Mono):
    No attr time: 68336181 ticks
    AggressiveInlining time: 48249740 ticks
    NoInlining time: 67608942 ticks
    NoOptimization | NoInlining time: 68356525 ticks

    As I mentioned before, only AggressiveInlining makes a difference in Unity. It seems not to be needed in .NET 7.
     
    CaseyHofland likes this.
  45. runner78

    runner78

    Joined:
    Mar 14, 2015
    Posts:
    760
    The example here could simply be too simple, so the compiler always inlines it.
    I observed a few years ago that it also makes a difference whether you put the inline attribute on a static struct method or a static class method. But I don't know if I tested it in Unity or .NET 4/6. There, the version in the static class always seemed to be inlined even without attributes; the struct version was not.
     
  46. Gotmachine

    Gotmachine

    Joined:
    Sep 26, 2019
    Posts:
    30
    Compared to modern CoreCLR, the Mono runtime used in Unity performs inlining a lot less aggressively, and is severely limited in what it will inline. If I remember right, it can only inline non-virtual, non-generic, non-interface methods that don't have any out or ref parameters. Beyond those limitations, the basic heuristic for deciding whether to inline is a method size limit (I think it's 20 bytes currently in the Unity Mono fork), and the only thing AggressiveInlining does is tell the JIT to ignore that limit.

    See https://github.com/Unity-Technologi...be4311dd/mono/mini/method-to-ir.c#L3890-L4020
     
  47. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    Yes, RyuJIT can and does generate instructions based on the hardware it's running on. We light up based on ISA availability more than we do microarchitecture-specific optimizations, but we are able to do the latter.

    "Time" in this context simply means that we want to produce good results fast. The speed of the JIT impacts startup, time to first response, and other details, so we spend a lot of effort ensuring that it produces good code, and does so quickly.
     
    YegorStepanov likes this.
  48. tannergooding

    tannergooding

    Joined:
    Jun 29, 2021
    Posts:
    16
    RyuJIT uses 16 bytes of IL as the cutoff for "always inline". Above that, it uses a computed heuristic to determine if it thinks the inlining will be "profitable". That heuristic currently takes into account things like the size of the inlinee vs the size of the inliner, whether any operands are constant, whether operands feed checks, whether the method throws an exception conditionally vs unconditionally, if the method uses any intrinsics or "well-known" patterns (such as `if (typeof(T) == typeof(...))` for a generic T), etc.
     
    Kamyker and TeodorVecerdi like this.
  49. Digika

    Digika

    Joined:
    Jan 7, 2018
    Posts:
    225
    So, a bit of discourse back to the coroutines/async talk from a few pages ago. GDC 2023 is on, and UE introduced the new Verse language. It was made by/with the Skookum guys, and I found this article:
    https://error454.com/2017/03/09/the-death-of-tick-ue4-the-future-of-programming/
    I can't help but grin at the irony of it all. Whereas Unity and its peers now try to move away from old, bad, slow coroutines, UE rediscovers and re-embraces them going forward.
     
  50. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    ??
    They called them coroutines in one slide, but they are nothing like Unity's coroutines. If anything, they look more similar to C# async/await; they even have a task type with Await().
    Code (Verse):

    spawn{AsyncFunction3()}

    # Get task to query / give commands to
    # starts and continues independently
    Task2 := spawn{Player.MoveTo(Target1)}

    Sleep(1.5) # Wait 1.5 Seconds
    MyLog.Print("1.5 Seconds into Move_to()")

    Task2.Await() # wait until MoveTo() completed
    Wait(0.5)     # Wait 0.5 Seconds
    # Explicit start and wait until completed
    # Task1 could still be running
    Target1.MoveTo(Target2)

    Tbh I find this syntax pretty unclear; in C# you see "await" and immediately know the method is async.
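    For comparison, that flow maps almost one-to-one onto C#-style async/await. Here is a rough Python asyncio equivalent of the snippet (names like `move_to` are invented for the sketch, and the delays are shortened):

    ```python
    import asyncio

    async def move_to(name: str, duration: float) -> str:
        # Stand-in for Player.MoveTo(Target1): takes time, then completes.
        await asyncio.sleep(duration)
        return f"{name} arrived"

    async def main() -> str:
        # spawn{...}: start a task that runs independently of the caller.
        task2 = asyncio.create_task(move_to("Player", 0.2))
        await asyncio.sleep(0.15)   # Sleep(1.5) in the Verse snippet
        # Task2.Await(): suspend here until the spawned task completes.
        result = await task2
        await asyncio.sleep(0.05)   # Wait(0.5)
        return result

    print(asyncio.run(main()))  # prints "Player arrived"
    ```

    The `spawn`/`Await` pair plays the same role as `create_task`/`await`: work starts immediately and the caller chooses when to block on the result.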

    Speaking of Verse: they plan to make it open source, including the Verse VM, with the intention that it be used in all engines for metaverse interoperability. Huge competition to Roblox, as porting a game from Unreal for Fortnite to Unity (or Unreal) would be much easier.
     
    Last edited: Mar 23, 2023
    Thaina likes this.