
Official DX12 is out of experimental since 2022.2.0a17

Discussion in '2022.2 Beta' started by tvirolai, Jun 28, 2022.

  1. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Hello,

    As everyone might know, the DX12 backend has been slow, and somewhat unstable, for as long as there has been one. Back in 2020 the per-drawcall overhead on the CPU was a bit over double what it was on other backends. With recent improvements, the DX12 backend is now as fast as (or in some cases faster than) DX11 when it comes to CPU usage for the vast majority of use cases. In scenarios with a large number of drawcalls, DX12 beats DX11 handily in standalone builds with regards to CPU performance. Our testing shows there is no longer a substantial enough difference between DX12 and our other Editor backends to warrant an Experimental tag. Therefore:

    We are happy to announce that as of 2022.2.0a17, the DX12 graphics backend is officially out of experimental state.

    What does this mean in practice? You can now make a player that has only the DX12 backend. The experimental tag forced you to include at least one non-experimental backend, but now DX12 can be the sole backend in a player build. This is the only functional change you will see from the removal of the experimental tag.
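    As a sketch of what this enables: you can configure a Windows player to use DX12 exclusively, either in Player Settings or from an editor script. A minimal sketch, assuming a standalone Windows target (the menu path and class name below are illustrative, not part of Unity):

```csharp
// Editor-only sketch: make Direct3D12 the sole graphics API for Windows players.
// Equivalent to unchecking "Auto Graphics API for Windows" in Player Settings
// and leaving only Direct3D12 in the list.
using UnityEditor;
using UnityEngine.Rendering;

public static class Dx12OnlySetup
{
    [MenuItem("Tools/Use DX12 Only")] // menu path is illustrative
    public static void UseDx12Only()
    {
        // Stop using the default (automatic) API list for this target...
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.StandaloneWindows64, false);

        // ...and set Direct3D12 as the one and only backend.
        PlayerSettings.SetGraphicsAPIs(
            BuildTarget.StandaloneWindows64,
            new[] { GraphicsDeviceType.Direct3D12 });
    }
}
```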

    Another feature which until now has been unique to DX11 is memory residency management. Windows essentially allows you to use more GPU memory than you physically have. In DX11 this is fully automatic, but DX12 requires us to give it a slight helping hand, so to speak. This has now been implemented, so you can overprovision GPU memory. It's not quite as good as in DX11, as we do this essentially on a per-frame basis, but otherwise huge scenes should be far more stable now.

    However, DX12 is not always the fastest. The DX11 driver can do things like reorder compute shader dispatch calls, which it does aggressively. With DX12 the driver cannot do this; the same applies to Metal and Vulkan. And even where it is possible, such as in many OpenGL drivers, the driver doesn't always do it. So if you have a complex set of compute shaders, it is likely to run faster on DX11. There are plans to do this sort of reordering in Unity, but that will take time. At the moment, heavily GPU-bound scenes with a complex workload are likely to run faster on DX11.
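    To illustrate the kind of dependency the driver has to respect, here is a hedged sketch (kernel names, buffer names, and the `DispatchOrderExample` class are all hypothetical): the second dispatch reads what the first wrote, so no driver may swap them. It is the *independent* dispatches elsewhere in a frame that a DX11 driver can aggressively reorder or overlap:

```csharp
using UnityEngine;

public class DispatchOrderExample : MonoBehaviour
{
    public ComputeShader shader;  // assumed: kernel "Produce" writes _Data, "Consume" reads it
    ComputeBuffer data;

    void Start()
    {
        data = new ComputeBuffer(64 * 64, sizeof(float));
        int produce = shader.FindKernel("Produce"); // hypothetical kernel names
        int consume = shader.FindKernel("Consume");
        shader.SetBuffer(produce, "_Data", data);
        shader.SetBuffer(consume, "_Data", data);

        // A real dependency: Consume reads Produce's output, so this pair
        // must execute in order on every backend. A DX11 driver can still
        // reorder or parallelize other dispatches that share no resources.
        shader.Dispatch(produce, 64, 1, 1);
        shader.Dispatch(consume, 64, 1, 1);
    }

    void OnDestroy() => data.Release();
}
```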

    A major pain point in Unity is Editor performance. To alleviate this, DX12 allows you to run native graphics jobs in the Editor. This feature is still experimental, but you can already try it by launching the Editor with the following command line argument:
    -force-gfx-jobs native

    User feedback has been that in massive projects, running the Editor with native jobs is essentially mandatory, but we're not yet confident enough to enable them by default for everyone. Therefore we'd love feedback and bug reports for any issues that pop up when using them.

    tl;dr: On 2022.2.0a17+, if you are GPU bound, use DX11. If you are CPU bound, use DX12. If you are brave and need Editor perf, run DX12 with native graphics jobs in the Editor.
     
  2. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    @tvirolai Since we're on this topic, could someone at Unity explain Unity's use of the D3D11On12 API? https://docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-11-on-12. Unity seems to use this at least in HDRP (Nsight complains about it, but still works).

    Why does Unity use such a wrapper instead of using DX12 directly? What are the pros and cons of this (other than Unity being able to reuse old API calls internally)? Does it have performance implications?
     
  3. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    We do use DX12 directly. However, we do create a D3D11On12 device on top of it due to some internal shenanigans related to how DXGI works. The only downside is that Nsight complains about it. It could probably be refactored away now if MS has improved things. We do not actually perform any API calls through it when rendering; it's only used for a tiny sliver of time during startup.

    We also create a full-fledged D3D11 device in some cases and use interop with that, because video decoding in Windows didn't support DX12 back in the day. There is a DX12 way now too, but no one has gotten around to implementing it because the interop path works fine.
     
  4. PutridEx

    PutridEx

    Joined:
    Feb 3, 2021
    Posts:
    1,136
    Since HDRP is all about compute, doesn't this mean that if you use HDRP, DX12 is still effectively 'experimental'?
    As in, you can't really get better performance with DX12; instead you'll always get lower performance compared to DX11 unless you use absolutely no effects and create your own.

    I imagine you could see this with a simple scene that uses some post processing and volumetric fog, although I haven't actually tried it, so I can't say for sure.
     
  5. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    The experimental state is more Unity's way of saying: "things can change and we will not be accountable for your troubles if you use this in production". Now that it's out of experimental, expect higher-level support and less chance of API breakage, etc. And if they change the API now, you can expect API-upgrading scripts to run automatically. Plus, as mentioned already, you can now actually make DX12-only builds.

    I don't quite get the assumption that compute is experimental. HDRP was released a long time ago and has used compute since day one. The way I see it, whatever experiments you do with it yourself are up to you :D
     
    LooperVFX and tvirolai like this.
  6. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    It doesn't mean that you always get lower GPU performance compared to DX11. It means it's way easier to get good performance in DX11, at least on one particular vendor's DX11 that is really great at reordering things optimally for the user. On others, the DX12 backend can actually be faster on the GPU as well.

    Right now every backend works in such a way that things are submitted exactly as they are given to us. So you can make it faster by submitting in a different order; we just don't do it automatically. The HDRP team does things quite well in this regard. And in standalone builds, the async compute HDRP uses also makes it a bit faster.

    As we are Unity, things must work easily for the user and must not require super detailed knowledge of the pipeline and its dependencies, even if the user is capable of writing their own compute shaders. When we do reordering in the backend, we'll do it in a way that speeds up all the other backends too. It wouldn't make sense to do it just for DX12.
     
    DonPuno, ftejada and dnach like this.
  7. Saniell

    Saniell

    Joined:
    Oct 24, 2015
    Posts:
    191
    So if DX12 is out of preview, maybe we could also get support for the ExecuteIndirect (DX12) / DrawIndirectCount (Vulkan) APIs, please?
     
    Silas-VuCity likes this.
  8. merpheus

    merpheus

    Joined:
    Mar 5, 2013
    Posts:
    202
    Okay, but does this mean VRS and mesh shaders are directly supported now?
     
    pwka, IPL and graskovi like this.
  9. PerunCreative_JVasica

    PerunCreative_JVasica

    Joined:
    Aug 28, 2020
    Posts:
    47
    Great work. Are these improvements across all the RP? (BRP, URP and HDRP)
     
  10. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Not yet. We don't want to just expose something DX12-specific, because it would lead to the feature only working on DX12.

    Or, more realistically, it would be exposed directly with the promise "yeah, we don't need to support it elsewhere", followed by massive panic emulation on other platforms (because of course it must be supported everywhere) and an incredible number of bugs, because not all the edge cases work exactly identically everywhere.

    When we bring in those features, they must work on all platforms with similar capabilities. Because if you make a game in Unity, it must work the same across all the platforms you want to publish it on (with the obvious limitation that the feature must actually exist on the platform in at least some form).

    As for the render pipelines: it's across all of them. From the low-level perspective there isn't really much difference between SRP and BRP. The only exception is the SRP Batcher, which has also gotten several rounds of speedups.
     
    mariandev, ashtorak, ftejada and 2 others like this.
  11. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    To be fair, current raytracing already works only on select platforms, and in Unity only on DX12. VRS is also tech that could easily be optional on some platforms only, as it's more of an optimization than a whole new way to author content. I get the argument for mesh shaders, though.
     
    Pr0x1d, Egad_McDad, jjejj87 and 4 others like this.
  12. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    HDRP has a quite high CPU cost vs URP. Is that because of the SRP part, or something your team's work will also indirectly help with? Thanks for any insights.
     
  13. merpheus

    merpheus

    Joined:
    Mar 5, 2013
    Posts:
    202
    Then honestly, I don't really understand why it's out of preview, since clearly this is one of the key points of DX12. Do we even have SM 6.6 support or anything? From what I can tell it has two main advantages: 1) better Editor performance sometimes, and 2) we don't have to add DX11 as a fallback API.

    DX12 should allow things to run much faster if the gfx-device-abstracted rendering code is written in a way that allows it, which doesn't appear to be the case from what I can tell. Plus, the new APIs coming with it are also not supported. I understand you folks have done some pipeline cleanup work, but as an end user, and given that stuff like mesh shaders is supported in other gfx backends such as Vulkan and Metal, I don't think this "out of preview" news covers the cases it should after months and months of waiting.
     
    Egad_McDad and Deleted User like this.
  14. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Well, to be fair, it wasn't being developed at all for "months and months". The team on this is relatively new and working hard despite being small. So I think they're doing well and have a clear picture of where they're going.
     
  15. Henrarz

    Henrarz

    Joined:
    Mar 20, 2016
    Posts:
    37
    So... when can we see ray tracing become available for Vulkan and Metal? Both APIs have ray tracing extensions.
    And when can we see HDRP become compatible with iOS devices? It runs on M1 Macs, but for some weird reason it doesn't work on M1 iPads.
     
  16. merpheus

    merpheus

    Joined:
    Mar 5, 2013
    Posts:
    202
    The team on the core rendering/gfx driver code is small and new? Because surely this is a gfx device code-related feature.
     
    Ruchir and Wolfos like this.
  17. dnach

    dnach

    Unity Technologies

    Joined:
    Mar 9, 2022
    Posts:
    89
    The DirectX 12 backend is out of preview due to a significant mass of performance and stability improvements, and can now be targeted reliably for both the Editor and runtime players. One of the main advantages of targeting DX12 is the support for native graphics jobs, which can reduce CPU bottlenecks and in many cases improve performance compared to DX11.

    We will continue to improve performance moving forward, and eventually promote DX12 as the default graphics API for Windows platforms.

    High-level features like mesh shaders or VRS would need to be supported in a cross-platform fashion, as @tvirolai mentioned above. If these features are important to you, we would appreciate you taking a minute to upvote them on our public roadmap:
    @Henrarz, same thing! It would help if you could upvote these:
     
    Pr0x1d, pwka, Lars-Steenhoff and 5 others like this.
  18. sarbiewski

    sarbiewski

    Joined:
    Sep 8, 2017
    Posts:
    27
    This is how the Roadmap entry looked recently
     
    Last edited: Jul 6, 2022
  19. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    The wording on that card is quite unfortunate. I'm pretty sure what they meant was that they can support new APIs with DX12, and just listed those things as examples. There's no way they planned to have mesh shaders implemented by the time DX12 got out of preview.
     
    sarbiewski likes this.
  20. Matjio

    Matjio

    Unity Technologies

    Joined:
    Dec 1, 2014
    Posts:
    108
    @sarbiewski Indeed, this card was created a year and a half ago under "Under consideration" to gather feedback on DirectX 12. The result of our investigation was that performance and stability were the top issues, and the number one usage was raytracing / path tracing, while APIs like variable rate shading and mesh shading seemed less of a blocker for going out of experimental.
    We only updated the description of the card recently, when we moved it to "Released".
    Apologies for the confusion.
     
    NotaNaN, ftejada, hippocoder and 2 others like this.
  21. sqallpl

    sqallpl

    Joined:
    Oct 22, 2013
    Posts:
    384
    My foliage/detail rendering system is based on RenderMeshIndirect and GPU culling. I'm using compute shaders and Dispatch calls for multiple instancing chunks every frame, for frustum culling, distance culling, LOD selection, etc.

    On my GPU this costs about 0.50 ms on average on DX11 (I mean the dispatch calls, not the whole rendering). Do you think it can increase significantly on DX12 in 2022.2?
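    Roughly, the per-frame work looks like this (a simplified sketch of the setup described above; the kernel name, buffer name, and `GpuCulledFoliage` class are illustrative, and only a single chunk is shown):

```csharp
using UnityEngine;

public class GpuCulledFoliage : MonoBehaviour
{
    public ComputeShader cullShader;   // frustum/distance culling + LOD selection
    public Mesh mesh;
    public Material material;

    GraphicsBuffer argsBuffer;         // indirect draw args, filled on the GPU
    int cullKernel;

    void Start()
    {
        cullKernel = cullShader.FindKernel("Cull"); // hypothetical kernel name
        argsBuffer = new GraphicsBuffer(GraphicsBuffer.Target.IndirectArguments,
                                        1, GraphicsBuffer.IndirectDrawIndexedArgs.size);

        // Pre-fill the static part of the args; the kernel writes instanceCount.
        var args = new GraphicsBuffer.IndirectDrawIndexedArgs[1];
        args[0].indexCountPerInstance = mesh.GetIndexCount(0);
        argsBuffer.SetData(args);

        cullShader.SetBuffer(cullKernel, "_Args", argsBuffer);
    }

    void Update()
    {
        // One Dispatch per instancing chunk; these calls are the ~0.5 ms in question.
        cullShader.Dispatch(cullKernel, 64, 1, 1);

        var rp = new RenderParams(material);
        rp.worldBounds = new Bounds(Vector3.zero, 2000f * Vector3.one);
        Graphics.RenderMeshIndirect(rp, mesh, argsBuffer);
    }

    void OnDestroy() => argsBuffer.Release();
}
```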

    Thanks.
     
    Last edited: Jul 6, 2022
    hippocoder likes this.
  22. Matjio

    Matjio

    Unity Technologies

    Joined:
    Dec 1, 2014
    Posts:
    108
    Hi @Henrarz !
    Our focus for the current development cycle is to make sure that we have the right offering (DX12 + raytracing on PC and latest-gen consoles) and possibly consolidate it if something is blocking adoption. Raytracing and path tracing are now becoming really viable solutions for these platforms, both in terms of feature set and performance (especially thanks to dynamic resolution upscaling technologies like DLSS, FSR, TAA upscale, etc.).
    If we succeed there, then we will evaluate the next best step, whether that is extending PC platform support or going to mobile.
    There is no plan in this cycle to support mobile with HDRP, as we think the architecture would only fit a very restricted range of devices; we'd rather invest in improving URP as well as cross-render-pipeline initiatives.
    But again, your feedback is very valuable, and it is important for us to understand users' use cases. What would the typical usage of raytracing on mobile be for you?
     
  23. Henrarz

    Henrarz

    Joined:
    Mar 20, 2016
    Posts:
    37
    In my case I would just like HDRP support for iOS devices, and ray tracing support in Metal in general (for macOS; but then again, M1 chips are in iPads already, so ray tracing support for iPads should be easy to implement).
     
  24. Matjio

    Matjio

    Unity Technologies

    Joined:
    Dec 1, 2014
    Posts:
    108
    @Henrarz Would it be for games or non-games use cases, and would you see enough users for your application if it's limited to hardware supporting this feature?
     
  25. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Immortalis-G715 GPU seems like a good companion to metal going forward (for the droids etc). Seems like CPU would still be a big issue for phones though (HDRP etc).
     
  26. Henrarz

    Henrarz

    Joined:
    Mar 20, 2016
    Posts:
    37
    Games use, and yes, I believe so (for RT support in Vulkan/Metal for macOS).

    HDRP support for mobile is not a priority, but then again, the chips used in Macs and iPads are slowly becoming the same, and one platform is artificially limited right now.
     
  27. Exceptionull

    Exceptionull

    Joined:
    Dec 3, 2019
    Posts:
    5
    Somewhat unrelated, but are there any plans to have Shader Model 6.0 features (needs #pragma use_dxc currently) cross compile to both DX12 and Vulkan, or is it going to stay DX12 only? If there are plans, is it being actively worked on or is it low priority for now?
     
  28. aras-p

    aras-p

    Joined:
    Feb 17, 2022
    Posts:
    74
    A bunch of them should be working (e.g. wave operations, barycentrics, etc.) on both Vulkan and Metal. If some of them do not work, file a bug.
     
    shikhrr, Exceptionull and hippocoder like this.
  29. guoxx_

    guoxx_

    Joined:
    Mar 16, 2021
    Posts:
    55
    Nice work! Since the focus is DX12 and ray tracing, is there any plan to support inline raytracing and bindless resources?
    It's hard to get proper performance and enough flexibility without those features.
     
  30. treviasxk

    treviasxk

    Joined:
    Apr 22, 2015
    Posts:
    33
    I tested version 2022.2.0b4 with DirectX 12, and it really is much faster than DirectX 11. Will you bring these changes to the other LTS versions, or is this exclusive to 2022.2?

    A bug I found: I can't switch an HDRP project to DirectX 12; Unity gets stuck in a loop when restarting.
     
  31. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    It can also be slower than DX11 depending on which asset store assets are used (compute shaders and the like).
     
    treviasxk likes this.
  32. Lars-Steenhoff

    Lars-Steenhoff

    Joined:
    Aug 7, 2007
    Posts:
    3,521
    Games targeting high-end mobile Apple devices like the M1 that are starting development today. When the game is finished in 4 years, everyone will have upgraded their phone to a faster model, so the user base today is less relevant than the user base after dev time.
     
    laurentlavigne likes this.
  33. yuriy000

    yuriy000

    Joined:
    Oct 14, 2021
    Posts:
    7
    Is there any documentation of which DX12 features are supported/unsupported/planned? A few such features are on the roadmap (links are in this thread), which would seem to imply that all other features are supported, but I doubt that's the case.

    I'm interested in unbounded arrays, descriptor heaps, and register spaces in particular.
     
  34. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Those are internal things. The issue with exposing any of them publicly is how they would work on DX11. Or on OpenGL. Even if we disregard the older APIs, how would they work on Vulkan? What we expose to users needs to be platform independent.

    There are features coming eventually that will internally use those more efficiently. But as you may guess, bringing them in is very time consuming, as we must bring them in on all platforms at the same time.
     
  35. Saniell

    Saniell

    Joined:
    Oct 24, 2015
    Posts:
    191
    So raytracing is allowed to be DX12 only but bindless resources are not?
     
    PutridEx likes this.
  36. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Some historical experience with porting to other backends is precisely the reason why we want to be a bit more careful.

    Don't worry, there will be bindless support eventually. The question is just how and when. It won't be just DX12 exposed as-is. It will be something that then uses argument buffers on Metal and descriptor indexing on Vulkan. We just need to get the edge cases right.

    An example of a difference where the DX12 approach might trip Metal up: Metal requires you to tell the render pass what resources you might use, as it doesn't do automatic tracking when argument buffers are in use. This means a design that just exposed a large descriptor heap and was done with it would need to make every single added resource resident, probably leading the system to run out of memory.

    The same issue actually happens with DX12 if you want residency management similar to DX11's (and our current DX12 backend's). If we estimate conservatively and just toss everything in, we'll get bug reports that using the bindless system leads to out-of-memory errors, whereas without bindless the system automatically swaps things in and out.

    Another major consideration is debug tooling. It would be nice to actually see what resources the shader tried to read, etc., instead of just getting a GPU driver crash and no idea what happened. And that requires shader instrumentation.

    It needs to be robust. It needs to be debuggable. It needs to work everywhere.

    We'd rather design it well than rush it out in a few months and then try to fix it for several years.
     
    rdjadu, ftejada and graskovi like this.
  37. Saniell

    Saniell

    Joined:
    Oct 24, 2015
    Posts:
    191
    Yeah, I understand why you wouldn't want the SRP experience (lol). I was just worried that features might get ignored entirely because they "don't work everywhere".
    Thanks for the answer, and for confirming that the features are actually coming, just not right now.

    P.S. I do hope, however, that "eventually" doesn't mean "in 2 years".
     
    Last edited: Sep 16, 2022
  38. TheOtherMonarch

    TheOtherMonarch

    Joined:
    Jul 28, 2012
    Posts:
    862
    It is good to see that DirectX 12 is getting some love. Obviously, Unity needs to cater to the lowest common denominator. Maybe in 3 or 4 years HDRP will require DX12. The Steam survey already shows that the vast majority of PC gamers (91.4%) are on DX12-capable systems. Not sure about Xbox, but it must be high.
     
    Last edited: Sep 17, 2022
    Lex4art likes this.
  39. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    You actually can no longer make new Xbox games with DX11. Microsoft stopped accepting game submissions made with the old SDK (XDK), and the new SDK (GDK) is DX12-only.
     
  40. yuriy000

    yuriy000

    Joined:
    Oct 14, 2021
    Posts:
    7
    Indeed, these are internal things, but that's precisely what I'm asking about: as a developer who wants to write their own ScriptableRenderPipeline, I want to use those features.

    I'm not asking Unity to just dump all the platform-specific features into its public API or to rush any new features. That would probably be a nightmare for all involved. I also realize that the leap from last-gen graphics APIs to modern graphics APIs is huge.

    My question is really about the sort of timeframe in which we could expect a public API in Unity that exposes some of these underlying graphics features in a platform-independent way (which is pretty much exactly what you've described), and whether these features are on the radar at all or already in progress. I see no mention of these things on the "official" Unity roadmap, which is why I'm asking.

    Based on this thread, it seems that Unity is thinking about these things, but "eventually" sounds like 2-5 years.
     
  41. Alterego-Games

    Alterego-Games

    Joined:
    Jul 13, 2015
    Posts:
    350
    Will the DirectX 12 support be backported to earlier versions? Or will they forever have experimental DirectX 12 support?
     
  42. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,051
    Great to hear that DX12 is out of experimental, but I'm surprised at the rather conservative language being used about performance gains. I thought the whole point of DX12 was a complete shift in how you talk to the GPU, one that would offer substantial benefits over previous versions (same with Vulkan). Granted, that would not mean every case is faster, but for demanding games it was supposed to be a huge boost.

    Saying that it 'handily' beats DX11 with large amounts of drawcalls feels a bit lacklustre; I would be expecting 'considerable' gains in performance. Perhaps it's just a language issue here: 'handily' vs 'considerable' are not helpful terms, as they have no consistent meaning. In that case I would have expected some actual performance numbers, direct comparisons using Unity's catalogue of samples/demos. The lack of numbers is also worrying; it suggests, to me at least, that the performance benefit is really not very impressive, or possibly not even worth it.

    Which leads me to wondering why.

    Is it that DX12 really isn't that much faster? For quite a while I don't remember it making much of an impression outside of a handful of games. I still have the feeling that many games are running on DX11, and maybe some DX12 games are actually just D3D11On12.

    Conversely, maybe Unity simply does not have the architecture to take advantage of the benefits that DX12 offers?

    Regardless, my question is: will Unity continue to improve the DX12 renderer?
    Do you expect to find more performance, perhaps limited to using SRP?

    Guess I'm just a bit disappointed, having waited so long, to get such a meh release for DX12 support. I mean, I wouldn't even have known it came out of experimental if I hadn't been browsing the forums out of boredom. I remember being so excited by the marketing of reduced driver overheads, but I guess after 7 years (though several revisions) it's just no longer exciting news.
     
  43. aras-p

    aras-p

    Joined:
    Feb 17, 2022
    Posts:
    74
    I think the whole industry realized that after all the initial fanfare of "new low-level APIs will unlock so much more performance!" (DX12, Vulkan, Metal), the actual results ended up much smaller. The drive for much lower-level APIs was led primarily by people (wishfully) thinking they could do a much better job than GPU driver developers. And some of them can, like maybe 10 people in the world :) For the "rest of us", well, guess what: it turns out it's quite hard to beat something like the 400 driver developers that, say, NVIDIA has (I picked the number out of thin air), who have spent the last two decades doing nothing but making the driver faster and better.
     
    rdjadu, horeaper, Ryiah and 4 others like this.
  44. dnach

    dnach

    Unity Technologies

    Joined:
    Mar 9, 2022
    Posts:
    89
    We are now working on a new graphics jobs threading mode, which will further improve DX12 CPU performance compared to DX11. While we cannot share performance figures for the user projects we tested, we can definitely share figures for internal and publicly available Unity samples (e.g. BoatAttack, the URP/HDRP templates, draw call performance tests), and will do so as soon as possible.

    Beyond this, we are fully committed to promoting DX12 to the default API for Windows platforms, and there are additional optimizations in our backlog to improve both CPU and GPU performance. Some more drastic GPU performance improvements will take a bit more time, as they require more fundamental changes across all graphics backends, but these are also on our radar. As Aras and Teemu mentioned above, DX11 drivers may perform some heavy optimizations (e.g. for GPU synchronization) which we are working hard to compete with.

    One important point to consider is that beyond performance, DX12 also enables more advanced rendering features that impact both visuals (e.g. raytracing) and performance (e.g. foveated rendering via Variable Rate Shading), and these will become more and more prominent over time!
     
    Last edited: Oct 7, 2022
    Ryiah, newguy123, Lex4art and 11 others like this.
  45. TJHeuvel-net

    TJHeuvel-net

    Joined:
    Jul 31, 2012
    Posts:
    838
    Hi, every time I start up the Editor in 2022.2.0b10 I get this message:

    Code (CSharp):
    d3d12: Profiler is enabled, but stable power state is not. GPU timing errors are expected.
    I don't know what this is, or what I should do with it.

     
    robrab2000-aa likes this.
  46. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Hi,

    No backporting, unfortunately. It's a massive hassle to backport anything that's not a bug fix.

    Some numbers: toss in 1000 cubes. Nothing else in the scene, just 1000 cubes on top of each other (with shadowmaps on, so we get 4000 drawcalls per frame). A standalone build with native graphics jobs on: around 160 fps on DX11 on my machine, and 500+ fps on DX12. On a machine with more cores, and by putting in more drawcalls, you get even better results.

    Yes, that is an extreme case, as there is nothing else in the scene but drawcall spam, so you don't really see it that often. But I hope that suffices as "that much faster". And we're getting even more improvements in the upcoming versions.
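    For anyone who wants to reproduce a drawcall-bound test like this, something along these lines will do (a rough sketch based on the description above; the `CubeSpam` class name is illustrative, and you may need to disable static/dynamic batching so the cubes stay separate drawcalls):

```csharp
using UnityEngine;

public class CubeSpam : MonoBehaviour
{
    void Start()
    {
        // 1000 cubes stacked on top of each other; with shadowmaps enabled
        // this comes to roughly 4000 drawcalls per frame.
        for (int i = 0; i < 1000; i++)
        {
            var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = new Vector3(0f, i * 0.01f, 0f);
        }
    }
}
```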

    By the time DX12 is set as the default API on Windows, the situation should be such that there is no question anymore about which API to choose.
     
  47. Alterego-Games

    Alterego-Games

    Joined:
    Jul 13, 2015
    Posts:
    350
    Does that mean 2022.2 is planned to have it as the default? Or will that be 2023 or later? Is there some kind of roadmap for this? (Lots of questions, I know!)
     
  48. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,051
    Sounds good. It would be nice to get some more realistic benchmarks in the future.

    Just to be clear, when I describe the DX12 support in Unity as a bit 'meh' or lacklustre, I don't mean it purely at Unity; it's more that the marketing by Microsoft promised so much, yet it appears very difficult, or very situational, to obtain those sorts of gains. I suspect this is even more true of a generalised engine like Unity, though it's good to hear that work will continue on the DX12 support and hopefully provide further improvements in the future.

    I was wondering about this myself, though more from the angle that unless it's changed to be the default (I rarely use recent Unity versions, so I wasn't sure whether it is yet), I don't see developers choosing to use it.

    While it might be out of experimental, I suspect there is still a chicken-and-egg situation where experienced developers will be resistant to switching until they know that bugs/issues will be minimal; but of course, the fewer people who switch, the less feedback and bug discovery there is.

    I'm trying to remember what happened with DX11 support. I do remember being very happy with DX9 support, and initial DX11 support was a bit ropey, but nothing on the scale that DX12 seems to have had. What I can't remember is when I switched to defaulting to DX11, or whether I switched before Unity made it the default. Was it even made the default, or was it simply the de facto default after DX9 was removed?

    I guess something to keep in mind is that it may not be sensible to switch to DX12 except on the Unity versions where it has been promoted out of the experimental tag, as older versions are likely to have more issues. So I guess I'll probably not switch until I start using 2023 as my default Unity version, which is probably in about 3-4 years ;)
     
    Last edited: Oct 11, 2022
  49. itsjase

    itsjase

    Joined:
    Apr 18, 2017
    Posts:
    4
    Could you elaborate on this a bit? I was under the impression they ran in order.

    E.g. if I dispatch twice like the following:
    Code (CSharp):
    myShader.Dispatch(0, 64, 1, 1);
    myShader.Dispatch(1, 64, 1, 1);
    If both kernels shared a buffer, and kernel 1 had dependencies on kernel 0's output, wouldn't this cause issues?

    Or am I misunderstanding this reordering?

    Thanks
     
    OhneHerz likes this.
  50. aras-p

    aras-p

    Joined:
    Feb 17, 2022
    Posts:
    74
    It would not reorder this case. But across a bunch of dispatches, on "old" APIs (like DX11 / OpenGL), the driver can look at them, figure out which ones have dependencies on which others, and reorder (or issue in parallel) the ones that don't have dependencies. How much of that happens depends on the particular GPU and driver (and probably driver version); AFAIK NVIDIA on Windows has traditionally been extremely into these kinds of things (and any other optimizations they can legally get away with).