
Why is DX12 so much slower than DX11?

Discussion in 'General Graphics' started by funkyCoty, Mar 6, 2020.

  1. Wawruch2

    Wawruch2

    Joined:
    Oct 6, 2016
    Posts:
    68
    It is stable because your sample project doesn't have a lot of high resolution textures. When you exceed the memory budget for the textures, performance gets crushed. It seems like DX11 has its threshold much higher; on an 8GB GPU I could pack twice as many 4K textures before seeing any performance impact.



    I hope the new memory manager will fix those issues.
     
    Gasimo, funkyCoty, ontrigger and 3 others like this.
  2. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Are you running with graphics jobs on? (The setting can be found in the Player settings.) Without graphics jobs we are bound to be slower: the whole idea of DX12 is to hand threading responsibility to the developers, so if we are forced onto a single thread there is nothing we can really do.

    Graphics jobs are disabled in the Editor, so the toggle only affects the standalone player. However, you can force them on with
    -force-gfx-jobs native
    , which works both in standalone and in the Editor. It might crash, it might not, but in my experience it generally works fine in the Editor. We are going to Eventually(tm) make them work there too, but that requires more fiddling on the platform-neutral side, so it will take a bit of time; the Editor can be... well, interesting when it comes to threading.

    As for the kind of effect you can expect from them: a simple scene with a ton of cubes and no SRP Batcher, on my work machine in the Editor (alpha from Monday this week):
    DX12: around 45 fps. DX11: around 53 fps. DX12 with native graphics jobs: 180 fps.

    This is the case with no heavy compute workload. Another bottleneck this doesn't address is the barriers for compute shaders and other UAV-heavy workloads. But it can help by pushing work off the critical path.

    Do note that graphics jobs are fully supported in standalone, are enabled by default, and should not crash. It's only the Editor where they're not on yet.

    Once we start really enabling them in the Editor, we would greatly appreciate all kinds of reports on crashes and other stability issues. We wouldn't mind reports even before we start pushing them forward.

    This is definitely what the memory manager is focused on. The version in the current alpha doesn't really switch resources around if there is no need for it to evict, so it can still lead to issues like this. But we are working on making it both finer grained and on swapping things around to make sure all the critical resources are in GPU memory rather than left hanging in GPU-visible host memory.
     
    keeponshading likes this.
  3. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    719
    Our game Wave Break used compute shaders heavily in our fluid sim for the ocean, which might explain why performance was especially bad on DX12? Are there plans to improve this?
     
  4. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    There are. But nothing we can really promise yet. The reason is that the actual ID3D12GraphicsCommandList::ResourceBarrier is a very heavy call (based on unguided sampling CPU profiling). On Vulkan the equivalent barrier call is light on the CPU, though the usual caveats of GPU performance apply. And compute-heavy code tends to be effectively single threaded.

    In the HDRP template that call is the heaviest in our render thread. Not any of the usual Unity shenanigans: simply setting barriers in the command lists is the single most time-consuming thing. The actual tracking of which barriers to place is peanuts compared to that.

    So on our side this is very much under research. Maybe the upcoming new-style barriers on DX12 are lighter? Or we just yoink all the compute stuff out into its own dedicated thread. We need to see. And in the meantime we can make all the other parts lighter to allow more time for this critical path.
     
  5. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    719
    It would be nice if there were a way for you to give developers control over this. In our case, I know what needs to wait on what. We'd just need a single barrier at the start and another at the end of the compute work, so rendering can happen before and after it.
     
  6. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Assuming you don't have any dependencies between your compute shaders, there will not be any barriers either. But if you use a resource as a UAV in one dispatch and then access it in the next dispatch, there must be a barrier between them. All of this is fully automated with the logic "as if it was DX11".
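    The "as if it was DX11" rule described above can be sketched as a toy model (a hypothetical illustration, not Unity's actual tracking code): a barrier is emitted only when a later dispatch touches a resource that an earlier dispatch wrote as a UAV since the last barrier.

```python
# Toy model of automatic barrier placement between compute dispatches.
# Hypothetical sketch only -- not Unity's real implementation.

def place_barriers(dispatches):
    """dispatches: list of (reads, writes) sets of resource names.
    Returns the command stream with 'BARRIER' inserted only where a
    dispatch touches a resource still unsynchronized from earlier
    UAV writes (a hazard), mimicking implicit DX11-style ordering."""
    stream = []
    pending_writes = set()  # resources written since the last barrier
    for i, (reads, writes) in enumerate(dispatches):
        if pending_writes & (reads | writes):
            stream.append("BARRIER")
            pending_writes = set()  # a full barrier synchronizes everything
        stream.append(f"Dispatch{i}")
        pending_writes |= writes
    return stream

# Independent dispatches: no barrier is needed.
print(place_barriers([(set(), {"a"}), (set(), {"b"})]))
# -> ['Dispatch0', 'Dispatch1']

# The second dispatch reads what the first wrote: a barrier appears.
print(place_barriers([(set(), {"a"}), ({"a"}, set())]))
# -> ['Dispatch0', 'BARRIER', 'Dispatch1']
```

    The toy model matches the post: independent dispatches run barrier-free, while any UAV dependency between consecutive dispatches forces a barrier.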
     
  7. RR7

    RR7

    Joined:
    Jan 9, 2017
    Posts:
    254
    Just to be clear here: are you suggesting the SRP Batcher should be disabled when using native graphics jobs?
     
  8. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    I might have written unclearly. The SRP Batcher is definitely useful and should be kept on; with the SRP Batcher on, the native jobs version is even faster. I just wanted to give a magnitude for the improvement native jobs will eventually bring to the Editor, just as they currently do in standalone. The example scene was on the built-in pipeline.

    As for SRP-related numbers: the SRP Batcher benchmark (available on GitHub) runs at around 24 fps on DX11 without the batcher, way less than that on DX12 on trunk, and around 53 fps on DX11 with the SRP Batcher on. With DX12 and native jobs it jumps to 60 fps, and to 70+ with native jobs plus the SRP Batcher (the SRP Batcher tends to serialize some things, so it doesn't scale that well; it's still a good net benefit combined with native jobs).

    Do note that your mileage will vary strongly depending on how many cores you have, etc. The more, the better in the case of native jobs.
     
    tmonestudio likes this.
  9. slime73

    slime73

    Joined:
    May 14, 2017
    Posts:
    107
    Small benchmark demos can be useful for measuring and comparing individual parts of a frame. But if they're the main or only thing tested for total framerate, the numbers tend to be misleading: full games need far more of both the main thread and the job threads for their own code, and a demo setup might not be representative of what a typical large-scale game renders in terms of objects, passes, bound resources, etc.

    In my own tests in the past, the render thread and graphics job threads in the Vulkan backend have been significantly faster than the D3D12 backend. The render thread in the D3D11 backend has also been much faster than the render thread in the D3D12 backend. Absolute framerate numbers in a test scene weren't too different, but those parts had drastic differences which heavily impacted framerate of full games. I'm curious how those compare with the latest Unity in the demos you're testing.

    (Also to be clear, I'm really excited that the D3D12 backend is finally getting more attention from Unity and that you're engaging people here - it's just that my past experiences with it have made me cautious).
     
    keeponshading likes this.
  10. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Having small benchmark demos run fast is not sufficient for good overall performance, but it is necessary. Naturally, once these are in tip-top shape we can move on to the other remaining issues.

    We have several optimizations related to render thread performance in the pipeline. They won't make us exactly as fast as Vulkan, but we won't be far off compared to the current situation.

    As for the OP's issue: there is one thing they, and others, might want to try in the currently released versions. Limit the job worker count. We're going to tune it, but the released versions are what they are, especially on LTS (essentially only bugfixes can be pushed in).

    Either limit the job workers from the editor UI or just launch standalone with
    -job-worker-count N
    with N being the number of workers. The optimal amount depends heavily on the scene: the more stuff, the larger the count. But the effect of this can be staggering. On this 18-physical-core (36 logical cores) machine, the HDRP template standalone, when set to a low resolution so that we are well and truly CPU bound, chugs along at maybe 80 fps on DX12 by default. If I limit the worker count to 4, it jumps to 120 fps. The reason is that the default load-balancing algorithm seems to choke a bit and has one thread doing massive SRP Batcher work while everything else waits for it to finish.

    Vulkan does better at tuning this than DX12 does by default, so we will improve on it. But for now, manual tuning might help.
     
    Gasimo, Neto_Kokku, sqallpl and 3 others like this.
  11. unity_8p6oRXXqiWbRTQ

    unity_8p6oRXXqiWbRTQ

    Joined:
    Dec 20, 2018
    Posts:
    21
    Just wanted to say thanks for a clear and well put together answer.
     
  12. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    It's been another year and a half, and DX12 is still slow as molasses.
     
    Lymdun likes this.
  13. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    In an IL2CPP build, with graphics jobs on?
     
  14. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    Yes and yes, unfortunately. Standard renderer. Granted, my project is sorely unoptimized at the moment. I can eke out a precarious 30 fps currently on DX11; switching to DX12 makes that 7-10 fps. I was hoping to use Unity's built-in Dynamic Resolution feature, or at least explore what it has to offer. In the standard renderer this is a DX12-only feature, and the inherent DX12 performance hit makes the "feature" completely pointless.
     
  15. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    What version? If it's 2021 then yeah I can understand that.

    If it's 2022.2a16 (that just landed and is available in Unity Hub) then definitely not expected. Give it a go, you might be pleasantly surprised. Also watch the forums for some announcements (very soon!)

    Spoilers: Gigaya got 50% perf boost in CPU vs 2022.1 and older alphas.
     
  16. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    That smells a lot like a performance cliff.
    WHAT THE HECK? 50% is not a joke.
     
  17. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    I'm spoiling the reason for that so it doesn't seem too far-fetched, even though it's true: for whatever reason, some UI package there released render targets every frame and didn't use the temporary render target manager. And there is a native graphics jobs stall whenever render targets are released. This was fixed so that there is no longer a main thread/render thread stall when releasing render targets. End result: DX12 jumped from 24 to 36 fps, with DX11 staying at 30 fps.

    There are plenty of other improvements in 2022.2a16 that combined have a large impact. But for the final piece of magick, wait for the next beta/alpha, whichever ends up appearing in the Hub.
     
    jiraphatK, m0nsky, chingwa and 2 others like this.
  18. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    This is a show I have a front-row ticket for! It's really great that your team and Unity are smashing through these problems. I'm on a16 but not DX12, so I shall have to give it a whirl. Question: do these improvements also help the Hybrid Renderer's performance? I imagine they do, but less obviously. And I'll have to wait till Entities moves to 2022.2.
     
    FernandoMK and ali_mohebali like this.
  19. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Is that UI package UGUI, by chance?
     
  20. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    We haven't specifically tested the Hybrid Renderer. But we have tested the SRP Batcher, and that path (they both use a similar backend at the lowest levels) has gained more performance due to the way we have improved our uniform buffer logic.

    I don't really know which of the packages in the project it was. (Edit: there was one version where the UI was broken and the render target spam vanished, then reappeared, hence the UI being the culprit.)

    From an optimization perspective, if DX11 and OpenGL don't choke on it, then DX12 must not choke on it either, so the reasons the upper levels do things the way they do don't really matter in these kinds of issues, even if they're not best practice.

    The performance penalty must always be of the expected kind for the operation in question. If one allocates and deallocates memory each frame, overhead from that operation is to be expected. But a full halt-the-world stall is not.

    Incidentally this particular optimization made Vulkan faster too, and Metal.
     
    Last edited: Jun 13, 2022
    hippocoder likes this.
  21. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,115
    This might be the first official case proving the old "Unity needs to make games to know what really needs work" argument right.
     
  22. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    I wholeheartedly agree. Gigaya has been an absolute treasure trove for us, and many other teams. It's great to see it's actually happening and bearing fruit.
     
  23. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,115
    Quick question: is Gigaya using Unity Terrain?

    Also, are the scenes big? Mind giving us a general idea of the scene sizes?

    I ask because the current scene-size and terrain performance issues are a huge problem, and I was wondering if this wall hit the Gigaya devs in the face. Hopefully it did and will kick off a terrain and scene-size rework. Double-precision floats are really needed for the future... the performance hit will have to be handled in hardware, not within the engine...
     
  24. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    851
    please do an HDRP based game next ;)
     
    Shizola, NotaNaN, jiraphatK and 4 others like this.
  25. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Yes please. I've moved back to HDRP since it's really about time. Even porting Gigglecakes to it would probably work.

    I bet there are ALL SORTS of naughty buffer-related shenanigans going on.
     
    NotaNaN, RR7 and Qleenie like this.
  26. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    Yeah I'm using 2021.3. I'll give the latest 2022alpha a try and report back.
     
    hippocoder and ali_mohebali like this.
  27. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,631
    All I'm getting from this thread is that the old sage advice of "avoid Unity UI packages like the plague" still holds up well.
     
    Saniell, ontrigger and hippocoder like this.
  28. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    Seeya Hippo, I'm going over the edge!
     
    hippocoder likes this.
  29. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    After downloading and troubleshooting 2022.2.0a16 (there were crash-on-play issues with the jobs package installed, so it had to be removed, argh!), I can see no improvement under DX12.

    There is a slight performance improvement of perhaps 5% in both DX11 and DX12; however, there is no relative improvement between DX11 and DX12. Performance under DX12 is still at about 30% of DX11's. This holds both in the Editor and in builds.

    I did see an improvement in frame stability in both DX11 and DX12 builds, with frame times more consistent than in 2021 builds. But that also came with other bugs one might expect in an alpha (strange movement bugs and graphics glitches).
     
    ftejada and hippocoder like this.
  30. Wawruch2

    Wawruch2

    Joined:
    Oct 6, 2016
    Posts:
    68
    Same here. I'm currently on 2022.1.3f1, and DX12 is still unusable for higher-end projects. I've kind of given up already.
     
  31. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Could you perhaps share your project, or tell us a bit more about what you're doing, so we could make a repro? Because 30% of DX11's performance is definitely NOT expected; even in the worst cases we previously saw 50%. The only possible case I can think of, if I had to artificially produce something like that, is something extremely GPU-compute heavy that depends on the DX11 driver's ability to reorder Dispatch() calls based on dependencies, something the DX12 backend cannot do (dispatches execute exactly as you submit them).

    In most cases we've seen, we either match or beat DX11 in CPU overhead right now. But it's also likely that there are some weird things down in the guts of the graphics jobs system, just as there were for Gigaya.

    Have you tested in Vulkan?
     
    Deleted User likes this.
  32. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    Vulkan gave me crashes on play, both in editor and in build, using 2022.2.0a16. I didn't troubleshoot it any further than that.

    Well, since you mention the GPU: I tried turning off the heaviest contributor to my GPU overhead in this project, which happens to be GPUi instancing of a large number of trees and grass details. With these turned off, DX12 runs just as well as, if not a bit better than, DX11, so GPU load is definitely the bottleneck, at least in my case. DX12 did appear less stable despite the slight performance increase, as I had a few crashes while playing under DX12 and none under DX11.

    When referring to GPUi I'm referring to GPU Instancer:
    https://assetstore.unity.com/packages/tools/utilities/gpu-instancer-117566
     
  33. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    Thanks for the information. I need to check that package (if possible, as it's a paid package and there are Rules(tm) about such packages). But based on the FAQ, one can already make some educated guesses.

    My first hunch is that it does something like Dispatch() -> Draw() -> Dispatch() -> Draw() internally, which a good DX11 driver would turn into a bunch of dispatches followed by a bunch of draws, but which in our backends (including DX11, though there the driver can reorder things) will essentially have a full barrier between every single step, absolutely trashing performance. In that case, running at 30% is very much expected.

    Reordering is on the roadmap, but as you might guess it's a very involved process. Right now we expect DX12 to be as fast as or faster than DX11 in CPU overhead. With ideally written code we're identical to DX11 (as we use exactly the same shaders), but reordering commands is something none of our backends can do yet.

    You can also file a report with the developer of said package for a fix.
     
  34. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,789
    @tvirolai Thank you for all your responses and information!
     
  35. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    719
    Are there plans to improve this? We had to ditch RTX support in Wave Break at launch because DX12 was so much slower. If compute dispatches are the biggest drawback, that would make sense, because we ran an ocean-sized fluid sim via compute shaders.
     
  36. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    It sounds like developers can work around the problems as well. Is there advice for these scenarios, given that reordering could take some time?

    I'd like to get a picture of the pitfalls to avoid, as I occasionally throw out random compute shaders without realizing the damage I might do.
     
    ftejada likes this.
  37. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    At some point, yeah. But not in the near term. Do note that it's not the dispatches themselves; it's their ordering and dependency handling.

    We do aim to be even faster than DX11, though. If you have a large UAV and you access it from two different dispatches, and you know you're not touching the same areas, DX11 forces a serialized dependency here, but on all other APIs (amusingly, including OpenGL) we can let the dispatches run in parallel. Exposing this kind of ability to the developer can be a large boon in certain cases.

    Thankfully this indeed is a situation where developer can affect the performance a great deal.

    I'll give a few examples. Say I have the following situation: dispatch(a) -> dispatch(a); dispatch(b) -> dispatch(b), with a and b being UAVs they access. A DX11 driver can reorder this into dispatch(a); dispatch(b) -> dispatch(a); dispatch(b), so that only a single barrier is needed (or even issue a split barrier). You can try reordering them yourself to see which order runs fastest.
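    That reordering win can be sketched with a toy barrier counter (hypothetical illustration only, not real API calls): treat each dispatch as the name of the UAV it touches, and require a full barrier whenever a dispatch touches a resource already written since the last barrier.

```python
# Toy count of UAV barriers for the two submission orders discussed above.
# Hypothetical sketch -- real drivers track much richer state.

def count_barriers(order):
    """order: list of resource names touched by successive dispatches.
    A barrier is needed when a dispatch touches a resource written since
    the last barrier; a full barrier synchronizes everything pending."""
    barriers = 0
    pending = set()  # resources written since the last barrier
    for res in order:
        if res in pending:
            barriers += 1
            pending = set()
        pending.add(res)
    return barriers

# Naive order: dispatch(a) -> dispatch(a) -> dispatch(b) -> dispatch(b)
print(count_barriers(["a", "a", "b", "b"]))  # -> 2

# Reordered: dispatch(a) -> dispatch(b) -> dispatch(a) -> dispatch(b)
# One full barrier between the two halves covers both dependencies.
print(count_barriers(["a", "b", "a", "b"]))  # -> 1
```

    The reordered stream needs half the barriers, which is exactly the kind of win a reordering DX11 driver gets for free and that you can reproduce by hand on DX12.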

    Another case is a dispatch() -> draw() -> dispatch() -> draw() kind of dependency. As an example, on Vulkan calling dispatch() ends the render pass, which means a full tile cache flush. Draw 5 times with a dispatch right before each draw and you get 5 tile cache flushes, trashing performance (and the pipelining, which is what happens on DX12). If possible, execute these dispatches yourself in a single batch before drawing.
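    The same point as a toy counter (a hypothetical sketch, not engine code): count how many times an open render pass gets broken by a dispatch, for an interleaved stream versus a batched one.

```python
# Toy count of render pass breaks caused by dispatches.
# Hypothetical sketch of the interleaving cost described above.

def count_renderpass_breaks(commands):
    """commands: list of 'dispatch' or 'draw' strings. In this model a
    draw opens (or continues) a render pass, and a dispatch issued while
    a pass is open ends it (a tile cache flush on tile-based GPUs)."""
    breaks = 0
    in_renderpass = False
    for cmd in commands:
        if cmd == "draw":
            in_renderpass = True
        elif cmd == "dispatch" and in_renderpass:
            breaks += 1
            in_renderpass = False
    return breaks

# Interleaved: dispatch, draw, dispatch, draw, ... (5 pairs)
# Every dispatch after the first draw breaks the pass.
print(count_renderpass_breaks(["dispatch", "draw"] * 5))  # -> 4

# Batched: all 5 dispatches first, then all 5 draws. No breaks.
print(count_renderpass_breaks(["dispatch"] * 5 + ["draw"] * 5))  # -> 0
```

    Batching the dispatches before the draws keeps the render pass intact, which is the workaround the post recommends.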

    You can also get DX11 to trash performance in a similar fashion if you set up the dependencies so that it cannot reorder. For example, a dispatch() -> draw() -> dispatch() -> draw() situation where all of them access exactly the same UAV forces the driver to serialize execution.
     
    Ryiah, Deleted User, NotaNaN and 3 others like this.
  38. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,115
    Last I heard, Unity's DX12 backend is essentially emulating DX11 behavior on top of DX12. Are there any plans to rewrite it so that it actually performs like DX12? I mean, a 30% performance hit is a bit hard to swallow. Maybe something in the region of 5%... maybe.
     
  39. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,084
    So... do I have to be that poster? I'm not asking permission because I'm going to be that poster.

    This thread is over two years old, and DX12's performance issues in Unity have been a known factor for as long as it's been included in the engine. Why is Gigaya a treasure trove compared to nearly a decade of user feedback? A lot of the reason the DX12 implementation is avoided is because of those issues.
     
    Ryiah, fherbst, pbritton and 6 others like this.
  40. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    There are plans. Just remember that the 30% perf case with dependencies is not something specific to DX12.

    Also, currently in the latest alpha with native graphics jobs, DX12 is faster than DX11 in CPU usage in most cases. So if you are draw call bound, use DX12. If you have compute-heavy work that relies on reordering by the driver, use DX11. If you run on team red (AMD) hardware, DX12 is almost always a great choice.

    Similar considerations apply to Vulkan, and to OpenGL off team green. DX11, especially on team green (NVIDIA), is such a special case.

    Because we as devs cannot attach a profiler to user feedback. So while user feedback is an invaluable resource, in the end we must have something available that we can actually use and compare against.
     
    OCASM and hippocoder like this.
  41. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,631
    Have you been here long? Your forum account says 2020, but maybe you worked at Unity before that.

    In any case, this is a bullshit answer, although it definitely isn't on you.

    The reason it's bullshit is simple: Unity (the company, management) didn't give a F*** about DX12 performance and just wanted to do the bare minimum to be able to include DX12 in the feature list.

    Because I CANNOT believe that there was a dedicated graphics team with a mandate of "make DX12 a first-class citizen" since 2015 and yet its state was unchanged until very recently. I can't. My opinion of Unity can't go much lower at this point, but this cannot be the case.

    Because if it is, the dev team should be fired, or management. Preferably both.
     
    funkyCoty likes this.
  42. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    There has been one since last November, as I mentioned in a previous post. That's why CPU performance has gone up so much recently. Right now you can expect every backend, and every platform, to have a single dedicated owning team.

    I understand your frustration. Saying "just wait a bit more" doesn't help after such a long time. But what I can say is that if you want faster CPU performance in a Windows standalone build, use DX12 right now in the latest alpha. Got tons of draw calls? Use DX12. At the extreme end it's actually a bit faster than Vulkan.

    Or if you want to be wild, you can already enable native graphics jobs in the Editor. That makes the Editor a great deal faster than DX11. It's not a production-ready feature yet, but we're ironing out the kinks, so bug reports with graphics jobs on are appreciated. (Just start the Editor with
    -force-gfx-jobs native
    )

    I am, yet again, talking about CPU performance. On the GPU, DX12 either matches DX11 or is worse.
     
    timmehhhhhhh and hippocoder like this.
  43. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Thanks for the explanations. It sounds like, in the longer term once this relatively newly formed team has had a chance, we can probably expect DX12 to be better even for the greens. NVIDIA sounds like they have a really well-funded driver team with profiles for all the top games out there.

    It would obviously take Unity (or any DX12-targeting studio) some time to catch up. Would you estimate maybe a year or so, as a rough guess?
     
  44. PacoChan

    PacoChan

    Joined:
    May 28, 2015
    Posts:
    44
    To be fair, of all the games I've played that offer both DX11 and DX12 (and I'm speaking of other engines), DX12 always runs much worse or crashes more often than DX11. It's as if giving full control to the developer was good in theory but not in practice, when even seasoned developers struggle so much with it. And if the average Unity user, who doesn't know about low-level APIs, has to figure out in which order to dispatch compute shaders and the like, it's going to be a rough time for everyone.

    I have some basic knowledge of OpenGL and DX11; DX12 and Vulkan are WAY above my abilities. My hope is that DX13 offers some middle ground regarding complexity.
     
    OCASM likes this.
  45. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Well, the average gamedev isn't going to be required to act, in any engine, because that's the engine's job and the engine developer's job. Unity is out in front, in control of their code (Burst, etc.), and they need to do the same for their rendering. Unity has the scale and chops to do this, provided upper management gives the budget to the teams that require it.

    They hadn't, up to a year ago. I hope this budget is carefully expanded, because DX12/Vulkan really will offer higher performance for non-green machines, and in the end DX11 will fall behind in performance even for team green, as they can't keep doing per-game tweaks and more and more specialized renderers are coming along.

    DX11 won't be around forever.
     
    Ryiah, NotaNaN and hopeful like this.
  46. jjejj87

    jjejj87

    Joined:
    Feb 2, 2013
    Posts:
    1,115
    This is a bit confusing. Are you saying that DX12 performance is now equal to or better than DX11's? Or that it is sometimes equal? Or mostly worse?

    What exactly are you trying to say?

    If DX12 is faster than DX11 on both CPU and GPU, then you are saying it is faster in all cases... correct? Or am I misunderstanding something? I am seriously confused... Can we bring the description down to earth a bit? I want to understand exactly what is going on with DX12.

    Or are you saying DX12 is faster than or equal to DX11 as long as native jobs are on and it's the latest alpha? So, as long as we are using the latest alpha with native jobs, there shouldn't be any performance issues, and it's better in some cases?
     
  47. tvirolai

    tvirolai

    Unity Technologies

    Joined:
    Jan 13, 2020
    Posts:
    79
    As a rule of thumb: On CPU DX12 is faster. On GPU DX11 is faster.

    Thankfully many Unity games are commonly CPU bound, hence one can get real performance gains from using DX12.

    In Unity 2020 and 2021 DX12 was also slower on CPU compared to DX11. But now this has changed.
     
  48. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,966
    Neither will DX12. Let's just hope they manage to succeed in getting it to work before something new comes out.
     
  49. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,084
    You are missing my broader point. These issues have been documented, often for years. No, UT can't attach a profiler to user feedback, but if users are saying "we are reliably seeing 15-27% performance decreases in static test scenes," that's already an immediate jumping-off point for internal testing that doesn't require a whole game project to be developed. I harp on this a lot for a reason: user feedback seems to rank extremely low in internal test processes.

    User feedback should inform internal decision-making and testing. It shouldn't feel like user feedback is almost completely ignored until a pet project comes around to verify it.
     
  50. funkyCoty

    funkyCoty

    Joined:
    May 22, 2018
    Posts:
    719
    I've sent whole real game projects to Unity over the years, as I'm sure many other studios have. Couldn't you take a look at those?
     
    timmehhhhhhh likes this.