CPU/GPU Skinning

Discussion in 'Editor & General Support' started by ChrisWalsh, Sep 6, 2010.

  1. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Well that is a matter of debate, but maybe more importantly it depends on who makes your GPU. I've often heard that Nvidia provide the best GL drivers. I've also heard some people say GL is rubbish on Windows, others that it's better than DX, but very few ever provide details to back up their statements. I suspect much has to do with the fact that there are at least three versions of GL drivers: MS's crappy default that comes with Windows (which I think is stuck around 1.1?), then Nvidia's and AMD's. So whilst it's meant to be a single spec, the fact that it's implemented by so many vendors means that, just like browsers, you get weird differences.

    I'd also be wary of going GL-only on Windows with Unity. I've done projects in the past using -force-opengl and it's always been a pain one way or another. Oddities like windowed mode not being able to fill the screen, and other stuff. These days I tend to steer clear of it.

    Probably more Pro than Indie, I have Unity Pro and iOS Pro but not Android Pro ;)


    I noticed the machine said GLES 3 a few days ago when setting it up for development. Unfortunately, while I can force GLES 3 in Unity, I have no idea if it actually is being used, or if there is any way to tell. I find it strange that even in GLES 3 I'm still limited to SM 3.0 restrictions on the number of uniforms that can be passed in.


    True, though it's annoying not to be able to provide support then for older versions of Unity.
     
  2. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    Worked fine so far on the 3 different dev machines (all Nvidia+Intel though) I've touched in the last 2.5 years. (Only 1 year in Unity though; before that it was Go+OpenGL.) Maybe -force-opengl had issues previously, but so far it seems to run OK. I have a long-range project scoped at around 2-3 years, and I figure the situation isn't going to get worse, which means I can be calmer about current GL-ES capabilities/limits than other projects. Also, if I find pre-release that I need to go DX, it'll probably be 12 rather than 11. Getting something running beautifully in GL first works for me for now. I can also release on Linux/Mac/mobile first and do DX if the thing takes off at all :D

    As long as it gets in before v5 (say 4.6), and assuming (as I do) that it's a relatively simple constant hard-limit that was put in years ago and then forgotten, it should be OK. Requiring a certain minimum 4.x isn't as tall an order as requiring v5.
     
  3. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Yeah, that sounds sensible. Unfortunately sensible doesn't always fly when dealing with shaders; just look at the hassle I've encountered between DX and the various versions of GL ;) Personally I find it easier to develop for DX then go to OpenGL if necessary, though I'm not sure why, since I started off learning OpenGL many years ago. Perhaps that experience means I find it easier to convert Cg to OpenGL than the other way.


    I guess we'll see. It's a bit quiet on the Unity front, maybe due to people being on holiday.


    Small Update
    Had some good success using packed matrices. So instead of using Vectors to pass in matrices (and hit the weird GLSL max array elements restriction) I've gone back to sending matrices. Except now, instead of sending a float4x4 per bone, I pack float3x4 matrices into float4x4s. In other words, in the memory of three float4x4 matrices I can pack four float3x4s.
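
    For anyone curious, the unpack in the vertex shader looks roughly like this (a simplified sketch rather than my exact code; _PackedBones, MAX_PACKED and the helper names are just illustrative):

    Code (Cg/HLSL):
    // Three float4x4s (48 floats) hold four float3x4 bones,
    // so 42 float4x4 uniforms hold 56 bones.
    #define MAX_PACKED 42
    float4x4 _PackedBones[MAX_PACKED];

    // Assumes the script side packs each bone's three float4 rows
    // consecutively, so bone b's row r sits at flattened float4 index b*3+r.
    float4 FetchBoneRow(int bone, int row)
    {
        int idx = bone * 3 + row;
        return _PackedBones[idx / 4][idx % 4];
    }

    float3 SkinPoint(float4 vertex, int bone)
    {
        // Each row of the 3x4 bone matrix dotted with the position.
        return float3(dot(FetchBoneRow(bone, 0), vertex),
                      dot(FetchBoneRow(bone, 1), vertex),
                      dot(FetchBoneRow(bone, 2), vertex));
    }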

    Alas, while it works fine on the PC, on Android some of the bones are messed up for some unknown reason and performance is really bad. Ah, the joys of cross-platform development ;)
     
    Last edited: Jul 12, 2014
  4. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    You're not using fixed or half for them, right? That's one of those things that you notice only on-device, and not on PC... annoyingly.
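
    i.e. the classic trap (array name and size here just for illustration):

    Code (Cg/HLSL):
    // Desktop GPUs typically promote half/fixed to full float, so this kind
    // of bug only shows up on-device:
    // half4x4 _Bones[57];   // looks fine on PC, can distort on mobile GPUs
    float4x4 _Bones[57];     // keep bone transforms at full float precision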
     
  5. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Nope. Seems the problem was much simpler. My Android graphics setting had reverted to GLES 2.0, which probably greatly restricted the number of bone matrices I could pass in.

    I then had issues with forwardAdd rendering creating what looked like a translucent ghost animation for the second light source. I'm guessing forwardAdd needs a few extra uniforms/attributes (whatever you call them), meaning I effectively lost a few bone matrices at the end of the hierarchies in these passes, even in GLES 3.0. This led to vertices weighted to those bones being distorted and transformed all over the place, and as it was in the forwardAdd pass they appeared translucent. It was a cool-looking effect, but it had me stumped for so long as to what the problem was.

    So, long story short: forcing GLES 3.0 works and my test scene animates fine, but the number of bones supported is quite low and more advanced shaders will lower it even further. Unfortunately it's half the framerate of Unity CPU skinning on the same device! I'm guessing that it's the unpacking of the matrices that is the problem. It's quite instruction-heavy and happens for every vertex, so while it was an inventive solution for packing more bones into the data, it ended up being much slower. I guess with some effort, and by looking at the precision of the values being used, I could improve performance, but I doubt I can get it faster than CPU skinning. Works fine on desktop though.

    So another half failed attempt for Android, sigh.



    Edit:
    Just occurred to me that GPU skinning is possibly worse than CPU skinning when dealing with shadows. It's obvious really: in CPU skinning the vertices are displaced once then reused, while in GPU skinning the vertices are displaced up to three times (render, shadow collector, shadow caster). This issue also applies to pixel lights in forward rendering, where each one will require the GPU to do all the skinning again.

    Suddenly I'm not so sure that GPU skinning is quite as good a deal, at least on mobile hardware.
     
    Last edited: Jul 12, 2014
  6. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    Well I'm optimistic it's still a big win in quite a few use-cases and contexts, even if it's not the overwhelming conquers-all-others superior approach in any and all use-cases and contexts!

    Mobile is still tricky despite all the marketing hype suggesting devices have reached or exceeded last-gen consoles. Only in their conference demos, not in real-world gamedev. Shadow passes are still a significant cost and many mobile-first devs still go for drop-shadows rather than the two additional geometry passes. Likewise, forwardAdd passes are often avoided in mobile at all costs ;)

    As far as shadows go, I guess that's what makes stream-out/transform-feedback so useful... IF only it would run in desktop GL in Unity, not just GL-ES 3!

    (Fun fact, all recent GL desktop drivers would even allow creation of a real proper GLES3 context on the desktop but that option isn't in Unity so far)

    As long as Unity doesn't get a proper GPU skinning story for Mac, Linux and WinGL users, you have a winner. As long as their GPU skinning is Pro-only, you still have a winner for all non-mobile platforms, and for mobile it depends on how crazy gamedevs go with additional passes (forwardAdd, shadowCast, shadowCollect)...

    So good stuff!
     
  7. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Figured I'd post some results as they are quite interesting.

    Essentially, for low polygon characters my custom GPU solution is not as efficient as either Unity CPU skinning or Unity GPU skinning. For high polygon characters, and perhaps with multiple submeshes, my custom GPU solution is as good as or better than Unity CPU and sometimes Unity GPU skinning - on my machine at least.

    One interesting thing to note is that dx11 CPU skinning appears to be slower than dx9!

    I don't think you can take these as gospel; different machine specs are likely to produce different results, though I feel that perhaps the difference between low poly vs high poly will remain regardless. In other words, what I can achieve with a custom GPU skinning solution is unlikely to outperform Unity CPU or GPU skinning in most cases.

    I need to do another round of testing where I really up the character counts (currently 16 instances), as some of the differences amount to just a few hundredths of a second.
    I could also really use a character with a low bone count, something in the 30s, though ideally a low bone count for both low poly and high poly characters.

    Finally, all testing so far has been using Legacy animation; I really need to start testing Mecanim to see if that throws any curve balls. Not to mention doing some OpenGL testing and then GLES, but those are a pain due to the array limit and, for older versions, the lower max attribute inputs.

    Added force-opengl results; as you can see, performance is not as good as DX on a Windows machine, which you might want to be wary of. GPU skinning is also adversely affected, but shows the same advantages/disadvantages as before.

    All tests performed in the Editor, using Unity 4.3.4f1.
    Directional light (important) + Hard Shadows + Directional light (not important)
    Quality: 2 bones, no v-sync, medium shadow resolution, two cascades, 2x MSAA

    Each character instance has its animation start staggered from all others, i.e. the instances don't play back in sync, to ensure no optimisations are performed such as sharing animation data.

    To recap: VectorsM34 passes 3 Vector4s into the shader per bone, whilst PackedMatrix packs float3x4 matrices into float4x4s. So VectorsM34 has more efficient shader code but requires more shader attribute setting, whilst PackedMatrix has more complex (longer) shader code but requires less shader attribute setting. A rough sketch of both variants follows the results below.

    Code (Text):
    Instances                         4 x 4 = 16              10 x 10 = 100
    API            Skinning         Avg fps   Delta (s)     Avg fps   Delta (s)

    Dude.fbx  - Bones: 58  Verts: 14331  Polys: 22501  SubMeshes: 5
    dx9            Unity CPU          130      0.0075         23        0.0430
    dx11           Unity CPU          106      0.0094         18        0.0491
    dx11           Unity GPU          290      0.0034         67        0.0149

    dx9            GPU VectorsM34     148      0.0067         24        0.0396
    dx9            GPU PackedMatrix   290      0.0034         55        0.0182
    dx11           GPU VectorsM34     302      0.0033         53        0.0187
    dx11           GPU PackedMatrix   290      0.0034         59        0.0166

    force opengl   Unity CPU          87       0.0115         16        0.0610
    force opengl   GPU PackedMatrix   128      0.0076         35        0.0281

    Lerpz.fbx - Bones: 69   Verts: 3036   Polys: 3534   SubMeshes: 1
    dx9            Unity CPU          429      0.0023         105       0.0091
    dx11           Unity CPU          390      0.0026         94        0.0105
    dx11           Unity GPU          571      0.0018         166       0.0060

    dx9            GPU VectorsM34     354      0.0028         73        0.0136
    dx9            GPU PackedMatrix   362      0.0027         75        0.0132
    dx11           GPU VectorsM34     372      0.0026         82        0.0120
    dx11           GPU PackedMatrix   352      0.0028         76        0.0130

    force opengl   Unity CPU          251      0.0039         81        0.0122
    force opengl   GPU PackedMatrix   204      0.0049         48        0.0207
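
    And the rough sketch of the two variants I promised above (simplified, with illustrative names; the real shaders also handle weights and normals):

    Code (Cg/HLSL):
    // VectorsM34: three float4 rows per bone, uploaded with SetVector-style
    // calls - simpler shader code, more uniform setting per frame.
    float4 _BoneRows[57 * 3];

    float3 SkinVectorsM34(float4 vertex, int bone)
    {
        int r = bone * 3;
        return float3(dot(_BoneRows[r + 0], vertex),
                      dot(_BoneRows[r + 1], vertex),
                      dot(_BoneRows[r + 2], vertex));
    }

    // PackedMatrix: the same 12 floats per bone, but uploaded as float4x4
    // matrices and unpacked per vertex (see the earlier sketch) - fewer
    // uniform-setting calls, more index arithmetic in the shader.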
     
    Last edited: Jul 14, 2014
    metaleap likes this.
  8. KristianDoyle

    KristianDoyle

    Joined:
    Feb 3, 2009
    Posts:
    63
    Here's something you can use: he's a crowd guy from my own project. I will send him into the wild.

    37 bones - 8 of them have zero influence weights (endbones, trajectory).
    Included variations of single-skinned mesh:
    ~1200 vertices Axl_LowPoly.fbx
    ~5000 vertices Axl_5000.fbx
    ~20000 vertices Axl_20000.fbx
    FBX2012 Standard format for Characterisation, Mecanim etc.

    animationengine.org/docs/Axl.zip

    Axl.png
     
  9. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Thanks, I'll try and get him into the test builds. Spent some time writing up a testing scene that will cycle through various settings, and a player build script to auto-build various setups. I'm hoping this will make testing performance easier in the long term.

    Interesting that you have some zero-weight bones. I wonder if that's a regular occurrence with animated characters in general, and if so, whether they could be ignored, thus reducing bone requirements.

    Edit:
    OK, slight problem. Currently I'm only testing legacy animations to keep things simple. The models you provided have no animations, so unfortunately they can't be used currently. Any chance of adding a simple run cycle to them or something? The animation doesn't have to be fancy, just needs to loop.
     
  10. KristianDoyle

    KristianDoyle

    Joined:
    Feb 3, 2009
    Posts:
    63
    Don't really know if it's common. I should have pruned small skin weights and culled zero-influence bones for good practice. If you want a few sets for comparison I'll provide them.

    Here's a looping (crowd) sequence for Legacy
     

    Attached Files:

  11. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Another update

    I've uploaded a spreadsheet of some results here

    These results come from a TestFramework I've written that automatically goes through all the various options. Unfortunately it still means I need to build 6 different exes, and that's just for Windows (x86 vs x64, dx9 vs dx11, dx11 cpu vs dx11 gpu), and it takes about 10 minutes to run through all the tests, so it's not a quick thing, but it is useful for comparisons.


    Progress has gone pretty well, particularly on Windows, where as long as your model has > 4k vertices you'll see benefits over Unity CPU skinning, and in most cases my custom GPU skinning is comparable to native dx11 Unity skinning; sometimes it even appears to be faster!

    This news should be tempered slightly in that I've yet to move away from a simple diffuse shader, meaning that normal-mapped models may not see the same gains. Furthermore, as previously mentioned, anything that increases render passes on the model (e.g. more pixel lights in Forward rendering) will reduce performance of the custom GPU skinner. I'm not sure if Unity's GPU solution would suffer the same problem, as I seem to remember reading something about being able to re-use the result.

    To continue the good news: on Windows, using force-opengl also sees good gains over CPU skinning, though overall OpenGL performance is pretty poor compared to dx9/dx11.

    In terms of max bone counts, it varies between 72 - 112 in D3D, but realistically that will drop for more complex shaders that need their own uniforms. On GLES the number is lower, but I've no idea why; it's as though it's using more uniforms.

    Now the bad news.

    Currently it performs horribly on Android, something in the order of 4 to 10 times slower than Unity CPU skinning. This suggests something is very wrong, even to the extent that maybe it's falling back into software mode, if that's still a thing in GLES. Working out what's wrong will take considerable time, as there seems to be no good reason for it.

    Further bad news on the Mac front, where GPU skinning is not only 50% slower than CPU, but there also appears to be a major bug: none of the shaders which use SetMatrix() to pass bone transforms work correctly, while the SetVector versions of the shader work fine. This suggests that there is nothing wrong with the shaders as such, but with the passing of the data. I've done some digging to see why, but so far without results. Going to try 4.5.2 to see if that has any effect.

    Somewhat puzzled that these issues manifest themselves on the Mac, but not on Windows using -force-opengl.

    So currently I'm not sure that GLES/Mac are even feasible at this point. It's quite frustrating, as I've read numerous articles/tutorials online that describe doing GPU skinning on these platforms with no obvious problems.


    Couple of other interesting points.

    On Windows I should be able to set the shader to SM4/5 in dx11 to support 1000 bones, though I've no idea what performance would be like. In fact it may be possible to implement a much more efficient system overall and avoid using the color vertex stream to pass bone info.

    In Unity, dx9 versions of the shader do not use arrays; instead they are turned into individual uniforms. OpenGL/GLES does use arrays though.
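
    For example, a declaration like this (illustrative name/size):

    Code (Cg/HLSL):
    float4 _BoneRows[57 * 3];   // one uniform array in the source...

    // ...which, going by the compiled d3d9 output, appears to be flattened
    // into individually-set constants, while the GL/GLES compilers keep it
    // as a genuine uniform array (hence the GLSL max-array-elements limit
    // mentioned earlier).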
     
    Last edited: Jul 18, 2014
  12. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Thanks for this, it proved very useful (see previous post with link to spreadsheet).

    Generally AxlLow shows minimal gains over, or worse performance than, CPU skinning. This is not too unexpected, though Unity dx11 GPU does beat my custom solution comprehensively.

    The only problem is that my most efficient mode, using Dual Quaternions (DQ), fails with this model: DQ does not support scaling, and Axl's arms have some scaling applied, so they tend to flap around weirdly. I believe if the scaling were fixed then DQ would work fine.
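
    For reference, the DQ path is essentially standard dual quaternion skinning (after Kavan et al.); something like this simplified two-bone sketch (uniform/helper names are illustrative). This is exactly why it's rigid-only: a unit dual quaternion encodes rotation + translation, no scale.

    Code (Cg/HLSL):
    // Two float4s per bone: rotation quaternion, then the dual (translation) part.
    float4 _BoneDQ[57 * 2];

    float3 SkinDQ(float3 p, int b0, int b1, float w0, float w1)
    {
        float4 q0 = _BoneDQ[b0 * 2], d0 = _BoneDQ[b0 * 2 + 1];
        float4 q1 = _BoneDQ[b1 * 2], d1 = _BoneDQ[b1 * 2 + 1];

        // Antipodality fix: flip the second DQ if its rotation lies in the
        // opposite hemisphere, else the blend goes the long way round.
        float s = (dot(q0, q1) < 0.0) ? -1.0 : 1.0;
        float4 bq = w0 * q0 + w1 * s * q1;    // blended rotation part
        float4 bd = w0 * d0 + w1 * s * d1;    // blended dual part

        float len = length(bq);               // normalise the blend
        bq /= len;
        bd /= len;

        // Rigid transform of p by the unit dual quaternion (bq, bd).
        float3 pos = p + 2.0 * cross(bq.xyz, cross(bq.xyz, p) + bq.w * p);
        pos += 2.0 * (bq.w * bd.xyz - bd.w * bq.xyz + cross(bq.xyz, bd.xyz));
        return pos;
    }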
     
  13. MakeCodeNow

    MakeCodeNow

    Joined:
    Feb 14, 2014
    Posts:
    1,246
    FYI, there is no software rendering in GLES. Also, I've shipped many games with GPU skinning on Mac, iOS and Android. It's almost always been a CPU win and only mildly more expensive on the GPU. Unity's constant-setting overhead must just be crazy high, at least in 4.3.
     
  14. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Could well be an issue with how Unity updates the uniforms. Unfortunately I'm not really sure how to go about debugging this issue on either Android or Mac (Unity is still missing GPU profiler support there). Whilst OpenGL on the Mac is slower than CPU skinning, it's only by about 50%; Android was dropping huge amounts. I did notice that the Unity-compiled GLES and OpenGL shaders looked rather inefficient, but it's hard to know, as I think they get optimised before final compilation?

    Hmm, I just thought I could assign the bone matrices once at the start and see what performance that gives. The shader will still be doing skinning, just that I won't be evaluating the animation or the bone updates. That might help narrow down where the bottleneck is. However, overall my gut feeling at the moment is that custom GPU skinning is just not going to happen on Mac or Android (probably iOS too).
     
  15. KristianDoyle

    KristianDoyle

    Joined:
    Feb 3, 2009
    Posts:
    63
    Here's an update for that test character. Bone lengths normalised (no scale). Also, small-weight bones pruned and zero-weight bones culled from the deformer list, so pretty much 30 or 31 bones are in the deformer list.

    I included an 80,000 vertex count mesh and normal map just in case they are useful for testing.

    Updated the legacy animation to source from the new skeletal rig.

    http://animationengine.org/docs/Axl_v2.zip
     
    Noisecrime likes this.
  16. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Thanks again for this.

    May I ask what the intention is for the model? You mention it being a 'crowd' model, in which case you'd think it would be a good candidate for GPU skinning. However, if you check out my spreadsheet you'll notice that AxlLow has marginal increases in performance with a small number of models (16 instances) and actually performs worse with a large number (100). Unity's GPU skinning performs better here.

    Performance changes in favour of my custom GPU skinning once you move up to AxlMed and AxlHigh though, with increases of 100% to 200% (2 to 4 times), and it sometimes even appears to outperform Unity's for some reason.

    So while it's good that my custom GPU skinner works so well with vertex counts > 5000, if you want a big crowd using the lowest polygon count (AxlLow) then Unity's CPU skinning is probably better. Though interestingly, AxlMed at 100 instances has similar performance to AxlLow, so maybe that means you can just use the higher polygon count for the same cost?

    Ah, so many different permutations and issues to consider, it starts to hurt my head ;) I plan on updating the spreadsheet over the next couple of days as I refine my test framework, shaders and output data. This should make it more comprehensive and easier to read.
     
    Last edited: Jul 18, 2014
  17. metaleap

    metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    Awesome update and progress... @MakeCodeNow's info is also "principally reassuring". Btw, note that on quite a few Android devices the CPU/GPU is essentially the same chip, and RAM/VRAM are shared; not sure about your devices and no clue if that matters.

    For multi-pass, of course, the transform-feedback approach is superior to the vertex-shader approach, but we don't always get access to the former in Unity. Then again, there are many mobile project contexts where multipass is avoided as a matter of principle. Though even with just shadow support the VS approach suffers compared to TF. But that's just the tradeoff.
     
  18. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    Have since discovered the issue with SetMatrix() in OpenGL on the Mac is a Unity bug that will be fixed in a later version, but that does little/nothing to address the performance issues. At this point I'm not sure what the problem with performance is; there are many areas it could be, but having the GPU disabled in the profiler on Macs doesn't help.

    It's a 2013 Nexus 7, so it supports GLES 3 and has a Qualcomm Snapdragon GPU. It should be pretty good, though I've not actually compared it to other tablets.

    Yep. I guess the only positive is that at least my GPU skinning solution works on dx9 and doesn't require Pro; that might be enough to make it useful. Of course there is the rather sticky issue of how to implement the system around it in a way that works for many users and other subsystems, such as LOD. I'm not entirely sure that will be possible, and with the sheer number of permutations (I have 4 different shader techniques alone), I might be better off looking at releasing a framework rather than a complete product. It will mean that users will be expected to get a bit dirty if they want anything beyond basic functionality (i.e. attach a script to get GPU skinning on a SkinnedMeshRenderer).

    I'm slowly beginning to understand why Unity never bothered to support GPU skinning prior to dx11/GLES 3. Although it took just a couple of days to get a working version, sorting out all the weird edge cases, platform differences, shader APIs (can't use non-square matrices in GLSL, as I can't find a way to force Unity to use version 1.2), testing in multiple environments, architectures etc. really hammers development progress. I mean, I lost a whole day to the Mac SetMatrix() bug thinking it was a problem on my end.
     
    Last edited: Jul 18, 2014
  19. KristianDoyle

    KristianDoyle

    Joined:
    Feb 3, 2009
    Posts:
    63
    It's a crowd guy, so yes, the intention is for lots of them. Probably a mix of foreground models and video-texture backgrounds is what I will end up with. It's been my hunch that the real benefits of GPU skinning come into play only with super-high-resolution meshes. Not really the case here - but I thought I'd hand you some high-resolution meshes just for testing.
     
  20. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,054
    I see. Well, thanks again for the models, they have proved invaluable for testing, though I've been unable to use the 80k one yet since Unity splits it into 3 models. I don't think there is any issue with it, it's just that my testing framework doesn't account for it.

    As for benefiting high polygon meshes, that's true these days, mainly because the vertex animation can happen on a separate core and the overall vertex upload per frame is relatively low. That is, it's a complex equation between CPU power, bandwidth and GPU, set against the number of models, the number of bones per model, and the vertex count.

    Certainly, in my tests, models with less than approx 3k vertices will perform better on the CPU, though the number of models and the number of bones per model can influence the results a bit. Once you get to 5k vertices and above, the number of bones and number of models have minimal effect on the relative performance, and the GPU wins out.


    So the following is just some free thinking....

    I was actually asking because, if you keep to low polygon models (which makes sense for large crowds), then what you'd probably want to do is combine several into a single mesh. This is possible with Unity already (there have been several threads about it), but with Unity CPU skinning it means you have to skin the total number of vertices in the 'combined mesh' and upload them to the GPU every frame. On the plus side it does mean that each instance can retain its own unique animation, but it also means you are doing a lot of bone animation too. It's the typical trade-off between unique and shared animation instances.

    I would guess that with dx11/GLES supporting 1k-4k bones, Unity's GPU solution would also work well using the same method and would support unique animations too. It would still get hit by animating many models' bones, but the skinning would be very fast.

    Alas, for my custom solution, due to the limited number of bones supported (SM 3 is approx 56-72), it's not as suitable. Though I see no reason why I couldn't write a similar or even better dx11 version, but that's for the future.

    Now, if the instances share the same animation, then I don't think there is any real benefit with Unity CPU skinning at all, since all vertices have to be updated and sent to the GPU each frame. Similarly, with dx11/GLES GPU skinning I don't think there would be a straightforward method to share the animations (need to think a bit more on that). However, my custom solution could provide substantial performance benefits, but it would require a special-case shader.

    Essentially, with a low poly mesh using say 30 bones, that should leave room for 24-40 instances (limited by the max vert/poly count hitting 65536), all being skinned on the GPU but using a shared animation (rough sketch below). Might be something I'll have a play around with some time, though sadly it is a very special case, so might be hard to generalise. Of course things then get even more complex should you want LOD systems involved.
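
    Something like this, very roughly (purely illustrative; assumes the combine step bakes a per-instance placement offset into a spare texcoord, and bone indices/weights into the color stream as my other shaders do):

    Code (Cg/HLSL):
    float4 _BoneRows[30 * 3];   // ONE shared ~30-bone skeleton for all instances

    struct appdata
    {
        float4 vertex : POSITION;
        float4 color  : COLOR;      // x,y = bone indices / 255, z = weight
        float2 offset : TEXCOORD1;  // per-instance xz placement, baked on combine
    };

    float3 SkinRows(float4 v, int r)
    {
        return float3(dot(_BoneRows[r + 0], v),
                      dot(_BoneRows[r + 1], v),
                      dot(_BoneRows[r + 2], v));
    }

    float3 SkinSharedCrowd(appdata IN)
    {
        int b0 = (int)(IN.color.x * 255.0) * 3;  // decode packed bone indices
        int b1 = (int)(IN.color.y * 255.0) * 3;
        float w = IN.color.z;                    // two-bone blend weight

        float3 p = w * SkinRows(IN.vertex, b0)
                 + (1.0 - w) * SkinRows(IN.vertex, b1);
        p.xz += IN.offset;                       // scatter this instance
        return p;
    }

    The skeleton is animated once on the CPU, the bone array is set once per frame, and every instance in the combined mesh reads the same bones.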

    As for my custom solution, I'm still working on it, trying to organise the shader code into something that can be easily re-used and integrated into any existing shader. The main problem is I have 5 distinct methods of bone input, and each has 2 or 3 different code bases for doing the maths (trying to see which is more optimal), so the permutations get big very quickly. I wanted to re-create all the basic Unity shaders (diffuse, spec, bumped, reflection, transparent etc.), but the number of permutations might be unworkable. Still, it's getting closer.
     
    Last edited: Jul 22, 2014
  21. KristianDoyle

    KristianDoyle

    Joined:
    Feb 3, 2009
    Posts:
    63
    Good luck with integrating with the Unity shaders. Interesting ideas. It never occurred to me that there were skinned mesh combining methods. But yes - if you're limited to sharing animation among instances - it's a special-case use.
     
  22. lhoyet

    lhoyet

    Joined:
    Apr 20, 2015
    Posts:
    6
    Hi, I know the thread is a bit old, but it's one of only a few that discuss custom GPU skinning for Unity.

    I'm actually looking into writing a shader for Dual Quaternion skinning in Unity to avoid the usual candy-wrapper artefacts. But before starting from scratch I was wondering if Noisecrime (or anyone else) would be willing to share his code, as it looks like he had such an implementation for testing.
     
  23. ChenMo2

    ChenMo2

    Joined:
    Mar 4, 2013
    Posts:
    5
    Also, I am looking for a way to access a uniform array in a Unity shader.
     
  24. Reanimate_L

    Reanimate_L

    Joined:
    Oct 10, 2009
    Posts:
    2,788
    @Noisecrime sorry for the thread necro, but would it be possible for you to show some sample shader code for emulating skinning on the GPU with at least 2-4 bones?
     
  25. Ferritah

    Ferritah

    Joined:
    Aug 5, 2014
    Posts:
    1
    @Noisecrime I've been trying to do this, and this thread has come in very handy. Each frame is displayed, though in a really skewed way, and I can't figure out why. Do you think you could share your code?

    Thanks!
     
  26. mdYeet

    mdYeet

    Joined:
    Oct 21, 2020
    Posts:
    22
    For anyone looking to use GPU skinning, I think it now falls under the Compute Skinning checkbox in Player Settings.

    "Enable this option to enable DX11/ES3 GPU compute skinning, freeing up CPU resources."