[BETA RELEASE] GPU Instancer - Crowd Animations

Discussion in 'Assets and Asset Store' started by LouskRad, Apr 29, 2019.

  1. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    Hi there, and thanks!

    unity_WorldToObject is actually overridden with each instance's transform matrix in the GPU instancing setup; you can take a look at GPUInstancerInclude.cginc to see this. What looks to be missing from your vertex method, however, are the required instancing setup calls at the beginning of the method, namely:

    UNITY_SETUP_INSTANCE_ID(v);
    GPUI_CROWD_VERTEX(v);
    ...

    Having said that, it might actually be a better idea to do the rotations matrix-based instead of doing it vertex-based. To do this on the GPU side, using a custom compute shader would be ideal. You can take a look at the included boids fish demo in the package for an example of this.

    It is also possible to do the instance matrix rotations on the CPU side if you are not familiar with compute shaders to do this. In that case, I would recommend using a threaded system to divide load in the CPU (like the job system).
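    To make that concrete, a minimal vertex method with the setup calls in place could look something like this (a sketch only -- the struct and field names here are the standard Unity ones and may differ in your shader):

```hlsl
v2f vert(appdata_full v)
{
    v2f o;
    // Resolves the instance ID and sets up the per-instance matrices
    // (this is where unity_WorldToObject gets overridden).
    UNITY_SETUP_INSTANCE_ID(v);
    // Applies the baked Crowd Animations skinning to the vertex data.
    GPUI_CROWD_VERTEX(v);

    // ...your own vertex modifications should come after the calls above...

    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.texcoord;
    return o;
}
```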
     
  2. pdinklag

    pdinklag

    Joined:
    Jan 24, 2017
    Posts:
    51
    This is definitely the better idea, yes. Thanks for pointing at the boids fish demo, I got it to work!

    Using the CPU works too, and that's what I did, but since these kinds of computations are what GPUs excel at, I thought: why not use it? I don't render too much, and the audience is really the heaviest part. So give the GPU something to chew on and don't waste any precious CPU on it. :)
     
    LouskRad likes this.
  3. xmalix

    xmalix

    Joined:
    Aug 8, 2017
    Posts:
    1
    Hi, do you know if support for mobile is coming soon?
     
  4. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    Hi there,

    Mobile support is currently not a priority in our roadmap, so at this point I can't confirm if or when this feature will be implemented.
     
  5. blacksun666

    blacksun666

    Joined:
    Dec 17, 2015
    Posts:
    175
    That's a shame; can't wait to use this on the Oculus Quest platform.
     
    hungrybelome likes this.
  6. PiAnkh

    PiAnkh

    Joined:
    Apr 20, 2013
    Posts:
    110
    Hello,

    I just got your asset and am looking forward to experimenting and discovering whether it is feasible to move our project to this.
    Could you tell me if the use of layers or masks is on the roadmap?
    Or is there perhaps some way around the lack of layers using blending, for example blending where one animation has no keys for some of the bones?

    Thanks!!
     
  7. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    Hi there,

    We are currently looking into layers and masks, but this is experimental at this stage and I cannot give you an ETA. A workaround using blending would not work out of the box, since blending would be applied to all bones.
     
  8. hungrybelome

    hungrybelome

    Joined:
    Dec 31, 2014
    Posts:
    279
    Me too. I'll buy this asset once Android is supported.
     
  9. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    78
    Another question for you -- and apologies in advance as we're still somewhat new to the animation/rigging side of things, so I may butcher some of the terms here, but I'll do my best to describe the situation and what I'm looking to do:

    • We're using the "No GO" workflow with the Crowd add-on; Unity 2019.2.17f1 currently.
    • We've got N different humanoid models (distinct meshes) that are all rigged the same (Mixamo).
    • Each of the models is configured as Humanoid with its own 'Avatar'.
    • We have a library of animations that works with all of these imported humanoid models (via importing them, dis-associating them from the model they were imported with, and adding them all onto a 'generic animation controller' that is assigned to each character [to get them recognized by Crowd-GPUI]). This works/previews correctly within Unity for all agent models/rigs.
    • Creating a 'Prefab Variant' for each of the models so that they can be added as Prototypes (Unity/GPUI requires it and no longer allows editing the imported GO-model asset).
    • At this point, we can drag each into the Prototype list for GPUI and then bake the animations. As far as I can tell (and perhaps I'm not understanding it correctly), the animation data will be the same for them all -- but GPUI requires me to bake the animation data for each individually, even though they share the same animation controller & underlying animations (the actual meshes do differ, though).

    • The Question:
      Is it possible to share the animation data (texture) in this situation? Or is there per-avatar data contained within the baked animation texture, and it just visually looks similar/identical (didn't photoshop-compare it, but it sure looks the same in the Unity preview)? I'm not well-versed on 3D animation overall (learning more each day, though!) -- but my question is basically whether or not there is any per-"avatar" (or per-mesh data) contained within the texture file, or if the animation texture data is purely "rig + animation" derived.

      In the latter case, then I'd expect that we could re-use the animation texture for all of our characters despite them being distinct models -- simply because they're rigged the same (the animations play correctly, within Unity for all of them, outside of GPUI, for instance). It's not a huge issue either, but if we can avoid baking+uploading N copies of each animation texture to the GPU, and instead only have to upload/bake 1 texture per animation, then it seems we'd be able to realize a fairly substantial savings.

    Thank you!! :)
     
  10. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    This is an interesting point, and we will look into adding it as a feature. For now, however, you could use a hacky method to achieve the same effect by baking all the prototypes' animations, and then assigning the same animation texture to the AnimationTexture field in their prototype scriptable objects. You can find these SOs under:

    /GPUInstancer/PrototypeData/Crowd/Prefab

    You can then remove the other generated texture files.
     
  11. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    78
    Thank you very much, that's what I suspected but wanted to double-check!

    Based on my description of the scenario, it sounds like you'd generally expect it to work without issue -- given that the characters are all Mixamo-rigged and share the same animations? Are we potentially "doing it wrong" by having each character (prototype) use its own 'Avatar', or does it sound like we're generally using the systems properly here?

    Two more questions actually, if I may -- only tangentially related, but along the same lines (sorry!):
    1. Are blend-shapes supported?
      EDIT: They are not. Blend shapes are not supported, and I had asked about it a while back via email and completely forgotten that I had done so. Sorry!

      For context: Our intended use-case would be to gain agent variation (ex: "thin-to-chubby" or female "long-skirt to no-skirt") while still being able to instance them from a single mesh.

    2. As an alternative, and I believe (but may be wrong) that this is essentially how GPUI works, but could we basically have a prototype with two meshes that have identical vertex layouts and 'bake vertex offset' into a map to achieve the same result as the examples I mentioned above?

      For example, we have two female characters -- 'chubby' and 'thin' -- and they have exactly the same mesh layout/topology, the only difference being certain vertices are displaced in certain areas. My goal would be to instance them as one single prototype and then send an extra "chubbyFactor" float via buffer, to allow us to essentially "lerp" between the two sizes.

      For a standard mesh, I'd expect that what I've described would work fine & without issue -- but I'm far less confident on whether it would work (or not) when animation comes into play. I think it's basically the same thing that GPUI is doing, (though it may be baking bone positions instead of individual vert offsets..?), but then I'm left unsure/wondering what that may end up looking like/resulting in when it's lerping between two "base models" and then applying the animation "on top" as well.
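      To put the idea in (pseudo) shader terms, it would be something like this -- all names here are made up, just to illustrate, with the chubby-minus-thin deltas baked into an extra vertex stream:

```hlsl
// Hypothetical sketch: blend between the 'thin' base mesh and the
// 'chubby' variant via a per-instance factor supplied in a buffer.
StructuredBuffer<float> chubbyFactorBuffer; // one 0..1 float per instance

void ApplyBodyVariation(inout float4 vertex, float3 chubbyDelta, uint instanceID)
{
    // chubbyDelta = chubbyPosition - thinPosition, baked per vertex
    vertex.xyz += chubbyDelta * chubbyFactorBuffer[instanceID];
}
```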

      Any thoughts on this overall? BTW, if you'd prefer I'd be more than happy to take this offline & email direct (and would be happy to compensate you for your time as well). :)
    Thank you!!
     
    Last edited: Jan 12, 2020
  12. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    78
    I don't want to take away from my question above(!), but after doing substantial work with/in/around the GPUI Crowd Animations asset over the last few days, I have a couple of feature requests & pain points that I think are worth mentioning. I'm not demanding any of this, and am able to work around it all; this is mainly just a wish list / some quirks / food for thought! Apologies in advance, this list got a whole lot longer than I had anticipated. :)

    1) Add 'Bake All [MISSING]' + 'RE-Bake ALL' buttons.
    Don't even worry about the multi-select stuff IMO, just a quick-fix queue solves 90% of the "problem". For me that problem is being able to hit the button and let it run (ie bake 5-15 prefabs with >100 animations each) while I walk away, without having to wait & click twice between each, before finally hitting 'Bake' again. My hack-around queue works great for now but this is easily the #1 request I'd have. I'm guessing the main obstacle is getting an Update() and/or editor coroutine running; I used a hacky solution, but if you wanted to make the 'Editor Coroutines' package a required dependency (or similar approach) that wouldn't bother me at all... This is *easily* the biggest pain point that people will hit IMO; only took ~20min to wire up a queue, but definitely worth having out-of-the-box IMO.

    2) Multi-select for setting animation frame-rate.
    Such a tiny thing, and trivially worked around/scripted away -- but it annoyed me as I was mass-dragging stuff around the past few days.

    3) Allow optional use of a custom data structure for List<Animations>
    We use the API only, and I'm not familiar with the Animator UI -- which is fine, as I can quickly "delete all" and re-drag the animations back in -- but the Animator UI stinks for updating/maintaining what amounts to a list, and (AFAICT at least) there's no way to get it to *list* them. Obviously the Animator UI isn't your fault, but I'd sure prefer to just have a List/data structure. Others may disagree here, though, probably if they're using GameObjects -- but it's just not a UI or workflow that's overly useful for me as a "code developer" who generally wants to avoid editor UIs. I can almost surely hack around this too, but haven't done so yet.
    Screenshot -- shows why/how ugly it actually is w/ my setup: https://imgur.com/a/p6KVW4q

    4) Detect & alert if "delta" between Animations in list (controller) vs what is Baked.
    Ideally, allow the user to bake only the delta/whatever is missing. I'm not sure if delta-baking is already happening or not (because of issue #3), or if it's trivial to do -- if not, then even just an alert that "this one has N animations in the List/Controller that are NOT YET BAKED" would be helpful.

    5) Auto-Detect "Same Animation Controller"
    As mentioned & discussed a few posts back. I'm just about to take a look to see whether I can bypass the actual image output part of the bake process; I'm guessing it may be required (or the data may need to be copied) to get 'pixel offsets', but I currently have 5 prototypes that I'm not going to bake until I go to sleep tonight (due to how many animations are involved). If I'm able to bypass it, that'd be excellent.
    [Edit: Took a real quick peek; it looks like I can copy the data as long as the rig is truly identical, but I will have to test it to be certain -- though it *should* be the same, I think; the only vert positions that change are due to 'thickness'/bloat, but rig+indices should all be otherwise precisely identical.]

    6) Delete animation data/textures when removing a CrowdPrototype from CrowdManager.
    Either add a checkbox to "delete related stuff" down by the delete button, or honestly even just provide a tool somewhere that can multi-select the ones that are no longer in use by registered prototypes. After testing/working with it for a few days, I find myself with quite a few textures that are 'orphaned'; I can write some quick code to find/fix/delete them, but would rather it stayed clean so I don't accidentally commit textures to LFS that aren't used anymore. If they are supposed to be removed already, then there may be a bug.


    Anyways -- I've hacked in my own "solutions" to most/almost all of these for now, but figured it'd be worth providing the feedback if nothing else; at least a few of them are likely to be pain points for others too. Also plenty of well-deserved "Thank You" is warranted, both for the product & the support! :)

    PS - Far and away the best asset on the store. Code quality alone blows away anything else I've picked up [which is more than I'd like to admit, and 98% of anything code-related is outright unusable due to code quality issues]; not only does the GPUI/Crowd Animations plugin work flawlessly, but you can work with it so easily and it's clean -- you don't find yourself having to go through and fix bad code everywhere. Thank you for making [and standing behind] a high-quality product. :)

    PPS - Gentle reminder: my question above WRT blend-shapes / lerping between two 'base' models. =D
     
  13. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    Hi Arthur,
    and thanks for the detailed description.

    GPUI CA is generally bone based; that is, the compute shader works on bone layout data. That is why the avatar is not related at all to how it works. You can have your rigs use the same avatar or different ones, and it wouldn't change anything as far as runtime execution is concerned. At runtime, GPUI will only use the baked bone data, regardless of how that data was originally set up for the animation.

    As for your "bake vertex offset" strategy for blend shapes, this would be possible to do in a shader (using an instance-based variation strategy - e.g. offsetting vertices depending on vertex color). This would of course be a rather complicated undertaking, and the performance implications should be kept in mind. We are looking into implementing a generic solution for blend shapes, but at this point I cannot offer you an ETA on if and when it will be added to the asset.
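    As a rough illustration of what I mean by vertex color based variation (a hypothetical sketch, not something the asset provides out of the box):

```hlsl
// Offset vertices along their normals, scaled by a painted vertex color
// channel (marking which areas may deform) and a per-instance weight.
StructuredBuffer<float> blendWeightBuffer; // hypothetical per-instance data

void ApplyVertexOffset(inout float4 vertex, float3 normal, float4 color, uint instanceID)
{
    vertex.xyz += normal * color.r * blendWeightBuffer[instanceID];
}
```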
     
  14. LouskRad

    LouskRad

    Joined:
    Feb 18, 2014
    Posts:
    655
    Thanks a lot for such a detailed wishlist :)

    We have taken note of all these points, and we will look into implementing them in future updates. Also, thanks for your review of the codebase -- we appreciate it.