Search Unity

[RELEASED] Puppet Face - All-In-1 Facial Animation for Unity

Discussion in 'Assets and Asset Store' started by jamieniman, Dec 17, 2020.

  1. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    I bought it a long time ago and it never gave me good results tbh.
    Currently only Salsa and now Puppet Face are up to the task.
    If they could work together nicely instead of competing, that would be the best solution on the market.
     
  2. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    What version of Unity are you using? I can't seem to reproduce this - it's playing the correct lipsync from the start of the clip for me.
     
  3. DigitalIPete2

    DigitalIPete2

    Joined:
    Aug 28, 2013
    Posts:
    44
    Hi Jamie,

    I can't get the Performance component to work at all because it collapses into itself every time I click on an 'open' button:

    upload_2021-1-15_21-59-51.png

    upload_2021-1-15_22-0-21.png

    Like this. It is so weird, but also completely unusable. Do you have any ideas?

    I'm using Unity 2019.4.4f1

    No matter which other components are in the stack, this collapses into something I can't even see well enough to use, every time.


    A quick edit - Not sure if this is a clue, but I'm using a VR setup. Maybe I can set it up in a standard scene and switch to VR later. Does the system use a standard camera only to edit performances?

    Edit 2: I seem to have stopped the collapse by removing the VR rig and inserting a standard main camera. But it still occasionally freaks out.
     
    Last edited: Jan 16, 2021
  4. Recon03

    Recon03

    Joined:
    Aug 5, 2013
    Posts:
    845
  5. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    This is very strange - so when you say that removing the VR camera fixes it but it occasionally freaks out, you mean it occasionally does the collapse bug again? Really odd. Can you help me reproduce your setup - what VR rig are you using?
     
  6. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Does RT Voice Pro allow you to precompute your text-to-audio as WAV files (not only at runtime)? If you can, then it should work with Puppet Face.
     
  7. squallfgang

    squallfgang

    Joined:
    Sep 13, 2016
    Posts:
    21
    Hey there your software looks really cool!

    I have a question:
    I would like to animate a face with blendshapes but have certain objects move along with the animated blendshape. For instance, I will have a mustache which is a separate object and will be animated by BouncyBones, but it should move along with the top lip of the face, which is animated with blendshapes.

    Is there a way to combine these two objects so that the mustache moves where the blendshape goes? Or should I move completely to Bone Animation to make that work?
     
  8. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Great question! There is a lipsync array attribute on the LipSync component to which you can add all the other objects that have LipSync components on them (like the moustache and tongue); the main lip sync will then drive all the others together.
    Then you can make blend shapes for the moustache or set bone positions and rotations for the mouth poses.
    With dynamic bones you could parent them to a moustache parent bone and use this to make the poses; that way the dynamics stay on the lower child level.
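    A rough sketch of that setup in code - note that `secondaryLipSyncs` is a hypothetical stand-in for the lipsync array attribute described above (check the actual LipSync inspector/API for the real field name):

```csharp
using UnityEngine;
using PuppetFace;

// Hypothetical sketch: wire a secondary LipSync (e.g. on the moustache)
// so the face's main LipSync drives it. Field names are illustrative only.
public class LipSyncLinker : MonoBehaviour
{
    public LipSync mainLipSync;       // on the face mesh
    public LipSync moustacheLipSync;  // on the separate moustache object

    void Start()
    {
        // Add the moustache to the main component's lipsync array,
        // so both play the same mouth poses together.
        mainLipSync.secondaryLipSyncs = new LipSync[] { moustacheLipSync };
    }
}
```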
     
    bluemoon likes this.
  9. huuhau

    huuhau

    Joined:
    Mar 28, 2014
    Posts:
    17
    Does Performance Capture work with the device camera? Is it like the face mesh of AR Foundation?
    Cool asset btw.
     
  10. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    The performance capture works with a web cam.
    My future plan is to get it to also work with AR Foundation.
     
  11. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    873
    Can this generate and play lipsync from text alone?

    Also how does bone based animation work? Can it blend with separate expression triggers?
     
  12. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    It needs an audio file to make the lip sync.
    The Lip Sync animation will override any bones and blendshapes it's working on; you can have separate expression blendshapes that work at the same time.
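    For example, a separate expression blendshape on the same SkinnedMeshRenderer can be set with standard Unity calls while the lip sync plays - the "Frown" shape name here is just an example, not something shipped with the asset:

```csharp
using UnityEngine;

// Drive a separate "frown" expression blendshape alongside lip sync.
// The blendshape name is an example; use your own mesh's shape names.
public class ExpressionBlender : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    [Range(0f, 100f)] public float frownWeight = 60f;

    void LateUpdate()
    {
        // Look up the shape by name; GetBlendShapeIndex returns -1 if missing.
        int index = face.sharedMesh.GetBlendShapeIndex("Frown");
        if (index >= 0)
            face.SetBlendShapeWeight(index, frownWeight);
    }
}
```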
     
    Last edited: Feb 2, 2021
  13. yusufbesim

    yusufbesim

    Joined:
    Jul 1, 2017
    Posts:
    4
    Hello,

    Have 2 question

    1) On my Mac, performance capture is not working. No cam video on the canvas. Everything is set up as in your tutorial video.
    2) Can we use Puppet Face with Timeline? If yes, how?


    Thank you
     
  14. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    873
    I would love it if lip animation could be driven by text, especially integration with the Dialogue System asset. I think Salsa has that feature, which is why I've been considering it.

    How do bone based expressions work then? Like if I want to frown while lipsyncing or something?

    Is it possible for lipsync to be triggered by the Animator, to allow for blending or layering?
     
  15. Wow, I was just browsing the forums and found your thread about this. This looks amazing - instantly bought it.
    I've used two separate lipsync solutions, Salsa and LipSync Pro. My biggest pain point is that Salsa only works in real time; LipSync Pro is nice, but it gives mixed results and needs a lot of tweaking to achieve good results.
    I'm also using FMOD, so the real-time lipsync solutions don't work for me. I will have my shiny new thing to play around with over the weekend, apparently.

    Also, if you're looking for ways to improve it, adding "official" support for FMOD and Wwise is always welcome - FMOD more, since that's better for indies. I would love to work with a pipeline where we don't have to rely on the built-in audio system for the end results (during content creation is okay).
     
    Last edited by a moderator: Feb 3, 2021
    awesomedata likes this.
  16. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    I don't know why Unity's built-in audio gets so much hate. My only complaint is the lack of a built-in HRTF spatializer (and they could fix that very easily by actively supporting the free Resonance Audio library).
     
  17. I don't hate it - actually that's also FMOD-based. It's just so much easier to create vertically dynamic music in FMOD (changing music and FX when certain events happen), and I'm working with a sound designer/composer who actually knows what he is doing. :D
     
    atomicjoe likes this.
  18. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Do you see any devices listed at the top of your PerformanceCapture component?
    There is a lipSync timeline track, here's how to use it:



    btw the LipSync conversion uses an exe (which I don't believe works on your Mac?)
     
  19. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    I'm going to add this to my trello, thanks for pointing it out :)
     
    Lurking-Ninja likes this.
  20. yusufbesim

    yusufbesim

    Joined:
    Jul 1, 2017
    Posts:
    4
    Yes, I'm working on a Mac.

    Just found the device list. With the screen camera (low res) everything works, but when I switch to the MacBook built-in camera (HD) the capture is very slow. Any solution?
     
  21. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    The performance capture is built on top of OpenCV; I'm not sure if it's this that is slow on your MacBook. Does the capture work with the lower res one?
     
  22. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    873
    I would also like to request Master Audio support! (Granted, it may already work with that to some degree, but I haven't bought it yet.)
     
  23. michael-y

    michael-y

    Joined:
    Apr 9, 2013
    Posts:
    18
    Does this asset support Unity on macOS?
     
  24. michael-y

    michael-y

    Joined:
    Apr 9, 2013
    Posts:
    18
    And what's more, can it create a face rig? So I can rig the face in Unity, instead of editing in Maya and shifting back to Unity?
     
  25. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    The (in editor) lipsync conversion from audio uses an exe so needs to be done on Windows. The playback will work on MacOS though.
     
  26. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    It can create the blend shapes in Unity (so you wouldn't need to go back to Maya for making them). If you want to add bones (such as the Jaw bone) in Unity then you would need Puppet3D to do that.
     
  27. DigitalAdam

    DigitalAdam

    Joined:
    Jul 18, 2007
    Posts:
    1,204
    Looks great! Can you export the morph targets as FBX files to use in other applications?
     
  28. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Not as of yet :)
     
  29. DeepShader

    DeepShader

    Joined:
    May 29, 2009
    Posts:
    682
    Any plans for macOS support "(Requires Windows for Lip Sync Converter)" ?
     
  30. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    Hi! I just bought Puppet Face and installed it into a project, and I'm really excited about the performance capture, but so far it is performing incredibly poorly. The video capture shown in the performance canvas is delayed and plays at about 1 frame / 3 seconds, and the capture takes a few seconds to update the mesh. I'm running this on a new MacBook Pro 15". Unity 2020.2.1f1. The project is set up for URP. Any tips regarding performance? Are there more tutorials besides the one on YouTube?
     
    Last edited: Feb 17, 2021
  31. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    I'm using a Mac and it's working for me. If you have a recent OS, you should be able to extract Rhubarb from the EXE.
     
  32. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    So, I was going nuts trying to find the SAVE button in the LipSync editor, went back to watch the video, and saw where it's supposed to be - but for me it ends up outside the window 9C6552D6-8123-4FD5-B333-ACD112219763_1_105_c.jpeg


    Fixed this by commenting out a line in PuppetFaceEditor.cs

    9F176DB3-91E1-43DF-BEE2-7ECC2A9BC902_4_5005_c.jpeg

    1CD4ED97-AE21-4949-9F4A-9269A7973AD2.jpeg
     
    Last edited: Feb 18, 2021
  33. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    That's odd - the flexible space line is supposed to keep it centered. I wonder if this is a mac issue. I'll look into it, thanks for bringing it to my attention :)
     
    SuperDanOsbourne likes this.
  34. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    I'm trying to figure this out - so far I've only had this reported on MacBooks - I wonder if it has to do with how its webcam interacts with OpenCV (used for the face recognition). If you happen to have an external webcam lying around that you can connect and try, I'd be interested to see if it makes a difference.
     
  35. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    If you are using an older Mac, you can download the Mac version of Rhubarb and do the conversion manually. Then PuppetFace can read and edit the xml file created.
     
  36. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    I don't have an extra camera. I would maybe be willing to buy one if there was a chance it could help. If it helps give you a clue, the capture for Adobe Character Animator works fine. Any plans to include an option for pre-recorded video in the future?
     
  37. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    Thanks for replying quickly to this and the capture issue. To avoid causing any confusion, I changed the order of the buttons in the script before ending up taking out the Flexible Space.
     
  38. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Never mind about the extra webcam - I'm not certain that will help. Hopefully when I get back to the office I can run trials on MacBooks myself. Doing performance capture on videos would be a nice feature - I've added it to the trello list.
     
    SuperDanOsbourne likes this.
  39. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    Another thing that would be great would be an option to import blend shapes instead of editing. I'm having a hell of a time trying to use the mesh modifier. In topological mode, it will only move one vertex (or 2 if mirrored), and the temporarily generated mesh is the wrong scale - partly due to how I'm exporting, I think. It's closer when I export as DAE, but slightly off.

    5DCFAA62-42FB-4F58-8484-04B0D30C5F9B.jpeg
     
  40. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    It will be the model in its bind pose. If you reset it to its bind pose when you export it, the poses should match.
    How many polys is your mesh? To use topology mode you currently need it to be under 10K. In the next update this limit will be per submesh. (If you email me puppet3dunity@gmail.com I can send you the update before it's released)
     
  41. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    Ah! This model is 66K polys. As for the pose, everything is zeroed out when I export, so I'm not sure what's going on.

    On the bright side, this finally gave me the push to figure out once and for all how to export my blend shapes :) so I'll at least be able to use the lipsync feature. That alone is worth more than the price for sure.

    One thing I'm still wondering about, though: is there a way to have more than the 9 poses in the lipsync editor? It sure would be useful to be able to define more, for more definition or expressions.
     
    Last edited: Feb 18, 2021
  42. SuperDanOsbourne

    SuperDanOsbourne

    Joined:
    Oct 3, 2017
    Posts:
    46
    Hi, seems I have a new problem. Looks like I can't analyze audio files.

    When I submit a WAV file (in this case "old man introducing"), I get:

    Win32Exception: mono-io-layer-error (2)
    System.Diagnostics.Process.StartWithShellExecuteEx (System.Diagnostics.ProcessStartInfo startInfo) (at <aa976c2104104b7ca9e1785715722c9d>:0)
    System.Diagnostics.Process.Start () (at <aa976c2104104b7ca9e1785715722c9d>:0)
    (wrapper remoting-invoke-with-check) System.Diagnostics.Process.Start()
    System.Diagnostics.Process.Start (System.Diagnostics.ProcessStartInfo startInfo) (at <aa976c2104104b7ca9e1785715722c9d>:0)
    PuppetFace.LipSync.ConvertAudioToPhoneme (UnityEngine.AudioClip audioclip) (at Assets/PuppetFace/Scripts/LipSync.cs:505)
    PuppetFace.PuppetFaceEditor.ConvertAudioToLipSync () (at Assets/PuppetFace/Scripts/Editor/PuppetFaceEditor.cs:439)

    I'm not sure if this was happening the whole time since I was only working with "buy cookies". Maybe I never noticed the error.
    I've tried re-importing and even starting a whole new Unity project to no avail.
     
  43. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    That's for sure on the trello feature list :)
     
  44. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    I wonder if this is a macOS issue. Could you try this - download the Mac version of Rhubarb here
    and then run it in a terminal on the file (replace with the correct paths) - something like this:
    Code (CSharp):
    1. rhubarb -o "Assets/PuppetFace/Demo/Audio/Oldman_Introducing.xml" -f xml -r phonetic "Assets/PuppetFace/Demo/Audio/Oldman_Introducing.wav"
     
    barge9 likes this.
  45. xylowkey

    xylowkey

    Joined:
    Apr 24, 2017
    Posts:
    1
    This tool is great. We will develop applications on mobile in addition to PC. Does it support the Android platform?
     
  46. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,419
    This looks amazing.

    There are only three major things I see so far that bother me even remotely about this tool (but no other solution out there offers these features either, so please don't take it as criticism):

    1. There is no way to specify (and maintain) a "global" jaw bone selection at the press of a button (without digging through the hierarchy) when editing the open/close mouth setup on the lipsync timeline, which makes editing appear tedious here:
      https://youtu.be/7UF5kq0HxeY?t=553
      To remedy this, I suggest having a single panel dedicated to a global character/face setup for bones like the eyelids, eyebrows, jawbone, mouth -- and any additional bones or "chains" of bones the face might need to create more expressive characters.

      At the moment, the workflow for editing facial expressions while working on lipsync is unclear and doesn't appear (at first glance) to be supported very well (which is a tragedy for a tool as nice as this).




    2. Not everyone's facial features (and expressions) are blendshape-only. Some of us even have 2d eyes and mouths that we want to animate using the lipsync. It shouldn't be hard to use bones to drive material-based expressions. Offering this workflow in a visual way would be useful to the few of us who aren't going for realism and/or pixar-styled characters.



    3. Lastly, the bone-based facial expressions I suggest would be nice if they had a procedural (i.e. Freeform Modular Rigging) setup that handles bone chains for things like eyebrows and mouth shapes where you only need to select the start, middle, and tip transform (akin to moving around tails, etc) to slide them around for facial expressions. At first these could simply be single points the user places on the face that could be dynamically set to a certain number of bone transforms in a chain-like fashion that shift around the verts (or sprites/material UVs) for groups of verts painted on the face for each set of points.
      In the case of a single-material character, you could have 4 sets of evenly-spaced eyes on the same material (for mobile optimization) whose painted eyes are evenly-spaced across the texture, letting the shader shift those UVs dynamically based on a bone transform position, letting this kind of optimization be used in Lipsync setups. Would this be something I could work with you to implement? -- I am working on a character like this right now.
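    A minimal sketch of the UV-shifting idea in point 3, using only stock Unity APIs - the layout (eye variants spaced evenly in columns across one texture) and all names here are illustrative assumptions, not part of Puppet Face:

```csharp
using UnityEngine;

// Sketch: snap between evenly spaced eye variants on a single shared
// texture by shifting the material's UV offset from a driver bone's
// local position. Layout and names are illustrative only.
public class EyeUVShifter : MonoBehaviour
{
    public Renderer faceRenderer;  // uses one material / texture atlas
    public Transform eyeBone;      // bone whose local X selects the variant
    public int variantCount = 4;   // eye variants laid out across the texture

    void LateUpdate()
    {
        // Map the bone's local X (expected 0..1) to an atlas column.
        float t = Mathf.Clamp01(eyeBone.localPosition.x);
        int column = Mathf.Min((int)(t * variantCount), variantCount - 1);

        // Shift the UVs so the chosen eye variant lines up on the face.
        Vector2 offset = new Vector2((float)column / variantCount, 0f);
        faceRenderer.material.SetTextureOffset("_MainTex", offset);
    }
}
```

    A shader-based version could do the same offset per-vertex for groups of painted verts, but the material-offset form above is the simplest starting point.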


    Outside of these three things, this asset is an insta-buy IMO. -- Please make it the best it can be in terms of workflow, yeah?
     
    Last edited: Feb 24, 2021
  47. FennecMoon

    FennecMoon

    Joined:
    Aug 12, 2015
    Posts:
    13
    I'm very interested in this but have a question before I purchase: does this work with DAZ3D models, and if so do I need to do anything special when exporting from DAZ? Thank you.
     
  48. jamieniman

    jamieniman

    Joined:
    Jan 7, 2013
    Posts:
    987
    Currently I recommend using the face bones to setup the lip sync expressions for DAZ3D. (There is a 10K limit to topology mode for the blend shape sculpting - I have a remedy for this that I'll be releasing in the next update).
     
  49. WickedRabbitGames

    WickedRabbitGames

    Joined:
    Oct 11, 2015
    Posts:
    79
    UMA support? I'm guessing it would work, but since I've invested heavily in UMA, I just wanted to check before I buy the asset. Thanks!
     
  50. netpost

    netpost

    Joined:
    May 6, 2018
    Posts:
    388
    @jamieniman

    Congratulations! Puppet Face seems like a great asset. I have a few questions if you don't mind.

    1- Do you have any plans to offer the option to record the facial animation using the iPhone's AI face recognition?

    At the moment, I am recording with an iPhone in a third-party app (Reallusion Live Face) and exporting the FBX animations to Unity, which is a pita.

    2- Do you think Puppet Face can edit the blendshapes from these T_pose FBX files?

    Thanks!