[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. haleler51

    haleler51

    Joined:
    Apr 9, 2015
    Posts:
    30
    The developer's last response in this thread was 9 months ago. I assume something unforeseen must have happened that made him strongly de-prioritize his own asset, because right up until then he was on this forum very regularly; seemingly enthusiastic, even.

    I can deal with that as long as there is an update that fixes the issues with 2018.3. This really is a good asset, and I hope it doesn't get totally abandoned soon.
     
  2. philc_uk

    philc_uk

    Joined:
    Jun 17, 2015
    Posts:
    90
    Anyone converted Amazon Polly Viseme to the XML import format in LipSync?
     
  3. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Hello everyone,

    I owe you all a massive apology for letting this thread slide all this time. I don't have any huge excuse to give, just a general combination of personal life/work and everything getting a bit too much for a while. I had initially put off replying back here as I was making no progress on the macOS issue and felt like I didn't have enough to show for it, and then obviously the time that passed just made it even harder to deal with.

    I want to make it clear though that I'm absolutely not abandoning LipSync Pro. You may have seen that there's been a couple of updates while I was off the forum, and email (contact@rogodigital.com) is still the best way to get in touch as I can keep track of what emails I have and haven't replied to there.

    All the same, I'm really sorry for not keeping up the support on this thread, I messed up and I'm going to do my best to stay on top of replies here from now on.

    As for AutoSync, I finally have some good news. The upcoming 1.5 update for LipSync Pro is a complete overhaul of the AutoSync system, designed to avoid future issues like this where something outside of my control causes problems. Under the new system, AutoSync is split into modules that each perform a specific task and can be chained together as presets. Each module can use different libraries or technologies, giving you more options if you want to really customise it for your project, and also allowing a level of redundancy in case something causes one library to stop working.
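    To give a rough idea of what "modules chained together as presets" could look like in code - this is purely a hypothetical sketch for illustration, not the actual API, and the type names below are invented:
    Code (CSharp):
    using System.Collections.Generic;

    // Hypothetical illustration only - not LipSync Pro's real API.
    // Each module does one job; a preset chains them together, so if one
    // library breaks, only that module needs swapping out.
    public class AutoSyncData
    {
        // Placeholder for whatever flows through the pipeline
        // (audio clip reference, detected phoneme markers, etc.).
    }

    public interface IAutoSyncModule
    {
        AutoSyncData Process(AutoSyncData input);
    }

    public class AutoSyncPreset
    {
        private readonly List<IAutoSyncModule> modules = new List<IAutoSyncModule>();

        public void Add(IAutoSyncModule module) => modules.Add(module);

        // Runs each module in order, passing its output to the next one.
        public AutoSyncData Run(AutoSyncData input)
        {
            foreach (IAutoSyncModule module in modules)
            {
                input = module.Process(input);
            }
            return input;
        }
    }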

    I put together a video on my YouTube channel demonstrating one of the new modules (based on the Montreal Forced Aligner tool) and the new AutoSync UI - you can find that here:
     
    alexrau, ftejada, haleler51 and 2 others like this.
  4. fildax

    fildax

    Joined:
    Apr 7, 2017
    Posts:
    37
    Welcome back! Months ago you mentioned that you were working on v2, which should also have Timeline integration out of the box. Any news?
     
  5. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Nothing solid just yet - I mentioned v2 briefly at the end of that update video, but when I started working on it, the plan was to try and integrate the Playables system into the existing code for LipSync Pro. That fairly quickly proved to be more difficult than I expected it to be, and Unity then announced the new C# jobs and ECS systems as well, so I decided to basically start over, and write version 2 with these new features in mind. It's a long way off though, and I don't expect it to support anything lower than 2019.1 as a minimum.

    So instead, Timeline support will be improved for 1.X - the new timeline markers system in 2019 should make it much simpler, so I think I'm going to use that as a base for the new integration, and offer the old version for people stuck on older Unity versions. Both of them will be available from the extensions window.
     
    Last edited: Feb 18, 2019
    ftejada likes this.
  6. fildax

    fildax

    Joined:
    Apr 7, 2017
    Posts:
    37
    Awesome, thanks for info!
     
  7. ftejada

    ftejada

    Joined:
    Jul 1, 2015
    Posts:
    695
  8. Dorian-Dowse

    Dorian-Dowse

    Joined:
    Jun 2, 2016
    Posts:
    95
    Hi Rtyper,

    Good to hear you're still on board. I'm about to launch into a whole bunch of lip syncing.

    My request: please implement the spacebar as a Play/Stop control in the LipSync editor. This is very standard across many animation programs and video players (including YouTube). While working on LipSync Pro I lost track of how many times I hit the spacebar expecting it to play/stop. It's ingrained. I'm hoping this is a simple thing.
     
    Ana_22 likes this.
  9. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Thanks :)

    You can actually do this already - if you open the settings screen in the clip editor (the gear icon in the top right) and go to the "Keyboard Shortcuts" tab, you can set a shortcut for play/pause.
     
    Dorian-Dowse likes this.
  10. domdev

    domdev

    Joined:
    Feb 2, 2015
    Posts:
    375
    This can build for iOS and Android, right?
     
  11. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    That's right - all Unity platforms are supported. There are a couple of limitations on WebGL and UWP, but iOS and Android work just fine.
     
  12. domdev

    domdev

    Joined:
    Feb 2, 2015
    Posts:
    375
    Thanks! I have another question... our app uses text-to-speech, and the generated speech is what needs to be lipsynced. I see in your note that "LipSync Pro does not support automatic lipsyncing at runtime", so it's not going to work?
     
  13. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Ah ok, then yes, it probably won't be suitable for your needs at the moment, sorry! I do have plans to enable AutoSync (our automatic lipsyncing system) to work at runtime, but it's not a high priority right now.
     
  14. jeromeWork

    jeromeWork

    Joined:
    Sep 1, 2015
    Posts:
    429
    Hi @Rtyper. I'd been looking at this asset for a while, but it seemed abandoned... seeing you back on the forum got me to finally buy it. Looking forward to using it and the upcoming improvements.

    I've clicked on Download for the Actions for NodeCanvas from the Extensions window and I'm getting two errors in the console:
    Code (CSharp):
    1. Assets/NodeCanvas Integrations/LipSync/Tasks/Actions/Play.cs(22,15): error CS1061: Type `RogoDigital.Lipsync.LipSync' does not contain a definition for `isPlaying' and no extension method `isPlaying' of type `RogoDigital.Lipsync.LipSync' could be found. Are you missing an assembly reference?
    2. Assets/NodeCanvas Integrations/LipSync/Tasks/Actions/Play.cs(28,14): error CS1061: Type `RogoDigital.Lipsync.LipSync' does not contain a definition for `isPlaying' and no extension method `isPlaying' of type `RogoDigital.Lipsync.LipSync' could be found. Are you missing an assembly reference?
    I'm guessing an API change in LipSync hasn't been copied over to the Actions, can you tell me what I need to change to get it working?
     
  15. jeromeWork

    jeromeWork

    Joined:
    Sep 1, 2015
    Posts:
    429
    Really enjoying this. Great asset, and all worries about AutoSync not working turned out to be completely unfounded. Great stuff.

    Just a few initial thoughts/requests after first use:

    In the LipSync window > Edit: it would make sense for 'Reset Intensities' to alter all markers using the values set in 'Default Marker Settings' (currently it resets everything back to 100%).
    Similarly, the AutoSync values should use those 'Default Marker Settings' (currently only new, manually added markers seem to use those settings).

    Also, Emotions > Marker Settings: it could really do with the option to change the emotion used on that marker/range (like you can change the Phoneme marker settings), i.e. a little pulldown to change the selected emotion.
     
  16. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Glad you're finding it useful so far!

    Ah, my mistake - that should be "IsPlaying" now, with a capital I. I'll make this change and update the package on the server ASAP.
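    If you want to patch it locally before re-downloading, it's just the renamed property - something along these lines (a minimal sketch rather than the actual Play.cs from the integration):
    Code (CSharp):
    using RogoDigital.Lipsync;
    using UnityEngine;

    // Minimal sketch showing the renamed property. Previously the NodeCanvas
    // action checked lipSync.isPlaying (lowercase); the property is now IsPlaying.
    public class WaitForLipSyncExample : MonoBehaviour
    {
        [SerializeField] private LipSync lipSync;

        private void Update()
        {
            if (!lipSync.IsPlaying)
            {
                Debug.Log("LipSync clip is not playing.");
            }
        }
    }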

    All good points, and they should be pretty simple changes to make too - I'll get them added to the next update.
     
    jeromeWork likes this.
  17. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    @jeromeWork Just a heads up, the fixed NodeCanvas integration is live now, just in case you hadn't already sorted it yourself :)
     
    jeromeWork likes this.
  18. jeromeWork

    jeromeWork

    Joined:
    Sep 1, 2015
    Posts:
    429
    Thank you.
     
    Rtyper likes this.
  19. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    LipSync Pro 1.45 is now available from the Asset Store!

    This is a smaller update with a few fixes and a couple of new features, before the big AutoSync update in 1.5 later this month. Here's the change list for this version:

    Features
    • Exposed the "Keep Emotion When Finished" option in the LipSync component editor.
    • [Experimental] Presets can now be marked as "relative" when saving, so bone transforms are applied relative to their existing position, rather than to the absolute position/rotation/scale of the original.
    • The emotion value can now be changed from the marker settings window when editing an emotion marker, or a selection of emotion markers.

    Fixes
    • Fixed issue that prevented the LipSync.PlayFromTime method from working.
    • Fixed issue with the LipSync component editor when inside a new-style prefab, after changing the project's Phoneme Set.
    • Fixed bug where AutoSync would cause an unhandled error when attempting to process an audio clip that contains no dialogue.

    Changes
    • The "Reset Intensities" menu option in the clip editor now resets to the intensity in the "Default Marker Settings" instead of 100%.
    • AutoSync now creates markers that adhere to the settings in "Default Marker Settings".
    • BlendSystems are now found regardless of what assembly they are in, allowing them to be used with .asmdef files or pre-compiled .dlls.
    Cheers!
     
    ftejada likes this.
  20. ceebeee

    ceebeee

    Joined:
    Mar 7, 2017
    Posts:
    395
    I just downloaded 1.45 and installed it into a fresh project on Unity 2018.3.8f1, and when I play the Example_04_Lincoln_Advanced, I get hundreds of this error:

    Code (CSharp):
    1. Thread group size must be above zero
    Clicking on the error shows this in the inspector:

    [screenshot: the compute shader asset shown in the Inspector]
    It would appear this is a compute shader, so I suspect it has something to do with the GPU blendshape support Unity added in 2018.3.

    This is further supported by the fact that if I turn off GPU skinning, the errors don't happen.



    Do you know if GPU skinning support will be added with 1.5?
     
  21. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I'm looking into this now - thanks for pointing it out! Interestingly, it only seems to happen when blending to/from emotions, not phonemes, and the animation happens anyway so you can probably safely ignore it for now. I'll see if I can get it fixed anyway for 1.5.
     
    ceebeee likes this.
  22. ceebeee

    ceebeee

    Joined:
    Mar 7, 2017
    Posts:
    395
    Rtyper likes this.
  23. TobiasW

    TobiasW

    Joined:
    Jun 18, 2011
    Posts:
    91
    I'm currently trying out AutoSync with version 1.45. I've installed SoX, and the VC++ Redistributable wouldn't install, saying that there is already another version installed, so I guess I have that too. When I click AutoSync -> "Start (Default Settings)", it opens the progress bar window, stays a bit on "Recognizing Phonemes", then switches to something else before closing again - but it doesn't add any phoneme markers. This is the case both for my file and for the example Gettysburg audio file. There are no errors in the console.
     
  24. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Yep, I understand it's not the most reassuring thing, don't worry! It's looking like this might be a Unity bug rather than a LipSync one, so I'll try and sort out a simple reproduction and submit a bug report.

    This is especially strange - I haven't seen AutoSync fail with no message or errors whatsoever before. Do you think you could check whether this happens in a completely clean project? Just import LipSync Pro straight from the Asset Store and try running AutoSync on the Gettysburg.wav file without changing anything else. This should help narrow down whether it's something about your system or the project that's causing this.
    (If all else fails though, I am still aiming to release the 1.5 update with the new AutoSync in about 2 weeks, so there's that as a fallback!)
     
    ceebeee likes this.
  25. Jimbo_Slice

    Jimbo_Slice

    Joined:
    Oct 1, 2015
    Posts:
    44
    Hi Rtyper,

    I have been really struggling to get the Texture Offset Blend System to work. I can't find any documentation or examples on this system. I have used LipSync Pro for blendshape and sprite based animations before without any difficulty but I just can't get the texture to move whatsoever.

    I am using LipSync Pro 1.45 and Unity 2018.3.2 (have also tried 2018.3.7).

    I am using the Gettysburg LipSyncData file with Play on Awake; when I hit Play in the Unity Editor, the audio plays but there's no movement on the character's face. No errors.

    I also tried a separate texture and material that controls the character's eyebrows and set it for the different emotions and it didn't have any effect.

    Can you take a look at the screenshot below and tell me if I am missing anything obvious?

    Also if you had an example scene or tutorial it would be greatly appreciated!







     
  26. Jimbo_Slice

    Jimbo_Slice

    Joined:
    Oct 1, 2015
    Posts:
    44
    Ok, I got it - Texture Property Name needs to be "_MainTex". Oversight on my part, but a tooltip in the editor would be helpful.
     
    Rtyper likes this.
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Ah, glad you got it sorted. It's been a while since I updated that blend system, so I hadn't managed to work out what was wrong myself! Yes, that field is the name of the material property that will be changed, so "_MainTex" for most purposes. I'll add tooltips and a better default value for the next update!
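    For anyone else who runs into this - the field has to match a texture property that actually exists on the material's shader ("_MainTex" on most built-in shaders). As a rough illustration of the underlying Unity mechanism (not the blend system's internal code):
    Code (CSharp):
    using UnityEngine;

    // Illustrates why the Texture Property Name matters: Unity's material API
    // addresses textures by shader property name. If the name doesn't match a
    // property the shader actually samples, the offset has no visible effect -
    // matching the "audio plays, face doesn't move, no errors" symptom above.
    public class TextureOffsetIllustration : MonoBehaviour
    {
        [SerializeField] private Renderer targetRenderer;
        [SerializeField] private string texturePropertyName = "_MainTex";

        public void SetFrame(Vector2 offset)
        {
            targetRenderer.material.SetTextureOffset(texturePropertyName, offset);
        }
    }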
     
  28. TobiasW

    TobiasW

    Joined:
    Jun 18, 2011
    Posts:
    91
    Done - and it still fails. This is how it looks after the AutoSync:
    [screenshot: the clip editor after running AutoSync, with no phoneme markers added]

    I'm running Unity 2018.3.4f1 on Windows 7 64bit.
     
  29. thegamer4590

    thegamer4590

    Joined:
    Sep 23, 2018
    Posts:
    1
    I love the asset; however, the Lite version seems to be missing, and I'd love to try it before buying it.
     
  30. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I'm sorry Tobias, I've been trying (unsuccessfully) to recreate this myself, and I just can't think of any reason it'd be happening... Most of the reasons I can think of for AutoSync failing would produce errors, so it implies that the .dll file is there, but just failing silently for some reason. I'm looking to release 1.5 by the end of next week, which has a few new AutoSync systems in. Would you be alright waiting until then to see if the new system works any better for you?

    Yes, it was deprecated by Unity a little while ago. Don't worry, it is coming back very soon - I'll be releasing a new version of it to accompany LipSync Pro 1.5 in a week or so.
     
  31. AKQJ10

    AKQJ10

    Joined:
    Feb 9, 2012
    Posts:
    33
    Hi, @Rtyper,
    PocketSphinx has added a Mandarin package. I have downloaded the package from here, and have already purchased LipSync Pro.
    https://sourceforge.net/projects/cmusphinx/files/Acoustic and Language Models/
    How can I use it to lip-sync Chinese dialogue? Is there any documentation or tutorial?
    Thanks in advance. :D
     
  32. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I just gave this a try. Although they have added a phonetic dictionary to it (which was missing from their Mandarin language model before), something about it still isn't working with AutoSync - trying to use it just causes the Unity editor to crash silently. I'll look into it a bit more if I can, but for the time being I'm more focused on getting LipSync Pro 1.5 out.

    The current PocketSphinx AutoSync version still exists as a module in the new AutoSync 3, so once it's been released I'll see if I can get the Mandarin model working with it.
     
  33. AKQJ10

    AKQJ10

    Joined:
    Feb 9, 2012
    Posts:
    33
    Thanks a lot for the reply, @Rtyper.
    Please keep me posted.
     
  34. Dorian-Dowse

    Dorian-Dowse

    Joined:
    Jun 2, 2016
    Posts:
    95
    Subtitles with LipSync Pro? Any suggestions?
     
  35. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,697
    Many devs use the Dialogue System for Unity to do subtitles with LipSync Pro.
     
    P_Jong and Rtyper like this.
  36. bz_apps

    bz_apps

    Joined:
    Aug 19, 2014
    Posts:
    72
    Hi, is there a built-in way to handle volume, and in particular muting the game?
     
  37. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    As Tony said, Dialogue System for Unity is a great system for handling dialogue in your game and it includes subtitle functionality. If you're not looking for something that in-depth though, you can make use of the "transcript" field in a LipSyncData object when coding your own subtitle system. There's currently no global way to find when a LipSyncData clip starts playing, but if you use some system to manage triggering clips, you could get the transcript then and set the text on a UI component.
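    As a minimal sketch of that idea (assuming you trigger clips with the component's Play(LipSyncData) method and that the transcript field is named "transcript" - double-check the exact names and casing in your version):
    Code (CSharp):
    using RogoDigital.Lipsync;
    using UnityEngine;
    using UnityEngine.UI;

    // Sketch: route all dialogue through one method, so the subtitle text can
    // be set from the clip's transcript at the moment the clip starts playing.
    // The LipSync-side names here are assumptions - verify against the API.
    public class SubtitleTrigger : MonoBehaviour
    {
        [SerializeField] private LipSync lipSync;
        [SerializeField] private Text subtitleText;

        public void PlayWithSubtitle(LipSyncData clip)
        {
            subtitleText.text = clip.transcript;
            lipSync.Play(clip);
        }
    }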

    LipSync Pro plays audio through standard Unity AudioSources, so you can control the volume on each audio source manually, or if you want something project-wide, have a look into Unity's Audio Mixer. If you assign all your LipSync audio sources to a single mixer group you can control the volume of the whole group from a script or using snapshots.
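    For muting specifically, here's a quick sketch using standard Unity mixer calls ("DialogueVolume" is just an example name for the exposed parameter you'd create on the mixer group yourself):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Audio;

    // Assign every LipSync AudioSource to one mixer group, expose that group's
    // volume as "DialogueVolume" in the Audio Mixer window, then drive it here.
    public class DialogueVolumeControl : MonoBehaviour
    {
        [SerializeField] private AudioMixer mixer;

        // Set volume from a 0-1 slider value (mixer volumes are in decibels).
        public void SetVolume01(float linear)
        {
            float dB = linear > 0.0001f ? Mathf.Log10(linear) * 20f : -80f;
            mixer.SetFloat("DialogueVolume", dB);
        }

        public void SetMuted(bool muted)
        {
            mixer.SetFloat("DialogueVolume", muted ? -80f : 0f);
        }
    }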
     
  38. fuzymarshmello

    fuzymarshmello

    Joined:
    Feb 22, 2015
    Posts:
    17
    Hi, I'm having just one small problem when trying to use the eye controller for blinking.

    In my LipSync clip, I have an emotion that has the guy kind of squinting, and I guess that overrides the Eye Controller's blink?
    Basically, when the guy is squinting with that emotion, he never blinks.

    The LipSync and EyeController scripts are on the same object, using the same blend system (bones only).

    How would you recommend I fix this?

    Great asset by the way, thank you!
     
  39. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    Hello there,
    I cannot get AutoSync to work anymore (I tried a new project in 2018.3.12, Win7). It usually displays the editor warning "Phoneme Label '+SPN+' not found in phoneme mapper. Skipping this entry." and no phonemes are added. It used to work some time back (with earlier versions of both Unity and LipSync Pro) with the same SoX setup. I've tried all kinds of audio files, including those from the examples, without any success.
     
  40. TobiasW

    TobiasW

    Joined:
    Jun 18, 2011
    Posts:
    91
    Sure. In the meantime, I got a colleague to execute the AutoSync for me and commit the results, so at least I'm not blocked here.
     
  41. MichaelPickering

    MichaelPickering

    Joined:
    Oct 11, 2017
    Posts:
    14
    Any updates on the 1.5 release, please?
     
  42. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    This is quite a difficult problem - at the moment there's no way to manage conflicts between two scripts like that. Eye Controller tells the blend system to set one value and LipSync tells it another, and the one that shows up depends on script execution order.
    If you want Eye Controller to always take priority, you could try changing the EyeController.cs script's execution order to be later than LipSync.cs (just setting it to run after the default time should work), but I haven't tested this myself so there may be undesirable side-effects to the resulting animation.
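    (If you'd prefer to set that from an editor script instead of through Edit > Project Settings > Script Execution Order, something like this should do it - an untested sketch using Unity's MonoImporter API, placed in an Editor folder:)
    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    // Editor-only: pushes EyeController's execution order after the default (0),
    // so it runs after LipSync each frame and "wins" any shared blend values.
    public static class EyeControllerExecutionOrder
    {
        [MenuItem("Tools/Run EyeController After LipSync")]
        private static void SetOrder()
        {
            foreach (MonoScript script in MonoImporter.GetAllRuntimeMonoScripts())
            {
                if (script != null && script.name == "EyeController")
                {
                    MonoImporter.SetExecutionOrder(script, 100);
                    Debug.Log("EyeController execution order set to 100.");
                    return;
                }
            }
            Debug.LogWarning("EyeController script not found.");
        }
    }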

    One of the things I've done some work on for LipSync 2 is a way to resolve conflicts like this, though I've tried a few different approaches and I haven't decided on one yet!

    +SPN+ is the "garbage" phoneme label that gets returned when it just detects noise or unintelligible audio. I don't know for sure, but I have a feeling the problem might be to do with Windows 7 and newer versions of the DLLs. I've attached an older version of the DLL files, could you try replacing the existing ones in the Plugins folder with these, restarting Unity and trying to run AutoSync on the Gettysburg.wav file?
     

    Attached Files:

  43. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Great - as you probably guessed I'm a bit behind schedule on 1.5! :eek:

    Speaking of...
    Yes! The TL;DR is that I'm down to the last 2 necessary features to complete before I'll be able to release it. An unexpected problem with one of them kept me from releasing it 2 weeks ago as I was planning to, but I'm hopeful I can solve it fairly soon.

    If anyone's interested in more detail (wall of text ahead ;)): The new AutoSync system is modular, and of the assorted modules I have planned for it, the important completed ones as of now are a PocketSphinx Phoneme Detection module that behaves identically to the old version of AutoSync (and so doesn't work on macOS), and a new Montreal Forced Aligner Phoneme module, which was demonstrated in the video a few posts up and provides a higher-quality, cross-platform phoneme detection solution. The downsides to this new module are a) it's much larger than the old PocketSphinx one, so it would make the total download much larger, and b) it requires a text transcript of the audio in order to work.

    I'm aware that the "one click" instant nature of AutoSync is what appeals to a lot of people, and even though it isn't really the focus I designed LipSync with, I'm wary of releasing this in a state that doesn't allow an un-transcribed audio clip to be processed automatically. To get around this, I'm planning to later release modules (through the Extensions window) for online speech-to-text with a few different cloud services - Google for sure, and likely IBM Watson and/or Microsoft Azure. These services provide really accurate results in a huge number of languages without bloating the download size - great! They also tend to cost money if they're used a great deal, and they require setup, creating accounts etc. The solution I was trying to implement was allowing the use of PocketSphinx for simple transcription as well, but actually getting this to work with the C# wrapper I have has proved more difficult than I expected.

    I'm open to suggestions from anybody here - would you prefer having the update released sooner rather than later, even if the newer workflow doesn't have quite the same ease of use as the old one, or would the lack of offline transcription be a major dealbreaker for most users?
     
    ElevenGame and haleler51 like this.
  44. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    @Rtyper Thanks for your response. I have tried the older DLLs you posted, but still have the same issue: no phonemes whatsoever after running AutoSync... :-/
     
  45. aoinekostudios

    aoinekostudios

    Joined:
    Sep 20, 2012
    Posts:
    15
    @Rtyper For me the advantage of Montreal Forced Aligner syncing outweighs the comparatively small downside of requiring a text transcript. So I would say update released sooner, but that's just me! Either way keep up the great work.
     
    Rtyper likes this.
  46. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    146
    I agree with @aoinekostudios, those things are the most important for me, too.
     
    Rtyper likes this.
  47. SKNKanimation

    SKNKanimation

    Joined:
    Feb 24, 2017
    Posts:
    12
    The latest version of LipSync Pro has a minor issue on Unity 2019. It creates a warning that reads "There are menu items registered under Edit/Project Settings: LipSync Consider using [SettingsProvider] attribute to register in the Unified Settings Window."

    Another little thing for the 1.5 release <3
     
  48. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Damn, ok. Not sure what else to suggest I'm afraid, except try the new version when it's out - sorry! I think you and @aoinekostudios are right though, I'll aim to release it soon, and if/when I can get the offline transcription working, I'll make it available from the Extensions Window.

    Thanks, I had noticed this (it's in 2018.3 too, I believe) but the proper fix for it is quite involved so I hadn't got around to doing anything about it yet. I think I may just remove that menu item for now (it's a duplicate anyway) to keep the compiler happy :)
     
    ElevenGame likes this.
  49. Alvarezmd90

    Alvarezmd90

    Joined:
    Jul 21, 2016
    Posts:
    151
    Code (CSharp):
    1. this.GetComponent<RogoDigital.Lipsync.LipSync>().SetEmotion("Smile", Time.deltaTime * 4.0f);
    2. this.GetComponent<RogoDigital.Lipsync.LipSync>().DisplayEmotionPose(4, 1.0f);
    Display emotion works. But blending into it immediately causes the whole face and jaw to collapse. I don't understand.
     
  50. AKQJ10

    AKQJ10

    Joined:
    Feb 9, 2012
    Posts:
    33
    @Rtyper
    As LipSync 1.5 will use Google Cloud services, IBM Watson, and/or Microsoft Azure, will it support the Chinese language eventually?
    Maybe not in 1.5.0, but in a future update?