[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Thanks for your feedback! I've actually reached a bit of a breakthrough with the MFA module problem - after a bit of work, I've been able to process your audio clip perfectly. I'll PM you an updated module to try out, and if there aren't any other issues with it, I'll put it out as an update in the next few days.

    Regarding your other points, there's not really anything I can do about import locations unfortunately - Unity's .UnityPackage system is supposed to (or used to) match files based on GUID, but it seems not to always work properly and it just imports the folders at the location they were exported from. Your points about transcripts are totally fair though - I definitely want to improve the messaging regarding compatibility checks in AutoSync. Possibly I'll make the Clip Settings window pop up when you try to run the MFA module and there's no transcript.
     
    dgoyette likes this.
  2. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    423
Hi, if I don't have blendshapes for my character's facial expressions, can I use this plugin to make them talk using bones or another technique that doesn't require blendshapes?
     
  3. ggendron

    ggendron

    Joined:
    Mar 19, 2019
    Posts:
    4
    Hi,

I'm having trouble using the batch process feature.
It successfully generates a file for the first sound in the list, but fails for the others.

I get a log warning saying that there is a .meta file for the converted sound, but no .wav file.
I've attached a screenshot of the log messages.

(NB: it works fine when processing a single file.)
(NB2: it seems to work better with MFA.)
(NB3: with MFA, it seems to fail when there are a lot of audio clips in the batch. I'm still investigating.)
     

    Attached Files:

    Last edited: Jun 6, 2019
  4. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
I did the same thing, which causes the sound asset to only produce phonemes for the length of time that the original Gettysburg file plays (approx. 12 secs). However, if I then rename my sound file, its word script .txt file, and the Gettysburg.asset (using my sound clip, not the original Gettysburg) to something else, it fixes itself and works as usual. I don't know why, and I haven't tested it beyond a 30-second .wav clip.
     
  5. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
Also @Rtyper, for a future update, can you make the Phoneme Guide images work at any screen resolution? I was working on my personal laptop at first before switching to a larger monitor, because only a portion of the guides could be seen at the smaller screen's resolution.
     
  6. Suckapunchy

    Suckapunchy

    Joined:
    Apr 1, 2018
    Posts:
    1
Been having trouble with the update. After not being able to get AutoSync to work, I followed the directions here on the board and updated the extensions. It worked for a bit, then it started to crash and I started getting these messages. Only the audio I processed during that brief window will continue to AutoSync. Trying to figure out next steps.
     
  7. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Sorry for the late responses guys, had a busy couple of weeks! I'll be putting out a new update for the MFA module this week, and there's an update for LipSync Pro next week. I've fixed a fair number of issues with the MFA module update, including most instances of the "MFA output TextGrid file does not exist." error, and the new LipSync Pro update (amongst other things) adds new warnings to clarify what's wrong when an AutoSync module fails the compatibility check. More on this later!

    Yes, bones are fully supported (though there is currently a bug with the "Set Emotion" method when using bones that's fixed in the upcoming 1.51 update), and some other systems such as 2D sprites and UMA characters are supported using LipSync's "Blend Systems". As long as your character has some means of moving the lips, eyes, jaw etc (as much detail as you require), then it should be usable with LipSync Pro.

    I haven't had the chance to look into this myself yet, but I'll get on it asap. If this is an issue in LipSync itself I'll make sure it's sorted out for the 1.51 update!

    In a couple of days, could you try downloading the Montreal Forced Aligner again from the extensions window and trying it? I think this new update may fix the issues you were having.
    Thanks for pointing this out, I hadn't come across it myself but I'll look into it. At the moment, I believe it's just anchored to the lower right of the scene view, so I might try having it scale according to screen size.
    Sorry about that - unfortunately your image isn't showing up for me so I can't see the error. Can you try copy + pasting the text in here or uploading the image somewhere else?
     
    LadyKattz and pilamin like this.
  8. Clrj14

    Clrj14

    Joined:
    Nov 25, 2017
    Posts:
    1
    This is the error I get...

    "Failed to import package with error: Couldn't decompress package
    UnityEditor.AssetDatabase:ImportPackage(String,Boolean)"

    This occurs when I try to run the AutoSync Setup

    EDIT: Uhhhh I fixed it? lol Tried it a couple more times and then it worked.
     
  9. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Strange! Possibly a network error when downloading it or something? Oh well, glad it's working now!
     
  10. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    423
Does it have automatic setup with bones, or do I have to animate them myself? (I have Daz characters with a face rig.)
     
  11. bluetenu

    bluetenu

    Joined:
    Dec 6, 2014
    Posts:
    5
Is it possible to have more than one character model in a scene use LipSync? I ask because when I attempt to add another character and point to its mesh, the previously set up LipSync component on my first character switches to point to the new mesh as well.
To clarify, they will not be talking at the same time -- I did review the dual Lincoln scene -- but they do use the same blendshapes.
     
    Last edited: Jun 14, 2019
  12. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    You'll have to pose the character yourself, yeah. There are presets for certain blendshape setups, but with bones there really isn't any way to predict in advance how your character will be rigged (how many bones, how they're laid out, what they're called, what rotations/positions will look right for them etc) so we can't really do that automatically. Once the character's set up though, it (and any others) will play any dialogue you've synced.

    Absolutely, the two LipSync components just need to be on different GameObjects if they use the same type of blend system. If they're on the same GameObject, then they will try to share a single blend system between them (just like when you have both LipSync and Eye Controller on the same object) to save on having to set both up separately.
    Edit: If this isn't working though, it may be to do with prefabs? I know it sounds obvious, but make sure you're changing values on a single prefab instance and not the base prefab if that's the case. If either of those things sorted it then great, otherwise let me know if it's still causing problems - if possible a screenshot or two of your scene/character setup would be invaluable!
     
  13. bluetenu

    bluetenu

    Joined:
    Dec 6, 2014
    Posts:
    5
Thanks for the quick response! No, I'm using two separate GameObjects, neither is a prefab, but what I did do was copy and paste the LipSync component rather than making a new one. So I guess what I should have asked is how to transfer over the phoneme/emotion blendshapes so that I don't have to redo them all for each character -- again, they use the same ones, and the copy applied to any of the characters looks and works great (other than, of course, not being independent right now). Should I perhaps use the advanced blendshape system, redo them all there, and then be able to copy/paste the Blend Shape Manager component without issue after the first time? Or will I just need to set up the blends for each phoneme/emotion on each character? Thank you for the responses!
     
  14. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Ahh, I think I get what's happening now. It's technically the "correct" behaviour for copy+pasting a component (referencing the blend system on the original object), but it's definitely confusing in this case and I'll see if I can change it! For now, what you could do is add a new LipSync component instead of copying and pasting, and on your first character click the presets button (next to the phonemes, emotions, gestures buttons) and create a new preset.

    You should be able to then apply that preset to your second character using the same menu but on that new component, and have all the poses copied over.
     
  15. bluetenu

    bluetenu

    Joined:
    Dec 6, 2014
    Posts:
    5
    Terrific! The only confusing thing was just that I couldn't find where to save and transfer over the blendshapes I had set up, but it worked perfectly as you described -- thank you so much!
     
  16. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    265

(This is with a high blink time so I could see what was happening)

Please help, my bone-based eye blink only works in reverse lol. Values go between 30 and 330, and I guess the system can't resolve this?

What can I do? I'm not sure changing the way the values work is viable for me.

EDIT: I tried to see if I could use different values, but it seems to prevent me from going past 60 and 270 in their respective directions, even if I try to type something in. The pose editor only lets me go between my base 30 value and 330, but the lerp for the eye pose is doing 30 -> 330 instead of 30 -> 0 -> 330.
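
(For illustration, here's the wrap-around in isolation - a hypothetical snippet showing the maths, not Eye Controller's actual code:)

Code (CSharp):
using UnityEngine;

public class AngleLerpDemo : MonoBehaviour
{
    void Start()
    {
        // A plain Lerp from 30 to 330 sweeps the long way round, through 180...
        Debug.Log(Mathf.Lerp(30f, 330f, 0.5f));      // 180
        // ...while LerpAngle wraps through 0/360 and takes the short path.
        Debug.Log(Mathf.LerpAngle(30f, 330f, 0.5f)); // 0 (i.e. 360)
    }
}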

    EDIT2:

I'm also having issues with the look direction, where it seems like it can only handle values between 90 and -90, and anything else just doesn't work lol.

    upload_2019-6-16_1-22-55.png

Here the target is selected, but the eyes are looking behind the head. To be clear, this doesn't seem like an issue with the z position. Right now it is set to "Y Positive", but here it is at "Y Negative".

    upload_2019-6-16_1-24-20.png

But then look what happens when I move the target behind the head.

    upload_2019-6-16_1-26-4.png

    "Y Positive" is the exact same except the eyes actually look at the target. But due to the values being limtied between 90 and -90 it only works with the eyes looking backwards lol..

    I just can't use any of this at all and i've wasted so much time trying to figure this out! And i haven't even gotten to the actual lipsync part yet XD

    I can see it's making a "Dummy" to handle the orientations but for whatever reason it won't work with my model. Please help me.
     
    Last edited: Jun 16, 2019
  17. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    423
Well, if face bones are used instead of blendshapes:

1. Are there any tutorials on how to properly animate the face, and is it a hard thing to do?
2. Do I have to make a different face expression for each letter the character says (like Daz blendshapes have A-Z)?
3. Once I've made it for one character, can I reuse it on other characters? They have the exact same face rig.
     
  18. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
I am now having an issue where, if I add emotions to the LipSync .asset file, it works at first, but then if I click on the character again to maybe change the look of the emotion or add another one, everything resets to the default face. It's extremely frustrating losing my work on the emotions every time, but I'm hoping this is related to the hacky method I used to get the phonemes onto the sound file in the first place. If that's the case, then hopefully this update will fix it.
     
  19. Yannou

    Yannou

    Joined:
    Jan 2, 2015
    Posts:
    11
Hello, I just bought the asset and succeeded in converting a 30s speech clip. Worked fine.
After a few adjustments I got this error:


    [AutoSync] ERROR: Input audio file has [8] bits per sample instead of 16

    UnityEngine.Debug:LogError(Object)
    RogoDigital.Lipsync.AutoSync.<>c:<RecognizeProcess>b__10_0(String) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/PocketSphinx/SphinxWrapper.cs:81)
    RogoDigital.Lipsync.AutoSync.SphinxWrapper:PSRun(MessageCallback, ResultCallback, Int32, String[])
    RogoDigital.Lipsync.AutoSync.<>c__DisplayClass10_1:<RecognizeProcess>b__2() (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/PocketSphinx/SphinxWrapper.cs:142)
    System.Threading.ThreadHelper:ThreadStart()

    [AutoSync] FATAL: Failed to process file 'D:/_UnityWorkspace/LipsynchWorkspace/Assets/Rogo Digital/LipSync Pro/Examples/Audio/Gettysburg.converted.wav' due to format mismatch.

    UnityEngine.Debug:LogError(Object)
    RogoDigital.Lipsync.AutoSync.<>c:<RecognizeProcess>b__10_0(String) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/PocketSphinx/SphinxWrapper.cs:81)
    RogoDigital.Lipsync.AutoSync.SphinxWrapper:PSRun(MessageCallback, ResultCallback, Int32, String[])
    RogoDigital.Lipsync.AutoSync.<>c__DisplayClass10_1:<RecognizeProcess>b__2() (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/PocketSphinx/SphinxWrapper.cs:142)
    System.Threading.ThreadHelper:ThreadStart()


Tried changing to other Unity versions (2018.3 to 2019.1)... but nothing. Also tried cleaning my temp files, creating a new project and reimporting the asset...

I don't know if you've already experienced this error.
Regards!

EDIT:

I found that when I untick audio conversion in the AutoSync settings, it works without the "[8] bits per sample instead of 16" error.

Also, to avoid conversion errors, here's the SoX command that works:
- Install SoX, and add sox.exe's folder to your Path environment variable (google it if you don't know how).
- Press Windows + R, and enter CMD. Paste:
sox YOUR_FILE_INPUT.wav -r 16000 -b 16 -c 1 YOUR_FILE_OUTPUT.wav
     
    Last edited: Jun 19, 2019
  20. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
    The updated Montreal Forced Aligner works with both mp3 and ogg files without having to change the name to Gettysburg. I'm going to keep working with the emotions portion and see if the update fixed that too or not.
     
  21. Yannou

    Yannou

    Joined:
    Jan 2, 2015
    Posts:
    11
Hey! I have another question (didn't find the answer on the forums), I hope it's a new problem :p

When using auto-detection (Legacy default settings), I can't lipsync more than 34 seconds; otherwise, Unity crashes.
Do you have any idea of a workaround?

Regards!
     
  22. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
Do you have a way to make a LipSync sound asset file from code that I'm just missing, or is the editor the only way? If the editor is the only way and you don't plan on making one for code, do you mind if I send you whatever I come up with?
     
  23. r3dstar

    r3dstar

    Joined:
    Jul 17, 2012
    Posts:
    3
    Hi,

Just bought your asset and I'm trying to AutoSync a few words, but I'm getting errors.

I have v1.1.0 of the MFA Aligner. The path is D:/Unity 2019.1/Maternity/MaternityApp/Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/Montreal Forced Aligner/montreal-forced-aligner-win/bin/mfa_align.exe

Path to SoX: Assets/Rogo Digital/LipSync Pro/AutoSync\SoX Sound Exchange\sox-14.4.2-win32\sox.exe

I've run the Tempest file and it's fine, so it looks like it's my recording. I've used Audacity and saved as .ogg and .wav, but both fail. The wav settings are Stereo, 16-bit PCM, 44100Hz.

It seems to create a .converted.wav file that's Mono, 16000Hz.

    Any ideas?


    AutoSync Failed: MFA Application Failed. Check your audio encoding or enable conversion.
    UnityEngine.Debug:LogFormat(String, Object[])
    AutoSyncWindow:FinishedProcessingSingle(LipSyncData, ASProcessDelegateData) (at Assets/Rogo Digital/LipSync Pro/Editor/Modals/AutoSyncWindow.cs:553)
    RogoDigital.Lipsync.AutoSync.AutoSync:ProcessNext(LipSyncData, ASProcessDelegateData) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:77)
    RogoDigital.Lipsync.AutoSync.ASMontrealPhonemeDetectionModule:Process(LipSyncData, ASProcessDelegate) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/Montreal Forced Aligner/ASMontrealPhonemeDetectionModule.cs:214)
    RogoDigital.Lipsync.AutoSync.ASMontrealPhonemeDetectionModule:Process(LipSyncData, ASProcessDelegate) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/Montreal Forced Aligner/ASMontrealPhonemeDetectionModule.cs:209)
    RogoDigital.Lipsync.AutoSync.AutoSync:RunModuleSafely(AutoSyncModule, LipSyncData, ASProcessDelegate, Boolean) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:54)
    RogoDigital.Lipsync.AutoSync.AutoSync:RunSequence(AutoSyncModule[], ASProcessDelegate, LipSyncData, Boolean) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:29)
    AutoSyncWindow:OnGUI() (at Assets/Rogo Digital/LipSync Pro/Editor/Modals/AutoSyncWindow.cs:307)
    UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)

     
  24. Yannou

    Yannou

    Joined:
    Jan 2, 2015
    Posts:
    11
The only workaround I found (for an approximated French autosync) was:

- Install SoX from the official website, and add sox.exe's folder to your Path environment variable (google it if you don't know how).

- Press Windows + R, and enter CMD.

Paste:
sox YOUR_FILE_INPUT.wav -r 16000 -b 16 -c 1 YOUR_FILE_OUTPUT.wav

Then open the AutoSync window, untick audio conversion in the AutoSync settings, and press Start.
     
  25. r3dstar

    r3dstar

    Joined:
    Jul 17, 2012
    Posts:
    3
No joy with the manual encode and unticking "Use Audio Conversion". It hangs on "Processing module 1/2, please wait" and kicks out the same error message about "Check your audio encoding or enable conversion".
     
  26. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Sorry about this, would it be possible to send me the model so I can test it myself? You can use our email contact@rogodigital.com. There are some issues with Eye Controller at the moment, and I'm planning on an update to it soon to add a few new features and prevent conflicts between it and LipSync - if I can figure out what's going on here I can include this fix in the update as well :)

There's a video tutorial series on our YouTube channel, though it doesn't specifically cover bone-based rigs. The process is fairly straightforward though - you add as many bones to each pose as you like, and use the gizmo in the scene to position/rotate each bone how you want for that pose. You can also import poses from an animation clip instead (take a look at the "New Bone Tools" video on that same channel) if you'd prefer to pose the character in some other 3D package.

You only need to create these poses for each of LipSync's 9 phonemes (+ "rest" if you want the neutral pose to be different from the model's standard neutral pose), and once you've set up a character you can save a preset for other characters. They will have to have the same face rig though, you're correct (though there is some flexibility in terms of bone names etc).

    I can't say I've come across this problem before. Bear in mind that the poses aren't stored in the LipSyncData file at all - working with phoneme/emotion/gesture markers in the clip is designed to make it so any LipSync component can play any LipSyncData file. The only way I can think that you'd be losing work on your character is if you're using bones, you have the clip editor open and in preview mode, and then you open a pose on the character - that would cause both the clip editor and the LipSync component editor to sort of "fight for control" over the model and could cause your poses to get messed up.

    Do you think you could send a video of what you're doing perhaps? It would help to know what steps you're taking that cause the problem so I can get a better idea of what's actually happening.

    You can also create a LipSyncData object from code - it's a ScriptableObject, so you'll have to create it using the CreateInstance method, but you can then add whatever data to it you want and pass it to a LipSync component to be played. Have a look at the API reference for more information on this. I hope I'm not misunderstanding the problem - feel free to send anything over if you want advice on something that's not working!
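
As a rough sketch (the Play(LipSyncData) call here is from memory - see the API reference for the exact members to fill in):

Code (CSharp):
using UnityEngine;
using RogoDigital.Lipsync;

public class RuntimeClipBuilder : MonoBehaviour
{
    public LipSync lipSync;     // the character's LipSync component

    void Start()
    {
        // LipSyncData is a ScriptableObject, so create it with CreateInstance
        LipSyncData data = ScriptableObject.CreateInstance<LipSyncData>();

        // ...fill in the clip, length and phoneme/emotion/gesture markers
        // here (see the API reference for the member names)...

        // Then hand it to a LipSync component to be played.
        lipSync.Play(data);
    }
}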
     
    LadyKattz and skinwalker like this.
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
Had you run the AutoSync Setup Wizard at first? It should be able to convert your files to the correct format automatically, as it has SoX built in. There's also an update to the PocketSphinx module (used by the Legacy presets) that fixed an earlier issue with conversion - you can get it from the Extensions window (Window > Rogo Digital > Get Extensions) if you haven't already - so that may be what you were experiencing? Obviously you can convert audio manually, but it's definitely less convenient, and usually means your audio is lower quality than you'd like - whereas LipSync keeps your original clips intact and only uses the converted ones for processing.

    @ggendron - I've found the cause of the batch processing bug. I've fixed it for the 1.51 update, and I'll PM you a patch as well so you can get on with using it.

    I'm afraid I wasn't able to recreate this - I can run the Legacy (Default Settings) preset just fine on a 40 second clip. Can you please try opening the AutoSync Window, and running the Legacy preset from there, both with the "Use Audio Conversion" option checked and with it unchecked, and let me know if that makes any difference? If there's no change there, can you send me the audio clip you're using so I can see if it works for me or not? In the end, it may just be that you'd be better off switching to the MFA module (as used by the "Default" preset) - which I've found works on files even as much as 3 or 4 minutes long.

I can't tell straight off what's going wrong here - as I've said to some others, could you send me your audio clip? Either here or by email to contact@rogodigital.com, whichever's easiest. It'll help me figure out what's going wrong so I can patch it.
     
  28. r3dstar

    r3dstar

    Joined:
    Jul 17, 2012
    Posts:
    3
Well, that's the damnedest thing. Reading something else above, I installed the PocketSphinx module and tried the Legacy High Quality preset, and it worked. Then I went back to MFA (Default) with a text file, and that now works... Very odd, but thanks for replying.
     
  29. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
To my understanding, I would need to input the phoneme, emotion, and gesture markers myself if I did that? Unless I'm still missing something, that is. I want to use AutoSync's "Run Default", but from code instead of manually putting the audio clip into the LipSync editor.

But (again, unless I'm missing something) it seems that this is editor-only code, which of course lives in the Editor folder and is therefore compiled into Assembly-CSharp-Editor, making it inaccessible from the default Assembly-CSharp runtime code.

I'll send the video in a bit. At the moment, I still haven't tested whether the update solved the emotions problem.
     
  30. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
Video not needed fortunately, @Rtyper. The update seems to have fixed the disappearing emotions issue. The problem was the hacky way I was using to get the lipsync .asset file to add phonemes before.
     
  31. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    265

    Email sent.
     
  32. LadyKattz

    LadyKattz

    Joined:
    Sep 26, 2017
    Posts:
    8
Scratch that. It still happens, but only if I don't close the LipSync editor after adding an emotion. I can play the file from the Unity Inspector but not from the LipSync editor, because for some reason it will reset the face and I'll lose all my data unless I reopen Unity from a previous save. But again, this isn't as pressing an issue, since I found I can at least play the clip with emotions from the Inspector panel.
     
  33. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Ah, I misunderstood - you're right, at the moment AutoSync is editor-only as a lot of it relies on getting paths to audio files and things that aren't really possible at runtime after Unity's packed everything into resource files.

    The emotions problem does sound like the situation I described before, unless you're not using bones? If you have an emotion/phoneme expanded out in the LipSync component then it will be trying to set the pose's bone position/rotation/scales to whatever they are currently so that you can pose it with the scene handles. If the clip editor is open at the same time, and trying to preview on that character, it will be setting those bone positions and overriding the values that are stored.
    It's best to just keep these processes separate. Pose your character, then do clips or vice-versa. If you need to adjust a pose while you're previewing a clip, just disable preview first.

    Thanks - sorry for the delay, I'll see if I can sort this out today.
     
  34. Luedrin

    Luedrin

    Joined:
    Aug 21, 2018
    Posts:
    2
Hello, sorry if this is a repetitive question, but I've been sifting through this thread and am unable to get a definite answer.
Basically, I've got Dutch audio with transcripts that I'd like to use with LipSync.
I've looked for how to add other languages to LipSync, but there isn't a clear tutorial out there other than a comment you made in 2016 which I'm unable to understand.
Another, more recent post (May) said that Dutch support isn't available since there's some issue with PocketSphinx?

Is that the case? If not, how do I get my Dutch audio to work?

Forgot to mention:
Doing it by hand would take far too long. Can I edit the lexicon with phonetics to add the words it currently throws an error on? Or what would you suggest?
     
    Last edited: Jun 26, 2019
  35. jeromeWork

    jeromeWork

    Joined:
    Sep 1, 2015
    Posts:
    354
Hi @Rtyper, I sent a few emails last week to contact@rogodigital.com with what seem to be similar problems to those others have noted above (MFA Application Failed. Check your audio encoding or enable conversion). Would really appreciate a reply :) Let me know if you haven't received them and I'll post here instead. Cheers.
     
  36. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    265
Yeah, I have a similar issue: I got the Automatic (High) preset to work, but not the batch.

I imagine the Default preset with transcripts would get better results.
     
  37. sebsmax

    sebsmax

    Joined:
    Sep 8, 2015
    Posts:
    116
Hello Rhys,
Does LipSync Pro work with Timeline?
If not directly, is it possible to trigger the voice while the Timeline is playing?
Thanks
     
  38. shubhshrivastava

    shubhshrivastava

    Joined:
    Dec 13, 2018
    Posts:
    1
Does it support HoloLens? I tried building for HoloLens, but it's not working.
     
  39. ColtonKadlecik_VitruviusVR

    ColtonKadlecik_VitruviusVR

    Joined:
    Nov 27, 2015
    Posts:
    179
Hey @Rtyper, we are currently porting our game from PC to PS4. When starting a fairly long, dense lipsync file, we are experiencing CPU spikes of approx. 120 ms. I have added custom profiler tags and found that the load data function takes approx. 1.6 ms and the process data function takes approx. 111 ms. What exactly does process data do? Can I somehow pre-process this data in the editor to avoid this spike, given that the lipsync file & phonemes are not changing?

We are using Unity 2017.4.27 and LipSync 1.4.1.

    Cheers,
    Colton
     
  40. Duraxz

    Duraxz

    Joined:
    Oct 30, 2017
    Posts:
    11
Unity 2018.3, LipSync 1.501: some wav files auto-lipsync fine, but others give me an MFA failed error with all of the default or high-quality auto lipsync options.
     
  41. jeromeWork

    jeromeWork

    Joined:
    Sep 1, 2015
    Posts:
    354
@Rtyper Would really appreciate some contact from you. It's been ten days with no response. Even if you're busy, a quick note to say you'll get back to us when you can really does help.

For everyone else getting 'MFA Application Failed. Check your audio encoding or enable conversion' errors...
I think I've made some progress. It appears that the issue isn't with the audio file, but with the .txt transcript file.

    The same audio file failed with my original transcript:
    Code (CSharp):
    1. My god, she said she'd meet me in TKMaxx but then she never showed. I waited half an hour, but she never turned up. Then guess what I see her having a chat with Sharon in Boots, not even an hour later.
    but then worked just fine after I rewrote it:
    Code (CSharp):
    1. My god, she said she'd meet me in T K Max but then she never showed. I waited half an hour, but she never turned up. Then guess what I see her having a chat with Sharon in Boots, not even an hour later.
    Same with another clip:
    This failed:
    Code (CSharp):
    1. Dubstep? Not really. It's a bit too dark for me.
    This worked:
    Code (CSharp):
    1. Dub step? Not really. Its a bit too dark for me.
    My guess is that if the parser comes across a word it doesn't understand then it trips over itself.
     
    AdminOhrizon likes this.
  42. ElevenGame

    ElevenGame

    Joined:
    Jun 13, 2016
    Posts:
    32
@jeromeWork I agree. I've encountered similar cases, where little details in the text like that made the difference between failing and succeeding. It would be good to know which symbols/formulations won't work with the Montreal Forced Aligner.
     
    jeromeWork likes this.
  43. philc_uk

    philc_uk

    Joined:
    Jun 17, 2015
    Posts:
    85
It does not like the word 'hippies'.
Also, if there's no transcript set, it doesn't attempt to load one from file. Changing the AutoSyncUtility function at line 34 to use AutoSyncUtility.TryGetTranscript(data.clip); when it's empty fixes this.
Also, you can't paste into the transcript window - changing the control to the Editor GUI version fixes this too.
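
For the transcript fallback, the change is essentially this (a sketch - the transcript member name is assumed, so adapt to the actual source):

Code (CSharp):
// In AutoSyncUtility (around line 34): if no transcript was provided,
// fall back to loading one from a .txt file matching the AudioClip.
if (string.IsNullOrEmpty(data.transcript))   // "transcript" member assumed
{
    data.transcript = AutoSyncUtility.TryGetTranscript(data.clip);
}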
     
    jeromeWork likes this.
  44. philc_uk

    philc_uk

    Joined:
    Jun 17, 2015
    Posts:
    85
I think the MFA exe is pausing with this error when run from the command line:

    Setting up corpus information...
    Number of speakers in corpus: 1, average number of utterances per speaker: 1.0
    Creating dictionary information...
    Setting up training data...
    There were words not found in the dictionary. Would you like to abort to fix them?

So it's waiting for a y/n answer, and the process times out in the Unity script, but it's not producing any oovs_found.txt as described here: https://montreal-forced-aligner.readthedocs.io/en/stable/data_format.html#textgrid-format
which would list the words it didn't understand and failed on.
Run it from the command line with the same parameters LipSync uses, press N, and it will output the files with the unrecognised words.

If you go to

C:\Users\[username]\AppData\Local\Temp\DefaultCompany\[project]\

and then the audio filename, you might see
utterance_oovs.txt
oovs_found.txt
which contain the words it can't understand. If you fix those words and run it from the command line again, it might work!
     
    Last edited: Jul 10, 2019
    ElevenGame and jeromeWork like this.
  45. philc_uk

    philc_uk

    Joined:
    Jun 17, 2015
    Posts:
    85
Building for final ship is a complete nightmare. No ADFs anywhere, having to mess about with adding new ones and moving directories; everything points back to the editor.
     
  46. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    Apologies for the lack of replies here, guys - the forum hasn't sent me any notifications since Colton's post on the 5th! I've been working on several things related to LipSync recently though:

1) Montreal Forced Aligner update - I'd been looking into the failures with out-of-vocab words, and realised the same thing you posted above, @philc_uk: the application was sitting there waiting for input. I've added the --q argument in this update, so it quietly goes ahead and ignores the OOV words. I've also put in a new warning when this happens, to explain the potential gaps in the output.

Obviously this isn't perfect, as you clearly don't want gaps in your lipsync animations, so I'm also working on two additions: a way to temporarily include new word-to-phoneme mappings in the lexicon for a single process (which could also be stored in a preset), and a per-project extension to the lexicon that would be applied to every use of the module in that project. This will probably take a bit of dev time to get right, so I'll put out the ignore-OOVs update on its own first. In the meantime, you can manually add words to the "librispeech-lexicon" file in the English language model and they should get processed.
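
(Lexicon entries are plain text, one word per line followed by its ARPAbet phonemes - for example, an entry along the lines of HIPPIES  HH IH1 P IY0 Z. That exact entry is illustrative; check the existing lines in the file for its formatting convention.)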

    2) LipSync Pro 1.51 update - this is nearly done now, it includes some improvements to the displaying of errors from AutoSync (to be more descriptive when something like a transcript is missing), a fix for the batch processing issues, an improved SetEmotion method (that doesn't suffer from bone rotation issues) and a new way to pre-process a LipSyncData clip to a specific character, so that it can be played back without incurring the runtime processing cost.

    3) Brand-new Eye Controller - I've re-written Eye Controller from scratch. This is still a little way off being done, so it probably won't be included in 1.51, but after about 2 years without a major update, Eye Controller was lagging behind a little. The new version performs better, supports full bone- and blendable-based poses even with target tracking, and features much more realistic animation with separate saccade and smooth pursuit motion. I'll post some more info on this later on, but it really is a big improvement!

    I've just finished some fairly big contract work this last week, so my development time can be focused on LipSync for at least the next 3 or 4 weeks. I'm aiming to have this MFA module update available on the extensions window before the end of this week, then 1.51 not too long after - 2 weeks tops.
     
  47. sebsmax

    sebsmax

    Joined:
    Sep 8, 2015
    Posts:
    116
OK, I took half a day to integrate an animation export for blend shapes, and I'm pretty happy with the result (it makes it compatible with the Timeline ;) ):

     
    MoYing likes this.
  48. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    437
    That's great! I do actually have an integration package for Timeline (it's attached to a post in this thread somewhere, I'd have to have a look for it) but it can be prone to going wrong if you rewind or skip around in the timeline. Yours should be pretty robust, and perform better too. I'm actually really interested in how you implemented it - did you have to modify the clip editor or did you do it using the menu callbacks?

I am going to release a proper Timeline integration eventually as well, using the new signals system, so hopefully that should be useful for people who aren't using blend shapes.
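
In the meantime, signal-based triggering is easy enough to wire up by hand - something like this hooked into a Signal Receiver (a sketch; the Play(LipSyncData) call is from memory, so check the API reference):

Code (CSharp):
using UnityEngine;
using RogoDigital.Lipsync;

// Assign OnDialogueSignal to a Timeline SignalReceiver's reaction so a
// signal marker on the track triggers playback of a pre-synced clip.
public class PlayLipSyncOnSignal : MonoBehaviour
{
    public LipSync lipSync;     // the character's LipSync component
    public LipSyncData data;    // the synced clip to play

    public void OnDialogueSignal()
    {
        lipSync.Play(data);
    }
}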
     
  49. archcomix

    archcomix

    Joined:
    Aug 22, 2015
    Posts:
    3
I removed the plugin and re-installed the latest version, but I'm still getting error messages about AutoSync not being able to find the ASMontrealPhonemeDetectionModule, despite downloading it 100% through the auto setup window. I get this error message in the console:

    Instance of ASMontrealPhonemeDetectionModule couldn't be created. The the script class needs to derive from ScriptableObject.

    DirectoryNotFoundException: Could not find a part of the path 'D:\VR Projects\VR GP\Assets\AutoSync Montreal Forced Aligner Module (Win).unitypackage'.

Any help is appreciated - I've been trying to get this working since yesterday.
     
  50. sebsmax

    sebsmax

    Joined:
    Sep 8, 2015
    Posts:
    116
I hooked my code into the same system you did for exporting XML, but instead I grabbed the curve generation that you did for models.

The system simply bakes what you are already calculating and outputs it as an animation file!

I'll send you my code in a PM.
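
The core of it boils down to something like this (simplified sketch, not the exact code - it assumes the SkinnedMeshRenderer is on the animated root):

Code (CSharp):
using UnityEngine;
using UnityEditor;

// Editor-only (put in an Editor folder): writes an already-sampled blend
// shape weight curve into an AnimationClip via "blendShape.<name>".
public static class BlendShapeBaker
{
    public static void Bake(AnimationCurve weightOverTime, string shapeName, string assetPath)
    {
        var clip = new AnimationClip();
        // "" = curve targets the animated root object itself
        clip.SetCurve("", typeof(SkinnedMeshRenderer), "blendShape." + shapeName, weightOverTime);
        AssetDatabase.CreateAsset(clip, assetPath); // e.g. "Assets/Baked/line01.anim"
        AssetDatabase.SaveAssets();
    }
}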
     
    Last edited: Jul 19, 2019