
[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. philc_uk

    philc_uk

    Joined:
    Jun 17, 2015
    Posts:
    90
     
  2. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    509
    Hi,

    What does my character need to have in order to have lipsync? Do I need bones or blendshapes? Is there a way to automatically generate the blendshapes in another software?
     
  3. Jimbo_Slice

    Jimbo_Slice

    Joined:
    Oct 1, 2015
    Posts:
    44
    Hi,

    I have used LipSync Pro with no issues before but I cannot get any kind of lip sync to work with the new update. Using the SoX setup I get the following warning:

    C:\Program Files (x86)\sox-14-4-2\sox.exe FAIL formats: can't open input file `unsigned': No such file or directory

    I have tried deleting and reinstalling, including previous versions of SoX. I have tried different Unity versions, but still the same issue.

    So then I tried to download the new ASMontreal module. First I tried downloading it through the AutoSync Setup Wizard: I watch the package download and install, but the Continue button on the wizard stays greyed out. So I tried downloading it myself through the Extensions panel - same thing, it downloads and installs. Then when I go to AutoSync a clip (Gettysburg, just to be sure) I get the following error:

    Instance of ASMontrealPhonemeDetectionModule couldn't be created. The script class needs to derive from ScriptableObject.

    What's going on?
     
  4. wigglypuffs

    wigglypuffs

    Joined:
    Aug 10, 2015
    Posts:
    67
    Rtyper, I don't know if you knew this or if it breaks things, but the "Default.asset" file has a JSON field in it called Module Settings: Element 0. This field contains the following:

    {"useAudioConversion":true,"lexiconPath":"G:\\Unity Projects\\LipSync\\LipSync Pro 1.X\\LipSync Pro 1.5X\\LipSync Pro 1.5 - Universal\\Assets\\Rogo Digital\\LipSync Pro\\AutoSync\\Editor\\Modules\\Montreal Forced Aligner\\montreal-forced-aligner\\pretrained_models\\librispeech-lexicon.txt","minLengthForSustain":100.0}


    That is hardcoded to a path on your computer. Also, even if it were relative, the path to the pretrained models is way off from where it actually is:

    Assets\Rogo Digital\LipSync Pro\AutoSync\Editor\Modules\Montreal Forced Aligner\Language Models\English\librispeech-lexicon.txt

    Could this be the source of everyone's woes?

    I tried setting this to the proper absolute location on my drive, but still couldn't get AutoSync to work with my .wav file. I even tried the SoX trick mentioned by someone else earlier, where the audio is manually converted on the command line and the convert option is then unticked.

    The AutoSync Setup Wizard says the Montreal modules are all installed, and checking with the Verify button in the clip window's settings also passes.

    Edit: I stepped through the C# code and determined that when the mfa_align tool is launched, it is not passed the -q option, which causes an interactive prompt to appear: "There were words not found in the dictionary. Would you like to abort to fix them? (Y/N)". Because the C# code waits 20 seconds and doesn't see the program exit, it forces the process to close and tells the user they must have failed to convert or encode their input audio properly - which isn't the case.

    Running the mfa command from the command line works properly. For testing purposes I copied the mfa and sox folders from my project to another folder and created a subdirectory called corpus.

    I followed Yannou's advice and ran sox as follows:
    sox YOUR_FILE_INPUT.wav -r 16000 -b 16 -c 1 YOUR_FILE_OUTPUT.wav

    I put the SoX-converted mono .wav file inside the corpus folder, along with the transcript .txt file renamed to the .lab extension, and created an empty output folder. Then I tested with some audio and typed:

    mfa_align .\corpus ".\Language Models\English\librispeech-lexicon.txt" ".\Language Models\English\english.zip" .\output -c -v -q

    Results:

    Setting up corpus information...
    Number of speakers in corpus: 1, average number of utterances per speaker: 1.0
    Creating dictionary information...
    Setting up training data...
    Calculating MFCCs...
    Calculating CMVN...
    Number of speakers in corpus: 1, average number of utterances per speaker: 1.0
    Done with setup.
    50%|██████████████████████████████████████████ | 1/2 [00:04<00:04, 4.35s/it]could not align ['onering']
    100%|████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:08<00:00, 4.05s/it]
    Done! Everything took 47.13091826438904 seconds


    This produced in the output folder:

    07/26/2019 03:19 AM 28 oovs_found.txt
    07/26/2019 03:19 AM 36 utterance_oovs.txt

    So, I rewrote the .lab file to fix the unknown words and reran:

    Setting up corpus information...
    Number of speakers in corpus: 1, average number of utterances per speaker: 1.0
    Creating dictionary information...
    Setting up training data...
    Calculating MFCCs...
    Calculating CMVN...
    Number of speakers in corpus: 1, average number of utterances per speaker: 1.0
    Done with setup.
    100%|████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:09<00:00, 4.58s/it]
    Done! Everything took 41.62572765350342 seconds


    This produced a proper .textgrid file.
    Hope this helps someone else out there!

    Edit 2:
    Autosync completed successfully via the user interface after changing the code:

    https://i.gyazo.com/5881d4578d39b68710ea2ff9ad117423.png

    In the file:
    \Assets\Rogo Digital\LipSync Pro\AutoSync\Editor\Modules\Montreal Forced Aligner\ASMontrealPhonemeDetectionModule.cs

    Line 139 (added -q to the options):
    process.StartInfo.Arguments = "\"" + corpusPath + "\" \"" + lexiconPath + "\" \"" + basePath + model.acousticModelPath + "\" \"" + outputPath + "\" -c -q";

    Line 146 (Extended the timeout to 60 seconds):
    process.WaitForExit(60000);

    Though, there has to be a better way than picking an arbitrary wait time on exit. Maybe they have an API that would be better to use directly instead. The errors should be more informative too. Treating an arbitrary failure in the tool as if the user must have failed to prepare their inputs is misleading and creates a support headache where there does not need to be one.
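    As a sketch of what that "better way" could look like (this is illustrative only, not the actual module code; the class and method names are placeholders): capture the aligner's output asynchronously, use the exit code as the success signal, and keep the timeout only as a generous last-resort ceiling rather than a tight guess.

```csharp
using System.Diagnostics;
using System.Text;

static class AlignerRunner
{
    // Illustrative helper, not part of LipSync Pro. Returns true if the
    // tool exited cleanly; 'log' collects stdout/stderr so a failure can
    // be reported with the tool's real output instead of a guess.
    public static bool Run(string exePath, string arguments, out string log,
                           int timeoutMs = 300000)
    {
        var buffer = new StringBuilder();
        using (var process = new Process())
        {
            process.StartInfo.FileName = exePath;
            // -q suppresses mfa_align's interactive out-of-vocabulary prompt.
            process.StartInfo.Arguments = arguments + " -q";
            process.StartInfo.UseShellExecute = false;
            process.StartInfo.RedirectStandardOutput = true;
            process.StartInfo.RedirectStandardError = true;
            process.OutputDataReceived += (s, e) => { if (e.Data != null) buffer.AppendLine(e.Data); };
            process.ErrorDataReceived += (s, e) => { if (e.Data != null) buffer.AppendLine(e.Data); };

            process.Start();
            process.BeginOutputReadLine();
            process.BeginErrorReadLine();

            // Generous ceiling rather than a tight 20-second limit.
            if (!process.WaitForExit(timeoutMs))
            {
                process.Kill();
                log = buffer.ToString();
                return false;
            }
            process.WaitForExit(); // flush any remaining async output
            log = buffer.ToString();
            return process.ExitCode == 0;
        }
    }
}
```

    With the captured log available, the error message shown to the user can quote what the tool actually said instead of assuming an audio-encoding problem.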
     
    Last edited: Jul 26, 2019
    haleler51 likes this.
  5. WadjetEye

    WadjetEye

    Joined:
    Jun 12, 2015
    Posts:
    3
    Hello! I have a lot of voiceover for my project and I was hoping to do a batch processing on it. I set everything up in the AutoSync settings tab, I add all my clips to the batch process window, and then hit "Start Batch Process."

    The result I'm hoping for is a bunch of *.asset files in my default save folder (which I can't seem to set in the settings tab). But instead the button generates an XML file. What do I do with this file? Can I use the XML file to generate the proper lip sync data files?

    Thanks in advance! Hopefully my question makes sense.

    -Dave
     
  6. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Thanks for pointing this out - I had made the same discovery about waiting for input and the -q argument just before my last post, and this is fixed in the latest update for the MFA module. All your other points are totally valid though - I originally had a less specific error message for that final generic case, but after some testing in a previous update I'd (mistakenly) come to the conclusion that the audio format was the only cause of it.

    That hardcoded path to the lexicon shouldn't be there, you're right. Fortunately it isn't causing a problem - the module uses the chosen language model to get the path to the lexicon, so it must be a hold-over from a pre-release version when I created the preset. I'll re-save it now to avoid any confusion in future.

    I'm currently both uploading the new MFA update and submitting a new LipSync update to the store, which improves the error messaging and allows the MFA module to process even when out-of-vocabulary words are found. I'll also be updating it later to allow extra words to be inserted into the lexicon per project.

    Either bones or blendshapes will work - you can mix and match them too. If you're using a custom-made model, there may be some tools out there that can generate facial blendshapes, but I wouldn't expect them to do too good a job of it. Some character creators like Adobe Fuse or (I believe) Reallusion's Character Creator can also export characters with blendshapes pre-made.

    Your first error with the Legacy presets can be fixed by getting the latest version of the PocketSphinx module from the Extensions window - I always include the latest version when I do a full update to LipSync Pro, but if there are improvements between updates you can find them there.

    Your second issue sounds like the scripts aren't compiling, though. Check the console and see if there are any compilation errors. After the MFA module downloads and installs, it forces a script recompile, which should unlock the Setup Wizard once it completes - I can only assume some error is preventing it from completing.

    LipSync can use either .asset files or .xml files, so there's a checkbox in the batch processor for doing an XML export:
    upload_2019-7-27_13-30-48.png
    Make sure that's not checked, and if it's still doing it, let me know! If you'd like to use the XML files instead, you can - there's a form of LipSync.Play that takes a TextAsset and an AudioClip instead, though obviously that means you'd need to keep track of the audio clips yourself instead of the LipSyncData object doing that for you.
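    For reference, the TextAsset route could look roughly like this. Only the existence of a Play(TextAsset, AudioClip) overload comes from the post above; the component/field names and the RogoDigital.Lipsync namespace (inferred from the stack traces elsewhere in this thread) are my assumptions:

```csharp
using RogoDigital.Lipsync; // namespace assumed from this thread's stack traces
using UnityEngine;

public class XmlLipSyncPlayer : MonoBehaviour
{
    public LipSync lipSync;      // the character's LipSync component
    public TextAsset lipSyncXml; // XML file exported by the batch processor
    public AudioClip voiceClip;  // matching audio clip, tracked manually

    void Start()
    {
        // With XML data you pair the audio yourself, unlike .asset
        // LipSyncData clips, which carry their audio reference for you.
        lipSync.Play(lipSyncXml, voiceClip);
    }
}
```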
     
  7. WadjetEye

    WadjetEye

    Joined:
    Jun 12, 2015
    Posts:
    3
    That tickbox is definitely unchecked. What does the XML file actually do? Can I convert that xml file into traditional *.asset files?

    Also, I notice you wrote XML "files" instead of "file." When I hit the button, only one file is generated.
     
  8. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Hmm, that's strange - the xml file is simply a representation of the LipSyncData in xml format, so it's only one of the clips. The fact that it's only creating one is a known issue - check your email, I've sent you a patch for that problem. I'll have to look into it a bit more to figure out why it's creating an xml file instead of a .asset file though.
     
  9. wigglypuffs

    wigglypuffs

    Joined:
    Aug 10, 2015
    Posts:
    67
    Rtyper, eh, it's just bugs - it happens. Thank you for the great product.

    Also, when you put out the new version, raise that 20-second timeout before killing the Montreal process to something like 60 seconds or more. For example, my short paragraph of text required 47 seconds to process.
     
  10. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    874
    Code (CSharp):
    Assets\Rogo Digital\LipSync Pro\AutoSync\Editor\Modules\Montreal Forced Aligner\ASMontrealPhonemeDetectionModule.cs(121,31): error CS1501: No overload for method 'StartConversion' takes 6 arguments

    I'm getting this error from the Montreal Forced Aligner module, using Unity 2018.4.4f1 on Windows.

    This is using the latest version, which seems to have been updated yesterday.

    EDIT: Oh I see, the issue is I needed to update PocketSphinx
     
    Last edited: Jul 28, 2019
  11. sacb0y

    sacb0y

    Joined:
    May 9, 2016
    Posts:
    874


    I'm having this strange issue with the expression going weird at the end of a voice line.

    It happens whether "Keep expression when finished" is on or off, with slightly different results. The pose itself doesn't have an expression like that; it looks like it's transitioning to the "Surprise" expression I set up.
     
  12. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    Here is another error when making Batch process of files:
    upload_2019-7-31_18-29-56.png
     
  13. Gord10

    Gord10

    Joined:
    Mar 27, 2013
    Posts:
    142
    My LipSync component looks like this. It doesn't happen in an empty project, only in my game. Could it be conflicting with another file in the project?

    Unity 2018.4.2f1, LipSync Pro 1.42

     
  14. ellenblomw

    ellenblomw

    Joined:
    Mar 4, 2018
    Posts:
    153
    Hello, could someone tell me what I am doing wrong? I have a list of AudioSources and a list of LipSyncData clips on a GameObject, and I want to load these onto the LipSync component on my main character. I can see that the AudioSource gets placed in the AudioSource field on the LipSync component, but the lip sync never starts - only the audio. If I enable Play On Awake on the LipSync component, I can see the LipSyncData clip gets loaded there too, but my character never starts moving her mouth.

    This is what I have in the audio controller GameObject's Start method:
    Code (CSharp):
            girl.GetComponent<LipSync>().audioSource = audioF[1];
            aud = girl.GetComponent<LipSync>().audioSource;
            girl.GetComponent<LipSync>().defaultClip = lipsyncClipF[1];
            audClip = girl.GetComponent<LipSync>().defaultClip;
            aud.Play();
            girl.GetComponent<LipSync>().Play(audClip);
     
  15. ellenblomw

    ellenblomw

    Joined:
    Mar 4, 2018
    Posts:
    153
    It seems I had to put another audio in the audiosource to make the first audio play. So problem solved :D
     
  16. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    Hello, any fix for the batch problem error? Batch creation of phonemes stops after the first file and then there is an error:

     
  17. JOKER_LD

    JOKER_LD

    Joined:
    Feb 7, 2018
    Posts:
    6
    Hi there, this plugin is great! I had been using it for a while and it worked well until recently, when I found that for some audio files the phonemes do not generate completely. Could you give me some suggestions?
    I use LipSync Pro 1.5, Unity 2018.1.0f

    And this is my audio file.
    https://drive.google.com/open?id=13qvTvaNqli7dqtUilPl0YKj0i0-MRHhS

    I would really appreciate it if you could help me :)
     

    Attached Files:

  18. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    That's odd... Does it happen if you move the end of the final emotion marker a bit further into the clip? You could also try changing the character's emotion curve generation mode to "Tight" if it isn't already, or right-click the last emotion in the editor and enable "Continuous Variation" - even with all the variation amount settings at 0, both of those things can help get rid of odd problems like this.

    Are you able to update to version 1.51 from the asset store? This bug was fixed several versions ago :)

    I'm not completely sure what the problem was, but you're massively overcomplicating playing a clip! The LipSync.Play method handles playing the audio and everything, so all you need to do is have a reference to the LipSyncData clip (presumably lipsyncClipF[1]), then pass that into the Play method:

    girl.GetComponent<LipSync>().Play(lipsyncClipF[1]);


    Though you probably should be caching a reference to the LipSync component in Start() - GetComponent<> is a very expensive method to be calling regularly.
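    To illustrate the caching suggestion, the earlier snippet could be reduced to something like this (field names are carried over from that snippet; the RogoDigital.Lipsync namespace is an assumption based on this thread's stack traces):

```csharp
using RogoDigital.Lipsync; // namespace assumed from this thread's stack traces
using UnityEngine;

public class AudioController : MonoBehaviour
{
    public GameObject girl;
    public LipSyncData[] lipsyncClipF;

    private LipSync girlLipSync; // resolved once, reused afterwards

    void Start()
    {
        // One GetComponent call, instead of one per line of the old code.
        girlLipSync = girl.GetComponent<LipSync>();

        // Play() drives the AudioSource itself, so no manual aud.Play().
        girlLipSync.Play(lipsyncClipF[1]);
    }
}
```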

    I've replied to your PM :) If anyone else is having this issue as well, just update to the latest version from the Asset Store.

    Sure, can you tell me what AutoSync preset you're using? If it's one of the Legacy ones, that system is quite old and isn't being updated any more. You can get much better results out of the Default (English) preset if you're able to get a transcript of your audio. Just make sure you've got the MFA module installed - if you haven't already, run the AutoSync Setup Wizard from Window > Rogo Digital > LipSync Pro to get everything set up and installed!
     
    JOKER_LD likes this.
  19. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    The new LipSync Pro 1.51 update is now live as well!

    **If you're having problems with the batch processor, or compile errors when importing either the PocketSphinx or Montreal Forced Aligner AutoSync modules, make sure you download this update as both those issues are fixed!**

    The full change list is below:


    Features
    • Added "Data Preprocessing" support.
      • New editor window that allows you to select a LipSync component and any number of LipSyncData clips, and process the animation for them ahead of time.
      • LipSync component will now check if LipSyncData clips contain pre-processed data, and bypass the animation generation step if they do, significantly improving runtime performance.
    • Rect selection, Select All and Invert Selection options in the Clip Editor now only act on markers that are visible with the current filter setting.
    • AutoSync modules now show a more informative error when the module compatibility check fails.
    • The Clip Editor can now also display a message when AutoSync runs successfully, for more info (such as ignored words).

    Fixes
    • Fixed ClipSettings window not scrolling with a long transcript.
    • Fixed bug that prevented batch processing mode from working correctly.
    • Fixed SetEmotion method producing incorrect animation curves for some bone poses.

    Changes
    • Simplified some old code to do with opening LipSyncData files in the Clip Editor.
    • Included fix to AutoSyncConversionUtility that has been distributed with previous PocketSphinx + MFA modules.
    • [PocketSphinx Module] Updated included version of PocketSphinx module to latest.
    • Updated minimum Unity version to 5.6.1
    NOTE: This will be the last version of LipSync Pro to support Unity 5, in line with Unity's new asset guidelines.
     
  20. Gord10

    Gord10

    Joined:
    Mar 27, 2013
    Posts:
    142
    The bug is fixed with 1.51, thank you! I thought I was using the latest version.
     
  21. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Just a heads up, I'm not able to access any @rogodigital.com email addresses at the moment. If you've sent an email to me and you need a reply, please either post here, or send me a PM. Cheers!
    EDIT: Email is back up now!
    EDIT 2: Email is on/off at the moment! I've discovered several emails seem to have been dropped over the last couple of weeks, so this thread is currently the best place for getting support.

    Great, no problem!
     
    Last edited: Sep 23, 2019
  22. JOKER_LD

    JOKER_LD

    Joined:
    Feb 7, 2018
    Posts:
    6
    Thanks for your reply. Besides the issues I posted above, I also find that when I process audio longer than 40 seconds using the Legacy presets, the Unity editor crashes. So I used the MFA module, but it failed to pass the check. Maybe I should upgrade to 1.51?
     
  23. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I'd definitely advise being on the latest version, but if you're seeing a "module failed compatibility check" error, that means your clip is missing some data that the module requires.
    I'd guess in the case of the MFA module, you don't have a transcript of your dialogue?

    Take a look at the documentation for more info.
     
  24. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    4,196
    I'm on 1.51 now, though I got the same error under 1.501. Most of the time, when processing an audio file, I get the following using the Legacy converter:

    I've restarted Unity, and my computer, and I don't have these audio files open in any other programs. Any ideas?
     
  25. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Could you paste the actual path here? It may be some character(s) in the file name
     
  26. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    4,196
    It looks like it must be some weird file permission issue/corruption/problem in general.

    I'm testing this on various audio files, all of which are in this folder:

    C:\Users\Dan\Documents\GitHub\Gravia\Assets\Scenes\Production\Earth\MilitaryWing\FinalAssessment\FinalAssessmentI\Audio\Speech

    One file which LipSyncPro handles fine is named "Final Assessment I - Flynn - Basic Awareness.wav", while one which gives the error is named "Final Assessment I - Flynn - Blindly.wav"

    However, in responding to this post, I found that for the files that LipSyncPro errors out on, I'm unable to copy the filename for those files within Unity. For example, I highlighted the file name until it changed into a Rename prompt, and tried to ctrl-c the text, and it just copies a blank string. So I'm going with the assumption that these files are just corrupted/broken (even though they play fine as audio files) and that this isn't LipSyncPro's fault.

    In short, nevermind, probably just a problem with the files, as Unity itself is showing odd behavior with them (despite them appearing identical to other files.) Sorry for the false alarm.
     
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    @dgoyette Hmm, ok. Do let me know if anything shows up that might link the problem to LipSync Pro though!
     
  28. dgoyette

    dgoyette

    Joined:
    Jul 1, 2016
    Posts:
    4,196
    Nope, probably just some weird behavior when I converted MP3s to Wav outside of Unity using FFMPEG. I deleted all the audio files, and started over, and LipSyncPro works fine on them. Definitely some weirdness with the files themselves.
     
    Rtyper likes this.
  29. Yakuzza

    Yakuzza

    Joined:
    Mar 10, 2013
    Posts:
    8
    Hi, Rtyper! I've sent you an email, but apparently you had a problem accessing it, so I'll post it here. I'm having difficulties with the Rogo Eye Controller. Whenever the character has the Look At target in sight, its eyes rotate to the side and ignore the target's position completely. It doesn't matter whether Look At Target is set to Auto or to a specific target; the results are the same.


    I’m using both Rogo lipsync and eye controller in Unity 2019.1.10f1. If you could give me any tips on where to start looking for the problem I would be very grateful!

    Thank you!
     
  30. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    Hello, there is still a problem with batch processing in the 1.51 version - sometimes it works, but more often it doesn't. I sent you a PM with more info.
     
  31. StarsideStudios

    StarsideStudios

    Joined:
    Jul 15, 2013
    Posts:
    50
  32. WadjetEye

    WadjetEye

    Joined:
    Jun 12, 2015
    Posts:
    3
    Hello! As always, fantastic product. Sadly, I'm encountering a weird problem. I have added some very simple micro-expressions for my character. Quick, subtle facial animations that aren't attached to any particular voice file (i.e., waggling eyebrows, rolling eyes, shake of the head, etc). Since they are not played via voiceover data, I put these animations in my animator controller on a separate layer so I can call them whenever I want. And doing so completely breaks Rogo. After some experimenting, I figured out it was breaking in a *very* specific way.

    For example, if I only add my "waggling eyebrows" animation to the controller, then any Rogo expression/phoneme/gesture that involves the eyebrows no longer works. If I add my "sticking tongue out" animation to the controller, then any Rogo expression/phoneme/gesture that involves the mouth or tongue stops working. These animations are just sitting in the controller and not being played.

    I've tried setting the animator layer to "additive" and tried using a mask. Nothing works.

    Any light-shedding is appreciated!
     
  33. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Sorry about the wait guys! Among other things, I've been busy recently on two new updates, 1.52 and 1.6. No big new features this time, but 1.6 will be a sort of "modernising update". Among the changes in it are recreating the entire editor UI in UIElements (which will mean better compatibility with the new editor theme in 2019.3), migrating the LipSyncProject settings and the settings screen in the clip editor over to Unity's unified settings windows, and removing our custom keyboard shortcuts screen in favour of Unity's new built-in shortcut manager. This is obviously quite a lot of work, but it has to be done in preparation for an LTS version of LipSync Pro 1.X that will continue to exist once version 2 is complete. As such, the minimum Unity version for this will be set at 2019.1.

    1.52 will be a bugfix update intended for users who won't be able to update to 1.6 due to being stuck in an older Unity version. I will be making both of these updates available on our website, and keeping the 1.5x branch up-to-date with bug fixes, even after 1.6 is out. Unfortunately, I won't be able to make these future 1.5x versions available on the Asset Store, but they will all be downloadable from the site.

    It's not an ideal situation, but I think this is the best way of making sure nobody is left with a sub-optimal version of LipSync Pro.

    Did you get anywhere with this? I'd try setting the "Eye Forward Axis" setting instead of using "Eye Look Offset" if you're needing offsets of 90 degrees or more. If you're still having trouble, do you think you could put together a small demo project of this and PM it to me? Even if I can't get it sorted, it'll give me more edge-cases to check the new Eye Controller 2 component I'm working on against. The new one's coming along pretty well and it solves a whole load of these issues (along with adding a lot of more realistic eye movement features). I'm expecting it to be ready by mid-late October at the latest.

    I've never used any of Invector's assets myself - this is the Third Person Controller, right? Does the twitching still happen if either their controller component or the Animator Controller is disabled? I'd also check if any of the bones used in your normal animations are used in your LipSync Phoneme or Emotion poses. If they are, you may want to remove them, or try toggling the "Account for Animation" setting on your LipSync component. Basically what the problem looks like to me, is some bone being animated out of its default position by your animations, then LipSync snapping it back into its default as part of the LipSync animation when it starts.

    This is interesting, because it seems to be breaking in the opposite way I'd expect! LipSync does all its work in the LateUpdate() method, which is supposed to run after the Animator does its thing, so I'd expect any bones used in both to not animate when LipSync is playing a clip, but not the other way around.
    The Animator sets the position/rotation/scale of every transform in its graph at the start of each frame, so this is potentially a case for using the "Account for Animation" checkbox in LipSync. This comes with the caveat though that any bones you put in poses, and which aren't updated by the Animator (e.g. aren't used in any of your animation files) will basically start flying off into space as they're expecting their position to be reset each frame and they won't be!
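    As a toy illustration of that ordering (this component is mine, not part of LipSync Pro): anything a script writes in LateUpdate() lands after the Animator's per-frame pose, so it "wins" for that frame.

```csharp
using UnityEngine;

public class LateUpdateBoneOverride : MonoBehaviour
{
    public Transform jawBone;     // a bone the Animator also animates
    public Vector3 overrideEuler; // pose to apply after the Animator runs

    void LateUpdate()
    {
        // Runs after the Animator has posed the skeleton this frame,
        // so this rotation is what actually gets rendered - the same
        // reason a LateUpdate-based facial system normally overrides
        // animation, rather than the other way around.
        jawBone.localEulerAngles = overrideEuler;
    }
}
```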

    If that doesn't work (or isn't usable for the above reason), you could potentially try moving LipSync's position in the script execution order - Placing it later than the default time may allow it to update correctly. Finally, the last resort would be as I said to Yakuzza: if you're able to put together a small repro project and PM it to me so I can take a look personally, that would be invaluable!
     
  34. Alessandro-Previti

    Alessandro-Previti

    Joined:
    Nov 1, 2014
    Posts:
    30
    Hello, I am on OSX and I finally managed to get a more descriptive error.
    LipSync won't work at all on my OSX machine, though I do have it working on my Windows machine (which is not my main work computer).
    On Windows I use Sphinx, but with Montreal on OSX I always get the same error about MFA.

    I am testing with both custom audio and the demo audio included with the asset, and the text files are properly assigned. Everything is on the current latest version.
    I have no idea what is happening.

    Any clue?

    AutoSync Failed: MFA Application Failed. Check your audio encoding or enable conversion.
    UnityEngine.Debug:LogFormat(String, Object[])
    AutoSyncWindow:FinishedProcessingSingle(LipSyncData, ASProcessDelegateData) (at Assets/Rogo Digital/LipSync Pro/Editor/Modals/AutoSyncWindow.cs:552)
    RogoDigital.Lipsync.AutoSync.AutoSync:processNext(LipSyncData, ASProcessDelegateData) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:110)
    RogoDigital.Lipsync.AutoSync.ASMontrealPhonemeDetectionModule:process(LipSyncData, ASProcessDelegate) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/Modules/Montreal Forced Aligner/ASMontrealPhonemeDetectionModule.cs:201)
    RogoDigital.Lipsync.AutoSync.AutoSync:RunModuleSafely(AutoSyncModule, LipSyncData, ASProcessDelegate, Boolean) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:56)
    RogoDigital.Lipsync.AutoSync.AutoSync:RunSequence(AutoSyncModule[], ASProcessDelegate, LipSyncData, Boolean) (at Assets/Rogo Digital/LipSync Pro/AutoSync/Editor/AutoSync.cs:29)
    AutoSyncWindow:OnGUI() (at Assets/Rogo Digital/LipSync Pro/Editor/Modals/AutoSyncWindow.cs:307)
    UnityEngine.GUIUtility:processEvent(Int32, IntPtr)
     
  35. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Could you send me the audio file (and transcript) you're using please? A PM is fine if you want to keep it private. I should be able to figure out why it's not working after trying it myself.
     
  36. SuperNinjaWaffleBagel

    SuperNinjaWaffleBagel

    Joined:
    Apr 6, 2014
    Posts:
    5
    Hi, I'm currently running into the problem of AutoSync crashing every time. It's a long clip (~20 minutes), so it's probably the same issue discussed earlier in this thread. Since the clip is long, I'd really prefer not to have to transcribe the whole thing for the Montreal module. Is there a workaround for PocketSphinx without cutting it into 7 clips?
     
  37. jsleek

    jsleek

    Joined:
    Oct 3, 2014
    Posts:
    61
    @Rtyper How do I get LipSync Pro working with ARCore?

    I have a LipSyncPro character appear after an Augmented Image is scanned, but the Lip Sync does not trigger after the character appears.

    @davidosullivan Had the same problem back in 2017 (check comments #612 and #613) but you never got back to him with a solution.

    I really like your plugin, but it's really crucial for this to work in AR for my project.

    Do you have a solution?

    Cheers.
     
  38. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I implemented that clip-splitting feature a little while after that post, though I ended up not re-implementing it in the AutoSync module version. If your source audio clip is 20 minutes(!) long, though, it would have been split into about 40 sections, each processed individually. It would have taken a very long time, and the results wouldn't have been great.

    I'm struggling to think of why you would need a single 20 minute clip (I'm guessing this isn't a standard game project?), but if you really need to keep it all together, your best bet would be splitting the clip up (either in another program, or inside the clip editor using the "Start + End Times" mode in the clip settings window), making sure to avoid splitting in the middle of words. I'll put together a quick utility that you can use to merge the resulting clips back together. It should be pretty simple, and I'll PM you the script today or tomorrow.

    There is a pre-processing function in LipSync Pro now which should hopefully do the trick. You can find the utility in Window/Rogo Digital/LipSync Pro/Preprocess Data. You'll just need to choose your LipSync component from the scene, then add all the clips you want pre-processed. Bear in mind though that a pre-processed clip can only play on the specific LipSync component it was pre-processed for, so if you have multiple characters that can speak the same lines, you'll need to make separate copies of the LipSyncData clips.
     
  39. jsleek

    jsleek

    Joined:
    Oct 3, 2014
    Posts:
    61
    @Rtyper Thanks for the quick response. However, after processing the Lip Sync Data, the animation still doesn't trigger.


    ARCore objects are instantiated at runtime (when the Augmented Image is detected), then destroyed when the image is well out of view. Since I'll be pre-processing LipSync data from a prefab, and not an object that's always in the scene, can I get this to work?
     
    Last edited: Sep 21, 2019
  40. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    It should still work - make sure you have "Play on Awake" checked and your clip chosen on your prefab, and try putting your character prefab in the scene first and pre-processing using that version of the LipSync component (the instance can be deleted after).

    If it's still not working, check if there are any errors or warnings in the console - either when pre-processing, or when your prefab gets instantiated at runtime.
     
  41. jsleek

    jsleek

    Joined:
    Oct 3, 2014
    Posts:
    61
    Thanks again. It works now, but I had to set the AudioSource to Play on Awake, as LipSync Pro's own Play on Awake setting doesn't trigger it.
     
  42. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Huh - glad it's working, but could you please send a screenshot of your LipSync component settings? There shouldn't be any need to change the AudioSource settings (beyond getting the falloff, spatial blend etc. how you want them), as LipSync Pro calls .Play on the AudioSource itself.
     
  43. jsleek

    jsleek

    Joined:
    Oct 3, 2014
    Posts:
    61
    Here you go
     

    Attached Files:

    Rtyper likes this.
  44. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    Hello
    I have one question: is there an option to run the batch process but keep the emotions that are already in the files? It's very important for us to be able to make voiceover versions of our game in different languages, recalculating the phonemes but keeping the old emotions in the same places. We're making dubbed versions, so the emotions should stay while only the phonemes are recalculated. Is this possible?
     
  45. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Not exactly - the batch processor doesn't load existing LipSyncData files at all, it just creates new ones tied to the AudioClips. I'll have a look tomorrow at how feasible it would be to add a simple check for existing LipSyncData files.
    If it looks like a simple addition, I'll sort it out and send it to you. If not, I'll put the idea of improving the batch processor's flexibility on my list, but in the short-term you may be better off using some other automation tools to do the job of running the standard (non-batch mode) AutoSync process on your existing clips.
     
  46. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    The first two updated tutorials are now online! These first few are back to basics, covering plugin setup and character setup as it is now in 1.51. I've also got a new tutorial on the basics of clip creation which I'll be uploading today, then episodes on advanced features of the Clip Editor, and some of the lesser-used features of LipSync Pro.



    @donutsorelse I replied to your Asset Store review at the weekend - could you please change your forum settings or send us a Facebook message so I can get this clip-stitching patch to you? The forum won't let me PM you.
     
    Jakub_Machowski and ftejada like this.
  47. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    What do you mean by running the AutoSync process on existing clips? There's no tool to generate emotions automatically (I think that's impossible for now), so please let me know as soon as possible whether you can make the batch process copy or keep emotions. Right now supporting multiple language versions seems impossible - recreating the emotions for every single language would be totally impractical. How have you handled multi-language audio until now? I understand the phonemes have to be recalculated, which is great, but the emotions should stay the same :) Waiting for your reply - we need to know where we stand ;) Best regards, and thanks for replying ;)
     
  48. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I understand, don't worry - I just mean that the batch process works by running AutoSync on each of your AudioClips, then saving a new LipSyncData file for each; if a LipSyncData file already exists, it simply gets overwritten. There's no point in the process at which existing emotion markers come into play :)

    I think you're right though, a specific tool for localisation would be a great idea. I'll see what I can get done today.
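    The localisation tool discussed here would essentially combine the emotion markers from the original-language clip with freshly AutoSynced phonemes for the dubbed audio. A minimal sketch of that idea in Python, assuming markers use times normalized 0-1 (the data layout and the `relocalize` name are assumptions for illustration, not LipSync Pro's real format):

    ```python
    # Hypothetical sketch of the localisation idea: keep the emotion markers
    # from the original-language clip and pair them with newly AutoSynced
    # phoneme markers from the dubbed audio. Normalized 0-1 marker times are
    # an assumption about the data, not LipSync Pro's actual layout.

    def relocalize(original_emotions, new_phonemes):
        """original_emotions: [(start, end, emotion)], times normalized 0-1.
        new_phonemes: [(time, phoneme)] from AutoSync on the dubbed audio.
        Returns the combined clip data as a dict."""
        return {
            # Dubbing keeps overall timing close to the original, so the
            # normalized emotion spans carry over to the new clip directly.
            "emotions": list(original_emotions),
            "phonemes": sorted(new_phonemes),
        }

    clip = relocalize(
        [(0.1, 0.4, "Happy")],       # emotion span from the original clip
        [(0.05, "AI"), (0.2, "E")],  # AutoSynced from the dubbed audio
    )
    print(clip["emotions"], clip["phonemes"])
    ```

    This works because, as Jakub notes below, dubbed takes are recorded to roughly the same length as the original, so the normalized emotion positions stay valid.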
     
  49. Jakub_Machowski

    Jakub_Machowski

    Joined:
    Mar 19, 2013
    Posts:
    647
    Thanks! Awesome :) Looking forward to what you come up with :) Our situation is simpler because all our languages have the same or very similar audio lengths (dubbed voiceover, like in films), so if the emotions are kept they'll be in the correct places :) and the phonemes will be recalculated for each language, so it would all work great. Right now I can't do it even clip by clip :) I think this will be very helpful for many developers who have voiceovers in even two languages :) Looking forward to it!
     
  50. Alvarezmd90

    Alvarezmd90

    Joined:
    Jul 21, 2016
    Posts:
    151
    Not sure if it's just me... but why does the rest keyframe glitch out when it's used alongside an emotion transition? The mouth flickers up and down when they're both active. 0__o

    On a side note, I'd also like to be able to rename an emotion without having to recreate it from scratch. :D