
[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. wetcircuit

    wetcircuit

    Joined:
    Jul 17, 2012
    Posts:
    845
    I am convinced! :cool:

I have one for the wishlist... Since there are several issues with Unity's blendshapes (like being dependent on other software, for starters), please consider supporting MegaFiers' Morph (assuming it would be possible).
     
  2. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hmm... that looks interesting - it looks fairly simple to integrate (though I don't own MegaFiers and it's a bit out of my price range atm!). I'll put it on the features list, as it would definitely help to not rely on blendshapes as the only option.
     
    wetcircuit likes this.
  3. bluemoon

    bluemoon

    Joined:
    Dec 14, 2012
    Posts:
    84
    Love this system.
    Did I read the next update will come with a blend shape reference for all the phonemes?
    I've always used Visimes from Jason Osipa's book "Stop Staring" and I haven't found a good reference for creating my Phonemes yet.
     
  4. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks!
No, it won't. I haven't decided on quite how I want to do this yet, and when I do I will need to find/make some illustrations of some kind for it :p It's on the roadmap though, probably for the version after 0.3. The new feature regarding blendshapes in 0.3 is the preset system: if you're using Mixamo Fuse for your characters, you can literally click 2 buttons on the component and have the phonemes set up for you. Otherwise you still need to set them up yourself - but only once, since if you set up the same blendshapes on your different character models, you can create a preset to use in future.

    As far as phoneme references go, this is the one I used when creating the system: Lip-Synching For Animation, it's not perfect but I found it useful!
     
  5. bluemoon

    bluemoon

    Joined:
    Dec 14, 2012
    Posts:
    84
    That will work! I can make the blend shapes myself no problem I just didn't know what shapes I would need. I may end up grabbing a Fuse character just to take a look at what shapes they use. HAHA

    Thanks
    Travis
     
  6. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    No problem, glad it helped :) You could also look at the Lincoln demo scene included, the phoneme poses set up in that demo are more or less the same as the ones included in the new preset, so you don't even need to wait for the update xD
     
  7. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Beta 0.3 has been submitted to the asset store.
    The store doesn't give me any way to actually schedule a release date for a new version, so I've left it until today to submit to reduce the chance of it going live before the 1st of July. I've seen it take anywhere between three days and a week and a half before, so don't worry if the update isn't online exactly on time - the ball's in Unity's court now ;)

    Cheers!
     
  8. Zyxil

    Zyxil

    Joined:
    Nov 23, 2009
    Posts:
    111
    Purchased! Looking forward to digging in!
     
  9. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks, if you've got any questions let me know :)

    In other news, Beta 0.3 has just gone live on the store. It's a day early, but unfortunately there's nothing I can do to schedule more accurately. My apologies to anyone who was planning on purchasing before the price increase - If you buy it on June the 29th, 2015 (the intended last day at the alpha price), email your invoice number to contact(at)rogodigital.com and I will reimburse you the extra $5.

    If you already own LipSync, the update should now be available on your downloads page.

    I've also put together a new trailer and refreshed the store page a bit - as always any feedback is welcome!
     
    Last edited: Jun 30, 2015
  10. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
Sorry for bumping so soon, but this board does move fast, and I made that last post in the middle of the night - just trying to make sure anyone who was planning on buying LipSync today but saw the early price increase sees this.
    Don't want people to miss out on the offer!
     
  11. Recon03

    Recon03

    Joined:
    Aug 5, 2013
    Posts:
    423
How well does this work with your own models, blend shapes, etc? I do not use Fuse or have plans to, so I was wondering about this.

    Thanks for any info.
     
  12. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi Recon, LipSync works just as well with your own models as with Fuse ones, though obviously the quality of the end result is a bit more dependent on you making good quality blendshapes on your model.
    The ability to add multiple blendshapes for each phoneme makes it great for models from software like Fuse, but there's no reason you couldn't just add a single blendshape for each if you'd rather, it's very flexible!

    Hope that helps :)
     
  13. Nyong

    Nyong

    Joined:
    Jul 7, 2015
    Posts:
    4
AutoSync doesn't work. I set the audio file and click the 'Process Audio' button, but nothing happens. I'm using Unity 5.1.0f3 on Windows 7.
     

    Attached Files:

  14. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    That's strange Nyong, are you getting any errors in the console when you click the button? You could also try going into the folder Rogo Digital/LipSync/AutoSync Windows/SAPI and running sapi_lipsync.exe, to check if it opens correctly (it should just show up as a command line window that closes straight after). It may be that your antivirus or something (maybe Windows' UAC settings) is blocking the application from running.

    Let me know if any of that helps!
     
  15. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,069
    I don't know if it is significant, but the sapi_lipsync application is geared toward using WAV files encoded using PCM. I'm not certain how well it will work with compressed files such as MP3s or OGGs. Again, I'm not certain if that is the problem being encountered, but it's a possibility.
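Since sapi_lipsync expects uncompressed PCM WAV input, here is a quick way to sanity-check a clip outside Unity. This is a minimal sketch, not part of LipSync itself; it relies on the fact that Python's stdlib `wave` module only opens PCM-encoded WAV files, so a successful open is a good sign the file will work. The file paths are just examples.

```python
# Check whether a file is an uncompressed PCM WAV (the format sapi_lipsync
# expects). The stdlib wave module rejects compressed or non-WAV files,
# raising wave.Error, which we treat as "not a PCM WAV".
import wave

def is_pcm_wav(path):
    try:
        with wave.open(path, "rb") as wav:
            print(f"{wav.getnchannels()} ch, {wav.getsampwidth() * 8}-bit, "
                  f"{wav.getframerate()} Hz")
        return True
    except (wave.Error, EOFError):
        return False  # compressed audio, or not a WAV at all
```

Running this on an MP3 renamed to `.wav` (or an ADPCM-compressed WAV) returns `False`, which would explain AutoSync silently failing on such files.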
     
  16. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    You're right - though from his gif it looks like it's happening when using the included gettysburg.wav file, which I've tested with it already.
     
  17. Nyong

    Nyong

    Joined:
    Jul 7, 2015
    Posts:
    4
Yes, I'm testing with the included "Gettysburg.wav" file.

    I turned off my antivirus, and UAC is off, but the problem remains.
    -- I can't find any error message.

    I also tried running SAPI_LIPSYNC.exe directly, but that didn't work either.

    Cap 2015-07-08 15-50-23-857.png
     
    Last edited: Jul 8, 2015
  18. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,069
    What operating system are you running it on?
     
  19. Nyong

    Nyong

    Joined:
    Jul 7, 2015
    Posts:
    4
Windows 7 Professional, Service Pack 1. I'm using the Korean language...
     
  20. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
After doing a bit of googling, I'm pretty sure the language pack is the issue, though I'm not totally sure how to fix it.
    Do you have the English language pack installed as well? SAPI seems to have trouble with other system languages.
     
  21. Nyong

    Nyong

    Joined:
    Jul 7, 2015
    Posts:
    4
I had the same idea, and the problem is solved!

    I had to install the en-US Windows language pack.

    Thank you for your help.
     
  22. testerthetester1234

    testerthetester1234

    Joined:
    Jun 8, 2015
    Posts:
    3
Is there a way to use this "on the fly" if I have a piece of audio that is being generated, or does the audio have to be pre-processed before running the app?
     
  23. siblingrivalry

    siblingrivalry

    Joined:
    Nov 25, 2014
    Posts:
    385
    Hi can this run in realtime?

    Can it be used with UMA2?

    Thanks
     
  24. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    I'm afraid not, syncing the phonemes to the audio has to be done in the LipSync editor window in Unity (either manually or using AutoSync). It may be possible to have AutoSync work at runtime - but it would take a fair bit of modification to the current system.

    I will look into how complex that might be, and see if there's enough demand for it.

    What do you mean by realtime, sorry? The component works in realtime to play back the lipsync data (they're not standard Unity animations so that they don't interfere with any other animations currently playing), but the data itself has to be generated beforehand in the editor.

    I haven't used UMA before, but I just did a quick test and I don't think it can. LipSync requires blendshapes on the mesh, and (to be used easily) the mesh needs to exist before the game starts playing. UMA appears to have some kind of expressions system built in, but the meshes it generates don't have blendshapes included.

    I may be wrong though, I'm not too familiar with how UMA 2 works!
     
  25. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,069
    I keep hearing requests for on-the-fly processing. But I'm not sure why it keeps coming up. I can understand the appeal, but there is considerably less need for high-quality lip-syncing in most on-the-fly applications. Normally, quality lip syncing is needed for pre-rendered and pre-scripted sequences. And there's no need for on-the-fly syncing for those kinds of use-cases. In fact, that type of syncing would be detrimental, as it would prevent the user from adding to or tweaking the animation for a more nuanced performance. Certain details can only be added in pre-production.

    While I appreciate the desire to simply have the computer do everything for you, there are limits to what a machine can handle. The subtlety and nuance of human facial expressions requires a bit more care in order to be believable. The degree to which scripts such as this one automate the process is already a huge boon to productivity.
     
    chiapet1021 likes this.
  26. ecurtz

    ecurtz

    Joined:
    May 13, 2009
    Posts:
    558
    You could easily use the channels exposed by the UMAExpressionPlayer, since those are basically blend shapes (although they're implemented with bones behind the scenes, that's invisible from the outside.) The list of poses is in the file Standard Assets:UMA:Core:Scripts:ExpressionPlayer.cs - most of them have two extreme poses per channel e.g. neckUp_Down will be up at a value of 1.0 and down at a value of -1.0, although they can be over driven beyond that. The bones are moved in LateUpdate() so if you set the pose values in Update() you should be fine (remember to set the flags to override Mecanim if you want to override the head and neck.)
     
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
Ah, OK - thanks for the info! We've put together a roadmap for features up until version 1.0 now, and one of the features on there is separating the code that deals with blend shapes from the actual LipSync component logic. This will let us (or anyone else) create modules for integrating any other animation system with LipSync, so blendshapes could be easily swapped out for UMA Expressions or MegaFiers Morph components.

    It's probably not going to be available in the very next update, but it's on the horizon!
     
    ArthurT and wetcircuit like this.
  28. testerthetester1234

    testerthetester1234

    Joined:
    Jun 8, 2015
    Posts:
    3
Just purchased thinking that the Mac version had some sort of auto sync capability, but it appears it does not... Is this something that is close to working for Mac, even in a crude state? Thanks
     
  29. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
I'm afraid not just yet - AutoSync for Mac will need to use an entirely different system for recognising phonemes in audio. This is more difficult on OS X, as there seems to be less support for offline speech recognition.

    We are looking into it though - it's on our roadmap for beta 0.5, so with a bit of luck it should be available in a couple of months. In the meantime, manual lipsyncing is still an option on Mac (and usually not as time-consuming as it initially sounds!)
     
    wetcircuit likes this.
  30. adamz

    adamz

    Joined:
    Jul 18, 2007
    Posts:
    859
    Is it possible to get this to work with Text-to-speech?
     
  31. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi Adamz,
    That depends on when the text-to-speech is done. If you're doing dialogue using TTS and it's prepared beforehand (eg. outside of Unity), then yes - LipSync can be used to create lipsyncing/facial animations to go along with it.

    If you're generating your audio through TTS at runtime in your game, then no, as LipSync requires some setup per audioclip in the editor.
     
  32. julianr

    julianr

    Joined:
    Jun 5, 2014
    Posts:
    1,061
    Just purchased! Looking forward to using this soon, by then there will no doubt be more goodness added :)
     
  33. adamz

    adamz

    Joined:
    Jul 18, 2007
    Posts:
    859
If I can use Easy Voice to generate my audio file, is it possible to generate lip sync using your asset at run-time? I don't want to have to go in and manually place key-frames, I would like it to be automatic. What I'm trying to do is take dynamic text that's output to a text field, convert that to audio, and then play the audio with the lip sync as fast as possible...
     
  34. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks, Julianr!

    It looks as if EasyVoice creates an AudioClip in the editor, is this right? If it does, then it's possible to use AutoSync (only on Windows at the moment) to generate a LipSync file for it, and play that back at runtime, without manually placing phonemes.
    What you can't do (at least not currently) is generate the LipSync file at runtime. So LipSync will work fine for things where you have your audio beforehand (whether it's text-to-speech or not), but not if the audio is generated on the fly.

    Hope that's cleared things up. :)
     
  35. EvilEliot17

    EvilEliot17

    Joined:
    Apr 12, 2014
    Posts:
    9
Hi, I have LipSync Beta 0.3. When I try AutoSync with the Gettysburg wav file that you provide in the examples, everything works fine. Then I tried a wav file I made in Audacity (basically I opened a song and removed the music, isolating only the vocals), saved it as wav and imported it into Unity (23 MB), but AutoSync crashed without any message. I then tried a shorter 5 MB version, and it crashed again. Finally I tried the original song as an mp3, and it crashed again.

    Do you have a recommendation on the wav or mp3 file format I need to make AutoSync work?

    Thanks.
     
  36. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi EvilEliot,
    At present, AutoSync will only work with uncompressed .wav files. I've found the best settings in Audacity are to select "Other uncompressed files" in the export window, then click the options button and set them like this:

    AudacitySettingsLipSync.png
    Hope that helps!
     
  37. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Update time! LipSync Beta 0.31 is now live on the Asset Store. It's a relatively minor update on the whole (no new features), but it does include a big fix for in-editor audio playback in Unity 5.x, which now works just as well as in 4.x!

    Here's the changelist:
    - Replaced AudioUtility class in LipSync for 5.x, brings editor audio playback into line with 4.x.
    - Fixed null reference exception when adding new emotions.
    - Fixed extra mesh blendshapes not properly resetting in the editor.

    You can download this update from your downloads section, as per usual.

    Cheers!
     
  38. EvilEliot17

    EvilEliot17

    Joined:
    Apr 12, 2014
    Posts:
    9
OK thanks, I will run a couple of tests.

    Apparently it works fine with clean dialogue wavs or mp3s, but it doesn't work well with background noise or music.

    I notice I can export to XML - can I import an XML? I can't find the option to import from one.

    Cheers
     
  39. EvilEliot17

    EvilEliot17

    Joined:
    Apr 12, 2014
    Posts:
    9
I've got it - I read your documentation and I have it working with the XML. But a way to load the XML data into the configure lip sync audio window would be very helpful.

    Thanks a lot, great plugin by the way.
     
  40. Slowbud

    Slowbud

    Joined:
    Jul 19, 2015
    Posts:
    53
AutoSync didn't work!
    Sound file name issue and workaround:
    I bought LipSync, but AutoSync didn't work at all - the analyzing window just popped up for half a second.
    Digging deeper, I found that sapi_lipsync.exe doesn't accept sound files with a space in their name (which Windows allows): it takes the part after the space as a second parameter. Renaming the sound file solved the problem.
    I suggest fixing this, or at least pointing it out, so that others don't run into this issue and lose a lot of time.
    Apart from that, LipSync works fine for me.
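The splitting behaviour described above is easy to demonstrate outside of LipSync. This sketch uses Python's `shlex` to show how shell-style argument parsing breaks an unquoted path at the space, and how quoting (the likely fix) keeps it whole; the exe and path names are just examples, not actual LipSync paths.

```python
# Demonstrate why a space in the sound file's name breaks the call:
# shell-style splitting turns one path into two arguments unless quoted.
import shlex

path = "C:/Audio/my clip.wav"

# Unquoted: the path splits at the space into two separate arguments.
unquoted = shlex.split(f"sapi_lipsync.exe {path}")
# -> ['sapi_lipsync.exe', 'C:/Audio/my', 'clip.wav']

# Quoted: the path survives as a single argument.
quoted = shlex.split(f'sapi_lipsync.exe "{path}"')
# -> ['sapi_lipsync.exe', 'C:/Audio/my clip.wav']
```

Renaming the file (as above) works around it; quoting the path when building the command line would fix it at the source.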
     
  41. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Sure, I didn't include this originally as the XML export was designed for cases where you needed to do processing on the file between creating and loading, so I didn't see much need to load them back in. I'll add it for the next update though.

    This is very good to know! Thanks for finding it, I think I may have run into this issue myself and not known what caused it! We are considering replacing the Annosoft SAPI program with some other system, as it doesn't seem to be as reliable as we'd hoped, but until then I'll add a fix for that!
     
  42. EvilEliot17

    EvilEliot17

    Joined:
    Apr 12, 2014
    Posts:
    9
Oh yeah, I'll explain what I'm doing. When I try to synchronize normal dialogue it works fine, but when I try to synchronize songs it doesn't work very well. So I do the following: I download the song and its subtitles, then read the subtitle .srt file and build an XML like the one you use. The problem is that the subtitles include roughly 7 words per keyframe, so the vocals are not exactly synchronized with the audio. If I could load the new XML I create into your LipSync window, I could easily fix the vocal synchronization - at least at that point I'd have all the correct vocals in approximately the right positions.

    Cheers
     
  43. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Ahh, I see. That should be fine, as long as the XML file matches the format of the ones the editor generates. If you send an email to contact(at)rogodigital.com, I can send you a replacement file so you can load them back in. (I'll put it in the next update too, but that won't be out for a couple of weeks).
     
  44. DGordon

    DGordon

    Joined:
    Dec 8, 2013
    Posts:
    342
    Thanks for the great product! I started to build my own version of this (blendshape data for phenomes and emotions) ... and then I saw you already created this ... and it comes with presets for Mixamo characters :D! You saved me a lot of time, and gave this a level of polish I wouldn't have been able to do in the time I can allot for this. Awesome work!

    Have you thought about extending this to work for 2D characters as well? The company I work for has been doing 2D flash adventure games, and we have a body + separate mouth states that change based on the audio file. However, it would be awesome if we could use this for 2D as well when we officially make the switch to Unity (we make things for schools so their tech is a bit behind the times ...).

    Would it be very hard to create a component that doesn't mess with blendshapes, but just dispatches an event whenever a phenome/emotion changes? If so, it would open up the doors to anyone who just wants to use your component to drive their own visuals.

    Thanks again for a great product ... well worth the money.
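The event-dispatch idea suggested here (a component that reports phoneme/emotion changes rather than driving blendshapes itself) can be sketched language-agnostically. This is a hypothetical illustration of the pattern, not the actual LipSync API - the class and method names are invented for the example.

```python
# Hypothetical sketch of the event-dispatch idea: the playback component
# only announces phoneme changes; subscribers decide how to display them
# (blendshapes, 2D mouth sprites, etc.).

class PhonemePlayer:
    def __init__(self):
        self._listeners = []

    def on_phoneme_changed(self, callback):
        """Subscribe a callback to be invoked with each new phoneme."""
        self._listeners.append(callback)

    def play_marker(self, phoneme):
        """Instead of driving blendshapes directly, notify subscribers."""
        for callback in self._listeners:
            callback(phoneme)

# A 2D character could subscribe and swap mouth sprites by name:
shown = []
player = PhonemePlayer()
player.on_phoneme_changed(lambda p: shown.append(f"mouth_{p}"))
player.play_marker("AI")
player.play_marker("O")
# shown == ["mouth_AI", "mouth_O"]
```

Decoupling playback from rendering this way is what would "open up the doors" for 2D (or any custom) visuals without touching the core timing logic.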
     
  45. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks DGordon! I'm glad you're finding it useful.

I have thought about that, but I wasn't sure how to handle blending etc. I think the approach you suggested might be the best idea. However, there are a number of ideas I have at the moment that would work better if the LipSync component was pretty heavily re-structured, so 2D support might come about later (probably at or after 1.0).
     
  46. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hello everyone, I'm still hard at work on the next update - hopefully I'll have some more definite news on that in the next week or so.

    In the meantime, I've been finding it's taking up a lot of my free time developing and supporting this and, although I enjoy it, I have a full time job too which takes up most of the rest of my time! Because of this, I wanted to get some feedback from both customers and non-customers alike about LipSync to judge how best to go forward (in terms of price, time spent developing, what features to prioritise etc.) - rest assured, I'm not going to stop development or support for LipSync at all. I just need to get a better idea of what people want.

    I've created a survey using Google form, it should only take about 5 minutes or less to complete. I'd really appreciate anyone's honest feedback on there!

     
    julianr and chiapet1021 like this.
  47. LouisHong

    LouisHong

    Joined:
    Nov 11, 2014
    Posts:
    43
    From /r/unityassets, what an awesome asset! Thanks for making the asset and sharing it on the asset store!
     
  48. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
Thanks, Louis!

    This is just to let people know, if you have a copy of Unity 5 pro, or a pro subscription, you can get LipSync for 70% off for the rest of the month!

    We're featured in this month's Level 11 deals, so if you have access to them, just head over to the Level 11 page now to get LipSync for just $6, or your regional equivalent.

    Cheers!​
     
    LouisHong likes this.
  49. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    712
    Hey, does this work in realtime on mobile?
    Will it work from audio being streamed in?
     
  50. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi, Ina
    No, LipSync works with audio that's preprocessed in the editor to detect phonemes and/or add emotion markers. It will run on mobile, but it doesn't support entirely real-time lipsyncing.