
[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322

    Rogo Digital's LipSync Pro - Phoneme based lipsyncing system

    LipSync Pro 1.4 is now available in the Asset store.

    LipSync Pro is a high-quality, easy-to-use system for creating phoneme-based lipsyncing and facial animation within Unity. It allows you to easily set up complex facial poses for each phoneme and emotion, consisting of multiple blendshapes, bones or more, and synchronise phoneme timings to audio, all inside the Unity editor. Animations can then be played back on any character with a LipSync component attached with no additional work!

    LipSync Pro can also process audio automatically and show a real-time preview in the editor, making it quicker and easier than before to synchronise any audio.


    Features
    • AutoSync - Automatic phoneme detection, using only the AudioClip itself.
    • Preset System - Save and load preset pose setups for your characters, or use a built-in one.
    • Easy-to-use editors - For setting up poses and syncing audio.
    • Emotions - Set up emotion poses on characters, and set blends into and out of them alongside phonemes for complete facial animation.
    • Gestures - Cue full-body Mecanim animations to be triggered as part of your LipSync animations.
    • Blendshape creation - Create a mesh with blend shapes inside LipSync Pro, from two or more separate meshes.
    • Marker Filtering - Show/hide certain phoneme markers in the editor to make it easier to move/edit the one you want.
    • Bone-based animation - Support for adding bone transforms to phoneme and emotion poses, alongside or instead of blendshapes, allowing LipSync Pro to be used on a wider range of character models.
    • Real-time animation preview when synchronising audio clips.
    • Pose Guides - Illustrations of how each phoneme pose should look in the component editor.
    • BlendSystems - Allow custom support for other character systems.
    • The fastest workflow for Adobe Fuse characters - built-in presets and AutoSync let you get a character talking in less than a minute.
    • AutoSync batch processing.
    • NEW - Fully customisable phoneme sets. You can now use any number of phonemes with custom names in place of the default Preston Blair set.
    • NEW - Emotion Mixer. Easily create more nuanced expressions by blending multiple Emotions together.
    Currently features built-in or downloadable integration with the following 3rd party assets:
    - Adventure Creator [Native]
    - Cinema Director [Downloadable]
    - Cinematic Sequencer - SLATE [Downloadable]
    - Dialogue System for Unity [Native]
    - Flux [Downloadable]
    - GRML Base Models [Native]
    - iClone Characters [Native]
    - Morph3D [Downloadable]
    - Mixamo (now Adobe) Fuse [Native]
    - NodeCanvas [Downloadable]
    - Playmaker [Downloadable]
    - PolyMorpher [Downloadable]
    - Quest System Pro [Native]
    - RT-Voice (and Pro) [Native*]
    - UMA 2 [Downloadable]
    - uSequencer [Downloadable]

    Please Note: Due to changes in the operating system, the AutoSync feature is currently incompatible with macOS Sierra. A fix is in the works.
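    The Emotion Mixer listed above blends several facial poses into one expression. A minimal sketch of that kind of weighted pose blend (pure Python with made-up blendshape names - an illustration of the idea, not LipSync Pro's actual implementation):

```python
def blend_poses(poses, weights):
    """Blend several facial poses (dicts of blendshape name -> weight, 0-100)
    into one pose, scaling each by its share of the total weight."""
    total = sum(weights)
    if total == 0:
        raise ValueError("at least one weight must be non-zero")
    result = {}
    for pose, w in zip(poses, weights):
        for shape, value in pose.items():
            result[shape] = result.get(shape, 0.0) + value * (w / total)
    return result

# Example: mix 70% "happy" with 30% "surprised" (hypothetical shape names).
happy = {"MouthSmile_L": 80.0, "MouthSmile_R": 80.0, "BrowsUp": 10.0}
surprised = {"BrowsUp": 90.0, "JawOpen": 40.0}
mixed = blend_poses([happy, surprised], [0.7, 0.3])
```

    Normalising by the total weight keeps the result stable even when the mix doesn't sum to 1, which is convenient when sliders are adjusted independently.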



    If you have any suggestions/comments/questions, I'd love to hear them!

    Cheers,
    Rhys.

    * RT-Voice exports to .wav natively.
     
    Last edited: Sep 10, 2017
    wjdausrjf likes this.
  2. Mikeedee

    Mikeedee

    Joined:
    Jan 5, 2015
    Posts:
    42
    This looks pretty nice, do you have any videos or a web demo we can take a look at?
     
  3. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Yes, sorry - I just forgot to include it in with the first post! I've added the links now. :)
     
  4. steveR

    steveR

    Joined:
    Jul 14, 2013
    Posts:
    33
    Looks interesting -

    How about compatibility with Adventure Creator?
     
  5. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    I don't use Adventure Creator myself, but I just took a quick look at their documentation - it seems they have a number of lipsyncing options built in, but they're hardcoded, so this alpha version isn't supported by Adventure Creator.
    I will look into offering exporting to other formats in the next version though, which should provide compatibility with assets like Adventure Creator :)

    I also plan on adding PlayMaker support in the next version.
     
    Zyxil likes this.
  6. Mikeedee

    Mikeedee

    Joined:
    Jan 5, 2015
    Posts:
    42
    Looks really interesting. I'm pretty sure that with all the features you plan on implementing, this will be the definitive lip-sync solution for Unity.
    Good luck, I'll certainly be watching this one :)
     
  7. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks Mikeedee!

    I was hoping the asset store page would be up by now, but unfortunately it's not. I am working on the next update already though, this will include some improvements to the editors, and support for emotion markers.

    Hopefully I'll have it available soon!
     
  8. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,065
    Ahhhh, competition. I had been wondering when someone else would get around to providing a lip-sync solution. What you've got so far is decent. Keep working on the improvements, though. The next upgrade for Cheshire is on its way.
     
  9. fis7157

    fis7157

    Joined:
    Mar 16, 2015
    Posts:
    1
    I was thinking of buying, but I'm not sure if it will work with characters created in DAZ 4.7. I also got a message stating some features may not be included in Unity 5. When will this be compatible with Unity 5?
     
  10. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    LipSync is now available in the asset store! I'll hopefully be putting out v0.2 in a week or so too, so stay tuned for more updates.

    Thanks! I looked at Cheshire a while ago and it looked very good - and competition's almost always a good thing! :p

    If the character models export with blendshapes included then they'll work with LipSync - import one into your project first and see if there's a "BlendShapes" section on any of the mesh renderer components. If there are blendshapes for facial shapes/poses in there then it'll work fine. You may need to change some export settings in Daz though.

    As for Unity 5 - the next update will support it properly, though as far as I know the current one should work with Unity 5's automatic script updater. I'll test it out and get back to you.
     
  11. EduardasFunka

    EduardasFunka

    Joined:
    Oct 23, 2012
    Posts:
    394
    Bought it! Thanks, now I don't need MotionBuilder ;)
     
  12. BigDaz

    BigDaz

    Joined:
    Apr 8, 2013
    Posts:
    53
    Excellent stuff. Glad it works with Fuse.

    Valve's original Half-Life game used a system where the character's mouth opened based on the volume of the audio. The louder the sound, the more the mouth opened. It was inaccurate but saved a lot of work. I don't know if it might be an alternative option for this system.
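    The volume-driven approach described above can be sketched generically: take the loudness (RMS) of each window of audio samples and map it to a mouth-open amount. This is an illustrative Python sketch with made-up parameter values, not code from Half-Life or LipSync:

```python
import math

def jaw_openness(samples, floor=0.01, ceiling=0.3):
    """Map the RMS loudness of one window of audio samples
    (floats in -1..1) to a mouth-open amount in 0..1."""
    if not samples:
        return 0.0
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Linearly rescale: below the floor the mouth stays closed,
    # at or above the ceiling it opens fully.
    t = (rms - floor) / (ceiling - floor)
    return max(0.0, min(1.0, t))

# A loud window opens the jaw wide; near-silence keeps it shut.
print(jaw_openness([0.5, -0.5, 0.4, -0.4]))  # -> 1.0
print(jaw_openness([0.001, -0.001]))         # -> 0.0
```

    In a game you would evaluate this per frame over the most recent chunk of the playing clip and feed the result into a single jaw-open blendshape or bone, which is exactly why it reads as "flapping" rather than true phoneme shapes.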
     
  13. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks Ukvedys - glad you like it.

    Thanks! That was actually the previous system I was using, and I created this to replace it xD It looks OK on the kind of old-school low-poly models Half-Life 1 used, but often looks pretty strange on more modern characters.

    I'm aiming more for high quality lip syncing with this, though you're right about it being quite a lot of work, especially if you have a lot of dialogue. The automatic syncing I'm adding in will hopefully help with that though. :)
     
    Last edited: Mar 21, 2015
  14. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,065
    I believe there is actually another plugin currently in the Asset Store that does something similar to what you are describing. (plays animations based on peaks and valleys of an audio file, mainly opening and closing a mouth) You could go that route if you wanted to.

    Personally, I'm leaning more toward the combined blend-shape approach. The system you're thinking of would be faster, and considerably more automatic. But it also wouldn't result in nearly as satisfying results when it came to the animations. Who knows, maybe the flapping-jaw approach is exactly what you need for your game. Maybe you are specifically attempting to emulate that style. (more like puppetry than full-on lip sync)

    The advantage of a blend-shape focused approach to the problem is a much more nuanced performance. Half-Life 1 went with the flapping jaw approach. But Half-Life 2 used a system very similar to what Rtyper is doing with his LipSync script. (blend shapes combined and animated together for different results) And Half-Life 2's lip-syncing solution is still considered to be one of the best in the industry.
     
  15. IFL

    IFL

    Joined:
    Apr 13, 2013
    Posts:
    408
    I'll probably get this tool, but it would be awesome to have speech to text built in to the editor.
     
  16. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks for the interest. Speech to text is something I've looked at quite a lot with regards to this. I agree it'd be a very useful addition to the tool, but there's a surprising lack of offline, cross-platform APIs for it. I do have some ideas, but I can't promise 100% that it will be included.
     
    IFL likes this.
  17. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Here's a quick preview of how the emotion markers work in the new alpha. Still working on other features for it, but I expect it should be finished by the end of the week.
    upload_2015-3-23_13-12-5.png
     
  18. IFL

    IFL

    Joined:
    Apr 13, 2013
    Posts:
    408
    That's really handy. From the short time that I've used LipSync, I really like it.

    It is a bit of a challenge to create all of the phoneme markers, but that's not unusual in any lip syncing software. I'd like to be able to scrub through the track without multiple unstoppable plays, but that's not a show stopper.

    Also, if you try to move a phoneme marker after you've stopped the clip, it doesn't show a change until the clip is played again. And that happens with freshly created markers when the clip is stopped too. Again, it's not a show stopper and can easily be fixed by quickly double tapping the play button to activate the clip.

    All that said, I'm very happy with the current state and future direction of this tool. Thanks for making it!
     
  19. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,065
    Feedback like this is very much appreciated. When developing software, it isn't always possible for the designer to catch every little quirk that crops up. Nothing is better for testing than having the software in the hands of the end-user. Somehow, end-users always seem to find every permutation and use-case that a piece of software can possibly be put through.
     
    Last edited: Mar 23, 2015
  20. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks for all the feedback IFL! As Richard said, users are almost always better at testing software than the developer is - I suppose we probably subconsciously avoid things that might not work when developing :p

    Yeah, this is probably the biggest thing I want to change about it - it can be very time consuming right now.

    This is interesting - are you using Unity 5? I've found that in Unity 5, scrubbing starts the entire clip playing, and I still haven't found a solution for it. In 4 it correctly plays only a small portion of the clip.

    Yes, I'd come across this - it's fixed in alpha 0.2.

    Thank you - feedback like this really is incredibly useful!
     
  21. twobob

    twobob

    Joined:
    Jun 28, 2014
    Posts:
    1,757
    I did something just like this - by hand - last year, with a visual audio editor. Your interface is very similar to what I came up with :) Will keep an eye on this then. Looks like fun. Nice one
     
  22. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks Twobob :)

    Just a quick progress report: the Unity 5 scrubbing bug has been fixed, so alpha 0.2 will have full Unity 5 support.

    I've also completely re-written the runtime component of the package, it's much more stable now, far better documented and produces better end-results in some situations. It also has support for the new emotion poses built in. I'll be creating a new tutorial video to show how to use these when it's finished. I know I'm slightly behind my original estimate, but I'm looking to release this by the end of this week now :p

    I'm interested in whether people think it's in the right category in the Asset Store too. I put it in Editor Extensions/Audio originally, but I've noticed other similar packages in Editor Extensions/Animation. Do you think I should move it or does it not really matter?

    Thanks again to everyone who's bought it!
     
    twobob likes this.
  23. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,065
    Well, it's hard to say. I put my Asset in the Editor Extensions/Animation category. But I was very focused on the animation aspects of the plug-in, as opposed to the audio elements, so that just made sense to me. Your asset has much more robust audio support, down to real-time scrubbing through the audio while assigning animation keys. So there is some justification in putting it in the Editor Extensions/Audio section. And you aren't the only lip-sync solution in that section, either.

    Personally, I think switching it over to the Animations category would be a good idea, as that is ultimately what your asset is producing. While audio is an important element of your asset, it isn't the ultimate goal of what your asset creates. That would be animation. That is just my opinion though. If you think more people would go to the Audio category when looking for lip-sync related assets, you should keep it there.
     
  24. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Yeah, that was what I was thinking. I think you're right - I'll probably move it with the next release.
     
  25. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Sorry for the double post, but I wanted to show off this new demo - It's a side-by-side comparison of the old LipSync runtime, and the new one, rewritten from scratch for alpha 0.2, using the Lincoln demo. The two heads are using the exact same LipSyncData file, and the same settings where they exist on both the old and new components - I think the difference is pretty considerable!


    Cheers!
     
  26. twobob

    twobob

    Joined:
    Jun 28, 2014
    Posts:
    1,757
    That is excellent. truly.

    Would I have to set up all the blendshapes manually?
     
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks! The blendshapes have to be set up on the mesh beforehand, but how many you create is up to you - at minimum, one for each phoneme (or fewer, if you can combine them to get the right effect).

    The actual phonemes for the audio clip do also have to be set up manually at the moment - though this is done inside Unity in the custom editor, and generally doesn't take too much time. It is my plan to introduce at least a basic level of automatic phoneme set-up before I release 1.0 though. :)
     
  28. twobob

    twobob

    Joined:
    Jun 28, 2014
    Posts:
    1,757
    I have created a full list of blend-shapes in the past for exactly such a task (as I intimated before). How many phonemes do you require? *reads the top post again* Looks like 9. Seems quite low, which is obviously a good thing, especially given the intricacy of higher-density solutions.
    Thanks for the feedback, watching with interest.
     
  29. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    No problem - Yes it is 9. They're taken from the Preston Blair phoneme series. I believe he was an animator at Disney, and this is the basic set of mouth shapes that he came up with to recreate most sounds accurately.
    It's not 100% perfect, as there are some sounds (Th and R for example) that fall under the same phoneme in this set but require slightly different tongue positions in real life, but it's almost always close enough for animation!
     
    twobob likes this.
  30. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,065
    This is the lowest necessary for a standard Preston Blair phoneme set-up, at least with blend shapes. The actual Preston Blair phoneme set has 10 shapes, but one of them is the rest shape. For a standard blend shape model in Unity, the model's default state is usually considered the rest state.
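    The nine-plus-rest set being discussed can be written down as a lookup table. The grouping below is the commonly cited Preston Blair series sketched in Python; exact groupings vary slightly between sources, so treat it as illustrative:

```python
# Commonly cited Preston Blair phoneme groups: 9 mouth shapes,
# plus a "rest" pose (in Unity, usually the mesh's default state).
PRESTON_BLAIR = {
    "AI":  ["A", "I"],
    "O":   ["O"],
    "E":   ["E"],
    "U":   ["U"],
    "etc": ["C", "D", "G", "K", "N", "R", "S", "TH", "Y", "Z"],
    "FV":  ["F", "V"],
    "L":   ["L"],
    "MBP": ["M", "B", "P"],
    "WQ":  ["W", "Q"],
}

def shape_for(sound):
    """Return the mouth shape for a given sound, or 'rest' if none matches."""
    for shape, sounds in PRESTON_BLAIR.items():
        if sound.upper() in sounds:
            return shape
    return "rest"
```

    This is also why Th and R look slightly off, as mentioned above: they share the catch-all group despite needing different tongue positions in real life.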

    You're right, it's not perfect. But it's fast, efficient, easier to make animations for, and the end result looks quite good. Also, congratulations on the upgrade so far! The upgraded performance on the demo is quite good. It looks a lot more convincing now.
     
    twobob likes this.
  31. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Here's probably the last update on here before alpha 0.2 goes up. I delayed myself a bit longer (again!) because I wasn't happy with how the emotion markers worked. I've got a much better system now that I'm happy with, so look out for the update in the Asset Store in the next few days.

    Here's what the previous emotion marker editing looked like:
    Lipsync screenshot5.png

    The handles closer to the middle of each marker were for setting the blend time for each emotion, but this didn't give the user much control over how it worked: all emotions had to blend back to neutral before blending into the next one, and the editor just looked pretty ugly too!

    So this is the new, improved version: you can insert markers the same as before, and resize/move them, but now multiple markers can be snapped together into a single one and blended together, like this: Lipsync_new_emotion.gif
     
    wetcircuit likes this.
  32. micuccio

    micuccio

    Joined:
    Jan 26, 2014
    Posts:
    110
    Hi there,

    Very interesting software! Before buying it (today, or as soon as I get back home) I would like to ask you a question.

    I would like to know if it is possible to integrate this software with the expressions obtained with Mixamo Faceplus.
    I ask because your software supports Mixamo Fuse characters.

    Alternatively, my idea would be to:

    1) Record the voice and the expressions with Mixamo Face Plus (1st animation clip)
    2) Use the sound to get an "emotionless" clip (2nd animation clip)

    3) Use Mecanim to blend the two effects

    Could you please tell me if one of the two options is possible?

    Thanks in advance,
     
  33. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi Micuccio, thanks for the interest!
    I'm not entirely sure, to be honest. I believe Faceplus creates an animation file? LipSync uses a custom file to store the data about phonemes and emotions, then plays them back with a runtime component - this allows you to easily play back the same line of dialogue on any other character in your game with no extra work. Because of this, it may be possible to play an animation from Faceplus on a character at the same time as LipSync, but I haven't tested it, so I can't guarantee that it will work.

    I believe LipSync will be able to override the blendshapes in the animation with a fairly small change to the code, but I think you may get better results by just using one or the other.
     
  34. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Alpha 0.2 is now out. Be sure to get the update from your downloads page in the asset store if you've already bought it!

    This update includes:
    • Emotion Markers.
    • Much better Unity 5 compatibility.
    • Completely re-written runtime. (See the demo for a comparison)
    • Many editor improvements such as:
      • Phoneme marker filtering.
      • Cleaner UI graphics.
      • Warnings when closing to prevent data loss.
      • Component editor now uses all assigned mesh renderers.
      • Fixed several editor bugs.
    • Probably more things I can't remember!
    This update includes some refactoring, so you may need to remove the LipSync folder from your project before importing the new version.
     
  35. djkr

    djkr

    Joined:
    Apr 24, 2013
    Posts:
    2
  36. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks djkr,
    Yes, I'm sure LipSync is compatible with Cinema Mo Cap - Mo Cap creates standard Unity animations using only the bones in a mesh for whole-body movements. LipSync uses only blend shapes, so the two can run side-by-side without interfering with each other. As long as your mesh has blend shapes (or you have the wherewithal to create them) LipSync will work just fine for you.
     
  37. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hello! Quick update:

    It's a pretty minor update, it fixes a few bugs (mostly in Unity 5) and adds a Play On Awake checkbox to the LipSync component, so you can have characters talking at the start of a scene without any coding.

    Cheers!
     
  38. Julinoleum

    Julinoleum

    Joined:
    Aug 31, 2011
    Posts:
    36
    It would be nice to have a video showing the result. It's nice to have tutorials but it's also nice to see the final result.
     
  39. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Here you go: Playing Clips Tutorial - it contains the final result at the end :)

    I'll put together a more complete video showing off features for the next update, which will also finally contain the automatic lipsync system. It will only work on Windows at first, but I plan on adding Mac and Linux versions later on.
     
  40. nuverian

    nuverian

    Joined:
    Oct 3, 2011
    Posts:
    1,995
    Hey Rhys,

    I love your tool. You have done a great job.
    I've also gone ahead and created a couple of tasks for using LipSync from NodeCanvas if anyone is interested, which you can download here. If you think we can add more tasks, let me know :)
    I am also looking at integrating LipSync in the Dialogue Trees part, which is of course a great match ;)

    Cheers!
     
  41. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks a lot man - NodeCanvas is awesome too!

    You could add Pause and Resume actions too if you wanted (though looking at my code just now, I realised I never actually put the logic for those methods back in after re-writing the LipSync class, oops! I'll fix that ASAP :p) .
    Looking forward to seeing it integrated with dialogue trees too, I agree they'll go well together :)

    Thanks again!
     
  42. nuverian

    nuverian

    Joined:
    Oct 3, 2011
    Posts:
    1,995
    Thanks as well! :)

    Yeah, I've noticed that they were not implemented and left them out. Will add those two as soon as you get them in there :)
    I will try and get the dialogue integration finished soon as well!

    Cheers and thanks Rhys!
     
  43. kilik128

    kilik128

    Joined:
    Jul 15, 2013
    Posts:
    759
    Hi, two questions please:

    Are characters exported from Mixamo Fuse auto-rigged?

    Is it possible to get real-time lipsyncing on mobile?
     
  44. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Hi Kilik,

    I'm not completely sure what you mean, sorry? If you tick "Enable Facial Blendshapes" in the autorigger, then you can use those blendshapes with LipSync (this is what I've used in all the tutorials).

    Yes, it is - I haven't directly optimized it for mobile yet, but most phones shouldn't have any problem with a few characters animating using LipSync.

    Thanks!
     
  45. kilik128

    kilik128

    Joined:
    Jul 15, 2013
    Posts:
    759
    Nice, so we could use the phone's microphone to make character animation.

    I meant auto-setup of facial blendshapes for Mixamo Fuse, or a sample.
     
  46. Zyxil

    Zyxil

    Joined:
    Nov 23, 2009
    Posts:
    111
    Looks excellent.

    Can you ship a standard Fuse character blendshapes asset? Then, with the auto lipsync it'll be no hassle at all to tweak and go!
     
  47. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Thanks Zyxil!

    I think you're both covering the same sort of point here - that's something I've been thinking about doing, I've got a little list of features for the next version, and I think a basic preset system would be a good idea. I'd ship a few presets for Mixamo Fuse (realistic, cartoonish/over-the-top) in with the extension and allow devs to save and load their own easily.

    I've been having some trouble with getting the auto-sync to work consistently across different machines, so I think I will be releasing a kind of private beta of it first to get some feedback on how well it works for different people - not totally sure how I'll do that yet so watch this space!
     
  48. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Had a busy day of development yesterday, and managed to add a whole bunch of new features making Beta 0.3 almost complete!

    Beta 0.3 will launch on July 1st, 2015.

    As this is the first version in beta, I will be adding the first small price increase, from $15 to $20. Of course, anyone who has already bought it, or buys it before the update launches will get all future versions free of charge. I plan for the final 1.0 release to cost $35, so now's your chance to pick it up slightly cheaper!

    New features in Beta 0.3:

    • AutoSync - Automatic phoneme detection in just 2 clicks! (Windows Only)
    • Presets - Create and load character presets to easily set up a new character's blendshapes.
    • XML Support - The LipSync component can now load from an XML format as an alternative to the LipSyncData file, and the clip editor has support for exporting XML files too.
    • Bug Fixes - Including fixing the Pause() and Resume() functions! (oops)
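    To illustrate the kind of data an XML export like the one listed above could carry, here is a purely hypothetical fragment - every element and attribute name below is invented for illustration, not LipSync's actual schema:

```xml
<!-- Hypothetical structure: phoneme markers timed in seconds,
     plus emotion spans with blend-in/out durations. -->
<lipSyncData clip="hello.wav" length="1.2">
  <phonemes>
    <marker time="0.10" phoneme="E" />
    <marker time="0.35" phoneme="L" />
    <marker time="0.60" phoneme="O" />
  </phonemes>
  <emotions>
    <marker start="0.0" end="1.2" emotion="Happy" blendIn="0.2" blendOut="0.2" />
  </emotions>
</lipSyncData>
```

    The appeal of a text format like this is that timing data can be generated or edited by external tools, then loaded by the runtime component as an alternative to the binary LipSyncData file.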
    With these new features, I think LipSync has probably the fastest workflow for adding lip sync to characters exported from Mixamo Fuse - using the presets and AutoSync, you can import a character model and an AudioClip and have them talking in less than a minute with zero scripting.

    I will be putting out some new video demos/tutorials in the next few days to show it all off!
     
    Last edited: Jan 26, 2016
    nuverian likes this.
  49. wetcircuit

    wetcircuit

    Joined:
    Jul 17, 2012
    Posts:
    843
    Well, it's been sitting in my cart over a week waiting for me to pull the trigger.... Saving $5 is as good an excuse as any...

    Since I'm on a Mac I have the inevitable question: will you still pursue an AutoSync function for OS X?
     
  50. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    322
    Haha, I'm sure you won't regret it! Yes, I definitely plan on adding AutoSync for OS X; I just need to find a good system for it, as it currently uses the Microsoft Speech API on Windows. It won't be in the 0.3 update, but hopefully it won't be too far behind :)