
[RELEASED] LipSync Pro and Eye Controller - Lipsyncing and Facial Animation Tools

Discussion in 'Assets and Asset Store' started by Rtyper, Mar 11, 2015.

  1. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,080
    is there a way you can feed in a phoneme string (generated from speech to text) for mobile realtime lipsync?
     
  2. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I'm not completely sure what you mean, sorry. When you say realtime, are you talking about the actual rendering/playback of the animation? If so, yes, that's realtime. AutoSync should be able to handle text-to-speech fairly well, and you can adjust the phonemes yourself if it isn't quite right.

    What LipSync can't do is take audio at runtime (from text-to-speech or otherwise) and create the lipsync animation there. The audio needs to be processed in the editor beforehand.
     
  3. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,080
    Can you take text in realtime to process lipsync?
     
  4. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    Hello,

    I purchased this a week ago and I must say congrats to the developer, it is as good as it sounds! I am also very happy to hear that the developer is committed to further updating the product.

    One thing that I would really like to be included in a future update is a callback method in the api. Currently, I can't think of a way to tell when a clip will be over (so a next one may start) - other than call a WaitForSeconds function... Any other ideas?
     
  5. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    No, sorry - this product is only for preprocessed audio. It doesn't do anything other than play back animations in real time.

    Thanks a lot! It's funny you should mention callbacks, actually - that's one of the features I implemented just yesterday.

    upload_2015-9-10_18-41-46.png
    If you really need it urgently, it's not too difficult to add yourself if you don't mind digging into the code a bit! Or if you want, email us at contact@rogodigital.com and I'll sort you out with some replacement files to add the callback in.
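    For anyone planning to use the callback once it ships, here's a rough sketch of chaining clips with it. All member names (onFinishedPlaying, LipSyncData, the Play overload) are assumptions based on the changelist, not confirmed API - check LipSync.cs in 0.4 for the real names:

    ```csharp
    using UnityEngine;
    using RogoDigital.Lipsync; // assumed namespace

    // Hypothetical sketch: plays a queue of LipSync clips back-to-back
    // using the new OnFinishedPlaying callback instead of WaitForSeconds.
    public class ClipQueue : MonoBehaviour
    {
        public LipSync lipSync;
        public LipSyncData[] clips;
        private int index;

        void Start()
        {
            // Assumed: the callback is exposed as a UnityEvent.
            lipSync.onFinishedPlaying.AddListener(PlayNext);
            PlayNext();
        }

        void PlayNext()
        {
            if (index < clips.Length)
                lipSync.Play(clips[index++]);
        }
    }
    ```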

    We're nearly done on the 0.4 update now, which is possibly the largest yet - it contains several new features, improvements to the editor and bugfixes. I'll put a more in-depth rundown of everything in the update in a couple of days, and it should be available within the next week or so.

    Cheers!
     
  6. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    Splendid!

    I think I'll wait for the new release (or will contact you if the 0.4 release delays longer than expected ;) )
     
  7. Molla

    Molla

    Joined:
    May 7, 2014
    Posts:
    7
    Is there any way to get these animations to work with Adventure Creator?
     
  8. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Not directly right now, but watch this space - there may be some progress on that in the very near future.
     
  9. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Good news Molla, and anyone else who was wondering about Adventure Creator support: Adventure Creator now has built-in support for characters using LipSync!

    Many thanks to Icebox for doing the work on this one (I can't take any credit), so if you're wanting to use the two assets together, just upgrade to the latest version of Adventure Creator. The updated manual also includes info on how to use LipSync on Adventure Creator characters in the lipsynching section.
     
  10. jaelove

    jaelove

    Joined:
    Jul 5, 2012
    Posts:
    302
    Would love to see bone-based facial rigs supported. It would be an instant purchase for me.
     
  11. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Yep, that's coming in this next update!


    In other news, I have a rundown of all the main features, and a big request to make! I need a new demo model!

    The Lincoln model is fine for demoing blend shapes and Fuse integration, but I need to put together a demo scene for bone-based animation, and I'm not a great character modeller - and I'm even worse at rigging! If anyone has a model with a bone-based facial rig that they'd be happy for me to include in the package, or - even better - would be up for creating one (you'd be paid for creating one, and in both cases would get credit for the model), then please email me at contact@rogodigital.com. The sooner I can get a demo made the better, as I'd prefer not to release the update without a new demo scene!


    On to the new features!
    • Bone transforms in poses
      You can now mix "Bone shapes" alongside or instead of blend shapes in your poses. These are set in the editor just like blend shapes.
      bone transform.PNG

    • Text-based Autosync option
      Autosync can now be used in either textless (as with 0.3) or text-based mode, using a transcript either directly typed in, or from a text file.
      text-based autosync.PNG

    • Zoom and Scroll in the clip editor
      The clip editor can now scale its viewport to zoom in on a clip, to make long audio files easier to work with.

    • Pose guides in the scene view
      The scene view now shows an illustration of what each phoneme should roughly look like when selected in the inspector, to help with picking the right combination of blend shapes.
      pose guide.PNG
    Those are just some of the new features, this update is the biggest one yet, by far! I've put the full changelist at the bottom of this post.

    And thanks to everyone who's bought it so far - it's thanks to you that I'm able to keep working on it!


    Changelist:
    - Added support for bone-based facial rigs
    - Allowed AudioSource to be on a separate GameObject.
    - Added "Rest" Phoneme. (**Select existing characters in the editor to update**)
    - Added optional audio delay parameter to the Play function. (can improve results from AutoSync)
    - Added OnFinishedPlaying Callback.
    - Made LipSyncData files with no phonemes playable.
    - Added defaultDelay variable for play on awake.
    - Made phonemeBlendState and emotionBlendState variables accessible to other scripts. (For extending LipSync)
    - Added phoneme pose guides in the scene view.
    - Added new component icons
    - Tweaked editor styling and fixed alignments
    - Fixed errors with resetting blendshapes when deleting/changing phoneme poses
    - Made Hide Wireframe button affect extra renderers
    - Added animation to editor foldouts

    - Made many improvements to the EyeController script.
    - Added branding for EyeController and moved to a more prominent location (from Examples/Scripts to Components)

    - Added zoom/time scaling feature to improve ease of use on longer clips.
    - Added text-based AutoSync option.
    - Made Clip Editor use the entire size of its window.
    - Added timestamp markers below timeline.
    - Changed window minimum size, adjusted layout to work better at smaller sizes.
    - Fixed potential memory leak when regenerating waveform images.

    - Double clicking LipSyncData files in the project view will now open the Clip Editor.
    - Improved the default animation options on the LipSync component.
     
  12. chiapet1021

    chiapet1021

    Joined:
    Jun 5, 2013
    Posts:
    605
    Hey @Rtyper, something to potentially put on your radar. A new avatar/character system has recently launched: http://forum.unity3d.com/threads/released-morph-character-system-mcs-male-and-female.355675/

    The models are sharp-looking with lots of potential for customization (even more so if/when custom content tools are released). It looks like there are phoneme blendshapes in there as well. I would love to see LipSync integrate with this system at some point in the nearish future. :)

    Keep up the awesome work!
     
  13. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    @chiapet1021
    Thanks! That looks interesting, I'll definitely keep an eye on it.
    Part of our plan for version 0.5 of LipSync is a new BlendSystem base class, which will abstract the actual control of the mesh out of the LipSync component. LipSync will ship with a BlendSystem for blend shapes built in, but the idea is that it should be easy enough to write other systems for things like UMA, Megafiers or Morph3D that will just plug in to LipSync without any modifications.
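    As a rough illustration of the design described above (all class and method names here are hypothetical - the real 0.5 base class may look different):

    ```csharp
    using UnityEngine;

    // Hypothetical sketch of the BlendSystem idea: LipSync drives named
    // "blendables" by weight, and a subclass decides what a blendable
    // actually is (blend shapes, UMA DNA values, Morph3D morphs, bones...).
    public abstract class BlendSystem : MonoBehaviour
    {
        // Set blendable 'index' to 'weight' (0-100, like a blend shape).
        public abstract void SetBlendableValue(int index, float weight);
    }

    // The built-in implementation would just wrap a SkinnedMeshRenderer.
    public class BlendShapeBlendSystem : BlendSystem
    {
        public SkinnedMeshRenderer meshRenderer;

        public override void SetBlendableValue(int index, float weight)
        {
            meshRenderer.SetBlendShapeWeight(index, weight);
        }
    }
    ```

    With something like this in place, a Morph3D integration would only need to subclass BlendSystem rather than modify LipSync itself.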
     
    chiapet1021 likes this.
  14. chiapet1021

    chiapet1021

    Joined:
    Jun 5, 2013
    Posts:
    605
    Oh nice! That sounds like a more flexible and elegant solution. Looking forward to that update. The Morph3D folks seem very friendly and responsive as well, so they might be interested in helping directly with the integration, especially once you have the BlendSystem in place.
     
  15. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    That's a great set of new features! I can't wait for the new version.

    As for your request for a new demo model, why not export a Mixamo character from the (free) Mixamo Fuse with all blendshapes included? After all, LipSync and Mixamo work very well together.
     
  16. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Thanks!

    They do, which is why I'm using a Fuse model for the blendshape demo (the Lincoln model) - but Fuse models only have blendshapes for facial animation, and I need a model for demoing the new bone-based animation in 0.4.
     
  17. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    Which brings me to my next question: why not use skeleton bone transforms (I assume that's what "bone shapes" means) that will be added in the clip editor just like phonemes or emotions? That would help a lot in synchronizing gestures with speech - imagine for example a character saying "over there" and the character's hand pointing to the direction at the same time.
     
  18. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    @immFX Yeah, BoneShapes are just a way of representing a bone transform, and the position and rotation it should end up at, and blending to it with a percentage like a blend shape.

    We've got gestures on our list of features already, though in the form of animations to trigger when a marker is reached in the clip. Using bone transforms for this probably won't happen, as you run into issues with how to blend between them in a realistic way (pointing for example, a simple lerp would cause the arm to distort in weird ways) without adding in IK or other complex systems.
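    To illustrate the blending being described: a bone shape is essentially an interpolation from the bone's neutral pose to a stored target pose by a weight, the same way a blend shape works. A minimal sketch (names are illustrative, not the shipped API):

    ```csharp
    using UnityEngine;

    // Illustrative sketch of blending one bone toward a stored pose.
    // For small jaw/lip bones this looks fine; for a whole arm, linearly
    // interpolating positions would distort the limb unless IK corrects it -
    // which is why gestures will use triggered animations instead.
    [System.Serializable]
    public class BoneShape
    {
        public Transform bone;
        public Vector3 targetLocalPosition;
        public Quaternion targetLocalRotation;

        private Vector3 neutralPosition;
        private Quaternion neutralRotation;

        public void StoreNeutral()
        {
            neutralPosition = bone.localPosition;
            neutralRotation = bone.localRotation;
        }

        // weight in [0, 1], equivalent to a blend shape percentage / 100
        public void Apply(float weight)
        {
            bone.localPosition = Vector3.Lerp(neutralPosition, targetLocalPosition, weight);
            bone.localRotation = Quaternion.Slerp(neutralRotation, targetLocalRotation, weight);
        }
    }
    ```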

    But we do now have a model for the demo, thanks to @jaelove, who has kindly allowed us to use part of a model from his game. 0.4 will be out before the end of this week now, sorry for the delays!
     
  19. mbbmbbmm

    mbbmbbmm

    Joined:
    Dec 28, 2013
    Posts:
    59
    Hello! This looks like a brilliant extension for unity. However I need some help:
    I am trying to get LipSync to run with the Oculus Audio SDK, but that does not seem to be working. I think the problem might be that the OSP Audio Source as well as LipSync want to play the audio file... Is there a way to trigger just the blend shapes animation without syncing the audio clip, so I could do that manually?
    Also, I don't know if it's related, but I can only see the blend shape poses when setting them up, they are not played back in play mode. Thanks!
     
  20. mbbmbbmm

    mbbmbbmm

    Joined:
    Dec 28, 2013
    Posts:
    59
    Ok, I've been thinking about it and I guess I could just make another object with an AudioSource with volume on zero and remote-control the Skinned Mesh Renderer from there. That should work, right?
     
  21. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Thanks!

    Yeah, you could do it how you said in your other post, but if you don't mind going into the code, the simplest way would be to comment out the line in LipSync.cs that plays the audio. I'm not at my PC right now, so I can't get you the line number, but open that script, find the Play() function, and comment out the audio.Play() line inside it.
     
  22. mbbmbbmm

    mbbmbbmm

    Joined:
    Dec 28, 2013
    Posts:
    59
    Yup, found it! I think I was in artist mode yesterday ;-D
    Thank you!
     
  23. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    Any update on the ETA of the new version?
     
  24. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Yes! Sorry, I know I said it would be the week before last, but I've been super busy at work and trying to sort out a pretty major bug with the bone animations. I think I've worked out what's causing the bug though, so I'm hoping to get the update submitted to the store tomorrow.

    Again, sorry to everyone about the delay on this update, I promise it will be worth the wait!
     
  25. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Just a quick post to keep everyone in the loop here, I did eventually manage to fix that issue with phoneme blending using bones, so that works completely as expected now. The only other thing holding the update back is getting bone-based emotion blending to work properly on top of that.

    I'm in two minds about whether to release it as-is and just disable emotion playback with bones until the next update, or to delay this one further. I think I'll go with the former, unless you guys say otherwise!

    Thanks again for being so patient!
     
  26. jaelove

    jaelove

    Joined:
    Jul 5, 2012
    Posts:
    302
    I'd say release it as is and release the update later
     
    Rtyper likes this.
  27. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    LipSync Beta 0.4 has now been submitted to the Asset Store.
    So after many delays, 0.4 is finally in the review process. Big apology to everyone who has been waiting for it for the last 2 months, and a massive thank you for being patient too! For a list of new features, check a couple of posts up.

    There are a couple of things missing from this that will come in smaller updates over the next month, including the completed support for bone-based animation, the ability to set emotions outside of animations and much more. I'm planning to move to more regular, smaller updates from now on! Once we've implemented all the planned features on the list, that will be the 1.0 release, and then there will be some major performance improvements - more details on this a bit later.

    This update has a few more requirements for updating than previously, as there was some refactoring. Make sure you read the update guide if you plan on importing it into an existing project using LipSync.
     
    immFX and chiapet1021 like this.
  28. wetcircuit

    wetcircuit

    Joined:
    Jul 17, 2012
    Posts:
    1,409
    Any news on AutoSync for osX?
     
  29. f1chris

    f1chris

    Joined:
    Sep 21, 2013
    Posts:
    335
    +1
     
  30. JakeT

    JakeT

    Joined:
    Nov 14, 2010
    Posts:
    34
    Any update on AutoSync for Mac OSX? I will purchase the second this is out.

    Also, micuccio asked about integrating with Mixamo FacePlus. I would be interested in exactly what he asked about, i.e. using FacePlus to record the animation clip video/audio, then using LipSync’s phoneme animation blended with FacePlus’ expressions. Has anyone been successful with this?
     
  31. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,670
    Just dropping a quick note that the Dialogue System for Unity 1.5.6.1 is now live on the Asset Store with built-in support for LipSync. The integration was tested with LipSync 0.31. As soon as 0.4 is on the Store, I'll make any necessary updates to the integration package.
     
    theANMATOR2b and trasher258 like this.
  32. immFX

    immFX

    Joined:
    Mar 20, 2010
    Posts:
    110
    Congrats on the release!

    Can't wait to get my hands dirty with it! :cool:
     
  33. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Awesome, thanks! There shouldn't be any problem with LipSync 0.4 and Dialogue System, the scripting side of things is more or less the same as 0.31. If you like, I'll send you the .unitypackage to test out beforehand.

    Sorry for the delay on this front! I haven't yet got a proper answer for this, though (with some external help) I've been looking into the CMU Sphinx library. It's less lightweight than SAPI, but it's in active development, it's cross-platform and it's much more powerful. We don't have anything concrete worked out just yet, but if this works out, we'll have a consistent version of AutoSync for Windows, OSX and the new Linux editor.

    On that note, I also have a number of improvements planned for AutoSync itself, which should improve the quality of the files it generates, even with the current API.

    That's coming in the next patch (0.41), which won't be far off, I promise! I'm gonna be putting out more patches more often from now on.

    Thanks!
     
  34. Pandur1982

    Pandur1982

    Joined:
    Jun 16, 2015
    Posts:
    275
    One question: will you bring PlayMaker integration in the next update?
     
  35. JakeT

    JakeT

    Joined:
    Nov 14, 2010
    Posts:
    34
    OK, I'm excited about Dialogue System for Unity integration - I'm working on incorporating that right now.

    Please keep me posted on MacOS and FacePlus integration. Those two features make this perfect for me.
     
  36. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    LipSync beta 0.4 is now live in the Asset Store!

    Get the update from your downloads page if you already own it. Make sure you read the update guide if you plan on importing it into an existing project using LipSync, due to some refactoring done in this update.

    Here's the features list again in case you missed it:




      • Bone transforms in poses
        You can now mix "Bone shapes" alongside or instead of blend shapes in your phoneme poses. (Emotion pose support coming soon) These are set in the editor just like blend shapes.
      • Text-based Autosync option
        Autosync can now be used in either textless (as with 0.3) or text-based mode, using a transcript either directly typed in, or from a text file.
      • Zoom and Scroll in the clip editor
        The clip editor can now scale its viewport to zoom in on a clip, to make long audio files easier to work with.
      • Pose guides in the scene view
        The scene view now shows an illustration of what each phoneme should roughly look like when selected in the inspector, to help with picking the right combination of blend shapes.
    Those are just some of the new features, this update is the biggest one yet, by far! I've put the full changelist at the bottom of this post.

    And thanks to everyone who's bought it so far - it's thanks to you that I'm able to keep working on it!


    Changelist:
    - Added support for bone-based facial rigs
    - Allowed AudioSource to be on a separate GameObject.
    - Added "Rest" Phoneme. (**Select existing characters in the editor to update**)
    - Added optional audio delay parameter to the Play function. (can improve results from AutoSync)
    - Added OnFinishedPlaying Callback.
    - Made LipSyncData files with no phonemes playable.
    - Added defaultDelay variable for play on awake.
    - Made phonemeBlendState and emotionBlendState variables accessible to other scripts. (For extending LipSync)
    - Added phoneme pose guides in the scene view.
    - Added new component icons
    - Tweaked editor styling and fixed alignments
    - Fixed errors with resetting blendshapes when deleting/changing phoneme poses
    - Made Hide Wireframe button affect extra renderers
    - Added animation to editor foldouts

    - Made many improvements to the EyeController script.
    - Added branding for EyeController and moved to a more prominent location (from Examples/Scripts to Components)

    - Added zoom/time scaling feature to improve ease of use on longer clips.
    - Added text-based AutoSync option.
    - Made Clip Editor use the entire size of its window.
    - Added timestamp markers below timeline.
    - Changed window minimum size, adjusted layout to work better at smaller sizes.
    - Fixed potential memory leak when regenerating waveform images.

    - Double clicking LipSyncData files in the project view will now open the Clip Editor.
    - Improved the default animation options on the LipSync component.

    This update comes with the final price increase before 1.0, to $25. As always, updates through 1.x will be free when you own it, regardless of when you buy. The final price for the asset (based on user feedback) will be $35, and won't go up any further once 1.0 is released.

    Thanks again to everyone for all the support!
     
    Last edited: Mar 11, 2016
    nuverian likes this.
  37. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Sorry, missed your posts back there!

    Definitely soon now. It may not be in 0.41, but I'm going to update far more often from now on, so don't worry, there won't be long to wait!

    Will do. I can't make any promises in terms of when they'll happen, because I don't know for sure exactly how they'll work just yet, but I'll post any progress here.
     
  38. ikazrima

    ikazrima

    Joined:
    Feb 11, 2014
    Posts:
    320
    I want to check out the web player demo, but it fails to load halfway through every time, in both IE and Firefox.
     
  39. silentslack

    silentslack

    Joined:
    Apr 5, 2013
    Posts:
    391
    Hi,

    Have just purchased and it does look like a really nice plugin. However, I seem to get an error when attempting an AutoSync:

    SAPI error: Phoneme label not found in phoneme mapper

    Any idea what this is and how to solve it?

    Thanks
     
  40. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Seems to work fine for me! Could you try again?

    Thanks for your purchase! Unfortunately that error comes from the SAPI application that AutoSync uses. This is actually only the second time I've seen that error, and the first was on one of my own PCs! I was hoping it wouldn't show up elsewhere.
    I still haven't found a solution to it - SAPI seems quite unreliable at times, it seems to produce different results on different machines. I've been saying for a while that I plan to replace it, and we're working on a new solution using CMU Sphinx instead that will be more consistent and also cross-platform.

    If you can't get by without autosync until then, I'd advise trying it on a different PC if you have access to one - if there's still no luck, send me an email at contact@rogodigital.com and we'll see if we can work something else out.
     
  41. silentslack

    silentslack

    Joined:
    Apr 5, 2013
    Posts:
    391
    Oh no that's not good! I work at home and this is my only PC but perhaps I could sort something out - not really ideal. Is there no way round that? Any idea what is causing the problem?

    The auto feature was definitely a feature I would want to make use of as I'm going to have a lot of lines and know the pain of doing this stuff completely by hand.
     
  42. silentslack

    silentslack

    Joined:
    Apr 5, 2013
    Posts:
    391
    Oh, I always seem to generate this error in the editor, perhaps it's related?

    System.IO.IOException: E:/Unity/PrivateEye/PrivateEyeUnity/Assets/Gizmos already exists.
    at System.IO.Directory.Move (System.String sourceDirName, System.String destDirName) [0x000b2] in /Users/builduser/buildslave/mono-runtime-and-classlibs/build/mcs/class/corlib/System.IO/Directory.cs:396
    at RogoDigital.ImportControl..cctor () [0x00019] in E:\Unity\PrivateEye\PrivateEyeUnity\Assets\Rogo Digital\Shared\Editor\ImportControl.cs:12
    UnityEditor.EditorAssemblies:SetLoadedEditorAssemblies(Assembly[])
     
  43. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    I'm afraid not - the SAPI application is very old and isn't actively supported anymore. With hindsight, it obviously wasn't a great idea to include it in a commercial plugin, but LipSync wasn't very popular at the time and I didn't think too much about it! Which is one of the main reasons we're replacing it.

    Send me an email (with your Asset Store invoice #) if you're interested - I'd be happy to run some of your audio through AutoSync myself and send you the output. I realise you've probably bought this on the strength of a feature you can't yet use!

    That error will be fixed in the patch I submitted to the store today, it's not related unfortunately. You can fix it manually by dragging the contents of Rogo Digital/Gizmos into your existing Gizmos folder in your assets folder.
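    For the curious, the underlying issue is that System.IO.Directory.Move throws an IOException when the destination folder already exists. A merge-style move avoids it - a plain .NET sketch (DirectoryUtil/MergeMove are hypothetical names, independent of the actual ImportControl.cs fix):

    ```csharp
    using System.IO;

    // Sketch: move the contents of one folder into another, merging with
    // existing files instead of failing when the destination exists -
    // which is what makes a plain Directory.Move call throw here.
    public static class DirectoryUtil
    {
        public static void MergeMove(string source, string destination)
        {
            Directory.CreateDirectory(destination);

            foreach (string file in Directory.GetFiles(source))
            {
                string target = Path.Combine(destination, Path.GetFileName(file));
                if (File.Exists(target)) File.Delete(target);
                File.Move(file, target);
            }

            foreach (string dir in Directory.GetDirectories(source))
            {
                MergeMove(dir, Path.Combine(destination, Path.GetFileName(dir)));
            }

            // Everything has been moved out, so the source is now empty.
            Directory.Delete(source, recursive: false);
        }
    }
    ```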
     
  44. Javernandez

    Javernandez

    Joined:
    Feb 4, 2014
    Posts:
    5
    Hi, I don't know if this is the best place to post a question, but I couldn't find any solution and I hope someone can help me :)

    I recently bought the plugin for lipsync and I was very happy with the results, until I tried to mix facial mocap blendshapes with the lipsync blendshapes generated from audio and animated through your plugin.

    I'm sending an example of my problem with video.

    When I try to "mix" facial blendshapes like blinking from animations, the lipsync resets and the output is very unrealistic.
    I tried separating the facial blends into another animator layer and setting it to additive, and it gives the same error.

    Is there any way to "mix" the values generated from the script with the values of an actual facial mocap keyframed animation?
     
  45. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Hi Lolitow, I replied to your email yesterday, but I've copied my response here in case you didn't get it :)

    This is an issue with the current version of LipSync - part of the problem can be solved by removing the blendshapes used by LipSync from your mocap animations (this can be done in Unity's animation window).
    I realise this isn't an ideal solution though! I'm currently working on fixing this properly, the fix will be included in the next update (0.402).
     
  46. Hamesh81

    Hamesh81

    Joined:
    Mar 9, 2012
    Posts:
    405
    Hi Rtyper, I purchased this asset today and it seems like some fantastic functionality so well done.

    I am having trouble getting the autosync to work with my own audio files (wav & mp3). Are there specific settings needed for the audio files to work or can simply any audio file be used with autosync? When I try to do an autosync the following pops up briefly and then nothing happens:
    Screenshot - 07 11 2015.png

    Similarly if I try to use the autosync + text with custom audio, I simply get the progress bar briefly and then nothing happens. Strangely enough, if I try to use the supplied gettysburg or boneanimation audio it works as per the video. I cannot see what is different about these files compared to my own audio.

    I would appreciate some help since the autosync is my main reason for purchase. I look forward to your reply.
     
  47. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    Hi Hamesh81,

    AutoSync only works with uncompressed .wav files - it's a limitation of the SAPI application it uses for phoneme detection. It's also sometimes picky about spaces in file names, so I'd advise changing the file names if they contain spaces.

    Here are the settings I used for exporting the Gettysburg.wav file from Audacity:

    Hope that helps!
     
    Hamesh81 likes this.
  48. Hamesh81

    Hamesh81

    Joined:
    Mar 9, 2012
    Posts:
    405
    Ok I will try to do some recording tomorrow and post back. For the autosync + text, is this supposed to work with only a text file or are both a wav and text file needed?
     
  49. Rtyper

    Rtyper

    Joined:
    Aug 7, 2010
    Posts:
    452
    An audio file is always required. The text transcript is optional and can improve the quality of the results, but it still needs audio.
     
    Hamesh81 likes this.
  50. Brad-Newman

    Brad-Newman

    Joined:
    Feb 7, 2013
    Posts:
    185
    Is there documentation anywhere? Would be great to see the AutoSync WAV file requirement listed more prominently.