
SALSA Lipsync Suite - lip-sync, emote, head, eye, and eyelid control system.

Discussion in 'Assets and Asset Store' started by Crazy-Minnow-Studio, Apr 23, 2014.

  1. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    You can use the new Analysis Timings override sliders (released in the last version) to tweak the look and feel. https://crazyminnowstudio.com/docs/salsa-lip-sync/modules/salsa/using/#slider-mode

    Due to the way the data structures work on the back end, you will need to make changes in edit mode and then play to see changes. Some settings can be tweaked at runtime, but generally speaking the best solution is to adjust in edit and then play to see results.

    Hope that helps,
    D.
     
    Last edited: Nov 14, 2019
  2. Object-Null

    Object-Null

    Joined:
    Feb 13, 2014
    Posts:
    70
    Thanks a lot! That was what I was looking for.
     
    Crazy-Minnow-Studio likes this.
  3. EnigmaFactory

    EnigmaFactory

    Joined:
    Dec 10, 2011
    Posts:
    98
    Good morning,

    I'm looking for a little assistance with SALSA 2.2.3, UMA 2.9, and Unity 2019.2.12f1. I'm using the UMA DCS Demo - Simple Setup scene and then applying the SALSA One-Click UMA DCS setup. When running the project, everything works great until I attempt to use the Randomize, Change Gender, or wardrobe. The avatar goes T-pose and I get the following error:

    AvatarBuilder 'UMADynamicCharacterAvatar': Transform 'Head' parent 'Head_FixedAxis' must be included in the HumanDescription Skeleton
    UnityEngine.AvatarBuilder:BuildHumanAvatar(GameObject, HumanDescription)
    UMA.UMAGeneratorBase:CreateAvatar(UMAData, UmaTPose) (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBase.cs:279)
    UMA.UMAGeneratorBase:SetAvatar(UMAData, Animator) (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBase.cs:229)
    UMA.UMAGeneratorBase:UpdateAvatar(UMAData) (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBase.cs:201)
    UMA.UMAGeneratorBuiltin:UpdateUMABody(UMAData) (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBuiltin.cs:418)
    UMA.UMAGeneratorBuiltin:HandleDirtyUpdate(UMAData) (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBuiltin.cs:268)
    UMA.UMAGeneratorBuiltin:OnDirtyUpdate() (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBuiltin.cs:291)
    UMA.UMAGeneratorBuiltin:Work() (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBuiltin.cs:148)
    UMA.UMAGeneratorBuiltin:Update() (at Assets/UMA/Core/StandardAssets/UMA/Scripts/UMAGeneratorBuiltin.cs:101)


    Here's a quick video of what occurs: https://www.twitch.tv/videos/509648936

    Since the Avatar is dynamically generated, I've been having a hard time figuring it out.

    Any help would be greatly appreciated.

    Thank you,

    Enigma Factory Games
     
  4. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hi,
    Given the dynamic and very programmatic nature of the UMA character system, our one-click setup for UMA is only meant to demonstrate what is possible. Digging into the low-level intricacies of the UMA system is beyond the scope of SALSA support, but we're happy to help with SALSA-specific questions or issues.

    UMA characters' head and eye bones are not correctly aligned to Unity's left-handed coordinate system, so we leverage our FixTransformAxis and FixAllTransformAxes API methods to add a corrective hierarchy above the head and eye bones, and apply the tracking calculations to that corrective hierarchy instead. If you are dynamically changing your character's hierarchy on the fly, you will likely need to rebuild the SALSA suite on the fly as well.

    I recommend reading through our documentation, especially the API sections, to get familiar with the SALSA API, and having a look at the UMA DCS one-click setup to see what it's doing behind the scenes. One thing you might try is reapplying the one-click code after you've made changes to the character.
     
  5. Arealight

    Arealight

    Joined:
    May 4, 2017
    Posts:
    29
    Does anyone know how to set up SALSA with Photon Voice 2 on a character? What does the character need besides the SALSA script, and which Photon script should I add? In the old Photon Classic Voice you simply added the Photon Voice Speaker and Photon Voice Recorder scripts to the character and it worked, but in Photon Voice 2 those scripts don't exist anymore.
     
  6. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hi Aeralight,

    See the SALSA documentation; the updated Photon Voice 2 documentation is already there. Also, please don't use the Unity product review page for support inquiries.

    Thanks,
    Michael
     
    Last edited: Nov 19, 2019
  7. magique

    magique

    Joined:
    May 2, 2014
    Posts:
    4,030
    @Crazy-Minnow-Studio I was chatting with the UMA Power Tools developer to see if he could fix it so that his UMA prefabs would work with SALSA. It seems that they should already be able to work with SALSA, but because the prefabs are no longer tied to UMAData, the SALSA one-click won't be able to set up properly. Would it be possible to get a different UMA one-click that works with a Power Tools prefab? You can see our conversation starting at the following thread location:

    https://forum.unity.com/threads/uma-power-tools-support-v-2-9.221290/page-13#post-5142506
     
  8. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hello,
    SALSA leverages UMAData once the character is created so that we can get the information needed to use the UMAExpressionPlayer. We originally tried to create our own expressions, leveraging the avatar's bone rig, to create a OneClick, and discovered that any modifications to the rig messed up the defined expressions, which was (of course) completely undesirable. Therefore the decision was made to leverage the UMAExpressionPlayer. As mentioned a few posts up, the UMA ecosystem is so dynamic, and usable in so many different ways, that it seems every customer we have is using it differently; we simply cannot create solutions for it all. However, SALSA v2 is flexible enough that anyone can manually create their own setup completely within SALSA.

    Looking at that thread, it doesn't appear there are any blendshapes on that model (of use to SALSA), so it would be a bone configuration, and if the bones are not in the exact same orientations whenever there is a DNA change, it simply won't work...just like the problems we had with regular UMA. However, once an avatar/prefab is created, it is still possible for anyone to manually create an expression set from any given set of bones within the SALSA Suite. We just can't produce a one-size-fits-all OneClick for it.

    D.
     
  9. UnLogick

    UnLogick

    Joined:
    Jun 11, 2011
    Posts:
    1,745
    This is interesting. You're right in the sense that there is no UMAData or UMAExpressionPlayer, because I remove anything related to UMA from the prefabs. By definition the UMAData needs to be removed (it does a bunch of texture, avatar, and mesh cleanup that makes no sense on my prefabs). The UMAExpressionPlayer has a hard dependency on UMAData; however, there is a base class, UMA.PoseTools.ExpressionPlayer, that has the values that drive the expressions. If you can do a GetComponentInChildren<ExpressionPlayer> and use that one, then I could make my own derived type that would work.
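    The idea above could look something like this; a minimal sketch, where the class names `PrefabExpressionPlayer` and `SalsaUmaBridge` are hypothetical and only `UMA.PoseTools.ExpressionPlayer` comes from the post:

    ```csharp
    using UnityEngine;
    using UMA.PoseTools;

    // Hypothetical derived player that carries the expression values
    // without any UMAData dependency. (If the base class declares
    // abstract members, they would need overrides here.)
    public class PrefabExpressionPlayer : ExpressionPlayer
    {
        // The base class holds the float fields that drive expressions;
        // a consumer only needs those values, not UMAData.
    }

    // A consumer that resolves against the base type, so either the
    // stock UMAExpressionPlayer or a custom derived player is found.
    public class SalsaUmaBridge : MonoBehaviour
    {
        private ExpressionPlayer player;

        void Awake()
        {
            player = GetComponentInChildren<ExpressionPlayer>();
        }
    }
    ```

    The key point is the lookup by base type: anything deriving from ExpressionPlayer would satisfy it.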
     
    magique likes this.
  10. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    We did a tutorial update for Photon Voice 2 a couple of months back and it did work at that time. If you haven't read the article, it may be of some help.
    https://crazyminnowstudio.com/docs/salsa-lip-sync/addons/photon-voice/
     
  11. SammmZ

    SammmZ

    Joined:
    Aug 13, 2014
    Posts:
    173
    Hey. I've just started discovering SALSA and it's amazing how simple and powerful it is! But it seems I'm still missing some basic concepts. Let's say I have a complex facial rig (with around 20 bones) and an FBX file containing all my phonemes and emotions. How exactly should I proceed to create SALSA's visemes & emotes? For now, the only way I see is to write down the transform coordinates for every bone and add them one by one manually... I don't believe that's a workflow a living human being can follow. So, what's the proper method?
     
  12. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hi pigglet,

    When mapping to bones:
    • Link a bone to a controller
    • The min transform is captured when you link the bone, so you only need to set the max transform.
    • Clicking the "adjust" button will lock the inspector and select the bone for scene manipulation.
    • Move the bone to the desired max transform (position, rotation, and/or scale).
    • Click the "<" button next to the "Max not Set" button to set the max transform, which also automatically clicks the release button to unlock the inspector and re-select the character root.
    • This process of capturing a min and max transform allows us to interpolate between the two set points in a normalized way that is similar to blendshapes.
    • Repeat this process for all bones you wish to link.
    We write one-click setup scripts, which perform all these actions automatically, for popular standardized character generation systems, but you are free to write your own one-click setup using the same techniques for any custom character. See our API examples in the documentation and the existing one-click setups for more details.
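    The min/max capture described above boils down to a normalized interpolation between two stored transforms; a minimal illustration of the concept (not SALSA's actual code):

    ```csharp
    using UnityEngine;

    // Illustration only: interpolate a bone between a captured min and
    // max transform. The parameter t in [0,1] plays the same role a
    // normalized blendshape weight does.
    public class BoneLerpExample : MonoBehaviour
    {
        public Transform bone;
        public Vector3 minPosition, maxPosition;
        public Quaternion minRotation, maxRotation;

        public void Apply(float t)
        {
            bone.localPosition = Vector3.Lerp(minPosition, maxPosition, t);
            bone.localRotation = Quaternion.Slerp(minRotation, maxRotation, t);
        }
    }
    ```

    This is why only the two endpoints need to be captured per bone: every intermediate pose is derived from them.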
     
    wetcircuit likes this.
  13. wetcircuit

    wetcircuit

    Joined:
    Jul 17, 2012
    Posts:
    1,409
    OH! I didn't know we could write our own 1-clicks. :)
     
  14. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Yeah, unlike the old system, where there were dependency scripts for different character systems, one-click scripts in the new system just set properties in the base system. Even if you are using one of our existing one-click setups, you can modify them to set things just the way you like. Our one-click setups typically consist of the following scripts:

    Fuse example
    • OneClickFuse.cs - settings for SALSA and EmoteR
    • OneClickFuseEyes.cs - settings for Eyes
    • OneClickFuseEditor.cs - menu option to kick the process off
    Keep in mind that if you modify our existing one-clicks, they are at risk of being overwritten the next time you grab an updated version, so it's best to copy them to your own one-clicks folder and rename the files/classes to your own standard so they don't conflict with the existing one-clicks.
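    A bare-bones shape such a copied pair might take, modeled on the Fuse layout above; all class and method names here are hypothetical (only the MenuItem/Selection editor APIs are standard Unity):

    ```csharp
    // OneClickMyCharacterEditor.cs -- editor-only entry point (hypothetical names).
    using UnityEditor;
    using UnityEngine;

    public static class OneClickMyCharacterEditor
    {
        [MenuItem("GameObject/My OneClicks/My Character")]
        private static void Setup()
        {
            var target = Selection.activeGameObject;
            if (target == null) return;
            OneClickMyCharacter.Configure(target);
        }
    }

    // OneClickMyCharacter.cs -- would mirror OneClickFuse.cs.
    public static class OneClickMyCharacter
    {
        public static void Configure(GameObject target)
        {
            // Placeholder: add and configure SALSA/EmoteR/Eyes components
            // here, mirroring the property assignments in the stock
            // one-click scripts for your own character standard.
        }
    }
    ```

    Renaming both the files and the classes, as suggested above, keeps an asset update from clobbering your version.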
     
    wetcircuit likes this.
  15. atomicjoe

    atomicjoe

    Joined:
    Apr 10, 2013
    Posts:
    1,869
    I made my own OneClick for all my characters to configure them to my taste.
    One reason more to love SALSA!
    The results I'm getting are as good as AAA games using FaceFX! :p
    I had to tinker a lot with custom phonemes, timings and interpolations to get this level of quality, but it was worth it all the way!
    (I found the default settings to be too stiff. Like a robot talking.)
     
  16. HappyGoLucky

    HappyGoLucky

    Joined:
    Jan 5, 2014
    Posts:
    5
    Love the product! Very easy to use! I had my characters talking in no time! My project initializes characters differently, and the MeshRenderer isn't created until runtime. I was wondering if it is possible to assign the ExpressionComponent's skinnedMeshRenderer when it is initialized.

    public void Initialize()
    {
        foreach (EmoteExpression _Expression in emoter.emotes)
        {
            foreach (ExpressionComponent _Component in _Expression.expData.components)
            {
                if (_Component.lipsyncControlType == ExpressionComponent.LipsyncControlType.Shape)
                {
                    // Assign the component's SkinnedMeshRenderer here
                }
            }
        }
    }
     
  17. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Run-time setup is totally possible. See our API examples in the documentation for details. This is essentially what all our one-click setups do, except we add an editor script so the process can be launched from the editor menus. In your case, you'll simply kick off the process with your own trigger.
     
  18. SammmZ

    SammmZ

    Joined:
    Aug 13, 2014
    Posts:
    173
    Oh, so actually the answer is "no, we don't have any other workflow except moving each bone manually, directly in Unity"... Are you considering improvements to this process in the future? Something that could fit a real production pipeline (not one bone to move on a test cube). No offense, the blendshape support is perfect and SALSA overall is a great concept, but the bone solution right now is quite unusable. A proper "bone-driven face" production pipeline involves a huge number of bones on a face, and special rigs to move them in 3D software. There is no way a person could move the face bones one by one directly in Unity to reproduce emotions & visemes with decent quality. Maybe a little tool to capture bone poses from a runtime would help, so there would be a way to convert already-animated emotion poses to SALSA's format.
     
  19. SammmZ

    SammmZ

    Joined:
    Aug 13, 2014
    Posts:
    173
    Oops, looks like I have an issue with Controller Type: UMA.
    I have a custom UMA character with a custom Expression Set, but it seems the expressions available under the UMA controller are hardcoded somewhere in UmaUepProxy, which is hidden in a DLL.
    So my UMA character has many additional emotions and visemes, but I can't select them in the UMA controller, because the selection list is limited to the 36 default UMA expressions and hardcoded in a DLL I can't edit manually. Please advise: what should I do?
     
  20. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Obviously we can't build a one-click setup for every possible scenario, so instead we focus our efforts on building one-click setups around common and stable standards, such as the many character generation systems that exist, including UMA and its default Expression Set.

    We understand the UMA system is exceptionally flexible, and many people use the system in different ways. We've been working with our awesome SALSA customers for over six years, so we've learned quite a bit about what works and what doesn't. If the current Expression Set does not work for you, we recommend doing direct bone mapping. Despite your earlier comments and assumptions, direct bone mapping in our suite is quite simple and very flexible. You can easily create complex multi-bone visemes and emotes from as many bones as you like, and with a little effort you can script the process to create your own one-click setup, so you only have to set it up manually once.
     
  21. SammmZ

    SammmZ

    Joined:
    Aug 13, 2014
    Posts:
    173
    I'm sorry, maybe I was not very clear. I'm not asking for a one-click solution. I understand that hardcoding bone names directly in a script (instead of reading the list of current bones) is a fast and easy solution, and it's your right to make it this way. But why do you hide it in a DLL, restricting us from editing it?
     
  22. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hello pigglet,
    The expression list is an abstraction of bone collections, coded into the Player's innards. We are not purposefully hiding it; there are a lot of things under the hood that require design consideration for how the modules are built. Number one: our controllers are part of our core system, and creating a controller for something that will only be present for a small number of customers creates dependency issues if we try to read the data from the Expression system. To provide a generally flexible system to work with UMA, I created a generic system that mimics the ExpressionPlayer's list to avoid the dependency.

    Our UMA solution was built based on our knowledge of previous customer usage, how the UMA product is built, and how our product is built. As Mike mentioned, and as you are obviously aware, UMA is a beast of a solution that provides a myriad of ways of doing things, and it is impossible to anticipate how everyone will use it. For that matter, it is very difficult to come up with flexible solutions that encompass all potential uses of the product. It may be possible to expose the values in a manner that can be edited or overridden at some future point -- no guarantee, and no idea when it would be completed. I'll put it on the ideas list and we will discuss it internally to see if it meets our core design goals.

    As for your previous inquiry about a solution for adding bone information to emotes or visemes: we put a rather flexible API in place specifically for highly custom requirements like yours, and it would be the perfect way for you to programmatically access your specific bone requirements and create your visemes/emotes from them. We also have an Animator controller which may work for you; you can test it in the EmoteR module at this time (v2.2.3). It is currently not enabled for SALSA. And before we get more quippy responses to our attempts to help: this solution was recently created as an experimental feature, specifically for EmoteR needs. The feature makes sense for EmoteR as implemented; I'm not sure it makes sense for SALSA at this point, but maybe.

    Darrin
     
  23. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    509
    Hello,

    I'm trying to automatically feed SALSA its skinned meshes, because we often delete the character and import it again. The cool thing is that the data stays untouched as long as I assign a skinned mesh, but I want to assign it automatically based on the component name. So far I have this:

    Code (CSharp):
    public class SalsaAutoSetup : MonoBehaviour
    {
        [SerializeField] private Salsa _salsa;
        [SerializeField] private SkinnedMeshRenderer _bodyMesh;
        [SerializeField] private SkinnedMeshRenderer _jawMesh;
        [SerializeField] private SkinnedMeshRenderer _tongueMesh;

        [Button]
        public void AutoSetup()
        {
            foreach (var visme in _salsa.visemes)
            {
                foreach (var component in visme.expData.components)
                {
                    if (component.name.Contains("Body"))
                    {
                        var controller = (ShapeController) component.controller;
                        controller.smr = _bodyMesh;
                    }
                    if (component.name.Contains("Jaw"))
                    {
                        var controller = (ShapeController) component.controller;
                        controller.smr = _jawMesh;
                    }
                    if (component.name.Contains("Tongue"))
                    {
                        var controller = (ShapeController) component.controller;
                        controller.smr = _tongueMesh;
                    }
                }
            }
        }
    }
    This code is not executed at runtime, but the problem is that I get a null reference exception for "controller": it's null if a skinned mesh is not assigned, but that's exactly what I'm trying to assign.
     
  24. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398

    Hi skinwalker,

    You can use the controllerVars InspectorControllerHelperData list in expData.

    Code (CSharp):
    foreach (var viseme in salsa.visemes)
    {
        foreach (var cntrlVar in viseme.expData.controllerVars)
        {
            if (cntrlVar.smr.name.Contains("Body"))
            {
                cntrlVar.smr = YourNewSMR;
            }
        }
    }
     
    skinwalker likes this.
  25. skinwalker

    skinwalker

    Joined:
    Apr 10, 2015
    Posts:
    509
    I tried your suggestion, but I get a null reference exception on this line:

    if (cntrlVar.smr.name.Contains("Body"))

    Also, cntrlVar.smr.name is the name of the skinned mesh; I want the name of the component under the viseme.
     
    Last edited: Nov 29, 2019
  26. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Something like this, then, to check the component name and access the SMR without needing to cast:

    Code (CSharp):
    foreach (var viseme in salsa.visemes)
    {
        for (int com = 0; com < viseme.expData.components.Count; com++)
        {
            if (viseme.expData.components[com].name.Contains("Body"))
            {
                viseme.expData.controllerVars[com].smr = YourNewSMR;
            }
        }
    }
     
    skinwalker likes this.
  27. FS9606

    FS9606

    Joined:
    Mar 12, 2015
    Posts:
    21
    Can you think of any reason why the mouth moves with the words on Android and in the editor, but not in an iOS build? Or how I can debug this?

    This is using the old SALSA. I would upgrade to the new SALSA, but I don't think it supports MCS Morph3D, and we are still using it.
    Any hints would be welcome.
     
  28. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Are you trying to use it with a standard AudioClip or some other audio input? We tested SALSA 1.x on the iOS Simulator and on physical devices up to the iPhone 6. If you haven't already, try setting up a very simple test scene in a new project with nothing but the essentials for a SALSA test.

    SALSA LipSync Suite (v2) is very flexible and can work with just about any model (bones, blendshapes, sprites, etc.); we just don't offer a one-click setup or support for MCS, since its developer stopped supporting it. It's not too difficult to write your own one-click setup once you settle on a standard configuration.
     
  29. alija09

    alija09

    Joined:
    Dec 11, 2019
    Posts:
    2
    I purchased SALSA LipSync Suite to use with a Mixamo 3D animation model. Does a Mixamo model work with SALSA LipSync Suite in Unity, and with which Unity version? Please advise on how to make a 3D talking character with emotion and animation.
     
    Last edited: Dec 11, 2019
  30. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    We offer a one-click setup add-on for Adobe Fuse characters, if that is what you're referring to. If not, then as long as your character has parts that can be animated (i.e. blendshapes or bones), it can be animated with SALSA LipSync Suite through a manual setup. See the documentation for more details.

    https://crazyminnowstudio.com/docs/salsa-lip-sync/
     
  31. simonejennings

    simonejennings

    Joined:
    Jan 11, 2017
    Posts:
    13
    The newer version of SALSA can have more visemes than small, medium, and big, but a lot of the documentation still uses just those as the examples.
    I'm aware that SALSA doesn't do proper phoneme mapping, but are there any examples out there of ways we could fake it? E.g., "best" setups for more visemes (trigger levels, etc.)?
     
  32. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    The documentation describes the seven shapes we use for most of the character generation systems we support with one-click setups. See the SALSA Overview section.

    The Using SALSA section goes into great detail about trigger levels and fine tuning your results.
     
    simonejennings likes this.
  33. aman181092

    aman181092

    Joined:
    Oct 4, 2019
    Posts:
    1
    Hi Team,

    I just wanted a demo version of the plugin before buying it, as I need to test the accuracy to see whether it matches my requirements.
     
  34. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hello, we replied to your email, but for others who may have seen this request: SALSA does not have a demo version. We have tried to produce a good quantity of video demonstrations and tutorials, as well as written documentation, to provide an adequate and accurate representation of the product's functionality and flexibility. We also make zero claims of perfect accuracy -- this product will not produce Pixar-level lip sync. SALSA's primary goal is to produce compelling lip synchronization with a much easier and smoother real-time-oriented workflow. SALSA is capable of producing lip synchronization where other solutions are not (i.e. microphone, text-to-speech, etc.) and still provides great visuals where real-time is not necessary.

    Hope this helps!
    D.
     
  35. agonchar

    agonchar

    Joined:
    Jan 10, 2020
    Posts:
    2
    Hey Crazy Minnow,

    Our group has been using Salsa1 and RT-Voice for a while for our research project making talking avatars.
    We've recently started to explore Salsa2.

    Our project quite heavily relies on RT-Voice. We are a bit confused by the website and the asset store's page on this.

    Could you please clarify whether the old RT-Voice will work with SALSA 2, or are we required to purchase the Pro version?
    Which versions does SALSA support?

    Thanks!
     
  36. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    As far as I know, the only difference between Standard and Pro (when Standard was available) was the availability of the RT-Voice source code. However, since the non-Pro version of RT-Voice is no longer sold, I can't say whether there have been changes to RT-Voice Pro that aren't available in the non-source version. That would be a question for the makers of RT-Voice. As far as SALSA is concerned, we support the current RT-Voice product.
     
  37. Invirtuo

    Invirtuo

    Joined:
    Nov 30, 2016
    Posts:
    14
    Hi, I have a question: is there a way to revert the head to its initial orientation when you stop the random head movement? Right now it just stops moving, and sometimes it looks weird.

    Thanks,
     
  38. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hi Invirtuo,

    Our next update, which should be out soon, includes the ability to enable/disable the individual component sections of the Eyes module. This can also be used to revert the head, eyes, or eyelids to their start positions.
     
    wetcircuit likes this.
  39. Invirtuo

    Invirtuo

    Joined:
    Nov 30, 2016
    Posts:
    14
    Thanks!
     
  40. agonchar

    agonchar

    Joined:
    Jan 10, 2020
    Posts:
    2
    Hey Crazy Minnow,

    Thanks for the initial reply!
    We have a new issue now. We've created new emotes (like a happy face) and are calling them manually. The idea is that during conversations we want to manually trigger a facial gesture, like smiling, and at times we want it to persist instead of playing once.

    What we noticed is that we can use the one-way manual emote function, but the problem is: while the avatar is talking and using emphasizer emotes, they overwrite our manual emote, so the avatar doesn't stay "smiling".

    Do you know of a way we can toggle an emote and keep it on without it being overwritten?

    Thanks!
     
  41. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hello,
    Emotes are all created equal and override each other. You will need to either create a duplicate "smiling" emote on your model to use manually, so it can remain triggered, or simply remove the conflicting shape from the emphasizers.

    In a more advanced setup, if you generally want to keep the "smiling" emote shape in the emphasizers, you can use the API to temporarily remove any emote configuration that uses the smiling shape while you are triggering the manual smiling emote, and add it back to the emphasizer group when you release the manual smile.

    NOTE: This removes the entire emote from the emphasizer config, not just the specific conflicting component, but it will resolve the issue you are seeing.

    See the API docs for more info on programmatically removing/adding emotes to the specialty pool configs.
    https://crazyminnowstudio.com/docs/salsa-lip-sync/modules/emoter/api/#operational-methods

    Hope this helps!
    D.
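    A sketch of that remove/re-add sequence. This is not verbatim SALSA API: RemoveFromPool is assumed as the counterpart to the AddToPool call that appears later in this thread, and the "smiling" emote name and Emphasizer pool value are illustrative; verify all of them against the EmoteR API docs linked above.

    ```csharp
    using UnityEngine;
    using CrazyMinnow.SALSA;

    // Sketch: pull the "smiling" emote out of the emphasizer pool while it
    // is manually held, then restore it on release.
    // ASSUMPTIONS: RemoveFromPool and PoolType.Emphasizer are unverified
    // names -- check the EmoteR operational-methods documentation.
    public class PersistentSmile : MonoBehaviour
    {
        public Emoter emoter;

        public void HoldSmile()
        {
            emoter.RemoveFromPool("smiling", Emoter.PoolType.Emphasizer);
            // ...trigger your manual one-way "smiling" emote here...
        }

        public void ReleaseSmile()
        {
            // ...release the manual emote here, then restore the pool entry...
            emoter.AddToPool("smiling", Emoter.PoolType.Emphasizer);
        }
    }
    ```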
     
  42. AGregori

    AGregori

    Joined:
    Dec 11, 2014
    Posts:
    527
    Hi @Crazy-Minnow-Studio, I have a question about the Slate add-on. First, it's great that it even exists, and that it makes two of the best assets (SALSA + Slate) work together. But is there a way to check it in action in the editor while scrubbing Slate?
    Actions like Blink and Random_eyes work fine in the final build, but nothing shows in the editor. Thanks for any help.

    [Screenshot attached: Unity_v8E1aN8mPL.png]
     
    Last edited: Jan 29, 2020
  43. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hi Gregorik,

    Unfortunately, there is not currently a way to preview the actions by scrubbing the timeline because all of our processing is calculated at run-time.

    Michael
     
  44. iVizUnity

    iVizUnity

    Joined:
    May 23, 2018
    Posts:
    3
    Hi CrazyMinnow,

    You mentioned "You will need to either create a duplicate 'smiling' emote on your model to use manually so it can remain triggered, ..". Could you elaborate on what you mean by duplicating the emote for the model?

    Thanks!
     
  45. AGregori

    AGregori

    Joined:
    Dec 11, 2014
    Posts:
    527
    Well, that still makes the Blink action useful, since it adds some automated realism in a cutscene. As for the random head movements and the like, without direct timeline control they can be a little disruptive in a cutscene, in my experience. But I understand that it's technically hard or impossible to implement currently.
     
  46. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Agreed, it's often desirable to disable the random capabilities in cutscenes to retain full control when portraying a particular emotional state. You can always disable random and still use the available actions to direct your character. For example, using the Eyes_Look action you can direct your character's head and eyes to focus on targets you specify, or use the Eyes_Blink or Emoter_Emote actions to express a particular emotion.

    The random capabilities are a great, easy way to add more life to loitering NPCs, where any emotion is better than no emotion.
     
    wetcircuit and AGregori like this.
  47. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Sure: either by externally editing the model in a 3D application (Maya, Blender, etc.) or by using a capable Unity asset (Morphmixer, etc.), copy the existing smile blendshape to a completely new blendshape (i.e. smile2), thereby implementing two distinct "smile" blendshapes. Bear in mind, enabling both the original "smile" and the new "smile2" creates an additive effect, basically smile * 2, which may not be desirable.

    Hope that helps.
    D.
     
  48. domdev

    domdev

    Joined:
    Feb 2, 2015
    Posts:
    375
    Hi, I'm using SALSA LipSync Suite, and it works great in the Unity editor. Now we want it in WebGL, so we purchased Amplitude to make it work, but the mouth won't open. I also toggled Use External Analysis, but it still doesn't work. Also, I can't find the AmplitudeSALSA component.
     
  49. varan941

    varan941

    Joined:
    Jul 10, 2019
    Posts:
    7
    Hi, I'm trying to play emotions at the touch of a button. I use a Skinned Mesh Renderer and animation clips, but it only works once, and awkwardly, when SALSA is present. How can I do this better? Or does it need to be implemented differently?

    The code:
    using UnityEngine;
    using CrazyMinnow.SALSA;
    using CrazyMinnow;

    public class Animations : MonoBehaviour
    {
        public SkinnedMeshRenderer _skin;
        public Animator _anim;

        public Salsa salsa;

        public void LookUP()
        {
            _anim.SetTrigger("lookUp");
        }

        public void SayLrg()
        {
            _anim.SetTrigger("sayLrg");
        }

        public void Blink()
        {
            _anim.SetTrigger("blink");
        }

        public void SayRest()
        {
            _anim.SetTrigger("sayRest");
        }

        void Update()
        {
            if (Input.GetKey(KeyCode.RightArrow))
            {
                salsa.emoter.AddToPool("emote 1", Emoter.PoolType.Random);
                //LookUP();
            }

            if (Input.GetKey(KeyCode.DownArrow))
            {
                salsa.emoter.AddToPool("emote 0", Emoter.PoolType.Random);
            }

            if (Input.GetKey(KeyCode.UpArrow))
            {
                salsa.emoter.AddToPool("emote 2", Emoter.PoolType.Random);
            }

            if (Input.GetKey(KeyCode.LeftArrow))
            {
                salsa.emoter.AddToPool("emote 3", Emoter.PoolType.Random);
            }
        }
    }
     
  50. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,398
    Hello varan941,

    See our full API documentation in the manual. It provides API details and examples.
    https://crazyminnowstudio.com/docs/salsa-lip-sync/

    Michael