SALSA Lipsync Suite - lip-sync, emote, head, eye, and eyelid control system.

Discussion in 'Assets and Asset Store' started by Crazy-Minnow-Studio, Apr 23, 2014.

  1. arance2021

    arance2021

    Joined:
    Nov 10, 2020
    Posts:
    2
    Hello,
    I'm facing the issues below after importing SALSA OneClick Base v2.1.6 and OneClick UMA DCS v2.3.1.

    Steps followed:
    1) I created a new Unity project
    2) Imported SALSA LipSync Suite from the Asset Store
    3) GameObject -> Crazy Minnow Suite -> OneClicks -> imported OneClick Base and OneClick UMA DCS
    4) After this, I get the errors below.

    Assets/Plugins/Crazy Minnow Studio/SALSA LipSync/Plugins/OneClickRuntimes/OneClickConfiguration.cs(5,18): error CS0101: The namespace 'CrazyMinnow.SALSA.OneClicks' already contains a definition for 'OneClickConfiguration'

    Assets/Plugins/Crazy Minnow Studio/SALSA LipSync/Plugins/OneClickRuntimes/OneClickExpression.cs(12,16): error CS0111: Type 'OneClickExpression' already defines a member called '.ctor' with the same parameter types

    Assets/Plugins/Crazy Minnow Studio/SALSA LipSync/Plugins/OneClickRuntimes/OneClickConfiguration.cs(17,16): error CS0111: Type 'OneClickConfiguration' already defines a member called '.ctor' with the same parameter types
     
  2. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello! It sounds like you are using SALSA Suite v2.5+, where OneClickBase has been updated and is included in the core package; that is why you are seeing the duplicate-definition errors. Importing v2.1.6 overwrote the newer, included version. Also, UMA OneClick v2.5.0 is required for SALSA Suite v2.5.

    Since the legacy version of OneClickBase was imported over the top of the included Base, you will need to delete OneClickBase and then either re-import SALSA Suite into your project to overwrite the GUID in the asset database (I don't believe simply re-importing via the right-click menu is enough), or download the new OneClickBase v2.5.0 (I've made it available on the downloads site for you) and import that to clean up the duplicate-definition errors. And don't forget to grab UMA OneClick v2.5 as well, since it is required for SALSA Suite v2.5. That should fix the issues you are seeing.

    Hope that helps,
    D.
     
  3. arance2021

    arance2021

    Joined:
    Nov 10, 2020
    Posts:
    2
    Thank you so much for your quick response. It's working fine now.
     
    Crazy-Minnow-Studio likes this.
  4. marwood82

    marwood82

    Joined:
    May 27, 2019
    Posts:
    6
    Hi,
    I've been playing around with SALSA and UMA for the last few days.

    Is it possible to swap out the AudioSource a SALSA instance is using for input for a different one via script at runtime?

    I'm trying to create an app with pre-loaded UMA characters that have SALSA attached; players can 'possess' them, talk through them with player voices provided via Photon Voice, then switch to another UMA and talk through that instead.

    I see there's the .audioSrc property, but I don't see a 'set' function for it?

    Or does anyone know of a way to force the output of one AudioSource into another as input (or into an AudioClip that can be attached to an AudioSource)? (Or any other way I might do this?)
     
  5. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hi marwood82,

    SALSA leverages a standard Unity AudioSource component, which means you can use the AudioSource API to change audio clips and to play and stop audio.

    https://docs.unity3d.com/ScriptReference/AudioSource.html

    Edit: To change the AudioSource associated with SALSA, you can use the audioSrc reference you found; there is no set function, you can simply assign the property directly.
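
    For illustration, a minimal sketch of that pattern (assuming the Salsa component and the audioSrc reference mentioned above; talkClip is an illustrative placeholder):

    using CrazyMinnow.SALSA;
    using UnityEngine;

    public class SalsaClipSwap : MonoBehaviour
    {
        public Salsa salsa;        // the SALSA instance on the character
        public AudioClip talkClip; // illustrative clip to play

        public void Speak()
        {
            // SALSA analyzes whatever its linked AudioSource plays.
            salsa.audioSrc.clip = talkClip;
            salsa.audioSrc.Play();
        }

        public void Quiet()
        {
            // Stopping the AudioSource stops the lip-sync.
            salsa.audioSrc.Stop();
        }
    }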
     
    Last edited: Nov 18, 2020
  6. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hi marwood82, to continue the above: throwing Photon into the mix complicates things a bit. We will leave it to you to work out how to flip your PhotonVoice configurations around. I would think remote avatar configurations would not need any remapping of the AudioSource, but I might be missing something in your implementation. However, if you are leveraging local avatar lipsync, available in SALSA 2.5+, you will need to configure SALSA to use external analysis and remap the external analysis delegate.

    Normally, this reconfiguration processing occurs during PhotonVoiceView setup (usually in the spawning process). The SalsaPhotonVoice helper iterates over the Start process, waiting for the PhotonVoiceView to complete setup and then checks to see if it is recording locally (local avatar) and configures SALSA for local client use. So you might have to modify that process a little bit if you are slipping in and out of characters. Once you slip into a character, that character will now be "local". Remote characters should be fine since they are taking the stream from PhotonVoice and are already configured for remote, although if you slip out of an avatar, you would need to reconfigure SALSA for remote avatar processing. You will need to take a look at the SalsaPhotonVoice.cs script to see how and where it changes SALSA to work with local PhotonVoice avatars and manually implement or reverse that process as needed. Check our PhotonVoice2 documentation as well. https://crazyminnowstudio.com/docs/salsa-lip-sync/addons/photon-voice/

    Hope that helps,
    D
     
    marwood82 likes this.
  7. marwood82

    marwood82

    Joined:
    May 27, 2019
    Posts:
    6
    Thanks for the update, that's helpful.
    FYI: in the end it was actually quite easy once I understood what I was doing.

    I added some code to find the Speaker I was looking for based on its owner (I used a Photon object to assign control of an UMA, so I just searched for a Speaker with the same owner). Once I had that, I was able to assign the AudioSource on the fly based on one of the examples on the website, e.g.:
    // Find the Photon Speaker's AudioSource and wire it into SALSA.
    var audSrc = photonSpeaker.GetComponent<AudioSource>();
    var salsa = uma.GetComponent<Salsa>();
    var qp = uma.GetComponent<QueueProcessor>();
    salsa.audioSrc = audSrc;   // point SALSA at the voice AudioSource
    salsa.queueProcessor = qp; // re-link the character's QueueProcessor
    salsa.audioSrc.Play();

    Not sure what I was initially doing wrong, but it works great! Thanks!
     
    Crazy-Minnow-Studio likes this.
  8. Volkerku

    Volkerku

    Joined:
    Nov 23, 2016
    Posts:
    114
    Is it possible to trigger an emote during a talking pause?
    For instance, if there is a longer pause in the speech from a sound recording, the character produces a blendshape smile.
     
  9. Volkerku

    Volkerku

    Joined:
    Nov 23, 2016
    Posts:
    114
    I'm relatively new to this.
    I would like to send a command to SALSA to stop SALSAing and another to start again.
    Is it best to just disable the components? Is there an easier or better way?
    Can I send this command via an event?
    Thanks, Volker
     
  10. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello! You can accomplish this in many different ways; the challenge is in knowing how long the pause is. Some options:
    - If you want to hand-craft your animation, use Timeline to map EmoteR calls to your audio cues.
    - Leverage the built-in event system to detect when SALSA is not SALSAing and, after an amount of time, trigger your EmoteR emote.
    - Fire off a one-way emote when you have detected silence, then use the event tie-in to turn the emote off when SALSA begins firing visemes again.
    - Fire off round-trip emotes and keep refreshing them as silence continues, then stop refreshing when visemes are firing.
    - Actively monitor SALSA's built-in boolean to detect when it is active.

    SALSA processes an AudioSource...if the source is playing audio, it will be SALSAing, so you can pause or stop the audio to accomplish this. If you want to stop SALSA's LateUpdate loop, you can disable the component. What is best depends on your situation and what you are trying to accomplish. If you can provide more details, I can offer better advice on how to handle it.
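
    For example, a minimal sketch of both approaches (assuming the Salsa component and its audioSrc reference discussed above):

    using CrazyMinnow.SALSA;
    using UnityEngine;

    public class SalsaToggle : MonoBehaviour
    {
        public Salsa salsa; // the SALSA instance on the character

        public void StopSalsaing()
        {
            salsa.audioSrc.Pause(); // no audio playing means no visemes firing
            salsa.enabled = false;  // optionally also halt SALSA's LateUpdate loop
        }

        public void StartSalsaing()
        {
            salsa.enabled = true;
            salsa.audioSrc.UnPause();
        }
    }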

    Hope that helps,
    D.
     
  11. masai2k

    masai2k

    Joined:
    Mar 2, 2013
    Posts:
    45
    Hi,
    in the new Unity Bridge for DAZ3D, when I try to save the blendshapes associated with the head, the names are like this: Genesis8Male_eCTRLvW, Genesis8Male_eCTRLShock_HD, and so on, and SALSA can't recognize these names.
    I can't rename these blendshapes, so I can't use SALSA with DAZ anymore. Any solution?

    Massimo
     
  12. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello, sounds like there are a lot of changes in DAZ. We will take a look and see.

    Thanks,
    D
     
  13. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Quick update on the DAZ OneClick for the DAZ to Unity bridge: I've got the shape searches mostly updated and am testing against all of the different model generations. I hope to have the new version released this weekend.

    D.
     
  14. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    I've posted DAZ OneClick v2.5.2b on the downloads server. It has name-search support that should now work with DAZ to Unity bridge naming conventions (as well as the standard supported Gen 1, 2, 3, and 8 model types).

    Thanks,
    D.
     
  15. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Version 2.5.2.109 has been submitted to the Asset Store!
    SALSA LipSync Suite v2.5.2.109 is a small update that will hopefully bring you some Holiday Cheer! SALSA now has an Editor-based configuration preview, including audio playback; the preview also includes EmoteR emphasizer emotes if they are configured and linked.

    There are also a couple of Eyes fixes to the disable/enable code that now properly and smoothly returns the head and eyes to the starting position/rotation (or influence) when disabled (and smoothly animates back ON from starting or influenced position/rotation when enabled).

    EXPECTATIONS:
    If you are upgrading an existing project, make a backup of your project before you upgrade SALSA Suite!

    If you find bugs or documentation issues, please let us know, we will knock them out as quickly as possible. Please ensure you've checked the Release Notes and latest documentation and ensure you have the correct (latest) Add-On/OneClick versions. It is best if you email us (assetsupport@crazyminnow.com), including as much detail as possible, screenshots and/or video, any errors received, versions of everything, and always include your Unity Invoice Number.

    Ho-ho-ho and Enjoy!
    Mike and Darrin
     
    Last edited: Dec 22, 2020
  16. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Version 2.5.2.110 has been submitted to the Asset Store!
    SALSA LipSync Suite v2.5.2.110 is a small update to correct null-ref issues when Silence Analyzer is placed on a SALSA instance configured to use external analysis. It also enables Silence Analyzer to run on SALSA instances configured to wait for an AudioSource.

    EXPECTATIONS:
    If you are upgrading an existing project, make a backup of your project before you upgrade SALSA Suite!

    If you find bugs or documentation issues, please let us know, we will knock them out as quickly as possible. Please ensure you've checked the Release Notes and latest documentation and ensure you have the correct (latest) Add-On/OneClick versions. It is best if you email us (assetsupport@crazyminnow.com), including as much detail as possible, screenshots and/or video, any errors received, versions of everything, and always include your Unity Invoice Number.

    Thanks,
    D.
     
    Last edited: Dec 22, 2020
  17. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    SalsaDissonanceLink v2.5.0 has been posted to the downloads portal!
    This version leverages the delegate-processing substitution in SALSA 2.5 and the efficiencies gained by allowing SALSA to request external analysis when it needs it, instead of active substitution being applied on every frame, external to the SALSA process. NOTE: This version requires SALSA v2.5+.

    Thanks,
    D.
     
  18. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Version 2.5.2.110 is now available!

    Enjoy,
    D
     
  19. waltercl

    waltercl

    Joined:
    Dec 30, 2018
    Posts:
    47
    I'm going to be hiring someone on Fiverr to do the necessary Blendshapes for my models that don't already have them so I can use them with SALSA.

    This may already be in the documentation somewhere, but I'd like to know exactly what I can tell the person I hire is needed so they can give me the necessary blendshapes for the mouth, eyes, and eyelids.

    From the one-click that I did on a DAZ3D character I can see that SALSA is using 7 mouth morphs (w,t,f,th,ow,ee,oo), 8 Emotes (exasper,soften,browsup,browup,squint,focus,scrunch,flare), 2 Eyes (eyeL,eyeR), and 4 Eyelids (eyelidL,eyelidR,eyelashL,eyelashR).

    Next to these configurations I see the number of components, which I'm thinking is the number of parts that have to be moved to achieve each morph.

    If I gave just this information to a 3D artist, would that be enough, or would they need to see a picture of how each one is going to look on a face? Are there such pictures already? I'm thinking someone probably has diagrams of how these all look somewhere.
     
  20. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    All of the model systems we work with are configured to use a representation of the visemes you mention above (w, t, f, etc.). Some character model systems do this better than others. DAZ, for example, has multiple components per viseme, primarily for multiple-mesh needs (i.e. face mesh and eyelashes mesh). DAZ has many phoneme sets included, so the visemes we have chosen are already present and don't always require multiple blendshapes to create a viseme. Keep in mind, these visemes are not absolutely required; they are simply what we have chosen during the course of our experimentation -- there are likely many other combinations that a developer/designer may prefer. Also note, it is more performant to use a single blendshape rather than multiple blendshapes per viseme mesh.

    If you have a model that is composed of multiple meshes, it is more convenient, but not absolutely required, to have the affected meshes use identically named shapes; it's mainly easier for organization and OneClick application. For example, an emote that creates a smile will likely affect the mouth and eyes, and (in the case of DAZ models) if the eyelashes are a separate mesh, the corresponding blendshapes would need to be animated at the same time to produce a smile where the eyes squint slightly and the eyelashes match the shape.

    If you are paying someone to produce blendshapes solely for SALSA Suite, and you do not require the flexibility to change things up for other applications or needs, I would recommend having them create a single blendshape for each affected mesh for each viseme you choose to use. SALSA Suite configuration is much easier/faster in that regard. You may want to make your emotes more flexible (i.e. left and right: smile, blink, brow, etc.). Possibly the best course of action would be to use one of the OneClick-supported model systems (like DAZ, since you are familiar with it) and experiment with SALSA Suite until you find what you like, don't like, need, don't need, etc., and then commission the work on your other models once you are more familiar with the expectations you have of the system.

    We do have documentation for general requirements/suggestions:
    https://crazyminnowstudio.com/docs/salsa-lip-sync/modules/overview/#requirements

    It would probably also be beneficial to understand the OneClick system as much as possible:
    https://crazyminnowstudio.com/docs/salsa-lip-sync/addons/one-clicks/

    Hope that helps and good luck,
    D.
     
  21. waltercl

    waltercl

    Joined:
    Dec 30, 2018
    Posts:
    47
    Thanks for the detailed reply. I'll look into the links you've specified.

    SALSA is an absolutely incredible tool, and it's one of the best values in the entire Unity Asset Store. I exported a model from DAZ3D to Unity with the Blendshapes and the One-Click set everything up perfectly. My character talks, moves its head, blinks, etc. in a very realistic way.

    The only weakness SALSA has is that it requires some knowledge of a 3D modeling program like Blender to set up your blendshapes, unless you can get all of your models from a source that already has a one-click solution (DAZ3D, UMA, etc.). In my situation those solutions don't have all the models I need, so I've got to have blendshapes added to many of my models.

    For a person who has the time to learn Blender, there are more than enough tutorials on how to set up the blendshapes.
     
  22. gregacuna

    gregacuna

    Joined:
    Jun 25, 2015
    Posts:
    59
    I'm getting really unsatisfactory results with lip syncing using SALSA, which I'm using on some characters bought on the Asset Store that use blendshapes. I have a few questions:
    1. Does it matter if the rest position of the mouth is NOT closed when the models are added to the scene?
    2. I've watched the YouTube videos about setting up SALSA, but my feeling is I might be missing something or doing something wrong. Part of this feeling is because the setup video doesn't really go into the basics of what I'm supposed to be trying to achieve in the setup. I know the earlier version said specifically that we needed small, medium, and large mouth blendshapes, but that isn't clear in the new video. Do you have anything that very simply goes through the basic setup for lip syncing?
    Thanks!
     
  23. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello gregacuna,

    Generally speaking, the rest position is used as the return point for all shapes, so rest should be how the character looks with no visemes or emotes activated. The general overview section of the documentation covers the shapes and sequence we typically use for our one-click character setups, but that doesn't mean it's the only way to do it. The setup of specific shapes can vary depending on your preference and the look you're going for.

    https://crazyminnowstudio.com/docs/salsa-lip-sync/modules/overview/
     
  24. gregacuna

    gregacuna

    Joined:
    Jun 25, 2015
    Posts:
    59
    Thanks for your reply. I'm getting a very jumpy animation, and I'm not sure if this is because the rest position has the mouth open or because I'm using a single blendshape for the three visemes with different percentages. On the models I have, the artist provided an A blendshape, which is the mouth wide open. So I have it set so viseme 1 is A with Min/Max 0/30, viseme 2 is A with Min/Max 30/50, and viseme 3 is A with Min/Max 50/70. Is there a chance that limiting each viseme's movement to a smaller range is why the animation is so jumpy?
     
  25. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Are you using SALSA Suite v2 or SALSA v1.5?

    I will assume SALSA v2. I will also assume there are no other animation influences on your model and everything else is configured correctly (timings, audio delay, viseme triggers, etc.). For your particular setup, I recommend simply using a single viseme with blendshape A set with Min/Max of 0/70, and ensure Advanced Dynamics is enabled with Primary Bias set to 0. Advanced Dynamics will determine the extent to which blendshape A animates. If, however, you wish to keep three visemes using the same blendshape, you would still set the min position to 0 on all three visemes.

    TL;DR:
    You can read more about how all of this works in our online documentation, but for your particular scenario, here is what is happening. The reason the mouth is left open in your configuration is that you cannot ensure viseme 1, the only one with a closed-mouth rest position, is the last viseme fired. So if the last bit of audio analysis fired viseme 3 and the audio then ends, SALSA will tell the QueueProcessor that silence was detected and to shut off the last viseme fired, viseme 3, which has a closed position of 50. If the last viseme fired was viseme 2, the mouth would return to 30, and so on.

    SALSA works with one viseme at a time (unless you are also using Secondary Mix) and remembers which viseme was fired. On the next analysis pulse, it determines which viseme to fire and turns off the previous viseme. It lets the QueueProcessor handle the conflict resolution and blending calculations for the visemes. Since you are using the same blendshape for each viseme, it is always in conflict and blending back and forth to your min/max settings.

    Also, on startup, SALSA will ensure all visemes are turned off and it does this in the order of the visemes. So, if you start your scene and no audio is playing, your mouth will be open -- blendshape A at 50 -- the third viseme's OFF position.

    Hope that helps,
    D.
     
    Last edited: Jan 14, 2021
  26. Sapien_

    Sapien_

    Joined:
    Mar 5, 2018
    Posts:
    102
    Hey, does this work with live microphone input? So if I speak into the microphone, the asset will attempt to make mouth shapes?

    Also, does this work with animations or sprites instead of blendshapes, for example if the mouth is a texture or sprite with drawn frames?
     
    Last edited: Jan 15, 2021
  27. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
  28. Sapien_

    Sapien_

    Joined:
    Mar 5, 2018
    Posts:
    102
    So from what I have seen, the bone animator should suffice? My character is stylized, so the top of the head just needs to move up and down depending on the sound input, and a separate object representing the teeth needs to be made visible and invisible (which I'm guessing scale could achieve), as the animation has a puppet / stop-motion style.
     
    Crazy-Minnow-Studio likes this.
  29. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello!
    There are probably several ways you could implement this. You could use a bone and isolate it to any or all of the transform properties. Off the cuff, it sounds like position animations might be what you want, but you can do whatever works for your character. You could also drive an animation with a float value. It's up to you.

    D.
     
  30. Sapien_

    Sapien_

    Joined:
    Mar 5, 2018
    Posts:
    102
    Hmm, after looking at the videos I think I may know which direction to go. However, I'm still stuck on one issue: how do you configure mouth shapes to match the sound? For example, if the audio is making an "ee" sound, how do you get the character to make the corresponding mouth shape?
     
  31. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    SALSA doesn't work that way; it's an approximation system. The Overview documentation page I posted previously describes the visemes and sequence we use for our one-click setups, which is generally based on progressively larger visemes.
     
  32. gregacuna

    gregacuna

    Joined:
    Jun 25, 2015
    Posts:
    59
    Big thanks for your detailed reply. I am using SALSA Suite v2 and will try out your suggestion as soon as possible, then reply again with the results. Cheers.
     
    Crazy-Minnow-Studio likes this.
  33. Sapien_

    Sapien_

    Joined:
    Mar 5, 2018
    Posts:
    102
    Sorry for the trouble, and if this is a stupid question (I'm not understanding too well), but I assumed visemes were the mouth shapes of speech sounds, as used for lip reading? In your video's analysis settings, you had them named as such. Were they just representations? So I'm guessing the mouth shapes are based on audio levels?
     
    Last edited: Jan 18, 2021
  34. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Yes, arbitrarily named representations. The shapes that are triggered are based on a base-level analysis of the amplitude data, some timing calculations, some silence analysis, etc. We aim to produce a representation of perceived accuracy where animation timing and dynamics provide a realtime look-and-feel of lipsync that is convincing without the need to perform phoneme mapping/baking.
     
  35. skymeson

    skymeson

    Joined:
    Sep 30, 2016
    Posts:
    15
    Hi there,
    I'm trying to allow the user to change mic input settings at runtime. I'm using SalsaMicInput; however, I only see an option to change this through the editor script. Is there a way to assign the microphone deviceIndex at runtime?
     
  36. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello, SalsaMicInput uses the standard Unity Microphone class. There is an example of changing the microphone at runtime in the SalsaMicInput API documentation. https://crazyminnowstudio.com/docs/salsa-lip-sync/addons/salsa-mic-input/#examples
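
    For illustration only, a minimal sketch of switching devices with the Microphone class directly (SalsaMicInput wraps this pattern; see the linked docs for the SalsaMicInput-specific example):

    using UnityEngine;

    public class MicSwitcher : MonoBehaviour
    {
        public AudioSource source; // the AudioSource SALSA analyzes

        public void UseMic(int deviceIndex)
        {
            string device = Microphone.devices[deviceIndex];
            source.Stop();
            // Record into a 1-second looping buffer at 44.1 kHz.
            source.clip = Microphone.Start(device, true, 1, 44100);
            source.loop = true;
            // Wait until the mic actually starts delivering data.
            while (Microphone.GetPosition(device) <= 0) { }
            source.Play();
        }
    }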

    Hope that helps!

    D.
     
  37. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399


    How-To Video Tutorial Released:

    Tuning SALSA part 1

    This video aims to demonstrate tuning SALSA for different design looks and uses two OneClick-based models from Reallusion and DAZ. I've tried to show how SALSA's settings can be easily adjusted to smooth out or chop up lip-sync by applying OneClicks from scratch and working through the adjustments. We tune some visemes and emphasis emotes and also tweak the Silence Analyzer and discuss its impact on the SALSA component.

    Part 2 should be coming out later today and will demonstrate some ideas on how to tune lip-synchronization for slow, deep bass voices versus fast, high-pitched, cartoony chatter. I will post back here when that one is up. For now, enjoy, and I hope these tutorial videos are helpful in your game or application designs.

    Good luck on your projects!
    D.
     
  38. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399


    How-To Video Tutorial Released:
    Tuning SALSA part 2

    The second part of the How-To Tune SALSA tutorial set is now available. In this video I've taken DAZ's Emotiguy and tuned his lip-sync to a low-pitched, slow-moving audio file as well as a high-pitched, fast-moving sample. It shows how to get a look and feel that matches your character and audio. Tweak those settings!


    Enjoy and good luck on your project!
    D.
     
    Last edited: Jan 20, 2021
  39. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Is that a prudent choice, though, if the goal is to use SALSA? It seems they don't have one-click support for the ARKit 52 blendshape locations...
     
  40. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Just to clarify, not having a one-click setup does not mean it doesn't work, it means manual setup or writing your own one-click will be required. We have a documented API that allows you to create custom one-click scripts for any model that meets our system requirements.

    M
     
  41. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    If the ARKit 52 blendshape locations are a recommended and/or common approach, why isn't there a one-click setup for them? It seems tedious to set up manually.
     
  42. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    We try to provide one-click scripts for highly requested, common, and standardized character systems. For everything else, we provide a documented API so you can write your own one-click setups. A one-click script is simply a configuration script; it sets the same properties the inspector sets.
     
  43. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    In this case, which blendshape rig would you suggest for a custom model, if the ARKit 52 blendshape locations are not highly requested / common / standardized?
     
  44. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399


    New How-To Video Tutorial:

    UMA OneClick v2.5.1

    UMA updated their product to v2.11 and included some changes that required updates to the UMA OneClick; namely, they now have an in-Editor preview of the avatar character. This created some conflict with the existing OneClick for UMA, which has been updated to utilize the UMA preview instead of the preview prefabs we used previously. Version 2.5.1 of the UMA OneClick is now available on our downloads site. Our online documentation has also been updated to reflect the changes in the UMA and OneClick products.

    The video is a simple how-to for adding the OneClick to an UMA v2.11 character.

    NOTE: If you use the new UMA prefab exporter, the SALSA OneClick will no longer work. This is because the OneClick uses the UMAExpressionPlayer to drive the lip-sync and facial animations, and UMA prefab exports are no longer wired to the UMA Dynamic Character Avatar system. In fact, they are not wired to anything UMA at this point; they are simply bone-rigged characters without blendshapes.

    Enjoy!
    D.
     
  45. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    We have a description of how we set up our blendshape visemes in our requirements documentation. If you want to create your own blendshape sets for use in SALSA, you can follow our path and then create a new OneClick using the OneClick documentation. The easiest way is simply to edit an existing set of OneClick files. Alternatively, you could model your blendshape implementation after one of the existing, supported systems, ensuring your naming convention is the same for blendshapes and SkinnedMeshRenderer gameObjects; you could then simply apply the OneClick associated with the character model system you mimicked.

    Hope that helps,
    D.
     
  46. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Can you please just create a OneClick implementation for the ARKit 52 blendshape locations?
     
  47. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Sorry, it just isn't that simple. Creating a OneClick for an arbitrary set of blendshape names isn't really possible. Each underlying model is different, and the model components and bones have different names (about which we know nothing). Additionally, we would have no way of knowing how any one designer has implemented the shapes; implementations may include more or less than what we would expect. A Minnow-created OneClick wouldn't work for this scenario.

    If you want to use the ARKit blendshape archetypes, that is perfectly fine, but the end result is the same: your model is custom, and we don't know anything about it other than the fact that you used similar blendshape names. Once you've created your model, you can configure it with SALSA Suite, and it is easy to edit an existing OneClick to work with it, assuming you have more models configured in the same manner -- otherwise, just configure the one model and be done. We have a document showing how to create your own custom OneClick. For the reasons above, you would have to edit the OneClick either way.

    Good luck on your project!
    D.
     
  48. Nanita

    Nanita

    Joined:
    Jun 17, 2016
    Posts:
    6
    Hi, I just bought SALSA today and tried to assign an audio clip via script, but I was having trouble. I was hoping you could kindly provide guidance.

    The steps I took:
    - Opened the demo scene with the box head avatar and SALSA already set up
    - Deleted the promo audio clip from box head's AudioSource
    - Successfully downloaded an audio clip from the phone's persistent data path (I can play the audio via AudioSource.Play())
    - Assigned the AudioClip to SALSA's AudioSource via script

    The result:
    - Box head still lip-syncs the promo audio clip
    - The downloaded audio plays at the same time while box head is lip-syncing the promo audio

    I tried to fix it by:
    - Deleting the promo audio clip from the Assets folder

    Result:
    - The downloaded audio clip plays, but box head's mouth does not move

    Another question: how can I stop and start the lip-sync via script instead of having it auto-play from the start?

    Please kindly help! T_T
     
    Last edited: Jan 24, 2021
  49. Crazy-Minnow-Studio

    Crazy-Minnow-Studio

    Joined:
    Mar 22, 2014
    Posts:
    1,399
    Hello Nanita!

    Please post your code so the community can help you resolve your issue. We cannot point out a logic issue without seeing the code.

    Your second question should be easily resolved by simply playing or stopping the linked AudioSource, either via a direct reference to the source or by using SALSA's reference to the AudioSource.
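
    For example, a minimal sketch of that (downloadedClip is an illustrative placeholder; the key is to assign the clip to the same AudioSource SALSA references):

    using CrazyMinnow.SALSA;
    using UnityEngine;

    public class PlayDownloadedClip : MonoBehaviour
    {
        public Salsa salsa;
        public AudioClip downloadedClip; // e.g. loaded from persistentDataPath

        public void StartTalking()
        {
            // If the clip plays through a different AudioSource than the one
            // SALSA references, you hear audio but see no lip-sync.
            salsa.audioSrc.clip = downloadedClip;
            salsa.audioSrc.Play();
        }

        public void StopTalking()
        {
            salsa.audioSrc.Stop();
        }
    }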

    D.
     
  50. Nanita

    Nanita

    Joined:
    Jun 17, 2016
    Posts:
    6
    Hello D!

    Thank you for your very fast reply!

    It's actually working now! xD
     
    Crazy-Minnow-Studio likes this.