Discussion in 'Assets and Asset Store' started by Crazy-Minnow-Studio, Apr 23, 2014.
We are working on CC4 OneClick support. I hope to have an initial stab at this very soon.
Turns out it was even simpler than that. I didn't realize you could actually change the transform to something else. Placing it on a bone just before the head did the trick. Thanks for the help!
I've stumbled on an issue with the Editor scripts for Amplitude & SALSA LipSync (possibly others, but those are the two we have).
It concerns the prefab files containing the SALSA components. If you change values in a SALSA component, the change updates in the Inspector but doesn't trigger a save of the file, meaning no changes show up in the repo.
We found that using SerializedProperty was a way to resolve this issue (see attached).
If you need more info let me know.
Yes, we were notified by some customers a while back about the SALSA Suite components not saving changes in prefab instances from (I believe it was) Unity 2019 LTS onward. We researched and repaired that issue, so as long as you are on the latest version of SALSA Suite, that shouldn't be a problem for those modules. Amplitude, as you indicated, probably does need a fix. Using Unity's SerializedProperty method is interesting; they must have baked that into their new, improved prefab handling at the time. Likely a good retrofit for smaller scripts. I've never been a fan of stringified field-name searches tho -- too susceptible to my typos.
The implementation we were using was to add a call to:
It works best with some sort of change detection, but it does the trick either way.
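For reference, the SerializedProperty pattern discussed above might look roughly like this in a custom inspector. This is a hedged sketch, not the actual SALSA or Amplitude editor code; the component and field names are placeholders:

```csharp
// Sketch: using SerializedProperty in a custom editor so value changes on
// prefab instances are recorded and actually saved to disk.
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(MyLipSyncComponent))] // placeholder target type
public class MyLipSyncComponentEditor : Editor
{
    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        EditorGUI.BeginChangeCheck();
        // The stringified field-name lookup noted above as typo-prone.
        EditorGUILayout.PropertyField(serializedObject.FindProperty("someValue"));

        if (EditorGUI.EndChangeCheck())
        {
            // ApplyModifiedProperties registers undo and flags the object
            // (including prefab instances) as dirty so the change persists.
            serializedObject.ApplyModifiedProperties();
        }
    }
}
```

The change-check wrapper is the "change detection" mentioned above: without it the properties are still applied every repaint, which works but does redundant work.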
Thanks for letting us know!
It's not working in WebGL. Isn't buying the Suite enough?! There's no mention of supported platforms on the Asset Store page; do you have to buy one more plugin to make it work in WebGL?
Also, it's not clear how to get the free bridge between SALSA and Amplitude. There's no invoice with a number from the Unity Asset Store.
As mentioned on the SALSA Suite store page, SALSA does not work with WebGL out of the box. This is also mentioned in several locations on our web site and in the online documentation. You do not have to use Amplitude, but that is the only product that we support. You are free to create your own or use another product. We have tried to make the requirements as transparent as possible.
As mentioned on the Amplitude store page, the bridge between Amplitude and SALSA is available on the Amplitude downloads portal.
If you have suggestions for how we can change the wording to better clarify the requirements, please let us know.
Hope that helps,
Hello, it seems the bone rotation must be out of alignment, but we would have to see the model to be sure. Please send your model and invoice number to our support email and reference this thread. Mike is out of town this week, but intends to check this out when he returns.
OK. I would suggest making it bold and putting it at the top of the description. The AmplitudeSalsa and Amplitude samples don't work for me on WebGL, but the integration in my project does!
I have a batch of visemes in the 3D model I use, and I've tried to match some of them to your list of recommendations here. Maybe you could point me to which order, from top to bottom, will give the best results with SALSA for my list (attached)?
Hello, it is really hard to say which of your visemes match up best to the visemes we have chosen as our default configuration for OneClicks, which are the ones listed in the recommendations. Keep in mind, this is not a hard and fast rule or requirement, they are simply what we have found to create good lip-sync dynamics. It also depends on your model and you might find some other combinations better meet your needs. Just guessing, I would say (CH, FF, TH, DD, U, E, O) would probably be close to the recommendations. Ultimately, the best choice is where dynamics oscillate between shapes and gradually grow in size.
Hope that helps,
Yes it does, thanks.
I don't understand how to use OneClicks. According to the video, I should download it from the plugins section, but there's no such plugin listed:
After a while I found that OneClickBase.cs is already part of the Suite.
When I try to add it, I see only the preset boxHead-Demo.
If I add it, nothing happens, and I still need to configure visemes etc. manually. I have an .fbx model and can't figure out what the purpose of OneClicks is. Does it automatically search for visemes??
P.S. I just wanted to try it; maybe OneClicks has better performance than what I have now with manually configured visemes.
Hi! Does anyone use SALSA with wwise at the moment? Any tips on how to implement it?
OneClickBase is a support file used by SALSA & EmoteR OneClicks. OneClicks are only available for supported model systems, like DAZ, Reallusion, etc. You can find the supported list of OneClicks in the online documentation. The download portal in your screenshot is the Amplitude portal; OneClicks are in the SALSA Suite downloads portal. Otherwise, you can configure your model and make your own OneClick, following the detailed documentation online.
SALSA likely won't work with Wwise out-of-the-box. I don't know much about Wwise, but if you have access to the raw data you can implement a relatively easy middle-ware piece to feed that data to SALSA. See the example code for an idea on how to do this here:
Keep in mind, if you don't have access to the raw data, and only have access to post-processed data, there will be some caveats to be aware of. Notably, any processing that modifies the amplitude dynamics of the audio (e.g. spatialization, volume, etc.) will negatively affect the dynamics of lipsync. This is why raw data is most desired. In the example code, the audio filter insert (OnAudioFilterRead) gets post-processed data. Replace this with your data stream.
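To illustrate the filter-insert idea, here is a hedged sketch of tapping Unity's audio filter chain to compute an amplitude value. It is not the linked example code, and the hand-off to SALSA is deliberately omitted; see the SALSA documentation for the actual API:

```csharp
// Sketch: OnAudioFilterRead receives post-processed sample data on the
// audio thread. A middleware stream (e.g. from Wwise) would replace it.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class AmplitudeTap : MonoBehaviour
{
    public float currentAmplitude; // latest average absolute sample value

    // Called by Unity on the audio thread whenever a buffer passes through.
    void OnAudioFilterRead(float[] data, int channels)
    {
        float sum = 0f;
        for (int i = 0; i < data.Length; i++)
            sum += Mathf.Abs(data[i]);
        currentAmplitude = sum / data.Length;
    }
}
```

Because this callback runs on the audio thread, only the computed value should be read from the main thread; don't touch Unity scene objects inside it.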
Hope that helps,
Thanks, this is great advice! I’ll check to see if I can get raw data from Wwise, and whether I can get that into SALSA with your suggestions.
Will keep you posted!
Follow up question: is there a way to bake the animations created by SALSA?
It wouldn’t be the most convenient way to go about things, but it would be an option then to first feed SALSA regular AudioClips, use that to bake blendshape/bone rotation into an animation, and then later trigger said animation concurrently with a Wwise audio event.
EDIT: Some early testing shows that this is potentially done with the Unity Recorder from the Package Manager. Will have to find out if this can be sufficient for our needs
Technically, you could probably record the blendshape/bones configured in each viseme and then play them back. Depending on your model and usage, if you are also using EmoteR and Eyes, you would probably want to record all of that as well, since they all work together to avoid conflicts of bone/shape usage. And EmoteR emphasizers work on audio timings via SALSA. If this consideration is with respect to getting Wwise to work with SALSA, let's hold that as the very last resort, since it would basically eliminate the core goal of SALSA -- to accelerate the workflow.
You're right, it would eliminate the best feature of SALSA. However, given my current skillset and amount of available time I don't see a way to get Wwise to talk to SALSA directly. It should be possible, and I'll gladly hear if someone has this as a solution, but right now I cannot create that.
The core of the problem is that the direction of commands is a little reversed when using middleware. Audio (and all effects) stay inside this second application (Wwise / FMOD). In this application you simply attach Events to (groups of) audio clips. From Unity you call those events wherever you want (Start, OnTriggerEnter, firing a gun, etc). But in essence you are talking to your middleware, which holds all the audio information, which includes any information that SALSA would need for processing. By default you don't get any of that information back from Wwise into Unity, so there's nothing for SALSA to know or analyse for lipsync processing...
So my options are to either not use audio middleware at all, and keep everything in Unity. Which is fine for the most part, but many things (not lipsync-related, that is) are just much more convenient in Wwise. Even at its most basic level, in Wwise it's much easier to create randomisation, small variations, room reverb etc, to make the soundscape much more interesting. Also: all sounds simply live inside Wwise, and are called by Unity via basic events. No more hunting around the scene to figure out where certain sounds are held, and all that complexity.
Another option is to only have voice-lines in Unity, run those through SALSA the good old fashioned way, and have other sounds in Wwise. This is also a very fine solution, and can easily be achieved by running Wwise events through a preliminary (event-) stage which checks where the specific event should go: either to Wwise, or to SALSA.
- Downsides: some audio in Unity, some audio in Wwise.
- Upsides: SALSA is live baby! Plus changing (the length of) voicelines can be done on the fly.
What I'm currently aiming at is to make a convenient system to run all voicelines through SALSA, and record each as an AnimationClip which holds the SALSA generated lipsync + EmoteR generated facial movements. You'd use this (hopefully once) during development, and store the animation clips in your project together with your other animations.
To play these back, another system creates AnimationOverrideControllers every time a new voiceline is requested. This loads the corresponding animation into the OverrideController and plays it on the Animator, while sending off the corresponding Event to Wwise for processing there. (Theoretically you can always keep EmoteR and Eyes live during this stage too, with random emotes for instance.)
- Downsides: little bit harder to change voice-lines mid-development, since they'll need to have their animations re-recorded.
- Upsides: All audio lives in Wwise, and can be manipulated at will there.
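The playback side of this plan might be sketched like this. Names are hypothetical and the real implementation lives in the SALSAWwisely repo, so treat this as an illustration of the override-controller idea only:

```csharp
// Sketch: swap a pre-recorded lipsync AnimationClip into an
// AnimatorOverrideController, then fire the matching Wwise event.
using UnityEngine;

public class VoicelinePlayer : MonoBehaviour
{
    public Animator animator;
    public AnimationClip recordedLipsyncClip;                // baked earlier via SALSA + Recorder
    public string placeholderClipName = "LipsyncPlaceholder"; // clip name in the base controller

    public void PlayVoiceline(string wwiseEventName)
    {
        // Build an override controller on top of the existing controller and
        // replace the placeholder clip with the baked lipsync animation.
        var overrideController = new AnimatorOverrideController(animator.runtimeAnimatorController);
        overrideController[placeholderClipName] = recordedLipsyncClip;
        animator.runtimeAnimatorController = overrideController;

        // Hand the audio itself to Wwise for processing there.
        AkSoundEngine.PostEvent(wwiseEventName, gameObject);
    }
}
```

This keeps the Animator state machine small, as described above: one placeholder state, with the actual clip swapped in per voiceline.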
I'll let you know how this little experiment goes!
Okay, I have made something based on my limited testing. Using this system you can record AnimationClips using SALSA / EmoteR / Eyes from a list of AudioClips. You can set it running and have some coffee and come back later if you'd like. Then, in order to use the new AnimationClips in combination with Wwise Events, a separate component uses the default Wwise Event and creates an AnimationOverrideController with the earlier recorded AnimationClip. This way you don't need to create huge Animator state machines. Feel free to use and change the code provided if it's useful to you:
Link to GitHub: SALSAWwisely
I think I would have voted for this option! It's too bad Wwise won't give you access to any data. That would have been the best solution. Good luck on your project. We always look forward to hearing how things are going.
Yes man! I might be able to integrate that option in the system as well at some point. So that you can select on a per-voiceline basis whether it should be routed through SALSA live, or trigger Wwise+Animation event.
any updates on this?
Hi, we have purchased the SALSA LipSync Suite for our project. We are planning to use the OneClick setup for iClone/CC3 characters, but we're facing issues exporting the required visemes from iClone to the Unity FBX. Has anyone done this, and can anyone guide us through the process of exporting the viseme blendshapes from iClone for use with SALSA?
Any help is appreciated.
I'm hoping to post a beta build tomorrow.
Reallusion CC OneClick v2.6.0
Release with initial support for CC4 character models.
Reallusion CC4 models are now supported in this release (v2.6.0). The package contains OneClicks for CC3 and CC4. Ensure you are applying the correct OneClick on your model. NOTE: Please remove any existing Reallusion CC OneClick files prior to installing this one. There is one Editor file and two Plugins/OneClickRuntimes files to remove. Since I combined them, some of the class file names were changed to the generic 'CC'.
Please also consider CC4 support to be beta. If you find issues with the OneClick for your CC4 models, please do not respond here, but send us an email, with your Invoice number and some details around what is not working. Also, please send your model so we can see what is missing and test any required changes. Reallusion changed quite a bit on the blendshape names and there may be other changes that we haven't seen with the test models we have.
Get the package from our downloads portal.
Hi, does the texture controller type work exactly the same as the sprite workflow (as you showed on YouTube), if I may ask?
Hello, yes it does. The sprite, texture, material controllers all operate the same way. They simply apply to their respective Renderer classes.
Thank you for the reply!
May I ask whether the add-on can somehow record the resulting lipsync data to animation keyframes, please?
No it does not. You would need to roll-your-own solution or use a 3rd party asset to record the animation effects on the blendshapes and/or bones.
I just bought the plugin and want to test it with an ARKit 52 blendshape model. I want to know if there is any OneClick for this type of model so I can have a quick start.
Sorry, there is not a OneClick for ARKit 52 models. ARKit is a specification, not a model system, so while the blendshapes *should* adhere to the ARKit 52 names, the meshes and bones used do not, and the OneClick system relies on mesh and skeleton names to apply correctly and efficiently. You can, however, take a look at the CC4 or DAZ8.1 OneClicks and get a good start on creating your own custom OneClick for your models. Those two systems utilize shape names similar to the ARKit spec.
Hope that helps.
I posted this accidentally on the RT-Voice forum; your two products work so well together.
To optimize CPU I'd like to stop and start emotive SALSA processing depending on if the player is speaking to an avatar or not.
To stop facial animation, would calling the method SALSA -> TurnOffAll() be the method to use?
To restart facial animation and lipsync ability, would Salsa -> Initialize() be the way to go?
Does sample rate for voice recordings affect how closely lip sync from audio clips can track?
Streaming of course is out, and I'm trying PCM 22kHz compressed in memory. Before that I'd tried decompress on load / Vorbis.
Is there a way to enable an EmoteR only while the avatar is speaking and disable afterwards? The setting of the project does not have a specific timing because the talking process is triggered by a text-to-speech framework. So, I can't really know when exactly should the EmoteR be on but I want it to generate animation only if the character is talking at that moment. I could only find a command TurnOffAll() but I guess that's not what I need because I have other components that should stay on and I couldn't find a command to turn it back on. Thank you in advance!
SALSA is designed to be very processor friendly. While there are some checks going on, they happen so quickly that nearly zero processor time is devoted to audio silence. You can check this in the profiler to confirm. TurnOffAll() is the method to reset the blendshapes utilized in a SALSA configuration to zero. To truly eliminate SALSA from the mix, you would need to disable the component and re-enable it when you need it. But I would check your profiler with Deep Profile enabled to see if this will net you anything for the added complexity. Below you can see where SALSA is processing the audio clip vs. when the audio clip was paused with SALSA still active. It is up to you and your situation.
Sample rate can provide a cleaner and more dynamic recording, but it all depends on your recording. It will not provide a more phoneme accurate recording if that is what you mean. Opt for a clean recording with good dynamics for best results. Silence should be silent and the voice dynamics should be well distributed in the spectrum without peaking. You can use streamed audio, but you have to be aware of the asynch timing involved. Do not start playback until Unity provides the asynch ok, and of course, the audio type has to be supported.
OK, this seems like the same post as the above, and I assumed @XyrisKenn was talking about SALSA. EmoteR has even less going on than SALSA during silence, assuming you are talking about Emphasis emotes. Emphasis emotes are triggered by SALSA via its link to the EmoteR component. If SALSA isn't talking, there aren't any Emphasis emotes firing, and EmoteR isn't doing anything except checking to see if there are any random or sequencer emotes to fire; if you don't have any configured, there won't be anything to fire. If you want to disable EmoteR, you can do so -- just disable it. But then you'll need to run another loop to see if SALSA is SALSAing and, if so, turn EmoteR back on. Disabling EmoteR removes its LateUpdate loop, but that check loop will cost you more processor cycles than letting the Suite do its thing. It is entirely up to you, but I cannot conceive of how you would make the process more streamlined and processor friendly. As mentioned in the previous post, please check your profiler to confirm.
Hope that helps,
Thank you for your response! How can I disable it? Can I disable one specific EmoteR if an object has more than one EmoteR attached? If so, which function should I use?
If I don't write scripts for each component and just add SALSA components through the Unity interface, where (in which files) are the changes saved?
There is no internal function to disable it. If SALSA isn't sending any triggers, it is already effectively disabled. If you want to disable the component, you would disable it just like you would any other component on a GameObject.
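Toggling a specific EmoteR might look like the sketch below. It assumes the EmoteR component class is `Emoter` in the `CrazyMinnow.SALSA` namespace; verify the actual type name against your installed package:

```csharp
// Sketch: disabling one specific EmoteR component among several on an object,
// exactly as you would toggle any other Unity component.
using CrazyMinnow.SALSA; // assumed namespace for the Emoter class
using UnityEngine;

public class EmoterToggle : MonoBehaviour
{
    public Emoter targetEmoter; // drag the specific EmoteR instance here

    public void SetEmoterActive(bool active)
    {
        // Disabling the component stops its update loop; re-enabling resumes it.
        targetEmoter.enabled = active;
    }
}
```

Assigning the reference in the Inspector avoids ambiguity when multiple EmoteR components exist on the same GameObject.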
I'm not sure I understand the question, but I will take a stab at it. The configuration data for each editor-configured component is maintained in Unity's serialization process. They are not saved in a file per se, they are stored in the Asset database which handles all assets/components.
I'm not a Unity developer (yet) so please excuse my ignorance. I would like to have someone build a Unity app that listens for a JSON message. In the message will be a character model name and a path to an mp3 file. The app loads that CC3 model from an AssetBundle and uses Salsa to lipsync the mp3 file and then applies an idle animation to the model. Is this possible? If so is there any relevant documentation? I would like to get the app built and then be able to keep adding more models to the AssetBundle without needing to rebuild the app. Or maybe there's an easier way?
I would say that would be possible. On the SALSA Suite side of things you just want to implement some run-time configuration of your CC3 model when it is loaded. https://crazyminnowstudio.com/docs/salsa-lip-sync/modules/further-reading/runtime-setup/
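The surrounding plumbing (loading the bundle and the mp3) uses standard Unity APIs. Here is a hedged sketch with the SALSA configuration step deliberately omitted; for that part, follow the runtime-setup documentation linked above:

```csharp
// Sketch: load a character prefab from an AssetBundle and an mp3 from disk,
// then play the clip on the instantiated model. SALSA runtime configuration
// would happen between steps 2 and 3 (see the runtime-setup docs).
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class CharacterLoader : MonoBehaviour
{
    public IEnumerator LoadAndSpeak(string bundlePath, string modelName, string mp3Path)
    {
        // 1) Load the model prefab from the AssetBundle.
        var bundleRequest = AssetBundle.LoadFromFileAsync(bundlePath);
        yield return bundleRequest;
        var prefab = bundleRequest.assetBundle.LoadAsset<GameObject>(modelName);
        var character = Instantiate(prefab);

        // 2) Load the mp3 into an AudioClip.
        using (var req = UnityWebRequestMultimedia.GetAudioClip("file://" + mp3Path, AudioType.MPEG))
        {
            yield return req.SendWebRequest();
            AudioClip clip = DownloadHandlerAudioClip.GetContent(req);

            // 3) Configure SALSA on the instantiated model at runtime,
            //    then play the clip through the AudioSource SALSA analyzes.
            var source = character.AddComponent<AudioSource>();
            source.clip = clip;
            source.Play();
        }
    }
}
```

Since new models only need new bundle entries, this structure supports adding characters without rebuilding the app, as asked above.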
I am moving up from v4.2.1 to the current version. Do I need to remove the older version first? Any special steps to upgrade?
I think you've posted to the wrong forum thread. We don't have any software with that version number.
Correction: 2.4.1, not 4.2.1; a transposition error. The question stands: can I import the current version on top of v2.4.1?
OK, that makes more sense. There isn't any requirement for updating the package, just import over the top. I would suggest two things:
1) As with all package updates, backup your project before updating.
2) 2.4.1 is over 2 years old and there were some significant changes in 2.5.0. You will likely need to make some configuration changes depending on how you are using the product. If you are using the API, there were some breaking API changes in Eyes along the way. Also possibly some architectural changes in Eyes that may leave some orphaned components on the skeleton. Check the release notes for details on version changes.
Otherwise, as long as you backup your project, you shouldn't have any problems.
Great asset! I have a non-human character, a dragon, that doesn't have blendshapes; however, it has bones that open and close the mouth. Would this work with SALSA?
Yes, it will; it just won't be as dynamic in its mouth movement with a simple jaw bone. It will work tho. Configure one viseme with the max opening you want the jaw to have and enable Advanced Dynamics for the best variation.
thanks for this amazing plugin, I am having a blast with it.
One annoying issue that I am having is that it throws an exception like this way too often and it is quite unpredictable:
NullReferenceException: Object reference not set to an instance of an object
CrazyMinnow.SALSA.Salsa.SalsaLssGetAudioClipSampleCount () (at <c6912b865df54414bfb4f0cf1b09ee9c>:0)
CrazyMinnow.SALSA.SalsaAdvancedDynamicsSilenceAnalyzer.ProcessAudioStream () (at <c6912b865df54414bfb4f0cf1b09ee9c>:0)
CrazyMinnow.SALSA.SalsaAdvancedDynamicsSilenceAnalyzer.LateUpdate () (at <c6912b865df54414bfb4f0cf1b09ee9c>:0)
I've searched through this topic and I've found that it was brought up a couple of times, but I didn't find any reliable solution. Since I am using audio I don't think I am able to get rid of SilenceAnalyzer as mentioned here: https://forum.unity.com/threads/sal...id-control-system.242135/page-44#post-7576081
I have a single audio source component on my characters that is used for voiceovers but also for combat sounds, grunts etc. Maybe that's an issue? I am not playing null audio clips, I am using PlayOneShot though.
It would be really awesome if I could get rid of this as it messes up with my automated QA using the unity test framework.
I'm going to make some assumptions based on the information provided. In this particular error, it appears SALSA has lost the AudioSource or clip. If you turn SALSA off or disconnect the AudioSource while playing non-voice audio in your single AudioSource, it will be necessary to turn off SilenceAnalyzer as well, since it calls SALSA's processing delegates and its AudioSource and clip references. You could also try separate AudioSources for effects and voice, leaving SALSA active -- again, I am assuming you are disconnecting the source or clip, based on the information. A OneShot inserts audio further up the filter chain, so I wouldn't think it would cause any issues with SALSA analyzing the AudioClip; in fact, I just tested that and it does not disrupt SALSA. Without knowing more details, it is hard to say. You could also just remove SilenceAnalyzer if it is incompatible with your project scenario.
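The separate-AudioSources suggestion can be sketched simply. Names here are hypothetical; the point is that the source SALSA analyzes is never reused for effects:

```csharp
// Sketch: split voice and effects onto separate AudioSources so the source
// assigned to SALSA is never interrupted or swapped by combat sounds.
using UnityEngine;

public class CharacterAudio : MonoBehaviour
{
    public AudioSource voiceSource;   // assign this one to SALSA in the Inspector
    public AudioSource effectsSource; // grunts, combat sounds, etc.

    public void PlayVoice(AudioClip line)
    {
        voiceSource.clip = line; // SALSA keeps a stable clip reference here
        voiceSource.Play();
    }

    public void PlayEffect(AudioClip sfx)
    {
        effectsSource.PlayOneShot(sfx); // never touches the voice source
    }
}
```

With this split, SALSA's AudioSource and clip references stay valid, which should avoid the NullReferenceException path in SilenceAnalyzer described above.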
If none of this points you in the right direction, please send us an email with your pertinent details: Invoice number, OS/SALSA/Unity/etc. version numbers and more details about your project and we will see what we can do to help.
Hope that helps,
Thank you for your detailed and thoughtful advice. I'll leave it running. I appreciate the efficient CPU usage. Cheers!