A friend quickly translated this script for me from one of his JS projects. The idea is that the script should move the mouth of a humanoid model based on the voice in the audio file. The code has 5 errors, and I also don't understand the purpose of some things like "Mathf". Could someone please explain this to me?

Code (CSharp):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class LipSyncV2 : MonoBehaviour
{
    public SkinnedMeshRenderer Character;
    public AudioSource AudioLocation;
    public int TalkingMouthNumber; // TalkingMouthNumber = your mouth talking blendshape number
    public float volume = 40f;
    public float frqLow = 200;
    public float frqHigh = 800;

    float[] freqData;
    int nSamples = 1024;
    int fMax = 24000;

    float BandVol(float fLow, float fHigh)
    {
        fLow = Mathf.Clamp(fLow, 20, fMax);     // limit low...
        fHigh = Mathf.Clamp(fHigh, fLow, fMax); // and high frequencies
        AudioLocation.GetSpectrumData(freqData, 0, FFTWindow.BlackmanHarris);
        int n1 = Mathf.Floor(fLow * nSamples / fMax);
        int n2 = Mathf.Floor(fHigh * nSamples / fMax);
        float sum = 0;
        // average the volumes of frequencies fLow to fHigh
        for (int i = n1; i <= n2; i++)
        {
            sum += freqData[i];
        }
        return sum * (n2 - n1 + 1);
    }

    void Start()
    {
        if (!AudioLocation) AudioLocation = GetComponent<AudioSource>();
        freqData = new float[nSamples];
    }

    void Update()
    {
        if (Character)
        {
            float DATAREADA = Mathf.Clamp((BandVol(frqLow, frqHigh) * volume * 2), 0, 100);
            DATAREADA = Mathf.Lerp(0, DATAREADA, Time.time * 0.1);
            //Character.SendMessage("FaceTalking", DATAREAD, SendMessageOptions.DontRequireReceiver);
            FaceTalking(DATAREADA);
        }
    }

    void FaceTalking(float TalkingNow)
    {
        if (SkinnedMeshRenderer)
        {
            SkinnedMeshRenderer.SetBlendShapeWeight(TalkingMouthNumber, TalkingNow);
        }
    }
}
How to understand errors in general: https://forum.unity.com/threads/ass...3-syntax-error-expected.1039702/#post-6730855

All classes have documentation. Start there: https://docs.unity3d.com/ScriptReference/Mathf.html

This sounds like something that is going to rely on a LOT more detail than the code itself. Code in Unity is only a tiny fraction of the problem; the rest is the scene, model, and prefab setup. Engineering is generally accomplished by defining and understanding a problem, then creating a solution. "Finding" a script is generally not a useful way to accomplish anything. I would recommend starting with tutorials for what you want to do. Voice-driven mouth animation is a very complex topic in general.
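For what it's worth, Mathf is just Unity's static math helper class (Clamp, Lerp, Floor and so on; the ScriptReference link above documents every method), and the five errors you are seeing are most likely plain C# compile errors rather than anything lip-sync specific: Mathf.Floor returns a float where the array index wants an int (twice), 0.1 is a double literal where Mathf.Lerp wants a float, and the last method uses the type name SkinnedMeshRenderer where your Character field was presumably meant (once in the if, once in the call). Below is an untested sketch of the same script with those spots changed, assuming your blendshape index and AudioSource are set up correctly:

Code (CSharp):

using UnityEngine;

// Sketch only: the same script with the likely compile errors fixed.
public class LipSyncV2 : MonoBehaviour
{
    public SkinnedMeshRenderer Character;
    public AudioSource AudioLocation;
    public int TalkingMouthNumber;   // index of the "talking" blendshape on the mesh
    public float volume = 40f;
    public float frqLow = 200f;
    public float frqHigh = 800f;

    float[] freqData;
    int nSamples = 1024;
    int fMax = 24000;

    float BandVol(float fLow, float fHigh)
    {
        fLow = Mathf.Clamp(fLow, 20, fMax);     // limit low...
        fHigh = Mathf.Clamp(fHigh, fLow, fMax); // and high frequencies
        AudioLocation.GetSpectrumData(freqData, 0, FFTWindow.BlackmanHarris);

        // Mathf.Floor returns a float; FloorToInt gives the int the array index needs.
        int n1 = Mathf.FloorToInt(fLow * nSamples / fMax);
        int n2 = Mathf.FloorToInt(fHigh * nSamples / fMax);

        float sum = 0;
        for (int i = n1; i <= n2; i++)
        {
            sum += freqData[i];
        }
        // The comment said "average", so dividing by the bin count is probably what was intended.
        return sum / (n2 - n1 + 1);
    }

    void Start()
    {
        if (!AudioLocation) AudioLocation = GetComponent<AudioSource>();
        freqData = new float[nSamples];
    }

    void Update()
    {
        if (Character)
        {
            float DATAREADA = Mathf.Clamp(BandVol(frqLow, frqHigh) * volume * 2, 0, 100);
            // 0.1 is a double literal; Mathf.Lerp takes floats, hence 0.1f.
            DATAREADA = Mathf.Lerp(0, DATAREADA, Time.time * 0.1f);
            FaceTalking(DATAREADA);
        }
    }

    void FaceTalking(float TalkingNow)
    {
        // SkinnedMeshRenderer is the type name; Character is the instance assigned in the Inspector.
        if (Character)
        {
            Character.SetBlendShapeWeight(TalkingMouthNumber, TalkingNow);
        }
    }
}

One more thing that isn't a compile error but looks suspicious: Time.time * 0.1f keeps growing for as long as the game runs, so after about ten seconds the Lerp does nothing at all. Some kind of Time.deltaTime-based smoothing is probably what the original JS intended, but that's a guess.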
I solved it in an easier way, without a tutorial. I just made a 1-second animation that opens and closes the mouth, and this small script plays that animation on a second Animator layer whenever an mp3 is playing in the AudioSource for the voice lines:

Code (CSharp):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class IsVoicelinePlaying_Lipsync : MonoBehaviour
{
    private AudioSource obj_Speechblendaudiosource;
    Animator YunoAnimator;

    private void Start()
    {
        // Cache the references once instead of calling GameObject.Find every frame.
        obj_Speechblendaudiosource = GameObject.Find("obj_SpeechBlendUNDVoiceManager").GetComponent<AudioSource>();
        YunoAnimator = GameObject.Find("YunoIK_withRibbonV7").GetComponent<Animator>();
    }

    void Update()
    {
        // Switch the speech layer on while a voice line is playing, off otherwise.
        if (obj_Speechblendaudiosource.isPlaying)
        {
            YunoAnimator.SetInteger("SpeechLayer", 1);
        }
        else
        {
            YunoAnimator.SetInteger("SpeechLayer", 0);
        }

        Debug.Log("obj_Speechblendaudiosource.isPlaying = " + obj_Speechblendaudiosource.isPlaying);
    }
}
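If you want to skip the integer parameter and the transitions in the Animator Controller, you could also drive the layer weight directly. This is just a sketch of that idea, not your setup: the class name, the public fields and the layer index 1 are placeholders you would replace with your own objects.

Code (CSharp):

using UnityEngine;

// Alternative sketch: fade the talking layer in and out by weight instead of a parameter.
public class VoicelineLayerWeight : MonoBehaviour
{
    public AudioSource voiceSource;      // assign the voice-line AudioSource in the Inspector
    public Animator characterAnimator;   // assign the character's Animator in the Inspector
    public int speechLayerIndex = 1;     // index of the layer with the looping mouth animation (assumption)
    public float blendSpeed = 8f;        // how fast the layer fades in and out

    void Update()
    {
        float target = voiceSource.isPlaying ? 1f : 0f;
        float current = characterAnimator.GetLayerWeight(speechLayerIndex);
        // Move the layer weight toward 0 or 1 over time so the mouth doesn't snap open or shut.
        characterAnimator.SetLayerWeight(speechLayerIndex, Mathf.MoveTowards(current, target, blendSpeed * Time.deltaTime));
    }
}

The upside is that the mouth animation can keep looping on its layer and you only blend its influence, so there is no state machine wiring needed for the talking layer.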