Hello all, since a significant part of Chop Chop involves interacting with characters in dialogue / cutscene sequences, I have been thinking it might be good to add a little more nuance to the characters. I know this project is meant to be a vertical slice and that this feature hasn't been specifically requested, but I actually developed this system for my own game and thought it would be nice to share the implementation if there is sufficient interest!

The system I'm working on (implementation described in detail below) leverages Unity's Timeline API to create custom sequencing components, which can be used to define a character's mood as a function of time. The character's mood is then used by other sub-systems to control its eye textures, mouth textures, and animations. Additional features like eye blinking and lip-syncing are also supported and are ultimately driven by the character's mood. Finally, the system supports localization; for example, the lip-syncing mouth textures used on a character will change with the active language. Taken together, I just refer to the whole thing as a Character Expression System. Before I carry on, please watch the demonstration video linked below to get an idea of what this system does. Please note that the system runs more smoothly than shown in the video; my computer just has trouble with video recording software.

The potentially "controversial" addition my system brings is the use of phonemes. If you don't know what those are, they are basically the building blocks used to lip-sync a character to spoken dialogue. That's why I said this could be a little controversial, given that the characters will be speaking Simlish / Animal Crossing style gibberish. Still, I personally like the effect. Even though the audio won't be spoken language, the dialogue text will be, and I think it looks quite good when the character's lips sync with the text the player is reading on-screen. This also avoids the problem of the same mouth animation playing every time a character speaks, as in the current implementation. Of course, a system like this has trade-offs too; for one, many more textures will be needed for each character. I'm happy to work on that if needed, but I'm not an artist by trade, so please feel free to usurp me if your talents are better than my own (you probably won't have to try very hard).

Here's an overview of how the system works:

(1) A custom Timeline track / clip / behaviour is defined for character moods (see image below). Note a few things in this image:
- As you can see in the clip settings to the right, a custom "mood set" can be assigned to each of these clips (more on that later).
- The CharacterMoodTrack has a binding to the main system I use, ExpressionManager, which is itself derived from MonoBehaviour.
- There are additional clip settings to control which animation to play. As will be shown below, mood sets allow an array of animations to be assigned, so you might have five animations, for example, that go along with a character's "happy" mood. You might want the animation that plays to be selected randomly (the default), or you might want to force a specific animation within the array to play (that's what the animation index is for).
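For anyone curious about the Timeline side, here's a minimal sketch of what the track / clip / behaviour trio can look like. This is simplified: MoodCollectionSO stands in for the "mood set" asset, and ExpressionManager.SetMood(...) is an illustrative entry point, not necessarily the exact signature in my implementation.

```csharp
using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Timeline;

// The track is bound to the ExpressionManager (a MonoBehaviour) in the Timeline editor.
[TrackBindingType(typeof(ExpressionManager))]
[TrackClipType(typeof(CharacterMoodClip))]
public class CharacterMoodTrack : TrackAsset { }

public class CharacterMoodClip : PlayableAsset
{
    public MoodCollectionSO moodSet;        // the "mood set" assigned in the clip inspector
    public bool pickRandomAnimation = true; // default: pick an animation from the mood set at random
    public int animationIndex;              // used to force a specific animation from the array

    public override Playable CreatePlayable(PlayableGraph graph, GameObject owner)
    {
        var playable = ScriptPlayable<CharacterMoodBehaviour>.Create(graph);
        var behaviour = playable.GetBehaviour();
        behaviour.MoodSet = moodSet;
        behaviour.AnimationIndex = pickRandomAnimation ? -1 : animationIndex;
        return playable;
    }
}

public class CharacterMoodBehaviour : PlayableBehaviour
{
    public MoodCollectionSO MoodSet;
    public int AnimationIndex; // -1 means "pick one at random"

    public override void ProcessFrame(Playable playable, FrameData info, object playerData)
    {
        var manager = playerData as ExpressionManager; // the track binding
        if (manager == null || MoodSet == null)
            return;

        manager.SetMood(MoodSet, AnimationIndex); // hypothetical entry point
    }
}
```

One design note: Timeline calls ProcessFrame every frame the clip is active, so in practice you would only apply the mood on the first frame (or cache the last applied mood) rather than re-applying it continuously.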
The CharacterMoodClips are independent of DialogueClips, so a character's mood can change mid-sentence, or even when there is no dialogue at all.

Before we continue: I had to crop a bunch of images together into one (see below) because I was running into my upload limit. Please reference this image as the systems below are discussed.

(2) A mood system is implemented. This system again leverages Timeline and defines custom tracks / clips / behaviours. The mood system allows you to set the character's mood (happy / sad / angry / explaining, etc.), and the mood is then used to set textures for the eyebrows, eyes, and mouth, as well as the character pose. And yes, I know "explaining" is not a mood; in this demo I was trying to stretch the utility of the system, so maybe "mood system" isn't the best label, since it can really be used to model lots of character states. Here are the elements that define a mood collection:
- The actor that it affects, represented by the ActorSO object.
- The mood itself, an enumeration that is used internally as a key in a dictionary to quickly reference the other mood collection properties.
- Three eye textures for blinking: eyes fully open, eyes mid-blink, and eyes fully closed.
- The mouth textures, which are obtained from another system called PhonemeSetSO (more on that later). This member is localized, because phonemes (the basic mouth shapes used for lip-syncing) are language-specific. Here again, the character's mouth shape is a function of its mood, so an "AH" sound, for example, will use different textures depending on whether the character is happy, sad, etc.
- The character pose. I have currently implemented small (and probably not very good) animations (I am not an animator) and use Animator.CrossFade to fade in to the pose.
- Finally, the animator clip title section. Here you set the titles of the animation clips (already in the character's Animator Controller) that you'd like to play for this mood. Internally, the titles are converted to hashes for efficiency.

(3) The PhonemeSetSO class, which defines the base sounds that a character can make, links the appropriate mouth texture to each base sound. As can be seen in the thumbnail below, lots of sounds ("K", "AA", "AE", ...) use the "Happy_AH" mouth texture. If you'd like to know more about these base sounds, I use the CMU Pronouncing Dictionary. Remember also that in the mood collections (described above), the PhonemeSetSO member is localized. That means you'll need one of these assets for each language and each mood. For example, if we have happy and sad moods and support English and French, then we'd need four PhonemeSetSOs. Also, as far as I know, the CMU Pronouncing Dictionary only covers English; phoneme sets for other languages can be found elsewhere. The PhonemeSetSO class has a public function called GetMouthShape(string phonemeKey), which takes in the phoneme key (a base sound, like "AH") and returns a Texture2D. That texture is ultimately passed along by the ExpressionManager to the appropriate ActorSO component, which sets the mainTexture property of its internal mouth material. The mouth mesh on the character that is assigned that material then updates automatically.

(4) Modification to DialogueLineSO. In addition to the original localized dialogue "Sentence", there is another LocalizedString for the phoneme sequence.
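To make (2)–(4) a bit more concrete, here's a trimmed-down sketch of the data containers. Field and type names are illustrative (the real classes have more to them), and the DialogueLineSO excerpt only shows the new field, not the existing ones.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Localization;

public enum Mood { Happy, Sad, Angry, Explaining }

// Unity's Localization package uses LocalizedAsset<T> subclasses for serializable localized asset references.
[Serializable] public class LocalizedPhonemeSet : LocalizedAsset<PhonemeSetSO> { }

// (2) One "mood collection" per actor per mood.
[CreateAssetMenu(menuName = "Expressions/Mood Collection")]
public class MoodCollectionSO : ScriptableObject
{
    public ActorSO actor;                                  // the actor this collection affects
    public Mood mood;                                      // used internally as a dictionary key
    public Texture2D eyesOpen, eyesMidBlink, eyesClosed;   // blink textures
    public LocalizedPhonemeSet phonemeSet;                 // localized: one PhonemeSetSO per language per mood
    public string[] animatorClipTitles;                    // clip names in the character's Animator Controller

    // Titles are converted to hashes once, for efficiency.
    public int[] GetAnimatorClipHashes()
    {
        var hashes = new int[animatorClipTitles.Length];
        for (int i = 0; i < animatorClipTitles.Length; i++)
            hashes[i] = Animator.StringToHash(animatorClipTitles[i]);
        return hashes;
    }
}

// (3) Maps base sounds ("AH", "K", ...) to mouth textures for one language and mood.
[CreateAssetMenu(menuName = "Expressions/Phoneme Set")]
public class PhonemeSetSO : ScriptableObject
{
    [Serializable]
    public struct PhonemeShape
    {
        public string phonemeKey;      // e.g. "AH"
        public Texture2D mouthTexture;
    }

    [SerializeField] private List<PhonemeShape> shapes;
    private Dictionary<string, Texture2D> lookup;

    public Texture2D GetMouthShape(string phonemeKey)
    {
        if (lookup == null)
        {
            lookup = new Dictionary<string, Texture2D>();
            foreach (var shape in shapes)
                lookup[shape.phonemeKey] = shape.mouthTexture;
        }
        lookup.TryGetValue(phonemeKey, out var texture);
        return texture;
    }
}

// (4) DialogueLineSO, trimmed to the fields relevant here; the real class has more going on.
public class DialogueLineSO : ScriptableObject
{
    public LocalizedString sentence;        // existing localized dialogue text
    public LocalizedString phonemeSentence; // new: whitespace-separated phoneme sequence
}
```

At runtime you would resolve the phoneme set for the current locale (e.g. via phonemeSet.LoadAssetAsync()) before dialogue playback starts.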
Here you can see an example of how a regular line of text is (manually) entered as a phoneme sequence: "And win the game" = "EH N D . W IH N . TH AH . G EY M ." The system interprets whitespace as the separator when parsing out the individual phonemes (using string.Split(' ')). For this reason, it is important not to enter double spaces, because no phoneme is defined for whitespace. You'll also note that periods "." are used in the phoneme sentence. A period indicates the end of a word and tells the Expression System that the character should close its mouth before forming the next word. Note that this format is what the CMU Pronouncing Dictionary generates automatically. It also has options to indicate primary and secondary stress on phonemes, which could potentially be added to this system at a later date. To be clear, no outside resource is actually necessary for this system to work; once you learn the basic phonemes, you'll be able to type out these phoneme sentences on your own. In my implementation, all of these sentences are entered manually.

(5) Modification to DialogueBehaviourSO. This is where all the cool stuff related to parsing the phoneme sentence happens. First, the LocalizationSettings are used to retrieve the localized string asynchronously, and when that asynchronous operation completes, the localized phoneme data is stored. In other words, I just get the phoneme sentence and the underlying PhonemeSet needed for the current language. Then, this localized phoneme sentence is parsed, using whitespace as the phoneme separator. See the image below for more detail.

(6) Modification to ActorSO, and the eye / mouth texture solution. None of the base functionality of ActorSO has changed, but there are several default settings that need to be set here. For example, a default eye texture, mouth texture, and animation clip need to be set in case the system doesn't know what textures / animations to display at some point. Also, two material references are needed (one for the eyes, one for the mouth). Within the ActorSO script, there are several public functions used by the Expression System to set the mainTexture property of the eye and mouth materials. Note that this represents a change in how the characters are currently set up (this character change will not be part of the PR unless requested). Prior to my changes, there were multiple copies of facial meshes parented to the head bone in the character rig, and these meshes were enabled / disabled by an animation to produce the desired expression sequence. My system needs only one mesh for each facial component (L Eye, R Eye, Mouth -- and it can easily be extended to include eyebrows) and changes the texture on each of those components. Of course, the system could be modified to work the way things were previously, but I think my solution is more scalable if lots of textures are needed.

Below, I have attached some example eye and mouth textures using this mood / expression system. Please note that these textures were made very quickly with a mouse; I will be getting a graphics tablet soon and can make improved textures later. The textures were modeled after the artwork for the main character, Hamlet, and some use the original artwork directly.

And that's basically it! Of course, I haven't really talked about how the ExpressionManager itself works, but it is essentially just a front end that interfaces with functions on the ActorSO component. For example, at the appropriate time, it will call ActorSO.SetMouthTexture(...), ActorSO.SetEyeTexture(...), or ActorSO.TransitionToAnimatorClip(...).
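Here's a loose sketch of that last mile: the ActorSO additions from (6) and the phoneme-sentence parsing from (5). The PhonemePlayback helper and the defaultMouthTexture / ResetMouth names are mine for illustration, not necessarily what will be in the PR, and the timing/coroutine side (holding each shape for a few frames) is left out.

```csharp
using UnityEngine;

// (6) ActorSO, trimmed to the expression-related additions.
public class ActorSO : ScriptableObject
{
    [SerializeField] private Material eyeMaterial;            // shared by the eye meshes
    [SerializeField] private Material mouthMaterial;          // shared by the mouth mesh
    [SerializeField] private Texture2D defaultMouthTexture;   // fallback / "mouth closed" texture

    public void SetEyeTexture(Texture2D texture)   => eyeMaterial.mainTexture = texture;
    public void SetMouthTexture(Texture2D texture) => mouthMaterial.mainTexture = texture;
    public void ResetMouth() => SetMouthTexture(defaultMouthTexture);

    // TransitionToAnimatorClip(...) handles the Animator.CrossFade side and is omitted here.
}

// (5) Parsing and playback of one localized phoneme sentence.
public static class PhonemePlayback
{
    public static void Play(string phonemeSentence, PhonemeSetSO phonemeSet, ActorSO actor)
    {
        // Whitespace separates phonemes; double spaces would produce empty entries.
        string[] phonemes = phonemeSentence.Split(' ');

        foreach (string phoneme in phonemes)
        {
            if (phoneme == ".")
            {
                actor.ResetMouth(); // "." marks the end of a word: close the mouth
                continue;
            }

            Texture2D shape = phonemeSet.GetMouthShape(phoneme);
            if (shape != null)
                actor.SetMouthTexture(shape);
            // In the real system this loop is spread over time so each shape stays visible briefly.
        }
    }
}
```

The ExpressionManager sits in front of all of this, so the Timeline clips and the dialogue system never touch the materials directly.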
See the diagram below for more information on the core systems.

All in all, I think this system is pretty flexible and slots quite nicely into the existing framework, and I think it allows for more nuanced interactions with characters. But of course, the decision has to be left up to the community and Unity itself. I should mention that this system doesn't interfere with any underlying systems and can be bypassed if you want to use the currently implemented approach. I will be submitting a pull request with this system implemented, hopefully by tomorrow. No worries if it doesn't fit the style of the game. Note that it currently only works on one character. I am in the process of extending it to all characters (it involves a lot of dictionaries!), and that will come in a future pull request.