Games WOTT - Workout Tempo Tracker

Discussion in 'Works In Progress - Archive' started by TantzyGames, Aug 18, 2018.

  1. TantzyGames

Joined: Jul 27, 2016
Posts: 51


    I've been working on a workout/fitness app called WOTT that's almost ready for release after 3 years of development. After release there are still many features to be added, so I thought I'd start a thread here to share what I'm doing, what I've done, get some feedback and learn from you guys.

    Website: WOTT.club
    Twitter: twitter.com/WOTTclub

    I'm really eager for people to help with testing. If you're interested, please sign up for early release at the website.

    Some of the stuff I'm eager to discuss here includes:
    • Playables for the animation (see some earlier discussion here).
    • Firebase for the backend - including facebook/twitter/google sign in.
    • Lots of things I've learned about Unity UI.

    Some stuff I'm eager to do, and share as I go includes:
    • More playables (including animation C# jobs)
    • More AI, path navigation, etc
    • Character customization

    Some History
Today marks 3 years since I started working on the first prototype for WOTT. At the time I was doing Convict Conditioning, a progression-based bodyweight workout program that uses tempo training. I wanted an app that could keep my tempo, save my progress and show me which progression I was up to.

I created a prototype using Unity 5.1 and Playmaker, because I'd already used Playmaker to make a game and, well... I knew next to nothing about coding. I'd dabbled with learning programming a few times over the years, but hadn't got very far. That first prototype was really little more than a metronome, but it kept my tempo and stored my reps for the workout so I could wait until the end to write them down.

    Here is the prototype with an early concept pic.



    The next step was to get a workout routine into the app. I wanted to know what exercise to do each day, and what the target sets & reps were for each progression. I'd had some experience working with XML files in Playmaker, but what I needed to do this time was significantly more complex. Between this, my struggle with getting the indicator dials working, and having no idea how I was going to accomplish some of the other features I had planned, I realized that I was going to have to learn programming. So I started learning C#.

After a couple of months of learning, I'd figured out enough LINQ to process the XML routines, choose a routine, a program and a progression, and show the target reps.



Another project meant I wasn't able to get anything done for a few months. I didn't even have time to continue learning to code. But as soon as I could, I got back into it. I started figuring out Unity UI and working on a menu system to view and edit routines. I designed a database structure for routines, switched from XML to JSON, and got saving and loading on the device working.

    After a few more months I was making progress. The routine menus weren't fully functional, but the basic ideas were coming together using accordion style menus (although using a different method than I'm using now).



But I was still manually editing JSON files and using the original method of choosing a workout, so the next major milestone was to get the routine menu functional enough for me to create and load routines. The main workout timer was also still using Playmaker, so almost a year after starting I finally replaced it with code and was able to remove Playmaker from the project.

During the refactor of the workout screen I finally figured out what to use the third dial for. Initially it was going to show what progression you were up to, and it changed a few times along the way until it ended up as a Tabata-style visual representation of the total time for the workout.



The rest of 2016 was just getting it all working together: a lot of refactoring, adding inheritance for routine options, and making sure they all worked. I redesigned the accordions, which initially used scaling to open and close, to just move up and down (which solved a few problems but created a few more).



By the end of 2016 it was working pretty well. I could create, save and load routines, user data and user progress, but it was all on the device. There were two big things left to do: add the character, and get online. I started to work on the character, but then another project ended up taking a huge chunk out of 2017 (plus my wife broke her ankle, so I was running after her and 3 kids for a few months), so I didn't get much done until getting back onto it in September.

    I then created a 3 month plan to be testing by Christmas 2017.
    (Narrator: Testing didn't start until July 2018)

Thanks so much for reading this far, I'll continue the story real soon. In the meantime I'd love to know what you think, and please ask me anything.
     
    Last edited: Aug 24, 2018
  2. TantzyGames

    A push-up consists of an up pose, a down pose, and the transitions between the poses. I've created my own transitions, which allow me to adjust the time a transition takes based on the tempo set by the user.


A half push-up is just a full push-up going halfway down. I figured I could reduce the animation workload by re-using poses, and adjusting the transition to go part of the way to a pose instead of all the way.

    I implemented it last year, but hadn't had a chance to test it. The first test didn't go so well:


It turns out that while I was only going part way to the down pose, the up clip was still going to 0. The up pose needed to be at 1 - distance:


    So that didn't work out so well either. I'd set all the other clips in the mixer to 1 - distance, not just the up clip.

    To enable transitioning to a clip from any other pose or combination of poses, I'm taking a snapshot at the beginning of each transition. I realized I can use this to determine the correct mixer input to set to 1 - distance, and keep all the others at 0.
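For anyone curious, here's a minimal sketch of the idea (not my actual code — the two-clip setup and names are just for illustration, and here the snapshot is a clip rather than a run-time pose capture): a two-input mixer blends toward the target pose, but the target's weight stops at distance instead of 1, and the snapshot keeps the remaining 1 - distance.

Code (CSharp):
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class PartialTransition : MonoBehaviour
{
    public AnimationClip snapshotPose; // stand-in for the pose captured at transition start
    public AnimationClip targetPose;   // e.g. the "down" pose
    [Range(0f, 1f)] public float distance = 0.5f; // how far to go (0.5 = half push-up)
    public float duration = 1f;        // transition time, derived from the user's tempo

    PlayableGraph graph;
    AnimationMixerPlayable mixer;
    float elapsed;

    void Start()
    {
        graph = PlayableGraph.Create();
        var output = AnimationPlayableOutput.Create(graph, "Anim", GetComponent<Animator>());
        mixer = AnimationMixerPlayable.Create(graph, 2);
        graph.Connect(AnimationClipPlayable.Create(graph, snapshotPose), 0, mixer, 0);
        graph.Connect(AnimationClipPlayable.Create(graph, targetPose), 0, mixer, 1);
        output.SetSourcePlayable(mixer);
        graph.Play();
    }

    void Update()
    {
        elapsed += Time.deltaTime;
        // Blend toward the target, but stop at 'distance' instead of 1
        float w = Mathf.Min(elapsed / duration, 1f) * distance;
        mixer.SetInputWeight(1, w);      // target pose: goes up to distance
        mixer.SetInputWeight(0, 1f - w); // snapshot: ends at 1 - distance, not 0
    }

    void OnDestroy() { graph.Destroy(); }
}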

    With this in place partial transitions are up and running:
     
    Last edited: Sep 15, 2018
  3. OffThHeezay91

Joined: Feb 23, 2013
Posts: 45
The UI looks very nice. I laughed my face off at that animation that "did not go very well". GJ for getting it working!
     
  4. TantzyGames

    Thanks @OffThHeezay91

    I'm excited to be working on something I've been looking forward to doing for a while.

    For a long time I had everything stored on the device. When I started using the database I stopped saving to the device to make sure the database was all working properly. I've been caching audio and move animations, but not any of the user or routine data.

This week I'm combining the two: caching data from the database on the device, and using that unless there's updated data online. The main reason is to speed up loading on startup, but it will also save money, since there will be fewer reads from the database, and it sets everything up for the app to work offline again, which it hasn't been able to do since I switched to the database.

Another benefit is that it's giving me the opportunity to refactor a whole lot of code. In some areas, it turns out, I'd been loading related data concurrently on startup, then closing my eyes, crossing my fingers, and hoping it all worked. Now, following my flow chart, I'm making sure things are loaded when they're needed, and in the correct order.
     
    Last edited: Aug 29, 2018
  5. TantzyGames

I'm just uploading a new build that caches data to the device. As I'd hoped, I'm also now in full control of the loading process: what happens and when. If there is data on the device I load it, then check whether there's newer data in the database and either load that instead or merge it with the data on the device, depending on the type of data.
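For anyone interested, this is roughly the pattern (a simplified sketch, not my actual code — RoutineData and the timestamp field are placeholders, and the remote fetch is whatever your backend provides):

Code (CSharp):
using System;
using System.IO;
using UnityEngine;

// Placeholder data shape: a payload plus the time it was last edited
[Serializable]
public class RoutineData
{
    public long updatedAt; // unix timestamp of the last edit
    public string json;    // the routine payload
}

public static class RoutineCache
{
    static string PathFor(string id)
    {
        return Path.Combine(Application.persistentDataPath, id + ".json");
    }

    public static RoutineData LoadLocal(string id)
    {
        string path = PathFor(id);
        if (!File.Exists(path)) return null;
        return JsonUtility.FromJson<RoutineData>(File.ReadAllText(path));
    }

    public static void SaveLocal(string id, RoutineData data)
    {
        File.WriteAllText(PathFor(id), JsonUtility.ToJson(data));
    }

    // Use the cached copy unless the backend has something newer.
    // 'remote' is whatever the database returned (null while offline).
    public static RoutineData Resolve(string id, RoutineData remote)
    {
        RoutineData local = LoadLocal(id);
        if (remote != null && (local == null || remote.updatedAt > local.updatedAt))
        {
            SaveLocal(id, remote); // refresh the cache
            return remote;
        }
        return local; // offline, or the cache is already current
    }
}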

I had some time set aside last week for animation. I started working on a unilateral move before realizing that I hadn't added support for unilateral animations yet. One of the benefits of using playables for my animation system is that I can add new animations without requiring a new build or users to update. So I decided my time would be better spent adding support for unilateral animations in this build, so I can create as many of them as I like afterwards.

The character is a humanoid so that I can take advantage of mirroring animation clips, but I have a custom IK system which makes it trickier. It took a couple of days to add support for mirrored clips for a right-sided set: one day for the data, and a day to detect and play the mirrored clips at the right time. The next step was to mirror my IK targets.

    First I swapped values between the left and right sides, negating the relevant channels. That didn't work, because their starting positions/rotations aren't mirrored.

    So I thought if I record the starting position of each target, then get the vector of change from the starting position to the current position, I could use the mirror of that vector to add to the opposite target. Unfortunately that didn't work either. I still think it should work, but the rotations weren't behaving as expected. The IK targets are a somewhat complex hierarchy (supporting future character customization) which makes it all more difficult.

    I need to spend more time on it. So for now, unilateral moves that don't require IK are working.
     
  6. TantzyGames

    Oh, I nearly forgot. I added a little bounce animation to the popups. It's amazing what a difference it makes. They really grab your attention now.

     
  7. TantzyGames

    WOTT's routine descriptions are a mix of standard rich text tags for styling, TextMesh Pro <link> tags for links, and my own <img> tags for images.

    When I show the description, I parse it, separating the images from the text, and then create blocks of text, images and buttons (for images with links). I also complicate things by adding a <color> tag to links so they stand out.
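The parsing step looks something like this (a rough sketch — the <img="..."> tag format here is made up for illustration):

Code (CSharp):
using System.Collections.Generic;
using System.Text.RegularExpressions;

// One block of a parsed description: either text or an image source
public class Block
{
    public bool isImage;
    public string content;

    public Block(bool isImage, string content)
    {
        this.isImage = isImage;
        this.content = content;
    }
}

public static class DescriptionParser
{
    static readonly Regex ImgTag = new Regex("<img=\"(?<src>[^\"]+)\">");

    // Split the description into alternating text and image blocks
    public static List<Block> Parse(string description)
    {
        var blocks = new List<Block>();
        int last = 0;
        foreach (Match m in ImgTag.Matches(description))
        {
            if (m.Index > last)
                blocks.Add(new Block(false, description.Substring(last, m.Index - last)));
            blocks.Add(new Block(true, m.Groups["src"].Value));
            last = m.Index + m.Length;
        }
        if (last < description.Length)
            blocks.Add(new Block(false, description.Substring(last)));
        return blocks;
    }
}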

The first version of the editor, which I knew was terrible, was just a big input box that opened the full description. It allowed simple descriptions to be written, but relied on the user understanding tags for any styling. Because you can't select text in an input box, the only way I could add styling, links or images was to append them to the end of the string and let the user move them if they wanted.



Of course, since it opened the entire description in the input box over the mobile keyboard, it wasn't long before a description was too large to fit. Then it required scrolling, which was a horrendous experience.

    For something to meet my minimum acceptable level of quality, I need to use it to create content. If I want or need to cheat, it's not good enough. Since I was creating descriptions in a text editor and copying them directly to the database, instead of using the editor, I knew something had to be done.

    My first thought was to mimic native mobile text editing. I looked up what solutions others had found. Advanced Input Field looked really promising, but it doesn't play well with standard and TextMesh Pro input fields. I have lots of input fields and I only need this functionality in this one, so I didn't really want to change them all. And while this would solve the mobile input problem, it wouldn't solve the problem of inflicting tags on my users.

    By this time I had a pretty good idea of what I wanted to do. Since I was separating the description into blocks anyway, why not use those blocks for editing as well? That way larger areas of text can be separated into smaller blocks for easier editing, and images and text can be moved around using the same method as we use when editing routines.

    I already had the basics. Adding buttons to a scrollable list, that can then be renamed, moved, or deleted is a staple feature of routine editing. I just needed to include a text box or image depending on what type it is. It didn't take long to set that up.

    Now users don't have to see image tags, but what about all the other tags?

    I really wanted WYSIWYG editing. A nice feature of rich text is that it shows tags when you edit the text on mobile, which lets my users edit tags and immediately see the results. I mainly needed to figure out how to select text so I could add styles and links to existing text.

    In WOTT, many routine elements can be renamed but because they're in Scroll Rects I can't just use an input field. An input field is activated as soon as it's clicked, so scrolling input fields doesn't work well - they keep opening when you just want to scroll. Instead I use a standard text field, and swap it with an input field when I detect a long press.
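Stripped right down, the swap looks something like this (a sketch, not my exact code — inside a Scroll Rect you may also need to forward drag events to the parent so scrolling still works):

Code (CSharp):
using TMPro;
using UnityEngine;
using UnityEngine.EventSystems;

public class LongPressEdit : MonoBehaviour, IPointerDownHandler, IPointerUpHandler, IDragHandler
{
    public TMP_Text display;      // the scroll-friendly label
    public TMP_InputField editor; // hidden until a long press
    public float holdTime = 0.5f; // seconds before editing starts

    float downTime;
    bool pressed, dragged;

    public void OnPointerDown(PointerEventData e)
    {
        pressed = true;
        dragged = false;
        downTime = Time.unscaledTime;
    }

    public void OnDrag(PointerEventData e) { dragged = true; } // treat drags as scrolling
    public void OnPointerUp(PointerEventData e) { pressed = false; }

    void Update()
    {
        // A press that hasn't moved and has been held long enough starts editing
        if (pressed && !dragged && Time.unscaledTime - downTime >= holdTime)
        {
            pressed = false;
            editor.text = display.text; // hand the displayed text to the input field
            display.gameObject.SetActive(false);
            editor.gameObject.SetActive(true);
            editor.ActivateInputField();
        }
    }
}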

This behavior is a great starting point for the description editor. It lets me scroll text blocks, and separates the text displayed to the user from the text they actually edit. That allows me to quietly add <color> tags to links without the user seeing them. More importantly, it provides the basis for selecting text, since text can't be selected in a mobile input field.

    To start with I needed a way to select text, and a way to get the resulting indexes from the raw text. Luckily @Stephan_B has provided both in TextMesh Pro which gave me a great head start.

    Once I had basic selection working I needed to balance moving the block with modifying a selection. In keeping with mobile convention, a long press selects a word. Then dragging from within a selection modifies the selection, and dragging from outside a selection moves the block (to reposition or delete). The next evolution will include a selection box and widgets to adjust the selection to conform with standard mobile conventions which might require a bit of adjustment here.
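The core of the word selection boils down to something like this (simplified — drawing the highlight is omitted):

Code (CSharp):
using TMPro;
using UnityEngine;

public class WordSelector : MonoBehaviour
{
    public TMP_Text label;

    // Character indices of the current selection in the displayed text
    public int selStart = -1, selEnd = -1;

    // Called on long press with the screen position of the pointer
    public void SelectWordAt(Vector2 screenPos, Camera cam)
    {
        int wordIndex = TMP_TextUtilities.FindIntersectingWord(label, screenPos, cam);
        if (wordIndex == -1) return;

        TMP_WordInfo word = label.textInfo.wordInfo[wordIndex];
        selStart = word.firstCharacterIndex;
        selEnd = word.lastCharacterIndex;
        // A highlight can then be drawn from the characterInfo vertices
    }
}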



    Now I could select text and add tags around the selection, but this was where things started getting complex.

    There are 4 versions of the text for each block:
    1. The rich text displayed to the user which they use to select. This also includes color tags for links,
    2. The raw text of the display text, which also includes color tags,
    3. The text the user edits, which includes all the tags except color tags, and
    4. The resulting text for the description which doesn't include color tags, but has added paragraph tags so I know where to separate text blocks.

    To further complicate things, I accidentally left rich text editing on for the input field, so when I was testing in the editor it wasn't showing any tags, which got me all turned around in my thinking a few times.

    My first attempt was to add tags to the raw text on either side of a selection. This appeared to work, but quickly resulted in a mess of tags, and ended up breaking fairly easily. I tried verifying the tags after each edit, removing strays, and doubles, etc. but that quickly ended up super complex, and still had lots of edge cases.

    Instead, I decided to keep my own copy of the displayed rich text as a char array, with each entry having flags for bold, italic, underline and links. Links are stored as an index to a string array which holds the link url.

    This made things so much easier. When adding styles or links from selected text, I can just modify the flags in the array. Then when I need the text, I save the array out to a string, recreating the tags where needed. This ensures there aren't any stray tags or doubles and keeps everything nice and neat.
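Here's a simplified sketch of the idea (not the exact code I'm using, but the shape is the same — one entry per visible character, and tags are only emitted where the flags change):

Code (CSharp):
using System.Collections.Generic;
using System.Text;

public struct StyledChar
{
    public char c;
    public bool bold, italic, underline;
    public int linkIndex; // index into a url list, -1 for no link
}

public static class StyledText
{
    // Rebuild the tagged string, opening/closing tags only where flags change
    public static string ToRichText(IList<StyledChar> chars, IList<string> urls)
    {
        var sb = new StringBuilder();
        bool b = false, i = false, u = false;
        int link = -1;

        foreach (StyledChar sc in chars)
        {
            if (sc.bold != b) { sb.Append(sc.bold ? "<b>" : "</b>"); b = sc.bold; }
            if (sc.italic != i) { sb.Append(sc.italic ? "<i>" : "</i>"); i = sc.italic; }
            if (sc.underline != u) { sb.Append(sc.underline ? "<u>" : "</u>"); u = sc.underline; }
            if (sc.linkIndex != link)
            {
                if (link != -1) sb.Append("</link>");
                if (sc.linkIndex != -1) sb.Append("<link=\"").Append(urls[sc.linkIndex]).Append("\">");
                link = sc.linkIndex;
            }
            sb.Append(sc.c);
        }

        // Close anything still open, so the output can never have stray tags
        if (link != -1) sb.Append("</link>");
        if (u) sb.Append("</u>");
        if (i) sb.Append("</i>");
        if (b) sb.Append("</b>");
        return sb.ToString();
    }
}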


I think I ended up rewriting it about 3 times as I tried different things. I'm glad I stuck with it, because I'm super happy with the final result. When I get a chance, I'll add some videos of the editor in action. In the meantime, join early access at WOTT.club to check it out for yourself.
     
  8. RobsonCozendey

Joined: Oct 19, 2013
Posts: 69
    Looking great.

    You could really do some on-site advertising if gyms would let you put a banner of your game inside their gyms, in exchange for a banner of their gym inside your game! :)
     
  9. TantzyGames

    Thanks @RobsonCozendey, that's a great idea :)

    Hopefully some gyms start using it and promoting it to their clients. Currently they can create their own routines, and promote themselves through the description. Eventually there will be other customizations that routine creators can include too - such as modifying the in-game gym and adding their logo to the wall.
     
  10. TantzyGames

    A few weeks ago I added support for mirrored animations, but didn't have much luck with getting my IK goals to mirror. I worked on it some more, and have managed to figure it out.

    The character in WOTT is a humanoid, which provides a number of advantages. One is that animations don't have to share the same rig as the final character. That gives me the flexibility to make changes to the rig without having to remake previous animations, and also provides the ability to create animations using a variety of software packages if needed.

    Another big advantage is that humanoid animations can be mirrored in Unity with a single checkbox. This halves the animation workload for unilateral, or Left/Right exercises which is important because there will eventually be thousands of exercise animations in WOTT.

    With so many animations, it's important that the users only download the animations they need for the routines they use - there's no point downloading or storing animations they don't use. To accomplish this I needed a way to create animation graphs on the fly which can't be done with Mecanim. The only way to do this with the necessary flexibility is to use AnimationPlayables which meant I needed to create my own animation system.
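To give an idea of what "on the fly" means here (a sketch with made-up names — loading from an AssetBundle is just one way to get a downloaded clip, and the port must already exist on the mixer):

Code (CSharp):
using System.Collections;
using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

public class ClipLoader : MonoBehaviour
{
    // Load an exercise clip on demand and plug it into a live graph,
    // which a baked Mecanim state machine can't do at run time
    public IEnumerator AddClip(PlayableGraph graph, AnimationMixerPlayable mixer,
                               int port, string bundlePath, string clipName)
    {
        AssetBundleCreateRequest request = AssetBundle.LoadFromFileAsync(bundlePath);
        yield return request;

        AnimationClip clip = request.assetBundle.LoadAsset<AnimationClip>(clipName);
        AnimationClipPlayable playable = AnimationClipPlayable.Create(graph, clip);
        graph.Connect(playable, 0, mixer, port);
        mixer.SetInputWeight(port, 0f); // blended in later by the transition code
    }
}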

    I originally had a month scheduled to create the animation system in WOTT, after which I had a month scheduled to create the database and backend. After a month I didn't get everything done that I had planned, but I was happy enough with where the animation system was.

    The fundamental elements of WOTT's animation system are the ability to create downloadable animations, and then plug them into an animation graph when needed. Completing the animation system was always a long term project, but for now I just needed those fundamentals working so I could create exercise animations while I worked on the backend. Then, in a month, once the backend was done I could continue working on the animation system.

    There were just two problems with that plan. The first was finding the time to create animations while trying to get the backend working as quickly as possible - two competing goals that the backend usually won. The second problem kind of snuck up on me around the time I felt like the backend was getting close to being complete, 5 months after starting work on it.

    The backend took way longer than I'd anticipated, or scheduled, and I couldn't just abandon it the way I'd done with the animation system because it is so important to the core functionality of the app. Then there was public testing and, long story short, nearly a year after I abandoned the animation system I started animating a unilateral move before realizing I hadn't added support for mirrored animations. I'd thought about it a lot, but hadn't actually done anything about it. So instead of creating animations, it was time to add support for mirrored animations.

Thanks to Unity's built-in support for mirroring humanoid animation, it didn't take too long to add that to the animation system. Unfortunately, using Animation Playables instead of Mecanim means I can't just set a clip to be mirrored at run time, so I need to include mirrored clips in the data. Then it's just a matter of deciding which variation to use. I realized I also needed to add another type of animation, to swap from left to right without leaving the exercise (in case there is little or no rest between left/right sets, or for intermittent left/right reps).

    For animation that doesn't need IK, I'm done. This is working great. But lots of animations need IK so the next step was to mirror the animation of the IK targets.

    The IK targets are an odd hierarchy to support future character customization so it wasn't as easy as it would be if all the IK targets had the same parent.

    First I tried swapping values between the left and right sides, reversing the relevant channels. That didn't work because, as I realized, their starting rotations aren't mirrored which means, because it's a hierarchy, their starting positions aren't either.

    Then I thought if I record the starting position of each target, then get the vector of change from the starting position to the current position, I could use the mirror of that vector to add to the opposite target. Unfortunately that didn't work either. It was close, but the rotations still weren't right, and it was getting complex trying to manage the parent/child relationships.

    I realized that a solution that I'd been avoiding, because it seemed to have more elements, would actually be much simpler and completely avoid the issues with hierarchy. Here's what I ended up with:

Code (CSharp):
using UnityEngine;

public class Mirror : MonoBehaviour {

    // The parent of the hierarchy
    public Transform ikBase;

    // The following fields should be arrays if
    // you have more than one object on each side

    // The transforms you want to mirror
    public Transform targetLT, targetRT;

    // These hold the temporary mirrored values before applying
    // them to the transforms. See the MiniTransform class below.
    MiniTransform tempLT = new MiniTransform(), tempRT = new MiniTransform();

    // These hold the difference between a transform's mirrored
    // value and the actual rotation of its opposite partner. If your
    // left and right sides are mirrored to start with, you don't need these.
    Quaternion LTOffset, RTOffset;

    // Use this for initialization
    void Start () {

        // Calculate the offsets.
        // This is the formula for calculating the offset,
        // because with Quaternions the order matters:
        // Left = Right * Offset
        // Offset = Right(inv) * Left

        // Calculate the left side offset
        Quaternion q = targetLT.rotation;

        // To mirror a Quaternion, negate the values corresponding
        // to the vectors of the plane you want to mirror across.
        // In this case we're mirroring along the x axis, across the yz plane
        q.y *= -1;
        q.z *= -1;

        // Apply the formula
        LTOffset = Quaternion.Inverse(q) * targetRT.rotation;

        // Do the same for the right side
        q = targetRT.rotation;
        q.y *= -1;
        q.z *= -1;
        RTOffset = Quaternion.Inverse(q) * targetLT.rotation;
    }

    // LateUpdate occurs after animation, but before IK
    void LateUpdate () {

        // Remove the parent transform values.
        // This gets the transform values as if the parent
        // is at (0,0,0), so it works anywhere in the scene
        tempLT.position = ikBase.InverseTransformPoint(targetLT.position);
        tempRT.position = ikBase.InverseTransformPoint(targetRT.position);
        tempLT.rotation = Quaternion.Inverse(ikBase.rotation) * targetLT.rotation;
        tempRT.rotation = Quaternion.Inverse(ikBase.rotation) * targetRT.rotation;

        // Mirror the values across the x axis
        tempLT.position = new Vector3(-tempLT.position.x, tempLT.position.y, tempLT.position.z);
        tempRT.position = new Vector3(-tempRT.position.x, tempRT.position.y, tempRT.position.z);
        tempLT.rotation.y *= -1;
        tempLT.rotation.z *= -1;
        tempRT.rotation.y *= -1;
        tempRT.rotation.z *= -1;

        // Apply the offsets using our formula:
        // Left = Right * Offset (Offset * Right will give a different result)
        tempLT.rotation = tempLT.rotation * LTOffset;
        tempRT.rotation = tempRT.rotation * RTOffset;

        // Restore the parent transform values.
        // This sets the world position to where it
        // would be if it was a child of ikBase
        tempLT.position = ikBase.TransformPoint(tempLT.position);
        tempRT.position = ikBase.TransformPoint(tempRT.position);
        tempLT.rotation = ikBase.rotation * tempLT.rotation;
        tempRT.rotation = ikBase.rotation * tempRT.rotation;

        // With a hierarchy of transforms, make sure you calculate
        // all the positions (above) in one pass, and then apply those
        // positions to the transforms (below) in a second pass

        // Apply mirrored positions
        targetLT.position = tempRT.position;
        targetRT.position = tempLT.position;

        // Apply mirrored rotations
        targetLT.rotation = tempRT.rotation;
        targetRT.rotation = tempLT.rotation;
    }
}

// This just gives us a single object to store temporary position and rotation values
public class MiniTransform
{
    public Vector3 position;
    public Quaternion rotation;
}

This is just my test code, but extended to arrays of transforms it works to mirror any hierarchy, anywhere in the scene. It does this by moving each transform into the equivalent of local space, mirroring along the x axis, then moving the transform back to its parent space.

    You can test this code by only applying the mirrored position/rotation to the left side. Then you can move the right side to see the left object mirror the motion.

    You could also store the offsets as MiniTransforms which would allow you to offset position and rotation.

    It took me a while to wrap my head around all this. My first pass worked perfectly when the character rotation was (0,0,0), but broke when the character turned. That prompted me to learn what Transform.TransformPoint and Transform.InverseTransformPoint are for.

There was also some trial and error involved in getting the right order for the Quaternion operations. Speaking of which, I should note that there seem to be two ways to mirror quaternions: complex and simple (mathematicians probably call these correct and incorrect). This uses the simple way, because we're mirroring along a world axis. If you need to mirror across an arbitrary plane, you need to use the more complex method, which is discussed in this thread.

    I'm looking forward to updating to Unity 2018 so I can hopefully make all this a whole lot more efficient, but for now I can mirror animations whether they use IK or not, which is awesome.

     