I’ve been working on a workout/fitness app called WOTT that’s almost ready for release after 3 years of development. After release there are still many features to be added, so I thought I’d start a thread here to share what I’m doing, what I’ve done, get some feedback and learn from you guys.
Firebase for the backend - including Facebook/Twitter/Google sign-in.
Lots of things I’ve learned about Unity UI.
Some stuff I’m eager to do, and share as I go includes:
More playables (including animation C# jobs)
More AI, path navigation, etc
Character customization
Some History
Today marks 3 years since I started working on the first prototype for WOTT. At the time I was doing a progression based bodyweight workout program that uses tempo training, called Convict Conditioning. I wanted an app that could keep my tempo, save my progress and show me what progression I was up to.
I created a prototype using Unity 5.1 and Playmaker, because I’d already used Playmaker to make a game and, well… I knew next to nothing about coding. I’d dabbled with learning programming a few times over the years, but hadn’t got very far. That first prototype was really little more than a metronome, but it kept my tempo, and stored my reps for the workout so I could wait until the end to write them down.
Here is the prototype with an early concept pic.
The next step was to get a workout routine into the app. I wanted to know what exercise to do each day, and what the target sets & reps were for each progression. I’d had some experience working with XML files in Playmaker, but what I needed to do this time was significantly more complex. Between this, my struggle with getting the indicator dials working, and having no idea how I was going to accomplish some of the other features I had planned, I realized that I was going to have to learn programming. So I started learning C#.
After a couple of months of learning, I’d figured out enough LINQ to process the XML routines, choose a routine, a program and a progression, and show the target reps.
Another project meant I wasn’t able to get anything done for a few months. I didn’t even have time to continue my coding training. But as soon as I could, I got back into it. I started figuring out Unity UI and working on a menu system to view and edit routines. I designed a database structure for routines, switched from XML to json and got saving and loading to the device working.
After a few more months I was making progress. The routine menus weren’t fully functional, but the basic ideas were coming together using accordion style menus (although using a different method than I’m using now).
But I was still manually editing json files and using the original method of choosing a workout, so the next major milestone was to get the routine menu functional enough for me to be able to create and load routines. Also the main workout timer was still using Playmaker, so almost a year after starting I finally replaced it with code and was able to remove Playmaker from the project.
During the refactor of the workout screen I finally figured out what to use the 3rd dial for. Initially it was going to show what progression you were up to, and it changed a few times along the way until it ended up as a tabata style visual representation of the total time for the workout.
The rest of 2016 was just getting it all working together, a lot of refactoring, adding inheritance for routine options, and making sure they all worked. I redesigned the accordions, which initially used scaling to open and close, to just moving up and down (which solved a few problems but created a few more).
By the end of 2016 it was working pretty well. I could create, save and load routines, user data and user progress, but it was all on the device. There were 2 big things to do: add the character, and get online. I started to work on the character, but then another project ended up taking a huge chunk out of 2017 (plus my wife broke her ankle so I was running after her and 3 kids for a few months) so I didn’t get much done until getting back onto it in September.
I then created a 3 month plan to be testing by Christmas 2017.
(Narrator: Testing didn’t start until July 2018)
Thanks so much for reading this far, I’ll continue the story real soon. In the meantime I’d love to know what you think and please ask me anything.
A push-up consists of an up pose, a down pose, and the transitions between the poses. I’ve created my own transitions, which allow me to adjust the time a transition takes based on the tempo set by the user.
A half push-up is just a full push-up going half way down. I figured I can reduce the animation workload by re-using poses, and adjusting the transition to go part way to a pose instead of all the way.
I implemented it last year, but hadn’t had a chance to test it. The first test didn’t go so well:
Turns out while I was only going part way to the down pose, the up clip was still going to 0. The up pose would need to be at 1 - distance:
So that didn’t work out so well either. I’d set all the other clips in the mixer to 1 - distance, not just the up clip.
To enable transitioning to a clip from any other pose or combination of poses, I’m taking a snapshot at the beginning of each transition. I realized I can use this to determine the correct mixer input to set to 1 - distance, and keep all the others at 0.
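In rough terms the weighting step looks like the sketch below. It’s not the actual WOTT code; the mixer wiring, the input indices and the distance parameter are placeholders, but it shows the idea: the target input is driven toward distance while the snapshotted source input holds the remaining weight, and every other input stays at 0.

using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// Hypothetical sketch of a partial transition on an AnimationMixerPlayable.
// sourceInput is the input that was active when the snapshot was taken,
// targetInput is the pose we're transitioning toward.
public class PartialTransition
{
    readonly AnimationMixerPlayable mixer;
    readonly int sourceInput;
    readonly int targetInput;

    public PartialTransition(AnimationMixerPlayable mixer, int sourceInput, int targetInput)
    {
        this.mixer = mixer;
        this.sourceInput = sourceInput;
        this.targetInput = targetInput;
    }

    // t is the normalized transition time (0..1); distance is how far toward the
    // target pose to go (e.g. 0.5 for a half push-up).
    public void Evaluate(float t, float distance)
    {
        float weight = Mathf.Lerp(0f, distance, t);

        // Zero everything, then weight only the source and target inputs.
        for (int i = 0; i < mixer.GetInputCount(); i++)
            mixer.SetInputWeight(i, 0f);

        mixer.SetInputWeight(targetInput, weight);
        mixer.SetInputWeight(sourceInput, 1f - weight);
    }
}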
With this in place partial transitions are up and running:
I’m excited to be working on something I’ve been looking forward to doing for a while.
For a long time I had everything stored on the device. When I started using the database I stopped saving to the device to make sure the database was all working properly. I’ve been caching audio and move animations, but not any of the user or routine data.
This week I’m combining the two, caching data from the database on the device, and using that unless there’s updated data online. The main reason for this is to speed up loading on startup, but it will also save money, as there will be fewer reads from the database, and it sets everything up for the app to work offline again, which it hasn’t been able to do since switching to the database.
Another benefit is that it’s giving me the opportunity to refactor a whole lot of code. In some areas apparently I’ve been loading related data concurrently on startup, then closing my eyes and crossing my fingers and hoping it all works. Now, following my flow chart, I am making sure things are loaded when they’re needed, and in the correct order.
I’m just uploading a new build that has data caching to the device. As I’d hoped, I’m also now in full control of the loading process: what happens and when. If there is data on the device I load it, then check whether there’s newer data in the database; if there is, I either load that instead or add it to the data on the device, depending on the type of data.
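Boiled right down, the cache-or-fetch step works something like the sketch below. This isn’t the real code: the file naming, the timestamp comparison and the fetchRemote callback are stand-ins for whatever the backend (Firebase in WOTT’s case) actually provides.

using System;
using System.IO;
using UnityEngine;

// A minimal cache-or-fetch sketch. The key/path scheme, timestamp check and
// fetchRemote callback are assumptions for illustration, not the WOTT backend API.
public static class CachedLoader
{
    public static string LoadJson(string key, DateTime remoteTimestampUtc, Func<string> fetchRemote)
    {
        string path = Path.Combine(Application.persistentDataPath, key + ".json");

        // Use the device copy if it exists and is at least as new as the server's data.
        if (File.Exists(path) && File.GetLastWriteTimeUtc(path) >= remoteTimestampUtc)
            return File.ReadAllText(path);

        // Otherwise pull from the backend and refresh the cache on the device.
        string json = fetchRemote();
        File.WriteAllText(path, json);
        return json;
    }
}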
I had some time set aside last week for animation. I started working on a unilateral move, before realizing that I hadn’t added support for unilateral animations, yet. One of the benefits to using playables for my animation system is that I can add new animations without requiring a new build, or the users to update. So I decided my time would be better spent adding support for unilateral animations for this build, so I can create many supported animations afterwards.
The character is a humanoid so that I can take advantage of mirroring animation clips, but I have a custom IK system which makes it trickier. It took a couple of days to add support for mirrored clips for a right sided set, one day for the data and a day to detect and play the mirrored clips at the right time. The next step was to mirror my IK targets.
First I swapped values between the left and right sides, negating the relevant channels. That didn’t work, because their starting positions/rotations aren’t mirrored.
So I thought if I record the starting position of each target, then get the vector of change from the starting position to the current position, I could use the mirror of that vector to add to the opposite target. Unfortunately that didn’t work either. I still think it should work, but the rotations weren’t behaving as expected. The IK targets are a somewhat complex hierarchy (supporting future character customization) which makes it all more difficult.
I need to spend more time on it. So for now, unilateral moves that don’t require IK are working.
WOTT’s routine descriptions are a mix of standard rich text tags for styling, TextMesh Pro tags for links, and my own tags for images.
When I show the description, I parse it, separating the images from the text, and then create blocks of text, images and buttons (for images with links). I also complicate things by adding a tag to links so they stand out.
The first version of the editor, which I knew was terrible, was just a big input box, which opened the full description. It allowed simple descriptions to be written, but relied on the user understanding tags for any styling. Because you can’t select in an input box, the only way I could add styling, links or images was to just add them to the end of the string, and let the user move them if they wanted.
Of course, since it opened the entire description in the input box on the mobile keyboard, it wasn’t long before a description would be too large to fit. Then it required scrolling, which was a horrendous experience.
For something to meet my minimum acceptable level of quality, I need to use it to create content. If I want or need to cheat, it’s not good enough. Since I was creating descriptions in a text editor and copying them directly to the database, instead of using the editor, I knew something had to be done.
My first thought was to mimic native mobile text editing. I looked up what solutions others had found. Advanced Input Field looked really promising, but it doesn’t play well with standard and TextMesh Pro input fields. I have lots of input fields and I only need this functionality in this one, so I didn’t really want to change them all. And while this would solve the mobile input problem, it wouldn’t solve the problem of inflicting tags on my users.
By this time I had a pretty good idea of what I wanted to do. Since I was separating the description into blocks anyway, why not use those blocks for editing as well? That way larger areas of text can be separated into smaller blocks for easier editing, and images and text can be moved around using the same method as we use when editing routines.
I already had the basics. Adding buttons to a scrollable list, that can then be renamed, moved, or deleted is a staple feature of routine editing. I just needed to include a text box or image depending on what type it is. It didn’t take long to set that up.
Now users don’t have to see image tags, but what about all the other tags?
I really wanted WYSIWYG editing. A nice feature of rich text is that it shows tags when you edit the text on mobile, which lets my users edit tags and immediately see the results. I mainly needed to figure out how to select text so I could add styles and links to existing text.
In WOTT, many routine elements can be renamed but because they’re in Scroll Rects I can’t just use an input field. An input field is activated as soon as it’s clicked, so scrolling input fields doesn’t work well - they keep opening when you just want to scroll. Instead I use a standard text field, and swap it with an input field when I detect a long press.
This behavior is a great starting point for the description editor. It lets me scroll text blocks, and separates the text displayed to the user from the text they actually edit. This allows me to hijack the results to add tags to links without the user seeing them. More importantly, it provides the basis for selecting text, since text can’t be selected in a mobile input field.
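The swap itself is straightforward. A minimal version, assuming legacy uGUI Text and InputField components and a made-up hold threshold, looks roughly like this (the real version also has to cooperate with dragging to scroll):

using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

// Minimal long-press-to-edit sketch. Component types and the hold threshold
// are assumptions; this is not the WOTT implementation.
public class LongPressToEdit : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public Text displayText;      // the static text shown while scrolling
    public InputField editField;  // only activated after a long press
    public float holdTime = 0.5f; // assumed long-press threshold in seconds

    float pressedAt = -1f;

    public void OnPointerDown(PointerEventData eventData) { pressedAt = Time.unscaledTime; }
    public void OnPointerUp(PointerEventData eventData) { pressedAt = -1f; }

    void Update()
    {
        if (pressedAt < 0f || Time.unscaledTime - pressedAt < holdTime) return;
        pressedAt = -1f;

        // Swap the static text for an editable field and hand it the current text.
        displayText.gameObject.SetActive(false);
        editField.gameObject.SetActive(true);
        editField.text = displayText.text;
        editField.ActivateInputField();
    }
}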
To start with I needed a way to select text, and a way to get the resulting indexes from the raw text. Luckily @Stephan_B has provided both in TextMesh Pro which gave me a great head start.
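For example, mapping a tap position back to an index in the raw string (tags included) can be done with the TextMesh Pro utilities, along these lines:

using TMPro;
using UnityEngine;

// Sketch: translate a screen position into an index in the underlying string.
// The TMP method and field names are real; everything else is illustrative.
public static class SelectionHelper
{
    public static int RawIndexAt(TMP_Text text, Vector3 screenPosition, Camera camera)
    {
        int charIndex = TMP_TextUtilities.FindIntersectingCharacter(text, screenPosition, camera, true);
        if (charIndex == -1) return -1;

        // characterInfo[...].index points back into text.text, with tags counted.
        return text.textInfo.characterInfo[charIndex].index;
    }
}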
Once I had basic selection working I needed to balance moving the block with modifying a selection. In keeping with mobile convention, a long press selects a word. Then dragging from within a selection modifies the selection, and dragging from outside a selection moves the block (to reposition or delete). The next evolution will include a selection box and widgets to adjust the selection to conform with standard mobile conventions which might require a bit of adjustment here.
Now I could select text and add tags around the selection, but this was where things started getting complex.
There are 4 versions of the text for each block:
The rich text displayed to the user which they use to select. This also includes color tags for links,
The raw text of the display text, which also includes color tags,
The text the user edits, which includes all the tags except color tags, and
The resulting text for the description which doesn’t include color tags, but has added paragraph tags so I know where to separate text blocks.
To further complicate things, I accidentally left rich text editing on for the input field, so when I was testing in the editor it wasn’t showing any tags, which got me all turned around in my thinking a few times.
My first attempt was to add tags to the raw text on either side of a selection. This appeared to work, but quickly resulted in a mess of tags, and ended up breaking fairly easily. I tried verifying the tags after each edit, removing strays, and doubles, etc. but that quickly ended up super complex, and still had lots of edge cases.
Instead, I decided to keep my own copy of the displayed rich text as a char array, with each entry having flags for bold, italic, underline and links. Links are stored as an index to a string array which holds the link url.
This made things so much easier. When adding styles or links from selected text, I can just modify the flags in the array. Then when I need the text, I save the array out to a string, recreating the tags where needed. This ensures there aren’t any stray tags or doubles and keeps everything nice and neat.
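As a rough illustration (the type and method names are mine, not WOTT’s, and it’s simplified to bold and italic only), the per-character flags and the rebuild step look something like this:

using System.Collections.Generic;
using System.Text;

// One entry per displayed character, with style flags and an optional link.
public struct StyledChar
{
    public char Character;
    public bool Bold, Italic, Underline;
    public int LinkIndex; // index into a separate url list, -1 for no link
}

public static class StyledText
{
    // Rebuild a rich text string, inserting tags only where the flags change,
    // so there are never stray or doubled tags.
    public static string ToRichText(IList<StyledChar> chars)
    {
        var sb = new StringBuilder();
        bool bold = false, italic = false;

        foreach (var c in chars)
        {
            if (c.Bold != bold || c.Italic != italic)
            {
                // Close open tags in reverse order, then reopen for the new
                // state, so the output is always properly nested.
                if (italic) sb.Append("</i>");
                if (bold) sb.Append("</b>");
                bold = c.Bold;
                italic = c.Italic;
                if (bold) sb.Append("<b>");
                if (italic) sb.Append("<i>");
            }
            sb.Append(c.Character);
        }

        if (italic) sb.Append("</i>");
        if (bold) sb.Append("</b>");
        return sb.ToString();
    }
}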
I think I ended up rewriting it about 3 times as I tried different things. I’m glad I stuck with it, because I’m super happy with the final result. When I get a chance, I’ll add some videos of the editor in action. In the meantime, join early access at WOTT.club to check it out for yourself.
You could really do some on-site advertising if gyms would let you put a banner of your game inside their gyms, in exchange for a banner of their gym inside your game!
Hopefully some gyms start using it and promoting it to their clients. Currently they can create their own routines, and promote themselves through the description. Eventually there will be other customizations that routine creators can include too - such as modifying the in-game gym and adding their logo to the wall.
A few weeks ago I added support for mirrored animations, but didn’t have much luck with getting my IK goals to mirror. I worked on it some more, and have managed to figure it out.
The character in WOTT is a humanoid, which provides a number of advantages. One is that animations don’t have to share the same rig as the final character. That gives me the flexibility to make changes to the rig without having to remake previous animations, and also provides the ability to create animations using a variety of software packages if needed.
Another big advantage is that humanoid animations can be mirrored in Unity with a single checkbox. This halves the animation workload for unilateral, or Left/Right exercises which is important because there will eventually be thousands of exercise animations in WOTT.
With so many animations, it’s important that the users only download the animations they need for the routines they use - there’s no point downloading or storing animations they don’t use. To accomplish this I needed a way to create animation graphs on the fly which can’t be done with Mecanim. The only way to do this with the necessary flexibility is to use AnimationPlayables which meant I needed to create my own animation system.
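To give a flavour of what building a graph on the fly involves, the bare Playables boilerplate looks something like the sketch below. It’s not the WOTT animation system, just the layer it sits on top of, and the clip array is assumed to have already been downloaded and loaded.

using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

// Minimal sketch: wire a set of clips into a mixer and play it on an Animator.
public class RuntimeGraph : MonoBehaviour
{
    PlayableGraph graph;

    public void Play(AnimationClip[] clips, Animator animator)
    {
        graph = PlayableGraph.Create("ExerciseGraph");

        var mixer = AnimationMixerPlayable.Create(graph, clips.Length);
        for (int i = 0; i < clips.Length; i++)
        {
            var clipPlayable = AnimationClipPlayable.Create(graph, clips[i]);
            graph.Connect(clipPlayable, 0, mixer, i);
        }
        mixer.SetInputWeight(0, 1f); // start on the first pose

        var output = AnimationPlayableOutput.Create(graph, "Anim", animator);
        output.SetSourcePlayable(mixer);
        graph.Play();
    }

    void OnDestroy()
    {
        if (graph.IsValid()) graph.Destroy();
    }
}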
I originally had a month scheduled to create the animation system in WOTT, after which I had a month scheduled to create the database and backend. After a month I didn’t get everything done that I had planned, but I was happy enough with where the animation system was.
The fundamental elements of WOTT’s animation system are the ability to create downloadable animations, and then plug them into an animation graph when needed. Completing the animation system was always a long term project, but for now I just needed those fundamentals working so I could create exercise animations while I worked on the backend. Then, in a month, once the backend was done I could continue working on the animation system.
There were just two problems with that plan. The first was finding the time to create animations while trying to get the backend working as quickly as possible - two competing goals that the backend usually won. The second problem kind of snuck up on me around the time I felt like the backend was getting close to being complete, 5 months after starting work on it.
The backend took way longer than I’d anticipated, or scheduled, and I couldn’t just abandon it the way I’d done with the animation system because it is so important to the core functionality of the app. Then there was public testing and, long story short, nearly a year after I abandoned the animation system I started animating a unilateral move before realizing I hadn’t added support for mirrored animations. I’d thought about it a lot, but hadn’t actually done anything about it. So instead of creating animations, it was time to add support for mirrored animations.
Thanks to Unity’s built-in support for mirroring humanoid animation, it didn’t take too long to add that to the animation system. Unfortunately, using Animation Playables instead of Mecanim means that I can’t just set a clip to be mirrored at run-time, so I need to include mirrored clips in the data. Then it’s just a matter of deciding which variation to use. I realized I also needed to add another type of animation, to swap from left to right without leaving the exercise (in case there is little or no rest between left/right sets, or for intermittent left/right reps).
For animation that doesn’t need IK, I’m done. This is working great. But lots of animations need IK so the next step was to mirror the animation of the IK targets.
The IK targets are an odd hierarchy to support future character customization so it wasn’t as easy as it would be if all the IK targets had the same parent.
First I tried swapping values between the left and right sides, reversing the relevant channels. That didn’t work because, as I realized, their starting rotations aren’t mirrored which means, because it’s a hierarchy, their starting positions aren’t either.
Then I thought if I record the starting position of each target, then get the vector of change from the starting position to the current position, I could use the mirror of that vector to add to the opposite target. Unfortunately that didn’t work either. It was close, but the rotations still weren’t right, and it was getting complex trying to manage the parent/child relationships.
I realized that a solution that I’d been avoiding, because it seemed to have more elements, would actually be much simpler and completely avoid the issues with hierarchy. Here’s what I ended up with:
using UnityEngine;

public class Mirror : MonoBehaviour {

    // The parent of the hierarchy
    public Transform ikBase;

    // The following fields should be arrays if
    // you have more than one object on each side

    // The transforms you want to mirror
    public Transform targetLT, targetRT;

    // These hold the temporary mirrored values before applying
    // them to the transforms. See the MiniTransform class below.
    MiniTransform tempLT, tempRT;

    // These hold the difference between a transform's mirrored
    // value and the actual rotation of its opposite partner. If your
    // left and right sides are mirrored to start with, you don't need these.
    Quaternion LTOffset, RTOffset;

    // Use this for initialization
    void Start () {
        tempLT = new MiniTransform();
        tempRT = new MiniTransform();

        // Calculate the offsets
        // This is the formula for calculating the offset,
        // because with Quaternions, the order matters.
        // Left = Right * Offset
        // Offset = Right(inv) * Left

        // Calculate Left side offset
        Quaternion q = targetLT.rotation;
        // To mirror a Quaternion, reverse the values corresponding
        // to the vectors of the plane you want to mirror across.
        // In this case we're mirroring along the x axis, across the yz plane
        q.y *= -1;
        q.z *= -1;
        // Apply the formula
        LTOffset = Quaternion.Inverse(q) * targetRT.rotation;

        // Do the same for the right side
        q = targetRT.rotation;
        q.y *= -1;
        q.z *= -1;
        RTOffset = Quaternion.Inverse(q) * targetLT.rotation;
    }

    // LateUpdate occurs after animation, but before IK
    void LateUpdate () {
        // Remove the parent transform values
        // This gets the transform values as if the parent
        // is at (0,0,0) so it works anywhere in the scene
        tempLT.position = ikBase.InverseTransformPoint(targetLT.position);
        tempRT.position = ikBase.InverseTransformPoint(targetRT.position);
        tempLT.rotation = Quaternion.Inverse(ikBase.rotation) * targetLT.rotation;
        tempRT.rotation = Quaternion.Inverse(ikBase.rotation) * targetRT.rotation;

        // Mirror the values across the x axis
        tempLT.position = new Vector3(-tempLT.position.x, tempLT.position.y, tempLT.position.z);
        tempRT.position = new Vector3(-tempRT.position.x, tempRT.position.y, tempRT.position.z);
        tempLT.rotation.y *= -1;
        tempLT.rotation.z *= -1;
        tempRT.rotation.y *= -1;
        tempRT.rotation.z *= -1;

        // Apply the offsets using our formula:
        // Left = Right * Offset (Offset * Right will give a different result)
        tempLT.rotation = tempLT.rotation * LTOffset;
        tempRT.rotation = tempRT.rotation * RTOffset;

        // Restore the parent transform values
        // This sets the world position to where it
        // would be if it was a child of ikBase
        tempLT.position = ikBase.TransformPoint(tempLT.position);
        tempRT.position = ikBase.TransformPoint(tempRT.position);
        tempLT.rotation = ikBase.rotation * tempLT.rotation;
        tempRT.rotation = ikBase.rotation * tempRT.rotation;

        // With a hierarchy of transforms, make sure you calculate
        // all the positions (above) in one pass, and then apply those
        // positions to the transforms (below) in a second pass

        // Apply mirrored positions
        targetLT.position = tempRT.position;
        targetRT.position = tempLT.position;

        // Apply mirrored rotations
        targetLT.rotation = tempRT.rotation;
        targetRT.rotation = tempLT.rotation;
    }
}

// This just gives us a single object to store temporary position and rotation values
public class MiniTransform
{
    public Vector3 position;
    public Quaternion rotation;
}
This is just my test code, but extended to arrays of transforms it works to mirror any hierarchy, anywhere in the scene. It does this by moving each transform into the equivalent of local space, mirroring along the x axis, then moving the transform back to its parent space.
You can test this code by only applying the mirrored position/rotation to the left side. Then you can move the right side to see the left object mirror the motion.
You could also store the offsets as MiniTransforms which would allow you to offset position and rotation.
It took me a while to wrap my head around all this. My first pass worked perfectly when the character rotation was (0,0,0), but broke when the character turned. That prompted me to learn what Transform.TransformPoint and Transform.InverseTransformPoint are for.
There was also some trial and error involved in getting the right order for the Quaternion operations. Speaking of which, I should also note that there seem to be two ways to mirror quaternions: complex and simple (mathematicians probably call these correct and incorrect). This uses the simple way, because we’re mirroring along a world axis. If you need to mirror across an arbitrary plane, you need to use the more complex method, which is discussed in this thread.
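For reference, one common way to handle the general case (I’m not claiming it’s exactly the method from that thread) is to reflect the rotation’s forward and up vectors across the plane and rebuild the rotation from them:

using UnityEngine;

public static class QuaternionMirror
{
    // Mirror a rotation across an arbitrary plane (through the origin) given the
    // plane's normal: reflect the frame's forward and up vectors, then rebuild
    // a proper rotation from the reflected pair.
    public static Quaternion MirrorAcrossPlane(Quaternion rotation, Vector3 planeNormal)
    {
        Vector3 forward = Vector3.Reflect(rotation * Vector3.forward, planeNormal);
        Vector3 up = Vector3.Reflect(rotation * Vector3.up, planeNormal);
        return Quaternion.LookRotation(forward, up);
    }
}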
I’m looking forward to updating to Unity 2018 so I can hopefully make all this a whole lot more efficient, but for now I can mirror animations whether they use IK or not, which is awesome.