I came across an article about how Zombieville was made. I was surprised that the devs said the graphics were done in a 2D package (body parts, weapons, etc…) but the animation was done in a 3D package. Has anybody done this too? I’ve been looking for insight on Google without luck. My main concern is whether the animation will look too “robotized”.
Interesting, I have been wondering how they did it because it looks somewhat “special”.
(Or you can say robotized )
If you want smooth 2D animations, I would prefer to pre-render them to PNGs and use SpriteManager 2 to play the animations. Using SpriteManager 2 is very similar to keyframe animation in Adobe Flash - except you get good performance.
It really just depends on the style you’re going for, and what limitations you want to sign up for. Sprite sheets are simple and direct, and the only limitation on how smooth your animations are is (a) how many frames you can make that will fit into memory and (b) your own skill as an artist. That said, making dozens or hundreds of frames of animation can be very time consuming, and prohibitively memory intensive if you have a lot of variety on screen or are developing for retina display.
Personally I prefer our cutout style of animation, where we construct a 2D-looking mesh in a 3D package and animate its body parts via translation, rotation, scale, and the occasional texture swap. The reason you won’t find much help on Google is because, well… nobody else is really doing it. The benefits are pretty substantial: you can produce a large suite of animations from a single piece of art by creatively manipulating its various parts through translation, scale and rotation alone, saving enormous amounts of time and memory. You end up using far less artwork, and your animations can run at any framerate by virtue of being curve based, allowing you to procedurally change their speed at runtime. Further, you can leverage Unity’s animation system, allowing for animation layers, crossfading, and additive animation for things like hit reactions and gun recoils that simply layer on top of whatever else is currently playing (such as a run cycle).
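For anyone wondering how that layering maps to Unity script, here’s a minimal sketch using the legacy Animation component. The clip names “run” and “hit” and the OnHit hook are hypothetical, not MikaMobile’s actual setup:

```csharp
using UnityEngine;

// Sketch: an additive hit reaction layered on top of a base run cycle.
public class HitReaction : MonoBehaviour
{
    void Start()
    {
        // Put the hit clip on a higher layer and mark it additive,
        // so it stacks on top of whatever base cycle is playing.
        animation["hit"].layer = 1;
        animation["hit"].blendMode = AnimationBlendMode.Additive;
        animation.CrossFade("run"); // base locomotion on layer 0
    }

    void OnHit()
    {
        // Plays over the run cycle without interrupting it.
        animation.CrossFade("hit");
    }
}
```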
There are downsides though - it takes a lot of planning to make your base artwork usable for as many animations as you can squeeze out of it. Making a cutout model that can look like it’s attacking, running, swimming, jumping, etc… is sometimes very difficult or just plain impossible to pull off without completely swapping the texture. This complicates the content creation process a bit. In OMG Pirates!, we ended up using about a dozen different base “poses” for the ninja. Still, a dozen different sprites isn’t bad when you consider he has several hundred frames of animation. Creating him with a sprite sheet would have been impossible without making his animation much simpler.
If you’re worried about it looking “robotized” because the run cycle in Zombieville is a little… lame, then I don’t blame you. I think I cranked that out in like 5 minutes. A better representation of what the style is capable of would be our more recent projects, like OMG Pirates! or our soon-to-be-released RPG, Battleheart. There’s video of both on our website, www.mikamobile.com.
MikaMobile, I have been reading some posts about how you assembled models in Maya. I read that you were using bones in order to get the model rendered in 1 draw call. Would you continue using bones today, now that we have a batching system? Or would you just use a hierarchy of quads?
Another thing: what do you mean by cutout-style? I guess you have your characters split into several parts (arms, head, etc…) and then you assemble the parts into the character in Maya?
I still use bones, primarily due to the shader animation I’ve been employing in Battleheart. Automatic batching is not “free” cpu-wise, nor is skinning, so there’s overhead either way you slice it.
As far as what I mean by “cutout”, you basically just described it as I would have.
What do you mean by shader animation? :S Are you using vertex shaders on iPhone?
And one last question (I swear!!!). I’m just a programmer, but I want to test this method a little to see if I can include it in my workflow (I don’t have a modeler right now, but I can do things in Max). So, are you using skeleton definitions like the ones for 3D? I mean, do you have bones connected in a hierarchy, or just a bone assigned to every quad that you move freely? I have tried setting up a hierarchy and weighting the bones to affect only the quad vertices they are attached to (all set to 1), but I haven’t got it right yet :).
Regarding shader animation: I animate the color values of my characters and effects at runtime, for things like smoke clouds or magical sparklies that have their alpha fade out, or for tinting characters such as flashing red when taking damage, flashing green when they’re healed, pulsing blue when they have a shield, etc. You can see it in the Battleheart trailer. If they weren’t made from a single mesh, I’d have to edit their shared material, but then that would affect all instances of that enemy unless I made them create their own version of the shader every time they spawn and assign it to each batched piece, blah blah blah… it’s just easier to use skinned meshes for what I’m doing.
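As a rough illustration of that per-instance tinting, here is a sketch (the DamageFlash class and OnDamaged hook are invented names, not MikaMobile’s actual code). Because the character is a single skinned mesh, only one renderer needs touching, and accessing `renderer.material` automatically gives that object its own material copy:

```csharp
using UnityEngine;

// Sketch: flash a character red when damaged by animating the
// material's main color at runtime.
public class DamageFlash : MonoBehaviour
{
    public Color flashColor = Color.red;
    public float flashDuration = 0.3f;
    float flashTimer;

    void OnDamaged() { flashTimer = flashDuration; }

    void LateUpdate()
    {
        flashTimer = Mathf.Max(0f, flashTimer - Time.deltaTime);
        float t = flashTimer / flashDuration; // 1 = just hit, 0 = normal
        // renderer.material is a per-instance material, so other
        // enemies sharing the same base material are unaffected.
        renderer.material.color = Color.Lerp(Color.white, flashColor, t);
    }
}
```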
Whether your skeleton is in a hierarchy or just free-floating in your 3D app really doesn’t matter as long as you skin it properly. I’m not sure how it’s done in Max, but in Maya I just set bone weights manually on a per-vertex basis.
Yes, this evening I have been playing more deeply with Max and I was able to do something acceptable. Thanks a lot for the advice, you have been really kind - not everybody shares knowledge in such an open way.
On the other hand, something came to mind while animating my character: how do you handle things that can’t be animated through position, rotation and scale? For example, an enemy that is just a ball and, when it detects you, plays a texture animation (by texture animation I mean a sequence of frames - you know, good old sprite animation).
Last question (I know I said that before, but… the more I read, the more interested I am). I read in another post that you are using a bone for texture information. When you say you swap the texture, do you mean swapping to a different texture, or just showing another frame of the one you are currently using?
My input to this thread, is just to say thanks to MikaMobile, for openly sharing so much of your development process.
After playing Zombieville/OMG, I was inspired to give it a go myself (with zero 3D/animation experience). Mostly thanks to the wealth of information in their posts, I’ve made huge strides, and actually have my first game in early development now.
Resolving the “robotized” issue seems to me a matter of mastering this technique, as well as swapping the textures as needed for different poses. Things actually end up looking smoother than almost any sprite-based game, in my opinion.
Yea, I use empty nodes in my scene (such as a single bone with nothing attached to it) to store meta-data sometimes, since we don’t really have any other means of doing so with the FBX format. In OMG Pirates!, we had a dozen different textures for the ninja for different poses. In Maya, we could have made a custom attribute for swapping the texture, but alas that would not be usable by Unity at all, much the same way that constraints and IK have to be baked down to the bones because they can’t be interpreted in Unity. So, knowing that all we could rely on was raw transform information, we set up a relationship where some extra bone floating in the scene would change the texture based on its X-scale. So X-scale of 1 = texture #1, X-scale of 2 = texture #2, and so on. Then in Unity, we had a script that essentially did the same thing - it checked the transform.localScale.x value of the same bone and changed the renderer.material.mainTexture of the character’s mesh in a LateUpdate loop.
This wasn’t a perfect solution though, since if you’re crossfading or interpolating from texture 8 to texture 2 for example - it was possible for the textures in between to flicker briefly if you caught a glimpse of a frame in between. We had to go to some extra lengths to ensure that we never interpolated across some garbage frames. If we end up doing a lot of texture swapping in our next project I think I’ll figure out a more graceful solution, but it got the job done for OMG.
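For reference, the bone-driven texture swap described above might look roughly like this (the class and field names are hypothetical, not the actual OMG Pirates! script). Rounding keeps the swap discrete, but as noted you’d still need to avoid crossfades that sweep the scale value across intermediate indices:

```csharp
using UnityEngine;

// Sketch: a spare "meta-data" bone's X-scale encodes the pose index
// (X-scale of 1 = texture #1, 2 = texture #2, and so on).
public class BoneTextureSwap : MonoBehaviour
{
    public Transform textureBone;    // the extra bone floating in the scene
    public Texture2D[] poseTextures; // poseTextures[0] = texture #1, etc.
    int currentIndex = -1;

    void LateUpdate()
    {
        // Snap the animated scale value to the nearest valid pose index.
        int index = Mathf.Clamp(
            Mathf.RoundToInt(textureBone.localScale.x) - 1,
            0, poseTextures.Length - 1);

        if (index != currentIndex)
        {
            currentIndex = index;
            renderer.material.mainTexture = poseTextures[index];
        }
    }
}
```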
More recently I’ve been using the same technique to drive alpha values, such as for fading a visual effect over time.
First off, I want to say congratulations for producing some quality games! I’ve got all three of them and they are a treat! Loved the “Powered by Rum” technology in OMG Pirates! XD
I’m still a bit new to Unity and I’m more of a programmer (my brother is the artist, I’ll leave most of that to him). I hate prying but as you said, “nobody else is really doing it”, I’m still having trouble understanding how you are assembling your mesh in Unity.
Does the model make it into Unity as a single mesh or do you “assemble” your cutout mesh together in Unity?
Someone on UnityAnswers described the method as “animated planes”, I don’t think that is accurate, or is it?
Would you consider doing a basic tutorial for the community?
I understand you may be busy, and time is money in the indie dev world. I just think the positives you’ve outlined for this method would help a lot of Unity developers who aspire to create quality 2D games more efficiently than with complicated sprite sheets, which severely limit animation complexity and the available resources on mobile devices. I would honestly be willing to put forward money for the opportunity to see such a method made available for the Unity community to digest. Please PM me if you are interested in the monetary offer. Thanks again for the great support and love you have already shown the Unity community, and I wish you continued success!
The model is a single mesh before it comes into Unity - merged in Maya or equivalent software, and skinned to a hierarchy of bones so that the different parts can still be moved independently of each other.
I’ve been thinking about putting together a demo scene of our “cutout” technique and making it available on the Unity Asset Store, since it seems like there’s interest.
As I stated earlier in this thread, I had no experience in 3D and very little in Maya. Yet through reading all of the interviews and posts on this technique, I have been able to figure it out, and create my own models. It took a couple of months of reading and trying stuff, but it’s not too difficult to get it down. This coming from someone who didn’t know what a mesh, bone, or plane was when he started.
While a demo might be nice (I’m sure I’d learn from it as well), don’t just wait around for a step by step on how to do it. MikaMobile has posted pretty extensively on the topic, and there is a thread where someone created an example with a billiard table using this technique. I’ll find the thread and add it to this post.
Hmm, I thought the animations in Zombieville were really charming and fun, as opposed to overly produced like some AAA house would do. I wouldn’t have changed them at all. As a matter of fact, if you look at the 35-game pack, there’s a substandard knock-off with a guy shooting dinosaurs. They even tried to mimic the animation style.
I am going to have to try messing about more with trying to create 2D animations in 3D. Personally I’d love to see a tutorial of some sort.
BTW my grandson loves to sit with me and play Battleheart, his dad (my son) had to pick it up as well. Oh and he’s two. Keep up the great work.
I just wanted to say that I really would like to get my hands on that MikaMobile example tutorial (even willing to pay for it!). I’m just really impressed with the graphics that you are able to get out of the device. While browsing the forums I can get a general idea of the technique that you are using, I’d really love to see a working example of it!
First, thanks for your open attitude and the tons of help you’re providing.
Now, I’m trying to implement your technique and I’m not too seasoned a dev yet. In fact, I’m teaching myself Maya in the process of putting this thing together, so excuse me if my question is trivial or silly. Anyway, here goes:
You say you’re animating by creating quads and then combining them into a mesh. No problems there. But then you say you attach bones to every quad in order to animate them. Do you by any chance mean you attach joints? Because as far as I can tell from the Maya Help, bones are just visual cues for connections between joints. Is that correct?
Also, when you say
Could you expand a bit on that? Are those the skin weights you modify with the Component Editor? What would be the use of that weighting? I mean, wouldn’t you achieve the same result by just parenting the quads to the skeleton?
Finally, if all the quads are combined into one mesh, does that mean they all share the same texture? Do you then use just one image file with all possible poses for every body part?