We are working on a game about wolves, and it’ll be important for the wolf model to have some pretty detailed facial expression animations (such as howling and snarling). That requires a fair bit of rigging, but all that rigging becomes excessive overhead for much of the game when the wolves are just running around the environment. So we’re wondering if we need two versions of the model, one with full body rigging and the other with facial rigging, plus a camera angle tied to the face for shots where the facial expression is critical. Or maybe there’s another way to swap or blend in the detailed version as needed? Is that possible? Any suggestions?
In GC Palestine we have a lot of facial animation as well as a lot of people onscreen. What we do is swap the models whenever we know somebody’s face needs to be animated.
Basically, we have a setup like this:
Low LOD - for when they’re about 32 px high on screen, approx. 200 polys.
Medium LOD - 2 objects, one for the face and one for the body, about 2k polys. This is the one most commonly used.
High-poly face - a 2k-poly face we can swap in for the medium face when they’re speaking with the player. We seldom have more than 3 of these on screen.
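The tier setup above boils down to picking a model by distance (or on-screen size) and dialogue state, and only swapping meshes when the tier changes. Here’s a minimal engine-agnostic sketch of that idea; the thresholds, names, and classes are all hypothetical, and in Unity you’d enable/disable renderers or swap meshes where the comment indicates:

```python
# Minimal sketch of tiered LOD selection and swap-on-change.
# All thresholds and names are illustrative, not from the thread.

LOW, MEDIUM, HIGH_FACE = "low", "medium", "high_face"

def pick_lod(distance, is_speaking_to_player):
    """Choose a model tier from camera distance and dialogue state."""
    if is_speaking_to_player:
        return HIGH_FACE      # swap in the detailed face for dialogue
    if distance > 40.0:       # hypothetical range where the character is ~32 px high
        return LOW
    return MEDIUM

class CharacterLod:
    def __init__(self):
        self.current = None

    def update(self, distance, is_speaking_to_player):
        tier = pick_lod(distance, is_speaking_to_player)
        if tier != self.current:  # only do the (costly) swap when the tier changes
            self.current = tier
            # engine-specific: enable the renderer/mesh for `tier`,
            # disable the others
        return self.current
```

The swap-on-change check matters: you don’t want to re-assign meshes every frame, only when a character crosses a tier boundary or enters/leaves dialogue.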
One thing: we are targeting pretty low-end hardware, so your numbers may be different from ours. Ours runs fine on an Intel integrated graphics card on a 3-year-old machine…
Awesome.
It would be really cool if anyone felt like making some sort of LOD-swap package for those of us who can’t implement it from scratch. Ideally you’d want to LOD-swap everything, wouldn’t you, like mipmaps…?
Or is excessive swapping a pitfall too? The GC Palestine project is great in that it looks like it’s doing all the things we’d like to be doing eventually. It’s cool that you guys are doing that, ’cause it must fully road-test Unity before you pass it on to us. Another bonus… cool, ay?
AC