View model & world model - How are they synchronised in games like Fallout 3 etc.?

Since the first fully-3D FPS games like Quake came out, developers have used two different models for each weapon that can be equipped by the player:

  1. The “View Model” is what the player commonly sees from their own view in the lower right corner of the screen, with their virtual hands gripping the weapon. This model is generally much more detailed because it is very near the camera, and because a lot of polygons can be deleted if they are never seen by the player (the right side of the weapon, for example).
  2. The “World Model” is what other players, and also the player him or herself from outside would see (for example, when switching to a third-person view). The player model (which is most often not seen from the player’s view) holds the weapon.

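To make the split concrete, here is a minimal sketch (all names and numbers are made up, not from any particular engine): the view model is parented to the camera, while the world model is parented to a hand bone of the animated player skeleton, so their world-space positions are computed from different parents and rarely coincide.

```python
# Hypothetical sketch: the two weapon models hang off different parents,
# so their world-space positions are computed independently.

def view_model_world_pos(camera_pos, camera_forward, offset):
    """View model is attached to the camera: camera position plus a
    fixed offset along the view direction (heavily simplified)."""
    return tuple(c + f * offset for c, f in zip(camera_pos, camera_forward))

def world_model_world_pos(hand_bone_pos):
    """World model is attached to the player skeleton's hand bone."""
    return hand_bone_pos

# The camera sits at eye height and only rotates, while the animated
# hand bone swings around, so the two results generally differ.
camera_pos = (0.0, 1.7, 0.0)       # made-up eye height
camera_forward = (0.0, 0.0, 1.0)
hand_bone = (0.25, 1.3, 0.4)       # made-up pose from the animation

vm = view_model_world_pos(camera_pos, camera_forward, 0.5)
wm = world_model_world_pos(hand_bone)
print(vm, wm)  # two different world positions for "the same" weapon
```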
So far, so good. The problem: the view and world models are not at the same position, or even the same size, in world space! (View models are often elongated along their z-axis so they look better from a first-person perspective.) The camera doesn’t follow every movement of the player model’s head, and thus the view model that is attached to the camera will also not always (if ever) be in the same position as the world model!

How can muzzle flashes and especially projectiles appear at the same position in world space? How are games doing this?

Some images to clear things up:

The player’s view with the view model in blue:

The player model seen from outside, with the (normally invisible) view model in blue and the world model in red. The red line indicates the camera’s aiming direction; the end of the line near the head is where the camera is.

The player model seen from the side. Note how even now, the two models do not align.

Looking up and down makes it worse: The camera doesn’t move, it only rotates (this is common with almost all FPS games), while the upper body and head twist in a realistic fashion:




muzzle flashes - One for first person, another for third. Don’t render the third person one for the player who’s viewing from first person, and vice versa for those seen from third.

projectiles - shoot from the first person view’s offset. Third person, just have a big muzzle flash and pretend it’s okay.
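The two-flashes idea above can be sketched as a simple per-camera filter (a hypothetical helper, not any engine’s actual API): each shot spawns both a first-person and a third-person flash effect, and the renderer draws only the one appropriate for the current viewer.

```python
# Hypothetical sketch: each shot spawns two flash effects; this filter
# decides which one a given camera actually draws.

def should_render_flash(effect_kind, effect_owner, viewer, viewer_in_first_person):
    """effect_kind is 'first_person' or 'third_person';
    effect_owner / viewer are player ids."""
    if effect_kind == "first_person":
        # Only the owner sees it, and only while actually in first person.
        return viewer == effect_owner and viewer_in_first_person
    # Third-person flash: everyone else sees it, plus the owner
    # if they have switched to a third-person camera.
    return viewer != effect_owner or not viewer_in_first_person
```

So for a shot fired by player 1, player 1’s first-person camera draws only the first-person flash, while player 2 (or player 1 zoomed out to third person) draws only the third-person one.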

Some games will fake the initial position of the projectile to make it look like it’s coming from the gun. Other games just make sure the first-person view matches the third-person model’s head position. In most games, if you look closely, the projectiles spawn from essentially the nose of the person shooting when you watch them in third person.
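One common way to fake it (sketched below with made-up numbers and a hypothetical `tracer_pos` helper): the *real* projectile flies from the camera along the aim ray, while the visible tracer starts at the muzzle and blends onto the true path over the first few metres.

```python
# Hypothetical sketch: real hit detection follows the camera ray, while
# the drawn tracer starts at the muzzle and converges onto that ray.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def tracer_pos(camera_pos, muzzle_pos, direction, travelled, blend_dist=3.0):
    """Where to draw the tracer after it has travelled `travelled` metres."""
    true_pos = tuple(c + d * travelled for c, d in zip(camera_pos, direction))
    fake_pos = tuple(m + d * travelled for m, d in zip(muzzle_pos, direction))
    t = min(travelled / blend_dist, 1.0)  # 0 at the muzzle, 1 once converged
    return lerp(fake_pos, true_pos, t)

camera = (0.0, 1.7, 0.0)    # made-up eye position
muzzle = (0.3, 1.4, 0.6)    # made-up muzzle position on the world model
aim = (0.0, 0.0, 1.0)

start = tracer_pos(camera, muzzle, aim, 0.0)   # visually leaves the muzzle
far = tracer_pos(camera, muzzle, aim, 3.0)     # now exactly on the real ray
```

This way the shot *looks* like it comes from the gun, but what actually hits is determined by the camera ray, so aiming stays consistent with the crosshair.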

Interesting topic.
Back-side polygon deletion is something that I have not seen in a while - last gen?
Seems like modern engines (Unity) can handle the issues that back-side polygon deletion solved back in the day.

Aren’t a lot of devs using only one model for this now?
With the FPS camera set up with mesh culling or masking (if desired) so it doesn’t show the body beyond the arms, a character controller that doesn’t map 100% to the character’s movement, and the third-person cameras set up with third-person controls that don’t cull/mask the mesh?

I might be out of touch - I haven’t worked on an FPS game in several years.

Thanks for the responses thus far. After more research, I am under the impression that games really do fake it and don’t bother with any kind of synchronization. However, I still don’t know how a weapon with a continuous beam, like a laser gun, handles this: is there one beam only for the player’s view and another beam for the outside view? What if one of the beams is blocked by a wall, but the other is not (since it’s offset)?

Yes, you are probably right. It was a thing in games like Half-Life when view model animation was limited and poly counts were at a premium.
