Well, I wouldn’t call that projection mapping; that’s simply a projection. Projection mapping more often refers to projecting a scene onto a non-uniform surface, such as a building. It’s also not straightforward to accomplish.
e.g.
https://www.youtube.com/watch?v=O0XKmU5hF5s
For the interaction part you can generally use touch-screens, webcams or the Kinect.
However, in the specific case of the video you posted I don’t see how the Kinect could work. For starters it doesn’t have the field of view to capture the entire length of the projection. Secondly, if you place it next to the projection there isn’t the depth range to capture people putting their arms out, and it’s also on the limit for capturing people’s bodies that close.
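To put rough numbers on the field-of-view problem, here’s a back-of-the-envelope sketch. The ~57 degree horizontal field of view and 0.8–4m depth range used below are the commonly quoted figures, not official specs:

```cpp
// Rough check of how wide a strip the Kinect can see at a given distance.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979;
    const double fovDegrees = 57.0;  // commonly quoted horizontal FOV
    for (double distance = 1.0; distance <= 4.0; distance += 1.0) {
        // visible width = 2 * distance * tan(FOV / 2)
        double width = 2.0 * distance * std::tan((fovDegrees / 2.0) * kPi / 180.0);
        std::printf("At %.0f m the Kinect sees a strip ~%.1f m wide\n",
                    distance, width);
    }
    return 0;
}
```

Even at the far end of its depth range that’s only a strip around 4m wide, nowhere near the length of a wall-sized projection.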
The obvious place for a camera would be in the ceiling; however, once the angle goes above 30-45 degrees the Kinect will not recognise people, so again it’s unlikely to work in this instance. You could place the Kinect on the opposite wall to the projection, but then you wouldn’t capture people touching the projection.
In this specific case it looks more like they used a standard webcam or mini-cam placed in the ceiling (maybe more than one) and simple blob tracking. Not to mention the video was uploaded in 2007, some time before the Kinect was released.
So overall, is it possible to create something exactly like your example video? Definitely. In terms of hardware I’d suggest either a touch-screen (the easy method) or one or more webcams in the ceiling with basic blob tracking. Do a Google search and you’ll find plenty of code examples of blob tracking; it isn’t that difficult (there’s a minimal sketch below). However, I don’t think a Kinect would work or be appropriate in this instance.
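For a flavour of what basic blob tracking looks like, here’s a minimal sketch using OpenCV (background subtraction plus contour detection). The camera index, blur size and area threshold are placeholder values you’d tune for your own setup:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // ceiling webcam (index is a placeholder)
    if (!cap.isOpened()) return 1;

    cv::Ptr<cv::BackgroundSubtractor> subtractor =
        cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, mask;
    while (cap.read(frame)) {
        subtractor->apply(frame, mask);   // foreground (moving people) mask
        cv::medianBlur(mask, mask, 5);    // knock out speckle noise
        // MOG2 marks shadows as 127; thresholding drops them
        cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        for (const auto& contour : contours) {
            if (cv::contourArea(contour) < 500.0) continue;  // ignore tiny blobs
            cv::Rect blob = cv::boundingRect(contour);
            // blob.x / blob.y give you each person's position,
            // which is what you'd feed into the projected scene
            cv::rectangle(frame, blob, cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("blobs", frame);
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```

From there it’s just a matter of mapping blob positions from camera space into your projection’s coordinate space.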
If you want to use the Kinect for interactive experiences, then you have two options.
-
OpenNI
OpenNI is an initiative started by PrimeSense, who created the Kinect, or rather the tech behind the Kinect. It’s fully featured, providing a large number of interactive options, from full-on skeleton tracking to simple hand tracking, as well as access to the camera streams (RGB, depth, IR).
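Just to give a flavour of the API, here’s a bare-bones sketch using the OpenNI 1.x C++ wrapper, along the lines of the bundled NiSimpleRead sample: it opens the depth stream and reads the distance at the centre pixel each frame. (In Unity you’d go through a wrapper package instead; the frame count and trimmed error handling here are placeholders.)

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;

    context.StartGeneratingAll();

    xn::DepthMetaData depthMD;
    for (int i = 0; i < 100; ++i) {
        context.WaitOneUpdateAll(depth);  // block until a new depth frame
        depth.GetMetaData(depthMD);
        // depth value in millimetres at the centre of the frame
        XnDepthPixel centre = depthMD(depthMD.XRes() / 2, depthMD.YRes() / 2);
        std::printf("frame %d: centre depth %d mm\n",
                    (int)depthMD.FrameID(), (int)centre);
    }

    context.Shutdown();
    return 0;
}
```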
-
MS Kinect SDK
Unfortunately the SDK uses .NET 4, which will not work with Unity. I think someone may have written a wrapper plugin to get around this, but it’s not really a straightforward method, and the last time I looked at the MS SDK it wasn’t nearly as fully featured as OpenNI.
In both cases you’ll find it’s quite a steep learning curve, and although OpenNI can be harder, it’s the one I’d recommend at the moment.
If you search the Unity forums you’ll find a few threads on getting OpenNI/Kinect working in Unity; these are all quite old now but should give you some insight. Better yet, join the UnityKinect Google group and the OpenNI Google group, and maybe download ‘Zigfu’, a complete framework based around OpenNI that includes a nice Unity package to get you started. It is aimed at getting Kinect experiences running in browsers, though, and in my opinion may well become a licensed product at some point.