I am currently doing some technological research into methods I want to use for a future project, one of these being the scanning of real-world objects with digital cameras (optical scanning).
I might not necessarily use this method for characters, but I thought it would be a fun subject to research, so here is the result of my initial test: capturing a person's facial features.
The eyes were added in later and have tons of issues, but the test gave me some good ideas on how to approach the next round. You can try the demo project here: http://rasmusslot.com/unity/jakob.html
Looks really good!
Do you use any tools/libs apart from Unity?
How much hinting do you need to add to get a result as nice as the one in your demo? (How much tweaking does one need to do after model generation?)
A wireframe mode in the demo would be nice; it would be interesting to see the topology (or at least the poly count) of the mesh. It looks very high-res, and I'm just wondering how well it would respond to optimization.
The model needed a bit of tweaking to look decent, but I suspect most of the mistakes came from the uncontrolled environment the actual images were shot in, plus the fact that the camera was handheld and unfortunately not always in focus, so the results from a more controlled environment should prove far better.
I won't really go into the tools used for now, since I suspect the workflow will change quite a bit in terms of software. In terms of raw poly count, the head model is currently 7147 verts, but this could most likely be a lot lower. I have added another backdrop and a wireframe option for the model to the original demo project: http://rasmusslot.com/unity/jakob.html
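For anyone wondering how that vertex count could be brought down, one of the simplest decimation schemes is vertex clustering: snap vertices to a uniform grid, merge everything in the same cell, and drop the triangles that collapse. This is just an illustrative sketch in plain Python (not the tool or settings used for the demo; the cell size and merge-by-average strategy are my own assumptions):

```python
def cluster_decimate(vertices, faces, cell_size):
    """Simplify a triangle mesh by snapping vertices to a uniform grid.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    Vertices falling into the same grid cell are merged into their average
    position; collapsed and duplicate triangles are dropped.
    """
    cell_of = {}   # grid cell -> new vertex index
    sums = []      # running [sum_x, sum_y, sum_z, count] per new vertex
    remap = []     # old vertex index -> new vertex index
    for x, y, z in vertices:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if key not in cell_of:
            cell_of[key] = len(sums)
            sums.append([0.0, 0.0, 0.0, 0])
        idx = cell_of[key]
        s = sums[idx]
        s[0] += x; s[1] += y; s[2] += z; s[3] += 1
        remap.append(idx)
    new_vertices = [(s[0] / s[3], s[1] / s[3], s[2] / s[3]) for s in sums]

    new_faces = []
    seen = set()
    for i, j, k in faces:
        a, b, c = remap[i], remap[j], remap[k]
        if a == b or b == c or a == c:
            continue  # triangle collapsed to a line or point
        face_key = tuple(sorted((a, b, c)))
        if face_key not in seen:  # drop duplicates created by merging
            seen.add(face_key)
            new_faces.append((a, b, c))
    return new_vertices, new_faces
```

A coarser `cell_size` merges more vertices and loses more detail; in practice you would tune it per region (e.g. keep the face dense, thin out the scalp), which is roughly what manual retopology does anyway.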
The camera used was an entry-level Sony DSLR; I can't remember the specific model. Only one camera was used for this sample, but I hope to do another test using three cameras in a more controlled environment in the near future, to judge the limits of this technique compared to the project at hand.
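For anyone curious what the multi-camera setup buys you: once the cameras are calibrated, the core reconstruction step is triangulating each matched feature point from its pixel positions in two or more views. Here is a minimal sketch of linear (DLT) triangulation from two views, assuming NumPy and known 3x4 projection matrices; this is a textbook illustration, not the exact method any particular scanning package uses:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (assumed known from calibration).
    x1, x2: (u, v) pixel coordinates of the same point in each image.
    Returns the estimated 3D point as a length-3 array.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: x * (P[2] . X) - (P[0] . X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With three cameras you get extra rows in `A` per point, which makes the estimate more robust to the focus and motion-blur problems mentioned above, since a bad observation in one view is outvoted by the other two.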