This technology will be previewed tomorrow at Siggraph. It’s pretty amazing…
http://labs.live.com/photosynth/videodemo.html
And check out this Java applet to get a feel for how it will work…
http://phototour.cs.washington.edu/applet/index.html
Neat! It seems like an extremely idealized demonstration of how it’d work, though: all the pictures are from around the same time of day, no one is mooning the camera, and it’s a small environment.
So does this use GPS data or what?
-Jon
No, I don’t think so. It uses some kind of search engine (I can’t tell if it’s only text-based) to scour the web for images of a particular location, and then it somehow analyzes the photos to figure out which part of the location each one shows. You can actually find the original photos used in the demo on Flickr.com.
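For a rough feel of what that photo-analysis step might involve: one common way to decide whether two photos show the same part of a scene is to cut each into small patches (or detect local features) and count near-identical matches between them. This is only a toy NumPy sketch of that idea on synthetic images, not the actual algorithm the demo uses:

```python
import numpy as np

rng = np.random.default_rng(42)
scene = rng.random((100, 160))          # stand-in for a landmark
photo_a = scene[:, :120]                # two "tourist photos" of it...
photo_b = scene[:, 40:]                 # ...same scene, camera panned right
unrelated = rng.random((100, 120))      # a photo of somewhere else

def patch_descriptors(img, size=8, step=8):
    """Cut the image into small patches and flatten each into a vector."""
    h, w = img.shape
    return np.array([img[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, step)
                     for x in range(0, w - size + 1, step)])

def match_count(img1, img2, thresh=0.5):
    """Count patches of img1 that have a near-identical patch in img2."""
    d1, d2 = patch_descriptors(img1), patch_descriptors(img2)
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    return int((dists.min(axis=1) < thresh).sum())

print(match_count(photo_a, photo_b))    # many matches: same place
print(match_count(photo_a, unrelated))  # almost none: different place
```

Lots of shared matches means the two photos overlap, and *which* patches match tells you roughly how they overlap. The real system presumably does something far more robust to lighting, scale, and viewpoint changes than raw patch comparison.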
Out of curiosity, I just did a search of Flickr using the term “Pisa” and got back 21,544 photos! :shock:
And there were 55K responses to “Eiffel.” :shock: :shock:
I guess the search engine doesn’t have to be too sophisticated!