I saw this video and was floored. It was done with a Kinect (the motion sensor for Microsoft's Xbox video game system), which costs roughly $100.
It works in the following way:
The Kinect projects a laser dot pattern into a scene and looks for distortions in it using an infrared camera, a technique called structured-light depth sensing. This generates a "point cloud" of distances from the camera, which the Kinect uses to perceive and identify objects and gestures in real time.
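To make the depth-to-point-cloud step concrete, here is a minimal sketch (not the actual Kinect SDK) of back-projecting a depth image into 3-D points with a standard pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are illustrative placeholders, not the Kinect's real calibration values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project each depth pixel (in meters) to a 3-D point (x, y, z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # horizontal offset scaled by depth
    y = (v - cy) * z / fy   # vertical offset scaled by depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy example: a 2x2 "depth image" with every pixel one meter away
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

Every frame the Kinect captures yields one such cloud; the trick described next is stitching thousands of them together.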
A KinectFusion user waves a Kinect around a scene or object. An algorithm called iterative closest point (ICP) merges the data from snapshots taken at 30 frames per second into an ever-more-detailed 3-D representation. ICP is also used to track the position and orientation of the camera by comparing each new frame against previous frames and the composite merged representation. The team describes the use of a standard computer graphics processing unit for both camera tracking and image generation as a major innovation. (source)
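The ICP idea in the paragraph above can be sketched in a few lines: repeatedly match each point to its nearest neighbor in the other cloud, then solve for the best-fit rigid transform. This is a bare-bones point-to-point version using the Kabsch/SVD method; KinectFusion's actual GPU implementation is far more sophisticated (it uses a point-to-plane variant and projective matching), so treat this as illustrative only.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match nearest points, then solve the best-fit
    rigid transform (R, t) via the Kabsch/SVD method."""
    # Brute-force nearest-neighbor correspondences
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)]
    # Center both clouds and solve for the rotation with an SVD
    src_c, tgt_c = source.mean(0), matched.mean(0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t

def icp(source, target, iters=20):
    """Iterate until the source cloud aligns with the target."""
    for _ in range(iters):
        source, R, t = icp_step(source, target)
    return source, R, t

# Toy usage: recover a small known translation between two scans
target = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.5, 0.0],
                   [0.0, 0.0, 2.0], [1.0, 1.0, 1.0], [2.0, 0.5, 0.3]])
source = target + np.array([0.1, -0.05, 0.2])  # the "new frame"
aligned, R, t = icp(source, target)
```

In KinectFusion the same alignment that merges the frames also tells you where the camera moved, since the recovered transform between consecutive frames is exactly the camera's motion.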
They still have not released the program to do this yet. It is also hard to tell the exact resolution of the 3-D model. It might not be high enough resolution for artifact scans, but for buildings or digs it might be. Sure would save a lot of time on building surveys to have something like this. More info from the site here.