
Microsoft Demos 3D Scanning Systems 

Although 3D scanning technology has been available for some time, high costs, cumbersome equipment and sluggish processing times have kept it from winning consumer popularity. So when Microsoft previewed its fast and potentially inexpensive 3D scanning research at last week's TechFest, the demonstrations were a quick hit.

The software giant showed off several scanning methods, each of which was able to scan either a person or an object with relative ease and swiftness, indicating that 3D scanning may soon be available in a showroom near you.

While the average tech enthusiast may not see the need for 3D scanning right now, that may change once the technology is integrated into next-generation gaming and augmented reality. According to Microsoft VP Peter Lee, it could also allow 3D printing hobbyists to more easily reproduce objects they already have.

The three systems demoed by Microsoft were among several 3D scanning projects the company is pursuing. The highest-quality system of the three combined a green-screen booth with six digital cameras to create a 3D replica of anyone who steps inside. The other two setups, however, still managed to produce effective results with much more basic equipment.

The first system, Kinect Fusion, took advantage of the depth-sensing camera built into the Kinect for Windows and Kinect for Xbox. To create a 3D scan, a user simply grabs the Kinect and moves it around the object to be scanned, allowing the camera to establish a fix on the object from all angles. By estimating the position of the camera in space relative to the images it receives, the system is able to generate a 3D image that becomes more comprehensive over time.
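For readers curious how that kind of incremental reconstruction works in principle, here is a minimal sketch in Python. It is purely illustrative and not Microsoft's implementation: the helpers estimate_pose and integrate_depth are hypothetical stand-ins for the pose-tracking and depth-merging steps described above.

```python
import numpy as np

def estimate_pose(depth_frame, model, previous_pose):
    """Estimate where the camera is relative to the model built so far.

    Placeholder: a real system aligns the new depth frame against the
    existing model (e.g. with ICP); here we simply keep the last pose.
    """
    return previous_pose

def integrate_depth(model, depth_frame, pose):
    """Merge the new depth frame into the running 3D model.

    Placeholder: a real system updates a volumetric representation; here
    we only count how many frames have been fused.
    """
    model["frames_fused"] += 1
    return model

def fuse_scan(depth_frames):
    """Incremental loop: each new frame refines the model a little more."""
    model = {"frames_fused": 0}   # stand-in for a 3D volume
    pose = np.eye(4)              # camera starts at the origin
    for frame in depth_frames:
        pose = estimate_pose(frame, model, pose)
        model = integrate_depth(model, frame, pose)
    return model

if __name__ == "__main__":
    # Fake depth frames (random arrays) just to show the control flow.
    frames = [np.random.rand(480, 640) for _ in range(10)]
    print(fuse_scan(frames))
```

The point of the loop is the feedback between the two steps: each new frame is registered against the model assembled so far, which is why the scan becomes more comprehensive the longer the user moves the Kinect around the object.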

It should be noted that this particular system only gathers geometric data and excludes all information about an object's color, creating more of a 3D model than a 3D image.

The other system used a custom app for Windows Phone that works much like the panorama function found on many current smartphones, except that instead of the photographer pivoting in place to capture the surrounding environment, the photographer walks around a person's face, letting the camera take pictures from multiple angles at a fixed height. The result is not only a 3D mesh model but also a color image that is overlaid on the mesh to create a faithful representation of the subject.

The data is then sent to the cloud, where it is processed by a more powerful computer, before the final rendering is downloaded back onto the phone.
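As a rough illustration of that capture-and-offload workflow, the sketch below shows how a client might upload its photos to a reconstruction service and pull the finished mesh back down. The service URL, endpoint and response fields are invented for the example; nothing here reflects an actual Microsoft API.

```python
import requests

# Hypothetical reconstruction endpoint; not a real service.
RECONSTRUCTION_SERVICE = "https://example.com/api/reconstruct"

def upload_and_fetch_mesh(photo_paths, output_path="face_mesh.obj"):
    """Upload the captured photos, let the cloud build the mesh, download it."""
    # Send every photo taken while circling the subject in one request.
    files = [("photos", open(path, "rb")) for path in photo_paths]
    try:
        response = requests.post(RECONSTRUCTION_SERVICE, files=files, timeout=300)
        response.raise_for_status()
    finally:
        for _, handle in files:
            handle.close()

    # Assume the service replies with a link to the finished, textured mesh.
    mesh_url = response.json()["mesh_url"]
    mesh = requests.get(mesh_url, timeout=300)
    mesh.raise_for_status()

    with open(output_path, "wb") as out:
        out.write(mesh.content)
    return output_path
```

The division of labor mirrors the article's description: the phone only captures and displays, while the heavy reconstruction runs on a more powerful machine elsewhere.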

So why would one company have three separate research projects striving toward the same goal? According to Lee, the overlapping projects fall right in line with the competitive culture of Microsoft's 800-person research team.

For the time being, the focus is on improving the technology internally before it is considered for release. But Lee noted that Microsoft plans to include the Kinect Fusion 3D scanning technology in the next version of the Kinect for Windows software development kit.

Full story at All Things D
