Microsoft’s Beijing research lab has taken 3D modeling a step further, producing a near-perfect 3D render of a face. The technique combines 3D scanning and a motion capture system with a newly developed analysis that determines how many face scans are needed to create an accurate render. The result is renders that can be produced both faster and more accurately.
Microsoft’s system picks up the complex nuances of an actor’s facial expressions. Image: Microsoft Research
In a paper to be presented at the SIGGRAPH computer graphics conference in Vancouver, B.C., this week, Microsoft researchers tip their hats to the technologies used in those movies for accurate 3D computer modeling of the human face.
Although this achievement may not seem like much on the face of it, it’s very likely that this technology will later find its way into Kinect (if Microsoft has the technology, why not use it?), ultimately making Kinect more powerful than it already is. More capable technology in the product should mean more features and a better overall user experience. A better experience for users means more Kinect sales, which in turn means more money to put into further research.
From the paper’s abstract:
This paper introduces a new approach for acquiring high-fidelity 3D facial performances with realistic dynamic wrinkles and fine-scale facial details. Our approach leverages state-of-the-art motion capture technology and advanced 3D scanning technology for facial performance acquisition. We start the process by recording 3D facial performances of an actor using a marker-based motion capture system and perform facial analysis on the captured data, thereby determining a minimal set of face scans required for accurate facial reconstruction. We introduce a two-step registration process to efficiently build dense consistent surface correspondences across all the face scans. We reconstruct high-fidelity 3D facial performances by combining motion capture data with the minimal set of face scans in the blendshape interpolation framework. We have evaluated the performance of our system on both real and synthetic data. Our results show that the system can capture facial performances that match both the spatial resolution of static face scans and the acquisition speed of motion capture system.
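The “blendshape interpolation framework” the abstract mentions is a standard idea in facial animation: a new facial pose is built by adding weighted offsets of a set of registered scans to a neutral face. The sketch below is a minimal illustration of that general idea, not the researchers’ implementation — the array shapes, the function name, and the toy data are all assumptions for illustration.

```python
import numpy as np

def blendshape_interpolate(neutral, scans, weights):
    """Combine a neutral face with weighted scan deltas.

    neutral: (N, 3) vertex positions of the rest pose.
    scans:   (K, N, 3) registered face scans, assumed to be in dense
             vertex correspondence (the paper's two-step registration
             is what establishes this in practice).
    weights: (K,) per-scan blend weights; in a real pipeline these
             would be driven by the motion capture markers.
    """
    deltas = scans - neutral  # (K, N, 3) offsets from the rest pose
    # Weighted sum of the deltas, added back onto the neutral face.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: a 3-vertex "face" and two synthetic scans.
neutral = np.zeros((3, 3))
scans = np.stack([np.full((3, 3), 1.0), np.full((3, 3), -1.0)])
face = blendshape_interpolate(neutral, scans, np.array([0.5, 0.25]))
```

Here every vertex ends up at 0.5·1 + 0.25·(−1) = 0.25 from the neutral pose; the point of the paper is choosing a *minimal* set of scans so that such weighted combinations still reproduce fine wrinkle detail.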