CHI 2012: Microsoft Research Shows 'Dual Views on Common LCD Screens'; AR Sandbox; SoundWave; Humantenna; LightGuide; MirageTable and More


Microsoft Research at ACM SIGCHI

At the ACM SIGCHI Conference on Human Factors in Computing Systems this week in Austin, Texas, Microsoft Research is contributing 41 papers and five notes this year (94% of which are co-authored with academic partners), spanning areas such as natural user interfaces (NUI), technologies for developing countries, social networking, healthcare, and search.

"Nine of those papers and notes received an honorable mention from the conference program committee, and Kevin Schofield, general manager and chief operations officer for Microsoft Research, will receive the SIGCHI Lifetime Service Award for his contributions to the growth of the ACM's Special Interest Group on Computer Human Interaction and for his influence on the community at large," reveals Next at Microsoft blog.

Here are some of the highlights:

SoundWave: Using the Doppler Effect to Sense Gestures relies on hardware readily available in computers, laptops, and even mobile devices -- the microphone and speaker -- to sense motion. It's the work of Sidhant Gupta and Shwetak N. Patel of Microsoft Research and the University of Washington, along with Dan Morris and Desney Tan. "The Doppler Effect characterizes the frequency change of a sound wave as a listener moves toward or away from the source, and the team found that it could be used to measure movement, direction, velocity and size of a moving object. With that insight, they were able to create a series of hand gestures that could be recognized by existing hardware."
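
The underlying math is the classic two-way Doppler relation: a tone of frequency f reflected off a hand moving at radial velocity v comes back shifted by roughly 2vf/c. Below is a minimal sketch of recovering hand velocity from that shift, assuming a single inaudible 18 kHz pilot tone and 44.1 kHz capture; the real SoundWave pipeline does considerably more robust peak and bandwidth analysis than this simplified FFT peak pick.

```python
import numpy as np

# Sketch: estimate hand velocity from the Doppler shift of a reflected
# pilot tone, illustrating SoundWave's basic principle. The peak
# detection here is deliberately simplified.
SPEED_OF_SOUND = 343.0   # m/s at room temperature
PILOT_HZ = 18_000.0      # inaudible tone; SoundWave works in this range
SAMPLE_RATE = 44_100

def hand_velocity(mic_samples):
    """Estimate radial hand velocity in m/s from one mic buffer.
    Positive means the hand is moving toward the device."""
    windowed = mic_samples * np.hanning(len(mic_samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(mic_samples), d=1.0 / SAMPLE_RATE)
    # Strongest reflection within +/- 500 Hz of the pilot, excluding
    # the pilot tone itself.
    band = (np.abs(freqs - PILOT_HZ) < 500) & (np.abs(freqs - PILOT_HZ) > 20)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    shift = peak_freq - PILOT_HZ
    # Two-way Doppler: shift ~= 2 * v * f / c  =>  v = shift * c / (2 f)
    return shift * SPEED_OF_SOUND / (2.0 * PILOT_HZ)
```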

Humantenna treats the human body as an antenna, using the ambient electromagnetic noise it picks up as a signal to determine body movement.

The LightGuide: Projected Visualizations for Hand Movement Guidance project explores a new approach to gesture guidance by projecting visual hints directly onto the user's body. As the video below shows, it could be used to guide a user through all manner of activities, such as learning a musical instrument, physiotherapy, or exercise.

MirageBlocks, another project from Hrvoje Benko and Andy Wilson, has been developed with Ricardo Jota into MirageTable. It uses a stereoscopic 3D projector to project virtual content onto a curved screen, while a Kinect sensor captures the scene in front of it. That sensor also tracks the user's gaze, which enables perspective-correct views of the virtual content for a single user.
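
Those "correct perspective views" come from head-coupled rendering: re-deriving an off-axis projection frustum from the tracked eye position every frame. MirageTable uses a curved screen and its own calibration, but a minimal sketch of the standard flat-screen version (all coordinates and names below are illustrative assumptions, not the paper's code) looks like this:

```python
# Minimal sketch of head-coupled, off-axis perspective projection.
# Assumes a flat screen lying in the z = 0 plane with known edges, and
# an eye position from a head/gaze tracker in the same coordinates.

def off_axis_frustum(eye, screen_left, screen_right,
                     screen_bottom, screen_top, near):
    """Return (left, right, bottom, top) frustum extents at the near
    plane for an eye at (ex, ey, ez), ez > 0 in front of the screen."""
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: map screen edges to the near plane
    return ((screen_left - ex) * scale,
            (screen_right - ex) * scale,
            (screen_bottom - ey) * scale,
            (screen_top - ey) * scale)

# Example: a 40 cm x 30 cm screen centered at the origin, with the
# viewer 60 cm away and slightly to the right (units in meters).
l, r, b, t = off_axis_frustum(eye=(0.05, 0.0, 0.60),
                              screen_left=-0.20, screen_right=0.20,
                              screen_bottom=-0.15, screen_top=0.15,
                              near=0.1)
# Feeding l, r, b, t into e.g. glFrustum(l, r, b, t, near, far) and
# recomputing them every frame makes the image track the viewer.
```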

Microsoft Research is also showing dual views on existing LCD displays. The project was shown at the Microsoft TechVista event in India earlier this year and will be demonstrated again this week at CHI 2012.

The technique exploits the fact that laptop LCD screens show different colors and contrast depending on the angle from which they are viewed. The software Microsoft Research has developed allows current LCD screens to show two different images or videos, which change depending on how the screen is angled.

The software uses a random dot pattern to hide images on the screen unless the screen is moved to, or viewed from, the correct angle. The same technique can also be used to create a stereo 3D effect without glasses.
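
The demo doesn't spell out the exact encoding, but the gist can be sketched as a per-pixel optimization: given a photometric calibration of how each panel value appears head-on versus at a steep angle, pick the value that best matches image A from the front and image B from the side. Everything below is an assumption standing in for real calibration data, not Microsoft's implementation:

```python
import numpy as np

# Hypothetical sketch of angle-dependent dual-image encoding.
# appearance_head_on[v] / appearance_oblique[v] would come from
# photometrically calibrating one LCD panel: the perceived gray level
# of panel value v seen head-on vs. at a steep angle. Here they are
# faked with simple curves just to make the sketch run.
panel_values = np.arange(256)
appearance_head_on = panel_values.astype(float)              # assumption
appearance_oblique = 255.0 * (panel_values / 255.0) ** 0.4   # assumption

def encode_dual_view(img_a, img_b):
    """Per pixel, choose the panel value whose calibrated appearance
    best matches img_a head-on and img_b at the oblique angle."""
    err_a = (appearance_head_on[:, None] - img_a.ravel()[None, :]) ** 2
    err_b = (appearance_oblique[:, None] - img_b.ravel()[None, :]) ** 2
    best = np.argmin(err_a + err_b, axis=0)  # joint best fit per pixel
    return best.reshape(img_a.shape).astype(np.uint8)

# Example: hide a bright square (oblique view) in a mid-gray field
# (head-on view).
a = np.full((64, 64), 128.0)
b = np.full((64, 64), 40.0)
b[16:48, 16:48] = 220.0
encoded = encode_dual_view(a, b)
```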

Imagine multiplayer Halo without the split screen and the cheating that goes with it. Or watching two TV shows concurrently. Or, as in the video, playing a game of cards across a screen where each player sees only their own hand. Even more intriguing is the prospect of stereo 3D displays that need no glasses.

Check out the images below as examples. They're in JPS format, which regular image-viewing software can open as a JPEG, but the actual demo effect only appears on a stereo-3D-capable display viewed with 3D glasses.

Microsoft Research shows dual views on existing LCD displays

And here is the video demonstration, Enabling Concurrent Dual Views on Common LCD Screens:

Finally, this project adds to the list of Kinect-enabled applications that blend the physical and digital worlds.

The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences built this for a National Science Foundation-funded project on informal science education, and was inspired by a group of Czech researchers who showed an early prototype of an AR sandbox.

The video description explains:

Video of a sandbox equipped with a Kinect 3D camera and a projector to project a real-time colored topographic map with contour lines onto the sand surface. The sandbox lets virtual water flow over the surface using a GPU-based simulation of the Saint-Venant set of shallow water equations.

We built this for an NSF-funded project on informal science education. These AR sandboxes will be set up as hands-on exhibits in science museums, such as Lawrence Hall of Science or the Tahoe Environmental Research Center.
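
For reference, the Saint-Venant (shallow water) equations mentioned in the description are, in their common two-dimensional conservative form:

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = 0

\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\left(hu^2 + \tfrac{1}{2}gh^2\right)
  + \frac{\partial (huv)}{\partial y} = -gh\,\frac{\partial b}{\partial x}

\frac{\partial (hv)}{\partial t}
  + \frac{\partial (huv)}{\partial x}
  + \frac{\partial}{\partial y}\left(hv^2 + \tfrac{1}{2}gh^2\right) = -gh\,\frac{\partial b}{\partial y}
```

Here h is the water depth, (u, v) the depth-averaged velocity, g gravitational acceleration, and b the terrain elevation, which in the sandbox is the Kinect-scanned sand surface.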

Although the video demonstration shows a delay before the environment reacts to changes in the sandpit, the author explains this is deliberate: it filters out moving objects, such as a hand reshaping the sand. The rendering is currently powered by a GeForce 580.
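
A deliberate delay like that is commonly implemented as a per-pixel stability filter: a depth sample only commits to the terrain once it has stayed put for a while, so a hand sweeping over the sand never enters the height map. Here is a minimal sketch of that idea; the threshold and frame-count values are assumptions, not the project's actual parameters:

```python
import numpy as np

# Hypothetical per-pixel stability filter for Kinect depth frames.
# A pixel's terrain height only updates after its depth reading has
# stayed within TOLERANCE for STABLE_FRAMES consecutive frames, so
# transient objects (hands, tools) are kept out of the height map.
TOLERANCE = 0.01      # meters; assumption
STABLE_FRAMES = 30    # roughly 1 second at 30 fps; assumption

class StableHeightMap:
    def __init__(self, shape):
        self.height = np.zeros(shape)             # accepted terrain
        self.candidate = np.zeros(shape)          # depth under evaluation
        self.stable_count = np.zeros(shape, dtype=int)

    def update(self, depth_frame):
        near = np.abs(depth_frame - self.candidate) < TOLERANCE
        # Pixels still matching their candidate accumulate stability...
        self.stable_count = np.where(near, self.stable_count + 1, 0)
        # ...others restart with the new reading as the candidate.
        self.candidate = np.where(near, self.candidate, depth_frame)
        # Only long-stable pixels are committed to the terrain.
        accepted = self.stable_count >= STABLE_FRAMES
        self.height = np.where(accepted, self.candidate, self.height)
        return self.height
```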

There is also a second video which simulates a dam failure using the same system:

You can find more information at the project site.

[Via]