Uzma Khan, a graduate student in the Department of Computer Science at the University of Toronto, realized that the Kinect natural user interface (NUI) could give children a learning experience that involves seeing, listening and speaking, and touching.
To achieve this, she used the Kinect for Windows SDK to create a prototype application that uses speech and gestures to simplify complex learning and make early childhood education more fun and interactive, said Sheridan Jones, Business and Strategy Director, Kinect for Windows.
“The application asks young children to perform an activity, such as identifying the animals that live on a farm. Using their hands to point to the animals on a computer screen, along with voice commands, the children complete the activities. To reinforce their choices, the application praises them when they make a correct selection,” said Jones.
“Using the speech and gesture recognition capabilities of Kinect enables children to not only learn by seeing, listening, and speaking; it lets them actively participate by selecting, copying, moving, and manipulating colors, shapes, objects, patterns, letters, numbers, and much more.”
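The interaction Jones describes, combining a gesture selection with a voice command and responding with praise, can be sketched in simplified form. The sketch below is hypothetical and is not the Kinect for Windows SDK API: the function name, the animal set, and the input strings are all assumptions for illustration. In a real application, the pointed-at target would come from Kinect skeletal tracking and the spoken word from its speech recognition engine.

```python
# Hypothetical sketch of the selection-and-praise flow described above.
# Not the Kinect for Windows SDK: in the real app, `pointed_at` would come
# from gesture/skeletal tracking and `spoken` from speech recognition.

FARM_ANIMALS = {"cow", "pig", "sheep", "hen"}  # assumed activity data

def evaluate_selection(pointed_at: str, spoken: str) -> str:
    """Combine a gesture selection and a voice command into feedback."""
    if spoken.lower() != pointed_at.lower():
        # Gesture and speech disagree, so prompt the child to retry.
        return "Try again: say the name of the animal you are pointing at."
    if pointed_at.lower() in FARM_ANIMALS:
        # Reinforce a correct selection with praise, as the app does.
        return f"Great job! A {pointed_at.lower()} lives on a farm."
    return f"Not quite: a {pointed_at.lower()} does not live on a farm."

print(evaluate_selection("Cow", "cow"))
print(evaluate_selection("Lion", "lion"))
```

The point of the sketch is only the feedback loop: each activity pairs two input channels (pointing and speaking) and answers immediately, which is what lets children learn by doing rather than only by watching.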
Watch the video: