Microsoft Cognitive Services, a collection of 25 tools that reached general availability on Tuesday, February 7, lets developers add features such as emotion and sentiment detection, vision and speech recognition, and language understanding to their applications, "all without any expertise in machine learning."
The Custom Speech Service is in public preview, while two other Cognitive Services, the Content Moderator and the Bing Speech API, will move to general availability in March, the company announced Tuesday. Content Moderator allows users to quarantine and review data such as images, text or videos to filter out unwanted material, such as potentially offensive language or pictures. "The Bing Speech API converts audio into text, understands intent and converts text back to speech," says Mike Seltzer.
“The entire collection of Cognitive Services stems from a drive within Microsoft to make its artificial intelligence and machine learning expertise widely accessible to the development community to create delightful and empowering experiences for end users,” said Andrew Shuman.
“Customers are using Cognitive Services that enable developers to apply intelligence to visual data such as pictures and video. For example, business intelligence company Prism Skylabs used the Computer Vision API in its Prism Vision application, which helps organizations search through closed-circuit and security camera footage for specific events, items and people,” added Seltzer.
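To make the idea concrete, here is a minimal sketch of what a call to the Computer Vision API's REST "analyze" endpoint might look like. The region, API version, visual features and key shown are illustrative assumptions, not details from the announcement.

```python
# Sketch of a Computer Vision API "analyze" call over REST.
# Region, API version, and subscription key are placeholder assumptions.
import json
from urllib import request


def build_analyze_request(region, subscription_key, image_url,
                          features=("Description", "Tags")):
    """Assemble the URL, headers, and JSON body for an 'analyze' call."""
    url = ("https://{}.api.cognitive.microsoft.com/vision/v1.0/analyze"
           "?visualFeatures={}".format(region, ",".join(features)))
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # your API key
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return url, headers, body


def analyze_image(region, subscription_key, image_url):
    """Send the request and return the parsed JSON analysis result."""
    url, headers, body = build_analyze_request(region, subscription_key,
                                               image_url)
    req = request.Request(url, data=body, headers=headers)
    with request.urlopen(req) as resp:  # network call; needs a valid key
        return json.load(resp)
```

With a valid subscription key, `analyze_image("westus", key, image_url)` would return a JSON description of the image, including tags and a generated caption, which is the kind of metadata an application like Prism Vision could index for search.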
Having been used by more than 424,000 developers in 60 countries, the services are held in high regard, as Human Interact, the makers of Starship Commander, explain in the video below.
"You've got to make it so that anytime anybody says anything, [the speech recognition engine] is going to understand them and run them down the right path in the script," explained one of the game's creators. "And that," he added, "is the magic of Microsoft Cognitive Services."