Google builds accessibility features into many of its products to help users across the globe. Today, the G Suite accessibility team officially announced a new feature that brings automated captions to Google Slides.
The feature activates during a presentation in Slides: “it detects spoken presentation through [the] computer’s microphone and then transcribes [what a presenter says] as captions on the slides they’re presenting in ‘real time’.”
While closed captioning is built primarily to benefit people with hearing disabilities, Google says it will also help those with “no hearing loss” who are listening in a noisy environment or a room with poor acoustics. It likewise helps when a presenter is speaking a non-native language or is not projecting their voice.
Here is how the feature works:
First, make sure an internal or external microphone is connected to the device. Then, after starting a presentation, tap or click the “CC” button in the navigation box, or use the keyboard shortcut: Ctrl + Shift + C on Chrome OS and Windows, or ⌘ + Shift + C on Mac.
As soon as the user speaks into the device’s microphone, closed captions automatically appear in real time at the bottom of the screen.
Like Google’s other speech recognition-enabled services, the closed captions are powered by machine learning (ML).
As a result, caption accuracy is heavily influenced by the speaker’s accent, voice modulation, and intonation.
The feature currently supports U.S. English and a single presenter, and works in the Chrome browser on a laptop or desktop computer with an internal or external microphone.
The new “automated closed captions” in Google Slides will roll out gradually to all users worldwide beginning “this week.”
Google plans to add support for more languages in the future and is working to improve caption quality.
Here is a video look at how closed captions in Google Slides work: