Smart Speakers and Speech Impairment

Steven Aquino covers an important accessibility angle of smart speakers that I’ve never truly considered:

Since the HomePod started shipping last week, I’ve taken to Twitter on multiple occasions to (rightfully) rant about the inability of Siri—and its competitors—to parse non-fluent speech. By “non-fluent speech,” I’m mostly referring to stutterers because I am one, but it equally applies to others, such as deaf speakers.

This is a topic I’ve covered before. There has been much talk about Apple’s prospects in the smart speaker market; the consensus seems to be that the company lags behind Amazon and Google because Alexa and Google Home are smarter than Siri. What is missing from these discussions and from reviews of these products is the accessibility of a HomePod or Echo or Sonos.

As I see it, this lack of consideration, whether intentional or not, overlooks a crucial part of a speaker product’s story. Smart speakers are a unique product, accessibility-wise, insofar as the voice-first interaction model presents a distinct set of constraints. You can accommodate blindness and low vision with adjustable font sizes and screen readers. You can accommodate physical motor delays with switches. You can accommodate deaf and hard-of-hearing users with closed captioning and the camera’s flash for alerts.

But how do you accommodate a speech impairment?

A human assistant would know how to deal with stuttering, dialects, or even just the need to repeat part of a sentence you got wrong. None of the modern digital assistants currently goes beyond being a slightly humanized, voice-activated command line, and I wonder which company will get there first.