Steven Aquino on the accessibility implications of Face ID on the iPhone X:
The way Apple has built Face ID, hardware- and software-wise, into iOS quite literally makes using iPhone a “hands-free” experience in many regards. And that’s without discrete accessibility features like Switch Control or AssistiveTouch. That makes a significant difference to users, myself included, whose physical limitations make even the most mundane tasks (e.g., unlocking one’s device) tricky. As with so many accessibility-related topics, the little things that are taken for granted are always the thin…
The combination of Face ID with Raise to Wake (or, arguably, the simplicity of Tap to Wake) truly sounds like a remarkable improvement accessibility-wise, perhaps in a way that we didn’t foresee when we started speculating on Apple abandoning Touch ID. Hands-free unlocking is one of my favorite aspects of the iPhone X experience so far.
Last week we reported on a new cochlear implant that was designed to integrate in special ways with an iPhone. This week, Steven Levy has more details for WIRED on the work that went into bringing this product to fruition.
To solve the huge problem of streaming high-quality audio without quickly draining the tiny zinc batteries in hearing aids, Apple had previously developed a new technology called Bluetooth LEA, or Low Energy Audio. The company released that (but didn’t talk about it) when the first Made for iPhone hearing aids appeared in 2014...“We chose Bluetooth LE technology because that was the lowest power radio we had in our phones,” says Sriram Hariharan, an engineering manager on Apple’s CoreBluetooth team. To make LEA work with cochlear implants, he says, “We spent a lot of time tuning our solution to meet the requirements of the battery technology used in the hearing aids and cochlear implants.” Apple understood that, as with all wireless links, some data packets would be lost in transmission—so the team figured out how to compensate for that, and re-transmit them as needed. “All those things came together to figure out how to actually do this,” says Hariharan.
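The retransmission idea Hariharan describes can be sketched in a few lines. This is a hypothetical illustration, not Apple's implementation: the receiver acknowledges the sequence numbers it actually received, and the sender re-sends only the missing ones.

```python
# Hypothetical sketch of sequence-numbered retransmission (names and
# structure are assumptions, not Apple's actual protocol): compare what
# was sent against what the receiver acknowledged, and re-send the gap.

def packets_to_resend(sent_seqs, acked_seqs):
    """Return the sequence numbers that were lost in transmission."""
    return sorted(set(sent_seqs) - set(acked_seqs))

# Sender transmitted packets 0-4; receiver acknowledged 0, 1, and 3.
lost = packets_to_resend(range(5), [0, 1, 3])
print(lost)  # -> [2, 4]
```

The real constraint, per the article, is doing this within the power budget of a zinc hearing-aid battery, which is where the years of tuning went.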
This story perfectly demonstrates how solving accessibility issues may require a lot of hard work and investment, but in the end it can produce results that are truly life-changing.
Today Cochlear introduced a new cochlear implant sound processor that serves as the first such device directly compatible with iOS devices. The company’s press release states:
With the Nucleus 7 Sound Processor, people with a Cochlear Nucleus Implant can now stream sound directly from a compatible iPhone, iPad and iPod touch directly to their sound processor. They will also be able to control, monitor and customize their hearing on their iPhone or iPod touch through the Nucleus® Smart App available to download for free from the App Store®.
The Nucleus Smart app also includes a feature resembling Apple’s ‘Find My iPhone’ called ‘Find My Processor.’ Especially helpful for children who may be more prone to losing their sound processor, this feature employs an iPhone’s built-in location services to determine the last place the processor was connected to its paired iPhone.
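The mechanic behind 'Find My Processor' can be sketched simply. Everything below is an assumption for illustration (class and method names are invented, not Cochlear's app code): whenever the processor connects or disconnects from its paired iPhone, the app records the phone's current location, so the most recent entry approximates where the processor was last seen.

```python
# Hypothetical sketch of a "Find My Processor"-style feature: record the
# phone's location at each Bluetooth connection event, and report the
# last recorded fix when the processor goes missing.

from datetime import datetime

class ProcessorLocator:
    def __init__(self):
        self._last_seen = None  # (timestamp, latitude, longitude)

    def on_connection_event(self, latitude, longitude):
        """Called whenever the paired processor connects or disconnects."""
        self._last_seen = (datetime.now(), latitude, longitude)

    def last_known_location(self):
        """Return the last recorded fix, or None if never connected."""
        return self._last_seen

locator = ProcessorLocator()
locator.on_connection_event(37.3318, -122.0312)  # example coordinates
print(locator.last_known_location())
```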
Sarah Buhr of TechCrunch notes that today’s announcement is the fruit of a lengthy period of research and development within Apple in response to the growing issue of hearing loss.
Apple...has spent a number of years developing a hearing aid program within the company. Apple soon developed a protocol the company offered for free for hearing aid and implant manufacturers to use with their devices.
Today Microsoft introduced a new app exclusively for iPhone, Seeing AI. This app is designed as a tool for the low vision community; using the iPhone’s camera and its AI smarts, Seeing AI converts the visual experience of the world into an audible one. As you point the camera at things in the world around you, the app will describe that world in a quick, informative manner.
From a user’s perspective, the app is tremendously simple to use; there’s very little that needs to be done before Seeing AI can begin describing the space around you. If you want to identify people, you can first set them up as recognizable from the sidebar menu’s ‘Face Recognition’ option. Otherwise, all you have to do to start identifying things is select from one of five different categories (the app calls them ‘channels’) to help the app understand what type of object it needs to identify. The five current categories are:
- Short Text
- Document
- Product
- Person
- Scene (currently tagged as ‘Beta’)
Microsoft says a category for currency will be coming soon, allowing the app to intelligently identify different denominations of cash.
In my testing, the app is far from perfect at identifying things, but it does a solid job all-around. Though the tech driving the app may still be experimental and have a long way to go, the app is far from barebones in what it can do now. When identifying a document, Seeing AI will audibly guide you through the capture process to help you get the full document in view. After scanning a product’s barcode, in some cases you’ll receive additional information about the product beyond just its name. And if the app is scanning a person, it can even describe a best guess at their visible emotional state. It’s an impressive, deep experience that nevertheless remains dead simple to operate.
Even if you aren’t in the market for Seeing AI yourself, it’s a fascinating product worth checking out, and it’s entirely free. You can download it on the App Store.
Microsoft has a short introductory video that gives a great taste of all that the app can do, embedded below.
Great overview by Steven Aquino on the Accessibility changes coming with iOS 11. In particular, he’s got the details on Type to Siri, a new option for keyboard interaction with the assistant:
Available on iOS and the Mac, Type to Siri is a feature whereby a user can interact with Siri via an iMessage-like UI. Apple says the interaction is one-way; presently it’s not possible to simultaneously switch between text and voice. There are two caveats, however. The first is, it’s possible to use the system-wide Siri Dictation feature (the mic button on the keyboard) in conjunction with typing. Therefore, instead of typing everything, you can dictate text and send commands thusly. The other caveat pertains to “Hey Siri.” According to a macOS Siri engineer on Twitter, who responded to this tweet I wrote about the feature, it seems Type to Siri is initiated only by a press of the Home button. The verbal “Hey Siri” trigger will cause Siri to await voice input as normal.
Technicalities aside, Type to Siri is a feature many have clamored for, and should prove useful across a variety of situations. In an accessibility context, this feature should be a boon for deaf and hard-of-hearing people, who previously may have felt excluded from using Siri due to its voice-first nature. It levels the playing field by democratizing the technology, opening up Siri to an even wider group of people.
I wish there was a way to switch between voice and keyboard input from the same UI, but retaining the ‘Hey Siri’ voice activation seems like a sensible trade-off. I’m probably going to enable Type to Siri on my iPad, where I’m typing most of the time anyway, and where I could save time with "Siri templates" made with native iOS Text Replacements.
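The "Siri templates" idea above boils down to simple shortcut expansion. As a minimal sketch (the shortcuts below are invented examples, not iOS defaults), a short typed abbreviation expands into a full command before it is sent:

```python
# Minimal sketch of text-replacement-style "Siri templates": a short
# typed shortcut expands into a full phrase. Shortcuts are hypothetical.

REPLACEMENTS = {
    "hmw": "How's the weather today?",
    "st15": "Set a timer for 15 minutes",
}

def expand(typed):
    """Expand a typed shortcut into its full phrase, if one is defined."""
    return REPLACEMENTS.get(typed.strip(), typed)

print(expand("st15"))  # -> "Set a timer for 15 minutes"
```

On iOS, the same effect comes for free: define the shortcut once in Settings → General → Keyboard → Text Replacement, and it expands in the Type to Siri field like anywhere else.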
Early last year, James Rath, a young filmmaker who was born legally blind, created a video about the impact Apple products have had on his life. That video caught the attention of Apple:
In the ensuing months, Rath’s YouTube career has taken off and he’s become a strong advocate for the blind.
To mark Global Accessibility Awareness Day, Tim Cook spoke with Rath and two other YouTubers, Rikki Poynter and Tatiana Lee, about accessibility. Cook and Poynter, who is deaf, discussed closed captioning and how accessibility is a core value at Apple. Lee talked to Cook about the Apple Watch and its ability to track wheelchair use. Rath and Cook explored the history of Apple’s commitment to accessibility and the democratization of technology. The interviews follow the release of a series of videos made by Apple spotlighting the accessibility features of its products.
The interviews, which were filmed in the courtyard at Apple’s Infinite Loop campus, are available after the break.
Katie Dupere writes for Mashable about the stories shared in a new series of Apple videos:
Meera is nonverbal, living with a rare condition called schizencephaly that impacts her ability to speak. But with the help of her iPad and text-to-speech technology, she can make her thoughts and opinions known — and she sure does. From her love of Katy Perry to her passion for soccer, Meera will let you know exactly what's on her mind. All it takes is a few taps of her tablet, and with a specialized app stringing letters into words, and words into phrases, her thoughts are played out loud.
Meera's relationship with tech is just one of seven stories featured in a powerful video series created by Apple to spotlight the company's dedication to accessible technology. The videos were released in celebration of Global Accessibility Awareness Day on May 18, a day emphasizing the importance of accessible tech and design.
Accessibility features have long been prioritized in Apple's software, and this new video series tells the stories of people who depend on those features. What to some may simply be an ignored option in the Settings app is to others a pathway to significant new experiences and empowerment.
Some interesting thoughts about the AirPods by Steven Aquino. In particular, he highlights a weak aspect of Siri that isn't usually mentioned in traditional reviews:
The gist of my concern is Siri doesn't handle speech impediments very gracefully. (I've found the same is true of Amazon's Alexa, as I recently bought an Echo Dot to try out.) I’m a stutterer, which causes a lot of repetitive sounds and long breaks between words. This seems to confuse the hell out of these voice-driven interfaces. The crux of the problem lies in the fact that if I don’t enunciate perfectly, which leaves several seconds between words, the AI cuts me off and runs with it. Oftentimes, the feedback is weird or I’ll get a “Sorry, I didn’t get that” reply. It’s an exercise in futility, sadly.
Siri on the AirPods suffers from the same issues I encounter on my other devices. It’s too frustrating to try to fumble my way through if she keeps asking me to repeat myself. It’s for this reason that I don’t use Siri at all with AirPods, having changed the setting to enable Play/Pause on double-tap instead (more on this later). It sucks to not use Siri this way—again, the future implications are glaringly obvious—but it’s just not strong enough at reliably parsing my speech. Therefore, AirPods lose some luster because one of its main selling points is effectively inaccessible for a person like me.
That's a hard problem to solve in a conversational assistant, and exactly the kind of Accessibility area where Apple could lead over other companies.
Apple opened what will in all likelihood be its last event in Town Hall at One Infinite Loop in Cupertino with a video highlighting the importance of the accessibility features built into its products. In addition to the video, Apple has created a separate webpage dedicated to accessibility. The page includes videos covering wheelchair workouts on the Apple Watch, Switch Control on the Mac, Live Listen for the hearing impaired, VoiceOver, and Speak Screen.
You can also follow all of the MacStories coverage of today's Apple keynote through our October 27 Keynote hub, or subscribe to the dedicated October 27 Keynote RSS feed.