
Posts tagged with "accessibility"

Apple Leads First-Ever Swift Playgrounds Course for Blind and Low-Vision Students in Austin

Earlier this week, Apple engineers visited the Texas School for the Blind and Visually Impaired, where they led a programming course from the company’s Everyone Can Code curriculum. According to the Austin American-Statesman’s technology blog, Open Source, the class was the first such session led by Apple for blind and low-vision students.

With the assistance of VoiceOver, the students completed assignments in Apple’s Swift Playgrounds iPad app. They also got a chance to go outside and fly Parrot drones using Swift Playgrounds. Vicki Davidson, a technology teacher at the school, told Open Source:

“We see this as a way to get them interested in coding and realize this could open job opportunities,” said Vicki Davidson, a technology teacher at the Texas School for the Blind and Visually Impaired. “Apple has opened up a whole new world for kids by giving them instant access to information and research, and now coding.” 

Apple’s director of accessibility, Sarah Herrlinger, who will participate in a session on Innovations in Accessibility at South By Southwest on March 15th, said:

“When we said everyone should be able to code, we really meant everyone,” said Sarah Herrlinger, Apple’s director of accessibility. “Hopefully these kids will leave this session and continue coding for a long time. Maybe it can inspire where their careers can go.”

Swift Playgrounds and Apple’s Everyone Can Code curriculum have grown at a remarkable rate and are fantastic resources for students, teachers, and parents. However, it’s Apple’s long-standing commitment to accessibility across all of its products that helps ensure that those resources are available to as many students as possible.

Permalink

Smart Speakers and Speech Impairment

Steven Aquino covers an important accessibility angle of smart speakers that I’ve never truly considered:

Since the HomePod started shipping last week, I’ve taken to Twitter on multiple occasions to (rightfully) rant about the inability of Siri—and its competitors—to parse non-fluent speech. By “non-fluent speech,” I’m mostly referring to stutterers because I am one, but it equally applies to others, such as deaf speakers.

This is a topic I’ve covered before. There has been much talk about Apple’s prospects in the smart speaker market; the consensus seems to be the company lags behind Amazon and Google because Alexa and Google Home are smarter than Siri. What is missing from these discussions and from reviews of these products is the accessibility of a HomePod or Echo or Sonos.

As I see it, this lack of consideration, whether intentional or not, overlooks a crucial part of a speaker product’s story. Smart speakers are a unique product, accessibility-wise, insofar as the voice-first interaction model presents an interesting set of conditions. You can accommodate for blindness and low vision with adjustable font sizes and screen readers. You can accommodate physical motor delays with switches. You can accommodate deafness and hard-of-hearing with closed captioning and using the camera’s flash for alerts.

But how do you accommodate for a speech impairment?

A human assistant would know how to deal with stuttering, dialects, or even just the need to repeat a part of a sentence you got wrong. None of the modern digital assistants currently goes beyond being a slightly humanized command line activated by voice, and I wonder who will get there first.

Permalink

What Face ID Means for Accessibility

Steven Aquino on the accessibility implications of Face ID on the iPhone X:

The way Apple has built Face ID, hardware- and software-wise, into iOS quite literally makes using iPhone a “hands-free” experience in many regards. And that’s without discrete accessibility features like Switch Control or AssistiveTouch. That makes a significant difference to users, myself included, whose physical limitations make even the most mundane tasks (e.g., unlocking one’s device) tricky. As with so many accessibility-related topics, the little things that are taken for granted are always the things that matter most.

The combination of Face ID with Raise to Wake (or, arguably, the simplicity of Tap to Wake) truly sounds like a remarkable improvement accessibility-wise, perhaps in a way that we didn’t foresee when we started speculating on Apple abandoning Touch ID. Hands-free unlocking is one of my favorite aspects of the iPhone X experience so far.

Permalink

Steven Levy on Apple Putting Voices in Users’ Heads

Last week we reported on a new cochlear implant that was designed to integrate in special ways with an iPhone. This week, Steven Levy has more details for WIRED on the work that went into bringing this product to fruition.

To solve the huge problem of streaming high-quality audio without quickly draining the tiny zinc batteries in hearing aids, Apple had previously developed a new technology called Bluetooth LEA, or Low Energy Audio. The company released that (but didn’t talk about it) when the first Made for iPhone hearing aids appeared in 2014…“We chose Bluetooth LE technology because that was the lowest power radio we had in our phones,” says Sriram Hariharan, an engineering manager on Apple’s CoreBluetooth team. To make LEA work with cochlear implants, he says, “We spent a lot of time tuning our solution to meet the requirements of the battery technology used in the hearing aids and cochlear implants.” Apple understood that, as with all wireless links, some data packets would be lost in transmission—so the team figured out how to compensate for that, and re-transmit them as needed. “All those things came together to figure out how to actually do this,” says Hariharan.

This story perfectly demonstrates how solving accessibility issues may require a lot of hard work and investment, but in the end it can produce results that are truly life-changing.
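
Hariharan’s point about lost packets is worth dwelling on: any low-power radio link drops data, so the sender has to keep recent packets around long enough to resend the ones the receiver reports missing. Here’s a minimal Swift sketch of that idea; the types and the NACK-driven design are my own illustrative assumptions, not Apple’s actual LEA protocol:

    // Hypothetical sketch of sequence-numbered audio packets with
    // NACK-driven retransmission. Not Apple's actual implementation.
    struct AudioPacket {
        let sequenceNumber: UInt16
        let payload: [UInt8]
    }

    final class RetransmitBuffer {
        private var sent: [UInt16: AudioPacket] = [:]
        private let capacity: Int

        init(capacity: Int = 32) {
            self.capacity = capacity
        }

        // Record every packet as it goes out. Stale audio is useless
        // to the listener, so once the buffer is full, drop the oldest
        // packet (ignoring sequence wraparound for brevity).
        func didSend(_ packet: AudioPacket) {
            sent[packet.sequenceNumber] = packet
            if sent.count > capacity, let oldest = sent.keys.min() {
                sent.removeValue(forKey: oldest)
            }
        }

        // When the receiver reports a gap (a NACK), hand the missing
        // packet back for retransmission, if it's still buffered.
        func packetToResend(for nackedSequence: UInt16) -> AudioPacket? {
            sent[nackedSequence]
        }
    }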

Permalink

Cochlear Launches First Made for iPhone Hearing Implant

Today Cochlear introduced a new cochlear implant sound processor that serves as the first such device directly compatible with iOS devices. The company’s press release states:

With the Nucleus 7 Sound Processor, people with a Cochlear Nucleus Implant can now stream sound directly from a compatible iPhone, iPad and iPod touch directly to their sound processor. They will also be able to control, monitor and customize their hearing on their iPhone or iPod touch through the Nucleus® Smart App available to download for free from the App Store®.

The Nucleus Smart app also includes a feature resembling Apple’s ‘Find My iPhone’ called ‘Find My Processor.’ Especially helpful for children who may be more prone to losing their sound processor, this feature employs an iPhone’s built-in location services to determine the last place the processor was connected to its paired iPhone.
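
Cochlear hasn’t published how Find My Processor is implemented, but the described behavior maps neatly onto two iOS frameworks: CoreBluetooth to notice the moment the processor disconnects, and CoreLocation to snapshot where the phone was when that happened. A hedged sketch (the class and property names are my own, purely illustrative):

    import CoreBluetooth
    import CoreLocation

    // Illustrative names only; this is not Cochlear's actual code.
    final class ProcessorLocator: NSObject, CBCentralManagerDelegate {
        private let locationManager = CLLocationManager()
        private(set) var lastKnownProcessorLocation: CLLocation?

        // Required by CBCentralManagerDelegate; scanning and
        // connection setup are elided from this sketch.
        func centralManagerDidUpdateState(_ central: CBCentralManager) {}

        // When the sound processor's Bluetooth link drops, snapshot
        // the phone's current location as the place the processor
        // was last connected.
        func centralManager(_ central: CBCentralManager,
                            didDisconnectPeripheral peripheral: CBPeripheral,
                            error: Error?) {
            lastKnownProcessorLocation = locationManager.location
        }
    }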

Sarah Buhr of TechCrunch notes that today’s announcement is the fruit of a lengthy period of research and development within Apple in response to the growing issue of hearing loss.

Apple…has spent a number of years developing a hearing aid program within the company. Apple soon developed a protocol the company offered for free for hearing aid and implant manufacturers to use with their devices.

Permalink

Microsoft Launches iPhone App for Low Vision Community: Seeing AI

Today Microsoft introduced a new app exclusively for iPhone, Seeing AI. Designed as a tool for the low vision community, the app uses the iPhone’s camera and Microsoft’s AI smarts to convert the visual experience of the world into an audible one. As you point the camera at your surroundings, the app describes them in a quick, informative manner.

From a user’s perspective, the app is tremendously simple to use; there’s very little that needs to be done before Seeing AI can begin describing the space around you. If you want to identify people, you can first set them up as recognizable from the sidebar menu’s ‘Face Recognition’ option. Otherwise, all you have to do to start identifying things is select from one of five different categories (the app calls them ‘channels’) to help the app understand what type of object it needs to identify. The five current categories are:

  • Short Text
  • Document
  • Product
  • Person
  • Scene (currently tagged as ‘Beta’)

Microsoft says a category for currency will be coming soon, allowing the app to intelligently identify different denominations of cash.
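
Microsoft hasn’t shared Seeing AI’s internals, but the channel picker suggests a simple model: each channel selects a different recognition pipeline. A speculative Swift sketch of that structure, including the not-yet-shipped currency channel:

    // Speculative model of the channel picker; none of this is
    // Microsoft's actual code.
    enum SeeingAIChannel: String, CaseIterable {
        case shortText = "Short Text"
        case document  = "Document"
        case product   = "Product"
        case person    = "Person"
        case scene     = "Scene"     // currently tagged as 'Beta'
        case currency  = "Currency"  // announced as coming soon

        // Each channel implies a different recognition pipeline; the
        // Product channel, for instance, starts with a barcode scan.
        var startsWithBarcodeScan: Bool { self == .product }
    }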

In my testing of the app, it’s far from perfect in its ability to identify things, but it has done a solid job all-around. Though the tech driving the app may only be experimental and have a long way to go, the app is far from barebones in what it can do now. When identifying a document, Seeing AI will audibly guide you through the capture process to help you get the full document in view. After scanning a product’s barcode, in some cases you’ll receive additional information about the product beyond just its name. And if the app is scanning a person, it can even describe a best guess at their visible emotional state. It’s an impressive, deep experience that nevertheless remains dead simple to operate.
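
For a concrete sense of what the Product channel’s barcode step involves, here’s a rough Swift sketch using Apple’s Vision framework; whether Seeing AI uses Vision internally is unknown, so this only illustrates the general technique:

    import Vision

    // Detect a barcode in an image and report its decoded payload
    // (e.g. a product's UPC), or nil if none was found.
    func detectBarcodePayload(in image: CGImage,
                              completion: @escaping (String?) -> Void) {
        let request = VNDetectBarcodesRequest { request, _ in
            let barcode = request.results?.first as? VNBarcodeObservation
            completion(barcode?.payloadStringValue)
        }
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }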

Even if you aren’t in the market for Seeing AI yourself, it’s a fascinating product worth checking out, and it’s entirely free. You can download it on the App Store.

Microsoft has a short introductory video that gives a great taste of all that the app can do, embedded below.


iOS 11 and Accessibility

Great overview by Steven Aquino on the Accessibility changes coming with iOS 11. In particular, he’s got the details on Type to Siri, a new option for keyboard interaction with the assistant:

Available on iOS and the Mac, Type to Siri is a feature whereby a user can interact with Siri via an iMessage-like UI. Apple says the interaction is one-way; presently it’s not possible to simultaneously switch between text and voice. There are two caveats, however. The first is, it’s possible to use the system-wide Siri Dictation feature (the mic button on the keyboard) in conjunction with typing. Therefore, instead of typing everything, you can dictate text and send commands thusly. The other caveat pertains to “Hey Siri.” According to a macOS Siri engineer on Twitter, who responded to this tweet I wrote about the feature, it seems Type to Siri is initiated only by a press of the Home button. The verbal “Hey Siri” trigger will cause Siri to await voice input as normal.

Technicalities aside, Type to Siri is a feature many have clamored for, and should prove useful across a variety of situations. In an accessibility context, this feature should be a boon for deaf and hard-of-hearing people, who previously may have felt excluded from using Siri due to its voice-first nature. It levels the playing field by democratizing the technology, opening up Siri to an even wider group of people.

I wish there were a way to switch between voice and keyboard input from the same UI, but retaining the ‘Hey Siri’ voice activation seems like a sensible trade-off. I’m probably going to enable Type to Siri on my iPad, where I’m typing most of the time anyway, and where I could save time with “Siri templates” made with native iOS Text Replacements.

Permalink

Tim Cook Interviewed For Global Accessibility Awareness Day

Early last year, James Rath, a young filmmaker who was born legally blind, created a video about the impact Apple products have had on his life. That video caught the attention of Apple:

In the ensuing months, Rath’s YouTube career has taken off and he’s become a strong advocate for the blind.

To mark Global Accessibility Awareness Day, Tim Cook spoke with Rath and two other YouTubers, Rikki Poynter and Tatiana Lee, about accessibility. Cook and Poynter, who is deaf, discussed closed captioning and how accessibility is a core value at Apple. Lee talked to Cook about the Apple Watch and its ability to track wheelchair use. Rath and Cook explored the history of Apple’s commitment to accessibility and the democratization of technology. The interviews follow the release of a series of videos made by Apple spotlighting the accessibility features of its products.

The interviews, which were filmed in the courtyard at Apple’s Infinite Loop campus, are available after the break.

Read more


Apple Releases New Video Series to Promote Accessibility Awareness

Katie Dupere writes for Mashable about the stories shared in a new series of Apple videos:

Meera is nonverbal, living with a rare condition called schizencephaly that impacts her ability to speak. But with the help of her iPad and text-to-speech technology, she can make her thoughts and opinions known — and she sure does. From her love of Katy Perry to her passion for soccer, Meera will let you know exactly what’s on her mind. All it takes is a few taps of her tablet, and with a specialized app stringing letters into words, and words into phrases, her thoughts are played out loud.

Meera’s relationship with tech is just one of seven stories featured in a powerful video series created by Apple to spotlight the company’s dedication to accessible technology. The videos were released in celebration of Global Accessibility Awareness Day on May 18, a day emphasizing the importance of accessible tech and design.

Accessibility features have long been prioritized in Apple’s software, and this new video series tells the stories of people who depend on those features. What to some may simply be an ignored option in the Settings app is to others a pathway to significant new experiences and empowerment.
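
The mechanism behind apps like the one Meera uses is speech synthesis, which any iOS app can tap into through AVFoundation. A minimal sketch (the example phrase comes from the video; the specific app Meera uses isn’t named, and may well layer much more on top):

    import AVFoundation

    // Minimal text-to-speech sketch using AVSpeechSynthesizer, the
    // system API that augmentative-communication apps can build on.
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    speak("I love Katy Perry.")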

Permalink