Posts tagged with "siri"

Tom Gruber, Co-Founder of Siri, Retires from Apple

The Information reports that Tom Gruber, Apple’s head of the Siri Advanced Developments group, has retired to pursue personal interests including photography and ocean conservation. Gruber joined Apple as part of the company’s acquisition of Siri in 2010 along with his co-founders Dag Kittlaus and Adam Cheyer, who left Apple in 2011 and 2012, respectively. In addition to Gruber, The Information reports that Vipul Ved Prakash, Apple’s head of search, has left the company. Apple confirmed both departures to The Information.

Siri, which Apple incorporated into iOS in 2011, has been through recent leadership changes as it has fallen behind voice assistants like Amazon’s Alexa and Google Assistant. In 2017, Craig Federighi, Apple’s Senior Vice President of Software Engineering, took over oversight of Siri from Eddy Cue. Just this past May, Apple hired John Giannandrea, Google’s former Chief of Search and Artificial Intelligence, to be Apple’s Chief of Machine Learning and AI Strategy. Last week, Giannandrea appeared on Apple’s leadership page, and, according to a TechCrunch story, the Siri team now reports to him.

With all of Siri’s co-founders now gone from the company, it will be interesting to see what direction Giannandrea and the Siri team take Apple’s voice assistant in.

Shortcuts: A New Vision for Siri and iOS Automation

In my Future of Workflow article from last year (published soon after the news of Apple's acquisition), I outlined some of the probable outcomes for the app. The more optimistic one – the "best timeline", so to speak – envisioned an updated Workflow app as a native iOS automation layer, deeply integrated with the system and its built-in frameworks. After studying Apple's announcements at WWDC, talking to developers at the conference, and hearing other details about Shortcuts firsthand, I believe the brightest scenario is indeed coming true in a matter of months.

On the surface, Shortcuts the app looks like the full-blown Workflow replacement heavy users of the app have been wishfully imagining for the past year. But there is more going on with Shortcuts than the app alone. Shortcuts the feature, in fact, reveals a fascinating twofold strategy: on one hand, Apple hopes to accelerate third-party Siri integrations by leveraging existing APIs as well as enabling the creation of custom SiriKit Intents; on the other, the company is advancing a new vision of automation through the lens of Siri and proactive assistance from which everyone – not just power users – can reap the benefits.
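Of the two integration paths mentioned above, the lighter-weight one leverages an existing API: donating an NSUserActivity so the system can surface an app action as a Siri shortcut. As a rough sketch of what that looks like in practice (the activity type, title, and phrase below are hypothetical examples, not details from Apple's announcements):

```swift
import Foundation
import Intents

// Donating a user activity so iOS can offer it as a Siri shortcut (iOS 12+).
// The activity type, title, and invocation phrase are made-up examples.
let activity = NSUserActivity(activityType: "com.example.app.showTodayView")
activity.title = "Show Today View"
activity.isEligibleForSearch = true
activity.isEligibleForPrediction = true // lets Siri proactively suggest this action
activity.suggestedInvocationPhrase = "Show my day" // hint shown when recording a phrase
activity.persistentIdentifier = NSUserActivityPersistentIdentifier("show-today-view")

// In a view controller, assigning the activity and marking it current
// performs the donation:
//   self.userActivity = activity
//   activity.becomeCurrent()
```

The heavier path – defining a custom SiriKit Intent – takes more work but lets an app expose parameterized actions rather than a single replayable screen.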

While it's still too early to comment on the long-term impact of Shortcuts, I can at least attempt to understand the potential of this new technology. In this article, I'll try to explain the differences between Siri shortcuts and the Shortcuts app, as well as answer some common questions about how much Shortcuts borrows from the original Workflow app. Let's dig in.


Apple Hires John Giannandrea, Google’s Chief of Search and Artificial Intelligence

According to The New York Times, Apple has hired John Giannandrea, Google’s chief of search and artificial intelligence. In a memo obtained by The Times, Tim Cook said:

“Our technology must be infused with the values we all hold dear,” Mr. Cook said in an email to staff members obtained by The New York Times. “John shares our commitment to privacy and our thoughtful approach as we make computers even smarter and more personal.”

Giannandrea joined Google in 2010 as part of the company’s acquisition of Metaweb and is credited with infusing artificial intelligence across Google’s product line. Giannandrea will report directly to Apple CEO Tim Cook.

This is a huge ‘get’ for Apple and comes fast on the heels of reports that the company is hiring over 100 engineers to improve Siri.


Erasing Complexity: The Comfort of Apple’s Ecosystem

Every year soon after WWDC, I install the beta of the upcoming version of iOS on my devices and embark on an experiment: I try to use Apple's stock apps and services as much as possible for three months, then evaluate which ones have to be replaced with third-party alternatives after September. My reasoning for going through these repetitive stages on an annual basis is simple: to me, it's the only way to build the first-hand knowledge necessary for my iOS reviews.

I also spent the past couple of years testing and switching back and forth between non-Apple hardware and services. I think every Apple-focused writer should try to expose themselves to different tech products to avoid the perilous traps of preconceptions. Plus, besides the research-driven nature of my experiments, I often preferred third-party offerings to Apple's as I felt like they provided me with something Apple was not delivering.

Since the end of last year, however, I've been witnessing a gradual shift that made me realize my relationship with Apple's hardware and software has changed. I've progressively gotten deeper into the Apple ecosystem, and I don't feel underserved by aspects of it anymore.

Probably for the first time since I started MacStories nine years ago, I feel comfortable using Apple's services and hardware extensively not because I've given up on searching for third-party products, but because I've tried them all. And ultimately, none of them made me happier with my tech habits. It took me years of experiments (and a lot of money spent on gadgets and subscriptions) to notice how, for a variety of reasons, I found a healthy tech balance by consciously deciding to embrace the Apple ecosystem.


Siri Struggles with Commands Handled by the Original 2010 App

Nick Heer at Pixel Envy tested how well 2018 Siri performs commands given to the voice assistant in a 2010 demo video. The video takes Siri, which started as a stand-alone, third-party app, through a series of requests like ‘I’d like a romantic place for Italian food near my office.’ Just a couple of months after the video was published, Siri was acquired by Apple and the team behind it, including the video’s narrator, Tom Gruber, began integrating Siri into iOS.

That was eight years ago. Inspired by a tweet, Heer tested how well Siri performs when given the same commands today. As Heer acknowledges, the results will vary depending on your location, and the test is by no means comprehensive, but Siri's performance is an eye-opener nonetheless.

What’s clear to me is that the Siri of eight years ago was, in some circumstances, more capable than the Siri of today. That could simply be because the demo video was created in Silicon Valley, and things tend to perform better there than almost anywhere else. But it’s been eight years since that was created, and over seven since Siri was integrated into the iPhone. One would think that it should be at least as capable as it was when Apple bought it.

Eight years is an eternity in the tech world. Siri has been fairly criticized recently for gaps in the domains it supports and their balkanization across different platforms, but Heer’s tests are a reminder that Siri still has plenty of room for improvement in how it handles existing domains too. Of course, Siri can do things in 2018 that it couldn’t in 2010, but it still struggles with requests that require an understanding of contexts like location or the user’s last command.

Voice-controlled assistants have become a highly competitive space. Apple was one of the first to recognize their potential with its purchase of Siri, but the company has allowed competitors like Amazon and Google to catch up and pass it in many respects. The issues with Siri aren’t new, but that’s the heart of the problem. Given the current competitive landscape, 2018 feels like a crucial year for Apple to sort out Siri’s long-standing limitations.


Smart Speakers and Speech Impairment

Steven Aquino covers an important accessibility angle of smart speakers that I've never truly considered:

Since the HomePod started shipping last week, I’ve taken to Twitter on multiple occasions to (rightfully) rant about the inability of Siri—and its competitors—to parse non-fluent speech. By “non-fluent speech,” I’m mostly referring to stutterers because I am one, but it equally applies to others, such as deaf speakers.

This is a topic I’ve covered before. There has been much talk about Apple’s prospects in the smart speaker market; the consensus seems to be the company lags behind Amazon and Google because Alexa and Google Home are smarter than Siri. What is missing from these discussions and from reviews of these products is the accessibility of a HomePod or Echo or Sonos.

As I see it, this lack of consideration, whether intentional or not, overlooks a crucial part of a speaker product’s story. Smart speakers are a unique product, accessibility-wise, insofar as the voice-first interaction model presents an interesting set of conditions. You can accommodate for blindness and low vision with adjustable font sizes and screen readers. You can accommodate physical motor delays with switches. You can accommodate deafness and hard-of-hearing with closed captioning and using the camera’s flash for alerts.

But how do you accommodate for a speech impairment?

A human assistant would know how to deal with stuttering, dialects, or even just the need to repeat a part of a sentence you got wrong. None of the modern digital assistants currently goes beyond being a slightly humanized command line activated by voice, and I wonder who will get there first.


Loup Ventures’ HomePod Siri Tests

Loup Ventures, a US-based venture capital firm, ran a series of Siri tests on the HomePod to evaluate the assistant's capabilities on Apple's new speaker. After 782 queries, Siri understood 99% of questions but answered only 52% of them correctly – meaning, Siri on the HomePod failed to correctly answer roughly one out of every two questions. I'd love to see a full data set of the questions asked by Loup Ventures, but, overall, it doesn't surprise me that the Google Assistant running on the Google Home speaker was the most accurate in every category.

While Apple clearly has a lot of work ahead for Siri on the HomePod (this was the consensus of all the reviews, too), it also appears that Siri simply performs worse than other assistants because it doesn't support certain domains. Here's Gene Munster (whom you may remember for his Apple TV set predictions), writing on the Loup Ventures blog:

Adding domains will quickly improve Siri’s score. Some domains like navigation, calendar, email, and calling are simply not supported. These questions were met with, “I can’t ___ on HomePod.” Also, in any case that iPhone-based Siri would bring up Google search results, HomePod would reply, “I can’t get the answer to that on HomePod,” which forces you to use your phone or give up on the question altogether. Removing navigation, calling, email, and calendar-related queries from our question set yields a 67% correct response, a jump from overall of 52.3% correct. This means added support for these domains would bring HomePod performance above that of Alexa (64%) and Cortana (57%), though still shy of Google Home (81%). We know Siri has the ability to correctly answer a whole range of queries that HomePod cannot, evidenced by our note here. Apple’s limiting of HomePod’s domains should change over time, at which point we expect the speaker to be vastly more useful and integrated with your other Apple devices.

Adding new supported domains would make Siri's intelligence comparable to Alexa's (at least according to these tests), but Apple shouldn't strive for an honorable second place. Siri should be just as intelligent as (if not more intelligent than) the Google Assistant on every platform. I wonder, though, if this can be achieved in the short term given Siri's fragmentation problems and limited third-party integrations.


The Problem of Many Siris

Bryan Irace writes about one of the biggest challenges Apple faces with Siri:

It’s no easy task for a voice assistant to win over new users in 2018, despite having improved quite a great deal in recent years. These assistants can be delightful and freeing when they work well, but when they don’t, they have a tendency to make users feel embarrassed and frustrated in a way that GUI software rarely does. If one of your first voice experiences doesn’t go the way you expected it to – especially in front of other people – who could blame you for reverting back to more comfortable methods of interaction? Already facing this fundamental challenge, Apple is not doing themselves any favors by layering on the additional cognitive overhead of a heavily fragmented Siri experience.

I think Irace is right on in this observation – Siri's fragmentation is a real problem.

On the more optimistic side, it could be taken as good news that the fix appears fairly obvious: create a single Siri that's consistent across all platforms. This seems like it would be a clear net positive, even though such a change could reduce Siri's accuracy in some cases; for example, I'm guessing Siri on the Apple TV is currently tuned to expect TV and movie queries more than anything else, so it can more effectively produce the right kind of results – tweak that tuning, and Apple will have to work even harder at helping Siri understand context.

One thing that's concerning about the apparent simplicity of this fix is that Apple hasn't made it yet, meaning, perhaps, that the company thinks there's nothing wrong with Siri's current fragmentation. This conversation would be entirely different if Apple had begun showing an increased effort to unify Siri across its platforms, but recently, the opposite has been true instead. The latest major Apple product, HomePod, includes a stripped-down Siri that can't even handle calendar requests. And SiriKit, which launched less than two years ago, was designed in a way that fundamentally increases fragmentation. Irace remarks:

If the Lyft app is installed on your iPhone, you can ask Phone Siri to order you a car. But you can’t ask Mac Siri to do the same, because she doesn’t know what Lyft is. Compare and contrast this with the SDKs for Alexa and the Google Assistant – they each run third-party software server-side, such that installing the Lyft Alexa “skill” once gives Alexa the ability to summon a ride regardless of if you’re talking to her on an Echo in your bedroom, a different Echo in your living room, or via the Alexa app on your phone.

The only recent occasion that comes to mind when Siri has moved in the right direction – gaining knowledge on one platform that previously existed only on another – was when iOS 10.2 brought the full wealth of Apple TV Siri's movie and TV expertise to iOS. This only happened, though, because iOS 10.2 introduced the TV app.

Until Siri can answer the same requests regardless of what platform you're on, most people simply won't learn to trust it. Users shouldn't have to remember which device's Siri can answer which questions – all they should have to remember is those two key words: "Hey Siri."


HomePod Review Roundup

Initial orders of Apple’s new HomePod smart speaker will arrive on doorsteps and in Apple stores beginning Friday in the US, UK, and Australia. Today, reviews were published by several media outlets that have had about a week to test the HomePod. Apple also invited several journalists for a tour of its audio labs in Cupertino with Phil Schiller, hardware VP Kate Bergeron, and senior director of audio design and engineering Gary Greaves.

The consensus of the first wave of reviews is that the HomePod sounds fantastic. Apple has brought its engineering expertise and computing power to bear in a way that reviewers say produces remarkable sound for the HomePod’s size and price.

However, Siri’s limitations and the lack of support for third-party music streaming services also mean that the HomePod’s voice assistant features lag behind those of the Amazon Echo and Google Home. As a result, the HomePod’s appeal will likely be limited to people who already subscribe to Apple Music, use iOS devices, and care about high-quality audio.

Matthew Panzarino of TechCrunch:

Apple’s HomePod is easily the best sounding mainstream smart speaker ever. It’s got better separation and bass response than anything else in its size and boasts a nuance and subtlety of sound that pays off the 7 years Apple has been working on it.

As a smart speaker, it offers best-in-class voice recognition, vastly outstripping the ability of other smart speakers to hear you trying to trigger a command at a distance or while music is playing, but its overall flexibility is limited by the limited command sets that the Siri protocol offers.

Buy a HomePod if you already have Apple Music or you want to have it and you’re in the market for a single incredibly over-designed and radically impressive speaker that will give you really great sound with basically no tuning, fussing, measuring or tweaking.

Nilay Patel sums up what that means for everyone else:

The Apple engineers I talked to were very proud of how the HomePod sounds, and for good reason: Apple’s audio engineering team did something really clever and new with the HomePod, and it really works. I’m not sure there’s anything out there that sounds better for the price, or even several times the price.

Unfortunately, Apple’s audio engineering team wasn’t in charge of just putting out a speaker. It was in charge of the audio components of a smart speaker, one that simply isn’t as smart as its competitors.

That’s really the crux of it: the HomePod sounds incredible, but not so world-bendingly amazing that you should switch away from Spotify, or accept Siri’s frustrating limitations as compared to Alexa.
