Eric Slivka:

Piper Jaffray analyst Gene Munster, who has regularly assessed Siri’s accuracy in terms of correctly interpreting and answering queries, has issued the latest version of his Siri report card, noting that Siri has continued to improve under iOS 7, particularly in terms of being able to properly interpret questions being asked.

My experience in the past four months has been the opposite of what Marco describes: the Italian Siri of iOS 7 fails less than before, is faster (even on 3G), and understands my queries better. Is it because of different servers and the volume of requests that Italian Siri gets? I have no idea.

As I noted in September:

A feature that I didn’t initially like and that I’ve criticized on multiple occasions, Siri, is much improved in iOS 7. I actually am using Siri quite a bit more now, and I was surprised by the quality of the Italian voice, its increased speed, clean new design, and new functions.

It’s still far from perfect, but I’ve been using Siri on a daily basis for phone calls, directions, and Wikipedia integration. I particularly appreciate how iOS 7 made Siri smarter in understanding pronouns, indirect speech, and verb conjugations.

I’m not a “Siri power user” (I don’t know all the possible tricks and commands), but I’m happy with the improvements in iOS 7.

Great story by CNN’s Jessica Ravitz, who found, almost by accident, the woman who says she’s “100% sure” she’s the voice of the original Siri (the one that debuted with iOS 5 exactly two years ago).

Behind this groundbreaking technology there is a real woman. While the ever-secretive Apple has never identified her, all signs indicate that the original voice of Siri in the United States is a voiceover actor who laid down recordings for a client eight years ago. She had no idea she’d someday be speaking to more than 100 million people through a not-yet-invented phone.

Her name is Susan Bennett and she lives in suburban Atlanta.

Phil Dzikiy:

In a quiet server-side update, Apple has given Siri the ability to respond to requests with quotes, notably to suggest that the user is being too long-winded. When asking the assistant a question — presumably one that Apple’s servers find too long or difficult to parse — Siri responds with William Strunk and Thomas Jefferson quotes alluding to brevity.

Certainly a better user experience than simply returning an error for longer questions.

Unsurprisingly, Italian Siri doesn’t come with quotes from renowned Italian authors or historical figures. Siri does have a similar behavior, though: in my tests, Italian Siri always commented on the length of my questions, and even told me that one of them was “kilometric”.

Smart piece by Jason Snell.

There are two key factors involved here: old interface patterns and constant data collection. New designs can be experimented with; parsing data introduces layers of complexity that go deeper than providing a new month view.

Google is working on this kind of technology with Now. It’s plausible to assume Apple is, too.

Brett Terpstra:

I also added a cheat sheet for Siri. I know it’s not the most useful thing in the world; how often are you sitting at your laptop or desktop when you want to use Siri? Still, I’d spent some time exploring and needed a way to practice the various command syntaxes so that I’d be able to use them without thinking so much after I hit the button.

While I always check out every project by Brett, I somehow missed this one. The Siri cheat sheet is part of a bigger collection called Cheaters:

Cheaters is a collection of HTML-based cheat sheets meant for display in an Automator-based popup browser which can float on your screen while you work in other apps.

I have already found some commands I didn’t know Siri supported. I’m now wondering whether any Italian has ever made an Italian Siri cheat sheet as thorough as this one. (Note: this is sad, but unsurprising.)

I look forward to playing more with Cheaters. The way Brett calls the HTML file with Automator is particularly clever and extensible. Make sure to check out the other cheat sheets, which include MultiMarkdown and Sublime Text 2.

Textual Siri


Here's a good article by Rene Ritchie from June 2012 about a textual interface for Siri:

If Spotlight could access Siri's contextually aware response engine, the same great results could be delivered back, using the same great widget system that already has buttons to touch-confirm or cancel, etc.

I completely agree. Spotlight lets you find apps and data to launch on your device; aside from its “assistant” functionality, Siri lets you search for specific information (either on your device or the web). There's no reason find and search shouldn't be together. Siri gained app-launching capabilities, but Spotlight still can't accept Siri-like text input.

The truth is, I think using Siri in public is still awkward. My main use of Siri is adding calendar events or quick alarms when I'm a) cooking or b) driving my car. When I'm working in front of an iPad, I just don't see the point of using voice input when I have a keyboard and the speech recognition software still fails to recognize moderately complex Italian queries. When I'm waiting for my doctor or in line at the grocery store, I just don't want to be that guy who pulls out his phone and starts talking with a robotic assistant. Ten years after my first smartphone, I still prefer avoiding phone calls in public because a) other people don't need to know my business and b) I was taught that talking on the phone in public can be rude. How am I supposed to tell Siri to “read me” my schedule when I have 10 people around me?

I think a textual Siri, capable of accepting written input instead of spoken commands, would provide a great middle ground for those situations when you don't want to/can't talk in public. Like Rene, I think putting the functionality in Spotlight would be a fine choice; apps like Fantastical have shown that “natural language input” with text can still be a modern, useful addition to our devices.

Text input brings different challenges: how would Siri handle typos? Would it wait until you've finished writing a sentence or refresh with results as you type? Would Siri lose its “conversational” approach, or provide buttons to reply with “Yes” or “No” to its follow-up questions?

Text, however, also has its advantages: text is universal, free of voice alterations (think accents and dialects), and independent of surrounding noise and microphone proximity. With a textual Siri, Apple could keep users inside its own ecosystem by letting them ask for restaurant suggestions, weather information, unit conversions, or sports results without having to open other apps or launch Google.

It's just absurd to think semantic search integration can only be applied to voice recognition, especially in the current version of Siri. I agree with Kontra: Siri isn't really about voice.

More importantly: if Google can do it, why can't Apple?

Siri Date Calculations and WolframAlpha


Speaking of Siri, David Sparks posted a great overview of how you can perform date calculations with Siri. I didn't know any of those tips, and I was surprised to find out they are based on WolframAlpha. I have been doing date calculations in WolframAlpha for years, and I didn't even think about using Siri for that purpose.

David's post convinced me to try the same with Siri's Italian sister. Unfortunately, but unsurprisingly, the results were disappointing. First, I asked Siri to calculate the days between April 3, 2010 and September 1, 2010. The query was transcribed correctly, but Siri said she couldn't find a contact in my Address Book.

For the second test, I asked which day it will be 20 days from now, and Siri replied with the following mix of Italian and English:

E' Tuesday, February 26, 2013

It basically told me that today is (“E'” being Italian for “it is”) February 26th, completely ignoring my date query. Lastly, I asked which day it was 17 days ago; this time Siri didn't mix languages, but it replied with the date 17 days from now – March 15th, 2013.

I believe part of the problem is that iOS 6 can still get confused if you use Siri in Italian but keep your device's language set to English. Another example: with a device set to English and Siri in Italian, Maps navigation in iOS 6 still speaks Italian directions…in English. You can imagine how that sounds. But generally, it's Siri's own parsing engine that's inferior to the “real” English Siri.

As I've said many times in the past, Siri still has a long way to go with the Italian language, and the software hasn't improved much since I last checked in November 2012.


What I have been using for quick and reliable date calculations is WolframAlpha. On iOS, the company has a native Universal app that understands my queries just fine 99% of the time and that lets me type faster thanks to a series of extra keyboard rows. It's not pretty, but it is efficient, and it also displays additional information related to your date query – such as date formats, events on a specific day, and the time difference from today. I may not have the same date calculation skills as Dr. Drang, but WolframAlpha has never disappointed me.

The WolframAlpha app is $2.99 on the App Store.

Siri Vs. Google Voice Search, Four Months Later

Rob Griffiths, comparing Siri to Google Voice Search at Macworld:

Because of the speed, accuracy, and usefulness of Google’s search results, I’ve pretty much stopped using Siri. Sure, it takes a bit of extra effort to get started, but for me, that effort is worth it. Google has taken a key feature of the iOS ecosystem and made it seem more than a little antiquated. When your main competitor is shipping something that works better, faster, and more intuitively than your built-in solution, I'd hope that'd drive you to improve your built-in solution.

When the Google Search app was updated with Voice Search in October 2012, I concluded by saying:

Right now, the new Voice Search won’t give smarter results to international users, and it would be unfair to compare it to Siri, because they are two different products. Perhaps Google’s intention is to make Voice Search a more Siri-like product with Google Now, but that’s another platform, another product, and, ultimately, pure speculation.

When Clark Goble posted his comparison of Siri Vs. Google Voice Search in November, I summed up my thoughts on the “usefulness” of both voice input solutions:

I’m always around a computer or iOS device, and the only times when I can’t directly manipulate a UI with my hands is when I’m driving or cooking. I want to know how Siri compares to Google in letting me complete tasks such as converting pounds to grams and texting my girlfriend, not showing me pictures of the Eiffel Tower.

From my interview with John Siracusa:

And yet the one part of Google voice search that Google can control without Apple’s interference — the part where it listens to your speech and converts it to words — has much better perceptual performance than Siri. Is that just a UI choice, where Apple went with a black box that you speak into and wait to see what Siri thinks you said? Or is it because Google’s speech-to-text service is so much more responsive than Apple’s that Google could afford to provide much more granular feedback? I suspect it’s the latter, and that’s bad for Apple. (And, honestly, if it’s the former, then Apple made a bad call there too.)

Now, four months after Google Voice Search launched, I still think Google's implementation is, from a user experience standpoint, superior. While it's nice that Siri says things like “Ok, here you go”, I just want to get results faster. I don't care if my virtual assistant has manners: I want it to be neutral and efficient. Is Siri's distinct personality a key element to its success? Does the way Siri is built justify the fact that Google Voice Search is almost twice as fast as Siri? Or are Siri's manners just a way to provide some feedback while the software works on a process that, in practice, takes longer than Google's?

I still believe that Siri's biggest advantage remains its deep connection with the operating system. Siri is faster to invoke and it can directly plug into apps like Reminders, Calendar, Mail, or Clock. Google can't parse your upcoming schedule or create new calendar events for you. It's safe to assume Apple's policy will always preclude Google from having that kind of automatic, invisible, seamless integration with iOS.

But I have been wondering whether Google could ever take the midway approach and offer a voice-based “assistant” that also plays by Apple's rules.

Example: users can't set a default browser on iOS, but Google shipped Chrome as an app; the Gmail app has push notifications; Google Maps was pulled from iOS 6 and Google released it as a standalone app. What's stopping Google from applying the same concept to a Google Now app? Of course, such an app would be a “watered down” version of Google Now for Android, but it could still request access to your local Calendar and Reminders like other apps can; it would be able to look into your Contacts and location; it would obviously push Google+ as an additional sharing service (alongside the built-in Twitter and Facebook). It would use the Google Maps SDK and offer to open web links in Google Chrome. Search commands would be based on Voice Search technology, but results wouldn't appear in a web view under a search box – it would be a native app. The app would be able to create new events with or without showing Apple's UI; for Mail and Messages integration, it would work just like Google Chrome's Mail sharing: it'd bring up a Mail panel with the transcribed version of your voice command.

Technically, I believe this is possible – not because I am assuming it, but because other apps are doing the exact same thing, only with regular text input. See: Drafts. What I don't know is whether this would be in Google's interest, or if Apple would ever approve it (although, if based on publicly-available APIs and considering Voice Search was approved, I don't see why not).

If such an app ever comes out, how many people would, like Rob, “pretty much stop using Siri”? How many would accept the trade-off of a less integrated solution in return for speed and more reliability?

An earlier version of this post stated that calendar events can’t be created programmatically on iOS. That is, in fact, possible without having to show Apple’s UI, as apps such as Agenda and Fantastical have shown.

iWatch Potential

Bruce “Tog” Tognazzini, Apple employee #66 and founder of the Human Interface Group, has published a great post on the potential of the “iWatch” – a so-called smartwatch Apple could release in the near future (via MG Siegler). While I haven't been exactly excited by the features offered by current smartwatches – namely, the Pebble and other Bluetooth-based watches – the possibilities explored by Bruce made me think about a future ecosystem where, essentially, the iPhone will “think” in the background and the iWatch will “talk” directly to us. I believe that having bulky smartwatches with high-end CPUs won't be nearly as important as ensuring a reliable, constant connection between lightweight wearable devices and the “real” computers in our pocket – smartphones.

The entire post is worth a read, so I'll just highlight a specific paragraph about health tracking:

Having the watch facilitate a basic test like blood pressure monitoring would be a god-send, but probably at prohibitive cost in dollars, size, and energy. However, people will write apps that will carry out other medical tests that will end up surprising us, such as tests for early detection of tremor, etc. The watch could also act as a store-and-forward data collector for other more specialized devices, cutting back the cost of specialized sensors that would then need be little more than a sensor, a Blue Tooth chip, and a battery. Because the watch is always with us, it will be able to deliver a long-term data stream, rather than a limited snapshot, providing insight often missing from tests administered in a doctor’s office.

Dealing with all sorts of blood, temperature, and pressure tests on a regular basis, I can tell you that building data sets that span weeks and months – “archives” of a patient with graphs and charts, for instance – involves, nowadays, too much friction. Monitoring blood pressure is still done with dedicated devices that most people don't know how to operate. But imagine accurate, industry-certified, low-energy sensors capable of monitoring this kind of data and sending it back automatically to an iPhone for further processing, and you can see how friction could be removed while a) making people's lives better and b) building data sets that don't require any user input (you'd be surprised to know how much data can be extrapolated from the combination of “simple” tests like blood pressure monitoring and body temperature).

The health aspect of a possible “iWatch” is just one side of a device that Apple may or may not release any time soon. While I'm not sure about some of the ideas proposed by Bruce (passcode locks seem overly complex when the devices themselves could have biometric scanners built in; Siri conversations in public still feel awkward, and the service is far from responsive, especially on 3G), I believe others are definitely in the realm of the technologically feasible and actually beneficial to users (and Apple). Imagine crowdsourced data from the iWatch applied to Maps, or the iWatch being able to “tell us” about upcoming appointments or reminders when we're driving so we won't have to reach for an iPhone (combine iWatch vibrations and an “always-on” display with Siri Eyes Free and you get the idea).

As our iPhones grow more powerful and connected with each generation, I like to think that, in a not-so-distant future, some of that power will be used to compute data from wearable devices that have a more direct connection to us and the world around us.