Posts tagged with "siri"

Hands-On with the HomePod’s New Intercom Feature, Alarms, and Siri Tricks

With yesterday’s releases of iOS 14.1 and HomePod Software Version 14.1, which could really use a catchier name, Apple has introduced several new features announced last week at its iPhone 12 and HomePod mini event. Most readers are probably already familiar with what’s in the updates based on our iPhone 12 and HomePod mini overviews, so I thought I’d update my HomePods and devices to provide some hands-on thoughts about the changes.

Most of the new features are related to the HomePod. Although proximity-based features are exclusive to the HomePod mini, which features Apple’s U1 Ultra Wideband chip, some of the other functionality revealed last week is available on all HomePod models.



Apple’s HomePod mini: The MacStories Overview

I have two HomePods: one in our living room and another in my office. They sound terrific, and I’ve grown to depend on the convenience of controlling HomeKit devices, adding groceries to my shopping list, checking the weather, and being able to ask Siri to pick something to play when I can’t think of anything myself. My office isn’t very big, though, and when rumors of a smaller HomePod surfaced, I was curious to see what Apple was planning.

Today, those plans were revealed during the event the company held remotely from the Steve Jobs Theater in Cupertino. Apple introduced the HomePod mini, a diminutive $99 smart speaker that’s just 3.3 inches tall and 3.8 inches wide. In comparison, the original HomePod is 6.8 inches tall and 5.6 inches wide. At just 0.76 pounds, the mini is also considerably lighter than the 5.5-pound original HomePod.



John Giannandrea on the Broad Reach of Machine Learning in Apple’s Products

Today Samuel Axon at Ars Technica published a new interview with two Apple executives: SVP of Machine Learning and AI Strategy John Giannandrea and VP of Product Marketing Bob Borchers. The interview is lengthy yet well worth reading, especially since it’s the most we’ve heard from Apple’s head of ML and AI since he departed Google to join the company in 2018.

Based on some of the things Giannandrea says in the interview, it sounds like he’s had a very busy two years. For example, when asked to list ways Apple has used machine learning in its recent software and products, Giannandrea lists a variety of things before ultimately indicating that it’s harder to name things that don’t use machine learning than ones that do.

There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning. It’s hard to find a part of the experience where you’re not doing some predictive [work].

One interesting tidbit mentioned by both Giannandrea and Borchers is that Apple’s increased dependence on machine learning hasn’t led to the company talking about ML non-stop. I’ve noticed this too – whereas a few years ago the company might have thrown out ‘machine learning’ countless times during a keynote presentation, these days it’s intentionally more careful and calculated about invoking the term, and I think for good reason. As Giannandrea puts it, “I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart sort of disappear.” Borchers expounds on that idea:

This is clearly our approach, with everything that we do, which is, ‘Let’s focus on what the benefit is, not how you got there.’ And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened.

The full interview covers subjects like Apple’s Neural Engine, Apple Silicon for Macs, the benefits of handling ML tasks on-device, and much more, including a fun story from Giannandrea’s early days at Apple. You can read it here.


Castro Debuts Extensive Siri and Shortcuts Podcast Controls

One of my favorite podcast clients, Castro, debuted a big update today that adds a host of Siri commands and strong Shortcuts support.

There are now 30 requests you can make of Castro through Siri, which can access all the world’s open podcasts. We know it can be hard to remember them all, so we made a handy reference guide in Settings → Siri where you can find what you’re looking for to make your day a little easier.

Besides the wide range of commands available in Castro, what’s especially impressive is the guide referenced above: Castro’s team has built an excellent Siri Guide and a related in-app Shortcuts Gallery, both of which are accessible from the app’s settings and highlight, simply and beautifully, everything that’s possible with Siri and Shortcuts.

Castro’s Siri Guide and Shortcuts Gallery.

Discovery is one of the biggest challenges I’ve found with apps that support Siri and Shortcuts, as apps seldom provide a list of all their supported voice commands and actions. With both Siri and Shortcuts, I’ve struggled in the past to find great podcast-related uses for these features, but Castro solved that problem for me.

On the Siri front, skipping chapters and managing my queue via voice works great. With Shortcuts, Castro offers some great pre-built shortcuts that do things like import your full Apple Podcasts library, clear all your queued episodes, subscribe to a new show even when you don’t have a proper Castro link, and more. While it’s always nice having the tools to build something custom, as someone who isn’t a heavy Shortcuts tinkerer I appreciate the work put in by Castro’s team to offer users extra functionality with minimal effort.
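For readers curious what this kind of integration looks like under the hood, here’s a minimal sketch in Swift (emphatically not Castro’s actual code) of how a podcast app could expose an action like clearing the queue to Siri and the Shortcuts app by donating an NSUserActivity. The activity type, function name, and invocation phrase are all made up for illustration.

```swift
import UIKit
import Intents

// A sketch only: donate a hypothetical "clear queue" action so Siri and the
// Shortcuts app can suggest it and run it later. Identifiers are illustrative.
func donateClearQueueShortcut(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.podcasts.clear-queue")
    activity.title = "Clear my queue"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true            // lets Siri suggest the action
    activity.suggestedInvocationPhrase = "Clear my podcast queue"

    // Attaching the activity to the visible view controller donates it to the system.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```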


Apple, Google, Amazon, and the Zigbee Alliance Announce Project Connected Home Over IP

In a press release today, Apple announced that it is part of a new working group with Google, Amazon, and the Zigbee Alliance called Project Connected Home over IP. According to Apple’s press release:

The goal of the Connected Home over IP project is to simplify development for manufacturers and increase compatibility for consumers. The project is built around a shared belief that smart home devices should be secure, reliable, and seamless to use. By building upon Internet Protocol (IP), the project aims to enable communication across smart home devices, mobile apps, and cloud services and to define a specific set of IP-based networking technologies for device certification.

Apple says smart home device makers IKEA, Legrand, NXP Semiconductors, Resideo, Samsung SmartThings, Schneider Electric, Signify (formerly Philips Lighting), Silicon Labs, Somfy, and Wulian will also contribute to the project. The group wants to make it easier for manufacturers of smart home devices to integrate with Amazon’s Alexa, Apple’s Siri, and Google’s Assistant and will take an open source approach to the development of the joint protocol.

This is fantastic news. To date, many smart home devices have adopted support for some, but not all, smart assistants. It has also become all too common for companies to announce support for certain assistants was coming to their products only to delay or abandon it altogether. With a unified approach across the three major companies with smart assistants, support will hopefully be more consistent in the future.


Apple to Open Siri Up to Third-Party Messaging and Phone Apps

Bloomberg reports that Apple will open up Siri to third-party messaging apps with a software update later this year. Third-party phone apps will be added later. According to Bloomberg’s Mark Gurman:

When the software refresh kicks in, Siri will default to the apps that people use frequently to communicate with their contacts. For example, if an iPhone user always messages another person via WhatsApp, Siri will automatically launch WhatsApp, rather than iMessage. It will decide which service to use based on interactions with specific contacts. Developers will need to enable the new Siri functionality in their apps. This will be expanded later to phone apps for calls as well.

As Gurman notes, the company’s change in approach comes as Apple is facing scrutiny over the competitive implications of its dual role as app maker and App Store gatekeeper in the US and elsewhere.

It’s interesting that the update is a Siri-only change. Users still won’t be able to replace Messages with WhatsApp or Phone with Skype as their default messaging and phone apps, for instance, but it strikes me as a step in the right direction and a change that I hope leads to broader customization options on iOS and iPadOS.
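Bloomberg doesn’t describe the API, but SiriKit’s existing messaging domain gives a rough idea of the kind of integration developers would need to enable. Below is a minimal sketch of an INSendMessageIntent handler; the class name is arbitrary, and the actual delivery step is left as a comment because it depends entirely on the app’s own messaging stack.

```swift
import Intents

// A minimal sketch of a SiriKit messaging handler, typically hosted in an
// Intents extension. The sending step itself is app-specific and omitted.
class SendMessageIntentHandler: NSObject, INSendMessageIntentHandling {

    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand intent.recipients and intent.content to the app's own
        // messaging pipeline here, then report the result to Siri.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```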


Siri in iOS 13: SiriKit for Media, New Suggestions, and a Better Voice

Another year, another batch of Siri improvements aimed at enhancing what’s already there, but not radically transforming it. Siri in iOS 13 comes with a handful of changes, all of which are in line with the types of iteration we’re used to seeing for Apple’s intelligent assistant. Siri now offers suggested actions in more places and ways than before, its voice continues becoming more human, and perhaps this year’s biggest change is a new SiriKit domain for media, which should enable – after the necessary work by third-party developers – audio apps like Spotify, Overcast, and Audible to be controlled by voice the way Apple’s native Music, Podcasts, and Books apps can be.
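To give a sense of what adopting the new media domain involves, here’s a minimal sketch of an INPlayMediaIntent handler in Swift. The PlaybackEngine call is a hypothetical stand-in for an app’s own player, and a real adoption would also resolve Siri’s media search terms against the app’s library before playback.

```swift
import Intents

// A minimal sketch of iOS 13's SiriKit media domain. PlaybackEngine is a
// hypothetical placeholder for the app's own player, so its call is a comment.
class PlayMediaIntentHandler: NSObject, INPlayMediaIntentHandling {

    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        guard let identifier = intent.mediaItems?.first?.identifier else {
            completion(INPlayMediaIntentResponse(code: .failure, userActivity: nil))
            return
        }
        // PlaybackEngine.shared.play(itemIdentifier: identifier)   // hypothetical
        print("Would start playback of media item \(identifier)")

        // .handleInApp asks the system to launch the app in the background
        // so playback can continue there.
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}
```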



Apple Announces Changes to Siri Grading Program

Earlier this month, Apple suspended its Siri grading program, in which third-party contractors listened to small snippets of audio to evaluate Siri’s effectiveness. Today in a press release, Apple explained its Siri grading program and changes the company is making:

We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading. We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.

Apologizing for not living up to the privacy standards customers expect from it, Apple outlined three changes that will be implemented this fall when operating system updates are released:

First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve.

Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time.

Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

This is a sensible plan. It’s clear, concise, and has the benefit of being verifiable once implemented. It’s unfortunate that Siri recordings were being handled this way in the first place, but I appreciate the plain-English response and unambiguous plan for the future.


Apple Suspends Program In Which Contractors Listened to Recorded Siri Snippets

Last week, The Guardian reported on Apple’s Siri grading program in which contractors listen to snippets of audio to evaluate the effectiveness of Siri’s response to its trigger phrase. That article quoted extensively from an anonymous contractor who said they and other contractors regularly heard private user information as part of the program.

In response, Apple has announced that it is suspending the Siri grading program worldwide. While suspended, Apple says it will re-evaluate the program and issue a software update that will let users choose whether to allow their audio to be used as part of the program.

In a statement to Matthew Panzarino, the editor-in-chief of TechCrunch, Apple said:

“We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement to TechCrunch. “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

In an earlier response to The Guardian, Apple had said that less than 1% of daily Siri requests are sent to humans as part of the grading program. However, that’s not very comforting to users who are left wondering whether snippets of their daily life are part of the audio shared with contractors. Consequently, I’m glad to see that Apple is re-examining its Siri quality-control efforts and has promised to give users a choice of whether they participate.