WWDC keynotes cover a lot of ground, hitting the highlights of the OS updates Apple plans to release in the fall. However, as the week progresses, new details emerge from session videos, developers trying new frameworks, and others who bravely install the first OS betas. So, as with past WWDCs, we’ve supplemented our iOS and iPadOS 15, macOS Monterey, watchOS 8, and tvOS 15 coverage with all the small things we’ve found interesting this week:
Matthew Panzarino, reporting for TechCrunch, says the latest beta version of iOS and iPadOS 14.5 includes two new English Siri voices. The report elaborates that the existing female voice is no longer the default and that users will choose the voice they want to use with Apple’s voice assistant when setting up a device for the first time.
In a statement to TechCrunch, an Apple spokesperson said:
We’re excited to introduce two new Siri voices for English speakers and the option for Siri users to select the voice they want when they set up their device. This is a continuation of Apple’s long-standing commitment to diversity and inclusion, and products and services that are designed to better reflect the diversity of the world we live in.
Panzarino says he’s heard the new voices and likes them a lot, and he plans to embed samples in his story once he has the sixth iOS 14.5 beta installed.
I’m surprised that Apple is adding new Siri voices this late in the iOS 14 cycle, but it’s a welcome change that eliminates bias and makes Siri a more diverse and inclusive service.
As a smaller, affordable smart speaker tightly integrated with Apple services, the HomePod mini is a compelling product for many people. The mini is small enough to work just about anywhere in most homes. At $99, the device’s price tag also fits more budgets and makes multiple HomePod minis a far more realistic option than multiple original HomePods ever were. Of course, the mini comes with tradeoffs compared to its larger, more expensive sibling, which I’ll get into, but for many people, it’s a terrific alternative.
As compelling as the HomePod mini is as a speaker, though, its potential as a smart device reaches beyond the original HomePod in ways that have far greater implications for Apple’s place in customers’ homes. Part of the story is the mini’s ability to serve as a border router for Thread-compatible smart devices, forming a low-power, mesh network that can operate independently of your Wi-Fi setup. The other part of the story is the way the mini extends Siri throughout your home. Apple’s smart assistant still has room to improve. However, the promise of a ubiquitous audio interface to Apple services, apps, HomeKit devices, and the Internet is more compelling than ever as Siri-enabled devices proliferate.
For the past couple of months, I’ve been testing a pair of HomePod minis that Apple sent me. That pair joined my original HomePods and another pair of minis that I added to the setup to get a sense of what having a whole-home audio system with Siri always within earshot would be like. The result is a more flexible system that outshines its individual parts and should improve over time as the HomeKit device market evolves.
With yesterday’s releases of iOS 14.1 and HomePod Software Version 14.1, which could really use a catchier name, Apple has introduced several new features announced last week at its iPhone 12 and HomePod mini event. Most readers are probably already familiar with what’s in the updates based on our iPhone 12 and HomePod mini overviews, so I thought I’d update my HomePods and devices to provide some hands-on thoughts about the changes.
Most of the new features are related to the HomePod. Although proximity-based features are exclusive to the HomePod mini, which features Apple’s U1 Ultra Wideband chip, some of the other functionality revealed last week is available on all HomePod models.
I have two HomePods: one in our living room and another in my office. They sound terrific, and I’ve grown to depend on the convenience of controlling HomeKit devices, adding groceries to my shopping list, checking the weather, and being able to ask Siri to pick something to play when I can’t think of anything myself. My office isn’t very big, though, and when rumors of a smaller HomePod surfaced, I was curious to see what Apple was planning.
Today, those plans were revealed during the event the company held remotely from the Steve Jobs Theater in Cupertino. Apple introduced the HomePod mini, a diminutive $99 smart speaker that’s just 3.3 inches tall and 3.8 inches wide. In comparison, the original HomePod is 6.8 inches tall and 5.6 inches wide. At just 0.76 pounds, the mini is also considerably lighter than the 5.5-pound original HomePod.
Today Samuel Axon at Ars Technica published a new interview with two Apple executives: SVP of Machine Learning and AI Strategy John Giannandrea and VP of Product Marketing Bob Borchers. The interview is lengthy yet well worth reading, especially since it’s the most we’ve heard from Apple’s head of ML and AI since he departed Google to join the company in 2018.
Based on some of the things Giannandrea says in the interview, it sounds like he’s had a very busy two years. For example, when asked to list ways Apple has used machine learning in its recent software and products, Giannandrea lists a variety of things before ultimately indicating that it’s harder to name things that don’t use machine learning than ones that do.
There’s a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we’ve released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we’re not using machine learning. It’s hard to find a part of the experience where you’re not doing some predictive [work].
One interesting tidbit mentioned by both Giannandrea and Borchers is that Apple’s increased dependence on machine learning hasn’t led to the company talking about ML non-stop. I’ve noticed this too – whereas a few years ago the company might have thrown out ‘machine learning’ countless times during a keynote presentation, these days it’s intentionally more careful and calculated in its use of the term, and I think for good reason. As Giannandrea puts it, “I think that this is the future of the computing devices that we have, is that they be smart, and that, that smart sort of disappear.” Borchers expounds on that idea:
This is clearly our approach, with everything that we do, which is, ‘Let’s focus on what the benefit is, not how you got there.’ And in the best cases, it becomes automagic. It disappears… and you just focus on what happened, as opposed to how it happened.
The full interview covers subjects like Apple’s Neural Engine, Apple Silicon for Macs, the benefits of handling ML tasks on-device, and much more, including a fun story from Giannandrea’s early days at Apple. You can read it here.
There are now 30 requests you can make of Castro through Siri, which can access all the world’s open podcasts. We know it can be hard to remember them all, so we made a handy reference guide in Settings → Siri where you can find what you’re looking for to make your day a little easier.
Besides the wide range of possible commands in Castro, what’s especially impressive is the guide referenced above: Castro’s team has built an excellent Siri Guide and a related in-app Shortcuts Gallery, both of which are accessible via settings and highlight simply and beautifully what’s possible with Siri and Shortcuts.
Discovery is one of the biggest challenges I’ve found with apps that support Siri and Shortcuts, as apps seldom make a list available of all supported voice commands and actions. With both Siri and Shortcuts, I’ve struggled in the past to find great podcast-related uses for these features, but Castro solved that problem for me.
On the Siri front, skipping chapters and managing my queue via voice works great. With Shortcuts, Castro offers some great pre-built shortcuts that do things like import your full Apple Podcasts library, clear all your queued episodes, subscribe to a new show even when you don’t have a proper Castro link, and more. While it’s always nice having the tools to build something custom, as someone who isn’t a heavy Shortcuts tinkerer I appreciate the work put in by Castro’s team to offer users extra functionality with minimal effort.
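For readers curious how apps surface these voice commands in the first place, the usual mechanism on iOS is SiriKit: an app donates interactions (such as playing a media item) to the system so Siri can learn and suggest them. The sketch below is a minimal, hypothetical example of that general pattern using Apple’s real `INPlayMediaIntent` API – the identifiers and episode details are made up, and this is not Castro’s actual implementation.

```swift
import Intents

// Hypothetical sketch: how a podcast app might donate a "play this episode"
// interaction to Siri using SiriKit's media intents (iOS 13+).
// The identifier and title below are placeholder values, not real data.
func donatePlayEpisodeInteraction() {
    // Describe the media item Siri should associate with this interaction.
    let episode = INMediaItem(identifier: "episode-123",
                              title: "Example Episode",
                              type: .podcastEpisode,
                              artwork: nil)

    // Wrap the item in a play-media intent.
    let intent = INPlayMediaIntent(mediaItems: [episode],
                                   mediaContainer: nil,
                                   playShuffled: false,
                                   playbackRepeatMode: .none,
                                   resumePlayback: false)

    // Donate the interaction so Siri can surface it as a suggestion
    // and learn the user's listening habits over time.
    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Donation failed: \(error.localizedDescription)")
        }
    }
}
```

Donations like this are how the system builds up the catalog of app-specific requests Siri can handle, which is why a reference guide like Castro’s is so valuable: the commands exist, but nothing in iOS lists them for the user automatically.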
In a press release today, Apple announced that it is part of a new working group with Google, Amazon, and the Zigbee Alliance called Project Connected Home over IP. According to Apple’s press release:
The goal of the Connected Home over IP project is to simplify development for manufacturers and increase compatibility for consumers. The project is built around a shared belief that smart home devices should be secure, reliable, and seamless to use. By building upon Internet Protocol (IP), the project aims to enable communication across smart home devices, mobile apps, and cloud services and to define a specific set of IP-based networking technologies for device certification.
Apple says smart home device makers IKEA, Legrand, NXP Semiconductors, Resideo, Samsung SmartThings, Schneider Electric, Signify (formerly Philips Lighting), Silicon Labs, Somfy, and Wulian will also contribute to the project. The group wants to make it easier for manufacturers of smart home devices to integrate with Amazon’s Alexa, Apple’s Siri, and Google’s Assistant and will take an open source approach to the development of the joint protocol.
This is fantastic news. To date, many smart home devices have adopted support for some, but not all, smart assistants. It has also become all too common for companies to announce support for certain assistants was coming to their products only to delay or abandon it altogether. With a unified approach across the three major companies with smart assistants, support will hopefully be more consistent in the future.
Bloomberg reports that Apple will open up Siri to third-party messaging apps with a software update later this year. Third-party phone apps will be added later. According to Bloomberg’s Mark Gurman:
When the software refresh kicks in, Siri will default to the apps that people use frequently to communicate with their contacts. For example, if an iPhone user always messages another person via WhatsApp, Siri will automatically launch WhatsApp, rather than iMessage. It will decide which service to use based on interactions with specific contacts. Developers will need to enable the new Siri functionality in their apps. This will be expanded later to phone apps for calls as well.
As Gurman notes, the company’s change in approach comes as Apple is facing scrutiny over the competitive implications of its dual role as app maker and App Store gatekeeper in the US and elsewhere.
It’s interesting that the update is a Siri-only change. Users will still not be able to replace Messages with WhatsApp or Phone with Skype as their default messaging and phone apps, for instance, but it strikes me as a step in the right direction and a change that I hope leads to broader customization options on iOS and iPadOS.