Today the HomePod is all about music, but it could be so much more.
From its debut last June at WWDC to launch day this February, HomePod’s primary purpose has been clear: it’s an Apple Music accessory. Music has been the sole focus of Apple’s marketing, including the recent Spike Jonze short film – yet it’s an angle many have trouble accepting.
In a pre-Amazon Echo world, HomePod being a great Apple Music speaker would have been enough. But in 2018 we expect more from smart speakers, and we expect more from Apple.
HomePod succeeds as a music speaker, but it’s not the device we expected – at least not yet. Arriving more than three years after Alexa’s debut, we expected a smarter, more capable product – the kind of product the HomePod should be: a smart speaker that’s heavy on the smarts. Apple nailed certain aspects with its 1.0: the design, sound quality, and setup are all excellent. But that’s not enough.
HomePod isn’t a bad product today, but it could become a great one.
By becoming a true hub for all our Apple-centric needs.
Third-party support is part of that – we expect great third-party apps on our Apple devices, and should get them on HomePod. But that will only help the platform catch up with competitors; it’s not a differentiator.
To truly stand out, and become a capable central hub of our Apple lives, the HomePod needs to double down on integrating with the Apple ecosystem. That means Siri’s reach should extend to everything branded Apple: if a service or device is made by Apple, Siri needs deep ties into it. For this to happen, Siri on the HomePod needs not only access to all the knowledge of every other Siri, but also an understanding of every Apple service and device – making it a veritable center of all Apple-related intelligence.
Ideally, this kind of Siri 2.0 would exist across all platforms for a consistent experience. But even if it did, the most important one would be HomePod. Why HomePod? Because that’s what the smart speaker category exists for. There may always be more iPhones in the world than HomePods, but as long as the iPhone remains touch-first, Siri’s presence on HomePod will be the most critical.
HomePod is best equipped to hear requests, with its six-microphone array always at the ready; no iPhone or other device can compete with that. Those microphones can also tune out what’s currently playing from the HomePod to focus on your voice. Plus, it’s already the device charged with handling most ‘Hey Siri’ inquiries, even when your iPhone or iPad might be closer; that can be frustrating now, but if HomePod’s Siri had no domain limitations, we wouldn’t mind it taking all requests. Finally, it’s the only Siri device that sits in one place all day, every day, so you always know exactly where to speak. With HomePod, you don’t have to first locate a device (iPhone or iPad), press a button (Mac or Apple TV), or turn your wrist (Apple Watch) before making a request: just speak into the air, and Siri will hear you.
If Siri knew all things about your Apple devices and services, and could interact with them all, then HomePod would be the perfect vehicle to tap into that power. You could ask Siri on the HomePod to:
- Check your iPhone’s battery charge.
- Play an audiobook.
- Add a show to your Up Next queue.
- Download a specific app to your iPhone.
- Pause or resume Apple TV playback.
- List upcoming birthdays for your contacts.
- Provide the delivery status of your Apple Store order.
- Put all your devices in Do Not Disturb mode.
- Play a specific movie or show on the Apple TV.
  - Or on the bedroom TV, or the iPad, or iPhone.
- Locate your iPhone or iPad.
  - Each device could play a ding if it’s nearby, and if not, HomePod could offer to load a map on your nearest device.
- Make a phone call.
- Switch your AirPods to the Apple TV.
- Set an Apple Store support appointment.
- Open an app on a certain device.
- Access files stored in iCloud Drive on the device of your choice:
  - “Put my Release Notes presentation on the TV.”
  - “Open the Budget spreadsheet on my iMac.”
- Put a screensaver on the TV.
HomePod can’t currently do any of these things, but I think they would all be reasonable to expect from an upgraded Siri. None of them would infringe on the company’s user privacy stance, because the data involved in these requests is already available to Apple.
The list may not seem extensive, but I think it covers all the reasonable gaps in Siri’s Apple-related knowledge. Some items are impossible with Siri now, but plenty already work on some devices – just not HomePod.
The list is fairly concise because Siri on HomePod can already do a lot; it has most of the bases covered. The problem is that as long as Siri can’t do everything, we’ll avoid relying on it for much of anything. Every time Siri responds to a query with, “I can’t do that,” users learn to doubt its capabilities. By expanding Siri’s reach in a few key areas – to all the Apple-related requests a person might reasonably have – every existing Siri domain will benefit, because that element of doubt will be removed.¹
There is one caveat to all of this: deep ties with everything Apple will only work if HomePod gains proper multi-user support. Until the HomePod can match a voice to a person, and thus a person’s own apps, services, and devices, advancement in this area will be pointless. But once that’s accomplished, then with all these requests, Siri would know which iPhone is yours, which iCloud Drive storage to check, whose AirPods you’re referring to, and more. Its access to your devices and services would make it like Marvel’s Jarvis for all things Apple.
It would take a lot of effort to make this kind of supercharged Siri a reality, but it can be done, and it should be done – for the sake of user experience, but also because, in some ways, only Apple can do this.
Tim Cook and other senior executives frequently point out when a product is “something only Apple can do.” HomePod has the potential to be that product, but it’s nowhere near there yet.
Apple does have a big head start though. Not in the smart speaker category specifically, but in its ecosystem – a core factor that will be increasingly essential to smart speakers in the future.
Here’s how the rest of the ecosystem landscape sits currently:
- Amazon. Missing the phone, watch, PC, and enterprise-capable tablets. Its vibrant third-party Alexa ecosystem is a strength, but without attractive computing hardware outside of speakers, it will never achieve the kind of seamless user experience people want.
- Microsoft. Its failures in the smartphone market come back to bite it here; Microsoft could be a dark-horse candidate, but success will be very hard to achieve without a competitive smartphone.
- Sonos. Not a chance. Besides the fact that it only makes speakers, Sonos is also entirely dependent on third parties. It’s trying to offer the best of all ecosystems by integrating with as many existing platforms as possible, but the big players will always reserve the best features and experiences for their own hardware.
- Google. It has a much better chance than anyone else of getting there, but I don’t know that it will. When your ecosystem is as fragmented as Google’s, it’s hard to nail a seamless experience across devices and services. The Pixel and Home lines of products are a good start, but they need greater market penetration; Chrome OS-powered computers also need to become legitimate options for getting work done.
A full ecosystem – complete with smartphones and traditional computers – matters so much to a smart speaker’s success because voice input will never replace touch entirely, only supplement it.
We’re going to continue using touch-input computers far into the future. Much of our computing will keep taking place on smartphones, tablets, and laptops; but for tasks better handled by voice, devices like the HomePod, Amazon Echo, and Google Home will grow more important.
Just as the average user shouldn’t have to remember which device’s Siri can handle which tasks, they should never have to carry the mental strain of switching ecosystems when moving between modes of computing.
Voice computing has significant potential for expansion in our lives, but as it takes on that extra load, I suspect we’ll find it less and less tolerable for voice-activated computers to live in an entirely different ecosystem than our touch-input devices. Different platforms bring not only different apps and services, but also different terminology and user experiences. The best computing experience will be offered by a family of related devices designed to complement each other.
Right now, Apple has the best shot at making that happen.
HomePod can be much more than it is today. Music, HomeKit, and basic trivia are all important, but they’re just the beginning. The end is a truly smart speaker, powered by a truly smart Siri – our personal hub of Apple computing.
¹ Siri also still needs to approach 100% consistency in handling requests correctly, and its current lack of domain knowledge amplifies that problem.