Alongside the announcement today of a new Echo product coming soon, the screen-equipped Echo Show, Amazon has launched a redesigned Alexa app for iOS. The new app's highlight feature, apart from a much-improved interface, is the addition of messaging functionality.
Newton, the email client for power users, today launched Amazon Echo integration with an Alexa skill. The skill enables email management with nothing but your voice; in addition to having Alexa read emails to you, you can perform the following actions by voice:
- Mark as read
- Mark as spam
Replying to or composing new emails is not possible with Alexa, but personally, I don't think I would trust a voice assistant to write my emails anyway – at least not until the technology grows more foolproof. The option to perform simple actions by voice, like archiving or snoozing messages, is much more appealing.
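This kind of skill works well precisely because the command set is small and unambiguous. A minimal sketch of the idea (not Newton's actual implementation – the intent names and handlers below are invented for illustration):

```python
# Hypothetical action handlers; a real skill would call the email
# service's API here instead of returning a string.
def mark_as_read(message_id):
    return f"Marked message {message_id} as read"

def mark_as_spam(message_id):
    return f"Marked message {message_id} as spam"

def archive(message_id):
    return f"Archived message {message_id}"

def snooze(message_id):
    return f"Snoozed message {message_id}"

# The voice layer resolves an utterance to one of a fixed set of
# intents; the skill only has to dispatch on the intent name.
INTENT_HANDLERS = {
    "MarkReadIntent": mark_as_read,
    "MarkSpamIntent": mark_as_spam,
    "ArchiveIntent": archive,
    "SnoozeIntent": snooze,
}

def handle_intent(intent_name, message_id):
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        # Anything outside the fixed command set is rejected,
        # which is why composing email by voice isn't offered.
        return "Sorry, I can't do that with this email."
    return handler(message_id)
```

Because every supported action is a simple, reversible operation, a misheard command does little damage – unlike dictating a whole reply.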
Newton's expansion to Alexa-equipped devices follows the introduction of a Windows version of the client in beta form earlier this week. As a daily Newton user, I wrote about the iOS and macOS versions last Friday for Club MacStories members, and look forward to seeing the service continue to grow and improve.
Using the app's current microphone button, which is available to the right of the search bar, users can make nearly any type of Alexa request: playing music, turning on smart lights, checking the weather, and so on – anything you might ask of an Amazon Echo.
Both first-party and third-party skills will work from within the Amazon app. The one limitation so far, noted by Khari Johnson of VentureBeat, is that the Door Lock API is not currently available, so smart locks can't be controlled through the app. Johnson also shares that while Alexa will be available to some users in the Amazon app today, it will be rolling out to all users over the next week.
Today's announcement hopefully means that existing Amazon Echo users will have a solid first-party experience on iOS, something that surprisingly has not been provided by the company's current Amazon Alexa app. It also opens up Alexa to any Amazon customer who doesn't currently own an Echo.
Starbucks has started a limited beta test of voice-assisted ordering via its iOS app. The beta is currently limited to 1,000 users but will expand over the summer. Android support is slated for later this year.
The feature, called My Starbucks Barista, is part of the Starbucks iOS app and gives customers the ability to order, make changes to their order, and pay via voice. The feature’s interface is reminiscent of a messaging app and lets you interact by typing into a text field if you prefer that to voice.
Starbucks also announced the Starbucks Reorder Skill for the Amazon Echo. Customers can say ‘Alexa, order my Starbucks’ to order items designated as their ‘usual’ food and beverage order.
What Starbucks is implementing in its iOS app isn’t possible with Siri yet. Hopefully, this sort of experimentation will push Apple to open Siri faster, avoiding the fragmentation that could result from multiple solutions being implemented across many vendors.
I try not to obsess over every single announcement from CES, but it seems like "Alexa everywhere" is a common theme of this year's event. Jacob Kastrenakes has a useful roundup of Alexa devices and integrations at The Verge – but there are also smartphones and cars launching support for Amazon's assistant.
It feels like Amazon is taking the "Netflix approach" with Alexa – being on as many devices as possible and gaining mindshare through convenience and simple user interactions (much as Netflix did, focusing primarily on English-speaking countries in its first couple of years). I wonder if we're going to see a proper Alexa app for iOS this year to issue commands from an iPhone. I wouldn't be surprised to see something along the lines of Astra, only made by Amazon itself and integrated with most of the skills supported by the Echo speakers.
Astute take by Ben Thompson on how Amazon is building an operating system for the home with Alexa:
Amazon seized the opportunity: first, Alexa was remarkably proficient from day one, particularly in terms of speed and accuracy (two factors that are far more important in encouraging regular use than the ability to answer trivia questions). Then, the company moved quickly to build out its ecosystem in two directions:
- First, the company created a simple “Skills” framework that allowed smart devices to connect to Alexa and be controlled through a relatively strict verbal framework; in a vacuum it was less elegant than, say, Siri’s attempt to interpret natural language, but it was far simpler to implement. The payoff was already obvious at last year’s CES: Alexa support was everywhere.
- Secondly, “Alexa” and “Echo” are different names because they are different products: Alexa is the voice assistant, and much like AWS and Amazon.com, Echo is Alexa’s first customer, but hardly its only one. This year’s CES announcements are dominated by products that run Alexa, including direct Echo competitors, lamps, set-top boxes, TVs, and more.
"Works with Alexa" sure feels like this year's CES motto (I try not to pay too much attention to CES announcements, but the underlying trends are interesting).
I use both HomeKit/Siri and Alexa. There are advantages and problems to both ecosystems: Apple's approach is slower, perhaps more careful, and Siri works internationally; Alexa and the Echo are only available in a few countries, but the experience is leaner, generally faster, and there are dozens of compatible devices and skills launching every week. It's a complicated comparison: Alexa works with web services while Siri integrates with native apps and hardware (like Touch ID); Alexa is expanding to a variety of accessories and third-party services, but Siri and HomeKit are more directly tied into your iOS devices.
I expect Apple to continue opening up SiriKit to developers to match Amazon's rich ecosystem of skills, but even with more domains and apps, I think the idea of a dedicated assistant for the home is a winning one. On the other hand, I wonder how quickly Amazon can launch Alexa/Echo in other countries and build richer conversational experiences that go beyond simple commands. This will be fun to watch.
I use my Amazon Echo a lot. Since importing one from the U.S. last year, I've started using web services that provide native integration with Alexa, the platform that powers Amazon's speaker. Whenever I come across a new web service I could use, I check if they have an Alexa skill too. I like Amazon's take on the home assistant so much, I recently added an Echo Dot to my setup, which has further increased my usage of Alexa and connected services.
There's one big problem with the Amazon Echo, though: Alexa has no iPhone presence, and Apple is never going to give up the prime spot of Siri on their devices. Amazon has an Alexa app, but it's a clunky wrapper for a web view that has no voice functionality whatsoever. So while Siri has improved with iOS 10, it's still behind Alexa in terms of third-party integrations. I often find myself wishing I could ask Siri what I ask Alexa to do for me at home. I have to confess that I even considered an Amazon Tap – the poorly reviewed portable speaker with Alexa support – only to have some way to summon Alexa when driving.
Thankfully, developer Thaddeus Ternes sees this as a problem as well, and he created Astra, an iPhone app to issue requests to Alexa via voice. You might remember Ternes from Lexi, the predecessor of Astra that also allowed you to use Alexa on the iPhone. Lexi was pulled from the App Store and it's coming back as Astra, which sports a new design, support for timers and alarms, and background audio. After testing Astra for the past two weeks, I decided to put it on my Home screen and it's quickly become one of my most used iPhone apps when I'm not at home.
Interesting announcements from Amazon at its AWS event this week: the company is rolling out a suite of artificial intelligence APIs for developers to plug their apps into. These tools are based on the AWS cloud (which a lot of your favorite apps and services already use) and they leverage the same AI and deep learning that has also powered Alexa, the software behind the Amazon Echo.
Here's April Glaser, writing for Recode:
Drawing on the artificial intelligence that powers Amazon’s popular home assistant Alexa, the new tools will allow developers to build apps that have conversational interfaces, can turn text into speech and use computer vision that is capable of recognizing faces and objects.
Amazon’s latest push follows moves from Google and Microsoft, both of which have cloud computing platforms that already use artificial intelligence.
Google’s G Suite, for example, uses AI to power Smart Reply in Gmail, instant translation and smart scheduling functions in its calendar. Likewise, Microsoft recently announced it’s bringing artificial intelligence to its Office 365 service to add search within Word, provide productivity tracking and build maps from Excel with geographic data.
It's increasingly looking like "AI as an SDK" will become a requirement for modern apps and services. Deep learning and AI aren't limited to playing chess and recognizing cat videos anymore; developers are using this new kind of computing power for all kinds of features – see Plex, Spotify, and Todoist for a few recent examples. I've also been hearing about iOS apps using Google's Cloud Vision a lot more frequently over the past few months.
I think this trend will only accelerate as AI reshapes how software gets more and better work done for us. And I wonder if Apple is considering an expansion of their neural network APIs to match what others are doing – competition in this field is heating up quickly.
Earlier this week, Logitech announced a new Alexa skill that lets Echo owners control their Harmony Hubs and associated devices and services.
Today Logitech announced a new Amazon Alexa skill that enables voice control of your entire living room entertainment experience using a Logitech Harmony Hub with Alexa-enabled devices such as the Amazon Echo or Echo Dot.
When the skill is enabled on Amazon Echo or Amazon Echo Dot, you can start and stop Harmony Activities, control your entertainment devices, or even turn directly to your favorite channels, hands free, using only your voice. Harmony users can simply say “Alexa, turn on the TV,” or “Alexa, turn on Netflix” to control the TV as well as other entertainment and smart home devices, and Harmony makes it happen.
As those who listen to Connected may know, I've spent the past few months building a home automation setup based on the Amazon Echo and Alexa (more on this in the future). Connecting my TV to voice commands was the missing piece.
Here's Dan Moren, writing for Six Colors:
I set up a similar system a while back, using a combination of other services like IFTTT and Yonomi, but Logitech’s first-party integration definitely puts it in the reach of anybody with an Echo and a Harmony Hub who doesn’t want to muck around with nitty-gritty technical details.
Logitech’s integration mostly delivers what I could already do with those other services, but there are a couple of nice additions. For one thing, it gets rid of the “trigger” nomenclature imposed by IFTTT. Additionally, it lets you declare “friendly names” for your devices, so even if your Harmony Activity is “Watch Apple TV” you can just say “turn on Apple TV”, or you can use “turn on game console” or “turn on Xbox.” Other smart home devices that work with the Harmony Hub, like Hue lights, can also be triggered, though of course the Echo already has built-in control for those devices as well.
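The "friendly names" feature Moren describes is essentially an alias table: several spoken names resolve to the same Harmony Activity. A minimal sketch (all names below are invented for illustration, not Logitech's actual data model):

```python
# Multiple spoken aliases map to one underlying Harmony Activity,
# so users don't have to remember the Activity's exact name.
ALIASES = {
    "apple tv": "Watch Apple TV",
    "game console": "Play Xbox",
    "xbox": "Play Xbox",
}

def activity_for(spoken_name):
    """Resolve a spoken device name to a Harmony Activity, or None."""
    return ALIASES.get(spoken_name.lower().strip())
```

This is why "turn on Xbox" and "turn on game console" can both start the same Activity, even when neither matches the Activity's configured name.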
This sounds great. I ordered a Harmony hub + remote yesterday, and it's coming next week.