Following years of development, Amazon’s next-generation digital assistant is nearly ready for public use. Panos Panay, Amazon’s senior vice president of devices and services, demoed Alexa+ at the company’s 2025 devices event, and the demonstration offered a glimpse of how generative AI could supercharge a product millions of people use.
The model powering Alexa+ can detect tone and mood and respond accordingly, with a completely new, more natural-sounding voice. Moreover, it’s only necessary to say “Alexa” once to wake the assistant, which will then follow the conversation from there. Panay said Alexa+ has contextual awareness, with the ability to “remember” earlier parts of a conversation. “You can have almost any conversation — that intimidation factor of AI is gone,” he said.
In one of the more impressive demos Amazon showed off, Panay asked Alexa+ to play a song without actually naming it: “What’s the song Bradley Cooper sings… it’s like a duet?” Alexa+ correctly answered “Shallow,” noting that Cooper sings it with Lady Gaga in A Star is Born. Panay then asked Alexa+ to “move” the music to the “right side of the room,” and the assistant identified the correct speaker and played the music there. According to Panay, it will even understand requests like “play the music everywhere but don’t wake the baby.” In that case, Alexa+ will be able to reason that it shouldn’t cast music to the nursery.
At first glance, Alexa+ also offers much deeper (and smarter) integration with Amazon’s disparate services. For instance, when watching Prime Video, it’s possible to jump to a specific scene using details like the name of an actor or character, with no need to manually fast forward or rewind through the footage. It’s possible to search through Ring footage in much the same way. During his demo, Panay asked Alexa+ to help him remember whether someone had walked the family dog recently, and the assistant jumped to the correct clip.
That level of integration should extend to third-party apps, with Amazon offering new tools to companies like Uber, Grubhub and OpenTable so Alexa+ can intelligently access information from their platforms. In one demo, Amazon showed how Alexa+ was able to make a reservation on OpenTable and then add a reminder to the user’s calendar. Moments later, the assistant booked an Uber ride and sent the rider a text message notifying them of the upcoming trip. It will be interesting to see how this capability works in real life; the demo involved a hypothetical pickup at JFK in New York, and if you’ve ever been to that airport, you know finding the correct Uber can involve complicated pickup zones and even a shuttle train along the way.
Multi-modality with Alexa+ extends to documents, and this is where Amazon’s demo didn’t go quite according to plan. When Mara Segal, director of Alexa, asked Alexa+ a question about an HOA document she had shared with the assistant, Alexa talked over her before correctly responding to a second request. Amazon says Alexa+ will be able to act on information from documents, providing helpful summaries and adding events to your calendar.
Alexa+ will come included with Amazon Prime, and Amazon will also offer the enhanced digital assistant separately for $20 per month. For context, Prime currently costs $15 per month in the US. The company will begin rolling out early access to Alexa+ next month, with availability expanding in waves over the coming months. Initially, Amazon is bringing Alexa+ to devices with screens; if you own an Echo Show 8, 10, 15 or 21, you’ll be among the first to get early access as Amazon rolls out Alexa+ to more people.