This week, Amazon launched Alexa+, the company’s chatbot-powered sequel to regular, old Alexa. This means everyone with a compatible Echo speaker can switch to Amazon’s new voice assistant, which is now paired with a large language model (LLM) similar to the one used by OpenAI’s ChatGPT. It went through a long early-access period lasting almost a year, which you’d think would help improve the end result, but so far, reactions have been disappointing.
Comments on Reddit are filled with complaints of slow response times, low accuracy, and frustration with the general user experience. In other words, the same complaints that plagued voice assistants before the great generative AI merger. Although I haven’t had a chance to test Alexa+ in detail myself, tech reviewers like those at The Verge have weighed in, and… yeah, not so great.
Deflating as it is, Alexa+ and its anticlimax are not the exception – they appear to be the rule. Google also recently fully launched its new AI-infused voice assistant, and the results have been anything but fireworks. I’ve been using Gemini for Home (Google’s next-generation voice assistant) for the past few months and can confirm the collective lack of enthusiasm.

While Gemini for Home’s voice and manner of speaking may be more natural, the performance is about the same as before, and sometimes worse. One major point of friction is that Gemini for Home takes significantly longer to process commands than the previous version. That would almost be fine if it were more accurate and better at doing exactly what you want, but in my experience, that’s not always the case. Like Alexa+, Gemini for Home sometimes gets confused, overthinks, or simply misunderstands what you want – all problems that plagued previous versions of the voice assistant, too.
Some features, like being able to ask Gemini for Home to do multiple things with a single command, are great, but I’d be lying if I said they’ve changed the smart home/voice assistant experience in the way that Google predicted.
What I’m trying to say is that right now, the next generation of voice assistants is looking a little underwhelming. I say “right now” because there’s always room for improvement – perhaps a technological breakthrough will usher in a new way of marrying voice assistants and LLMs, one that eventually makes them feel like a significant upgrade. Who knows? However, I can’t say I’m optimistic about that possibility. One thing that diminishes my confidence is the only major voice assistant I haven’t mentioned so far: Siri.
If one thing is clear about AI Siri, it’s that making it do all the things Apple wants it to do is harder than the company expected. While a new Siri was announced in 2024, Apple has yet to deliver the full capabilities of its voice assistant, with its launch delayed over concerns about performance and readiness to function at scale. Apple may eventually join the ranks of Alexa+ and Gemini – possibly this spring – but for now, it stands as a reminder that this whole voice assistant thing is a lot harder than it looks. Even if it arrives soon, it’s worth noting that Gemini will help power the new voice assistant, thanks to a deal struck between Google and Apple in January.
And until someone cracks the code, we’re left with LLM-powered voice assistants that perform a lot like non-LLM-powered voice assistants, and something tells me that’s not where Google, Amazon, and Apple want to be. Friends, you’d better brew some hot tea with honey, because you’ll be yelling at your smart speaker for the foreseeable future.