A recent 13-hour Amazon Web Services (AWS) outage was reportedly caused by one of the company’s own AI tools, according to the Financial Times. Four people familiar with the matter say it happened after engineers deployed the Kiro AI coding tool to make changes in December.
Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to “delete and recreate the environment,” which triggered the lengthy outage, one that primarily affected China.
Amazon says it was merely a “coincidence that AI tools were involved” and that “the same problem could have occurred with any developer tool or manual action.” The company attributed the outage to “user error, not AI error.” It said that by default the Kiro tool “requests authorization before taking any action” but that the employee involved in the December incident “had broader permissions than expected – a user access control issue, not an AI autonomy issue.”
Several Amazon employees who spoke to the Financial Times noted that this was “at least” the second time in recent months that the company’s AI tools were at the center of a service disruption. “The outages were small but completely predictable,” a senior AWS employee said.
The company launched Kiro in July and has since encouraged employees to use the tool. Leadership has set a goal of 80 percent weekly usage and is closely monitoring the adoption rate. Amazon also sells access to agentic tools for a monthly subscription fee.
These latest outages follow a more serious incident in October, when a 15-hour AWS outage disrupted services including Alexa, Snapchat, Fortnite, and Venmo, among others. The company blamed that outage on a bug in its automation software.
However, Amazon disputes the characterization of some products and services being unavailable as an outage. In response to the Financial Times report, the company shared the following statement, which it also published on its news blog:
We want to correct the record on yesterday’s story. The brief service disruption it reported was the result of user error – specifically misconfigured access controls – not AI, as the story claimed.
The disruption was an extremely limited incident last December that impacted a single service (AWS Cost Explorer – which helps customers see, understand, and manage AWS cost and usage over time) in one of our 39 geographic regions around the world. It had no impact on compute, storage, databases, AI technologies or any of the hundreds of services we run. The problem arose from a misconfigured role – the same problem that can happen with any developer tool (AI-powered or not) or manual action. We did not receive any customer inquiries regarding the outage. We implemented a number of safeguards to prevent this from happening again – not because the incident had a major impact (it did not), but because we emphasize learning from our operational experience to improve our security and resilience. Additional security measures include mandatory peer review for production access. While operational incidents involving misconfigured access controls can happen with any developer tool – AI-powered or not – we think it’s important to learn from these experiences. The Financial Times’ claim that a second incident affected AWS is completely false.
For more than two decades, Amazon has maintained a high bar for operational excellence through our Correction of Errors (CoE) process. We review incidents together so we can learn from them, regardless of customer impact, and address issues before their potential impact grows.
Update, February 21, 2026, 11:58 am ET: This story has been updated to include Amazon’s full statement in response to the Financial Times report.