Google leaders including Demis Hassabis push back on claim of uneven AI adoption internally


A viral post on X by veteran programmer and former Google engineer Steve Yegge has sparked a heated public debate over how deeply AI is actually used inside Google.

The debate began when Yegge relayed what he said was the opinion of his friend, a current and longtime Google employee (or Googler), who claimed that internal AI adoption at the company looks far more average and less cutting-edge than outsiders expect.

Yegge said his Googler friend claimed that Google engineering reflects the "average" industry pattern of a 20%-60%-20% split: a small group of outright AI refusers (20%), a much larger middle still relying primarily on simple chat and coding-assist workflows (60%), and another small group of AI-first, cutting-edge engineers (20%) who use agentic tools extensively and have achieved mastery of them.

A VentureBeat search of

We have contacted Google for comment on the claims and will update when we receive a response.

An experienced, outspoken Googler voice

Why did the harsh opinion of Yegge's anonymous Googler friend land so hard? In part because Yegge isn't just another commentator taking shots from the sidelines.

He spent nearly 13 years at Google after earlier stints at Amazon and GeoWorks, later joined Grab, and then became head of engineering at Sourcegraph in 2022. He has long been known in software circles for his widely read essays on programming and engineering culture, and for an internal Google memo that accidentally became public in 2011 and attracted widespread media attention.

That history helps explain why engineers and executives still take his criticisms seriously, even if they reject them.

Yegge has built a reputation over the years as an outspoken insider-outsider voice on software culture, one whose judgments can spread quickly, especially when they touch nerves inside big technology companies.

Wikipedia’s summary of his career notes his long Google tenure and the immense attention his blog posts and prior Google criticisms have received.

Unraveling the argument of Yegge’s friend

In this case, Yegge's argument was not simply that Google uses too little AI. It was that the company's adoption may be uneven, culturally constrained, and less transformative than its branding suggests.

His friend reportedly argued that some Googlers could not use Anthropic's Claude Code because it was framed as "the enemy," and that Gemini was not yet good enough for a full agentic coding workflow. He compared Google unfavorably to a small group of companies moving much faster.

Response from Hassabis and current Googlers

The first major pushback came from Demis Hassabis, co-founder and CEO of Google DeepMind, who responded directly and forcefully. “Maybe tell your friend to do some actual work and stop spreading complete nonsense. This post is completely false and just pure clickbait,” Hassabis wrote.

Other Google leaders followed with lengthy defenses.

Addy Osmani, a director of Google Cloud AI, wrote that Yegge's account "does not match the state of agentic coding at our company." "Over 40K SWEs use Agentic Coding here weekly," he added.

Osmani said that Googlers have access to internal tools and systems, including "custom models, skills, CLIs, and MCPs," and pushed back on the idea that Google employees are isolated from external models, writing that "people can also use @AnthropicAI's models at Vertex" and concluding that "Google is anything but average."

Other current Google employees reinforced that message. Jaana Dogan, a software engineer at Google, wrote in a quote tweet: "Everyone I work with uses @antigravity like every second of the day," and followed up with another X post: "Unpopular Opinion: If you think tokens burned is a productivity metric, no one should take you seriously. Imagine you are the top 0.0001% of authors and they are only counting the tokens you produce."

Paige Bailey, head of DevX engineering at Google DeepMind, said that teams have agents "running 24/7."

Several other Google and DeepMind figures also challenged Yegge’s characterization, with some disputing the factual basis of his claims and others suggesting that he lacked visibility into current internal usage.

Yegge's rebuttal

Yegge, for his part, did not back down. In a follow-up to Hassabis, he wrote, "I'm not trying to misrepresent anyone," but argued that by his own standard for advanced AI adoption, Google is still not performing particularly well.

He pointed to token consumption and the replacement of old development habits with genuine agentic workflows as more meaningful benchmarks, and said he would be willing to withdraw his criticism if Google could show that its engineers are working at that level.

AI adoption vs AI transformation

This leaves the core dispute unresolved, but clear: it is less a battle over whether Google engineers use AI than over what should count as meaningful adoption.

Googlers are pointing to scale, weekly usage, and the availability of internal and external tools. Yegge argues that such measures can show broad adoption without proving AI transformation, a deep change in how engineering work is actually done. The clash reflects a broader industry divide between surface-level usage metrics and more transformative, power-user behavior.

For Google, the topic is particularly sensitive. Yegge has criticized the company before, including in a 2018 essay explaining why he left, in which he argued that Google had become too risk-averse and had lost its ability to innovate.

If his latest criticism had come from a lesser-known poster, it might have fallen flat. Coming from a former Google engineer with a long, memorable record of public criticism, it instead drew responses directly from some of the company's top AI figures, turning one post into a broader public argument about whether Google's AI leadership is as deeply entrenched internally as it appears from the outside.
