Anonymous Sources Detail Sam Altman’s Alleged Untrustworthiness in New Report


On Monday, The New Yorker published a lengthy investigation detailing the days before and after Sam Altman’s brief ouster as CEO of OpenAI.

In late 2023, OpenAI’s board of directors shocked Silicon Valley by suddenly firing Sam Altman. After a five-day pressure campaign by Altman and his supporters and a public letter demanding his return, Altman returned to the company as CEO. The board members who orchestrated the ouster were removed and replaced by Altman allies such as economist Larry Summers and former Facebook CTO Bret Taylor, who currently chairs OpenAI’s board.

When Altman was reinstated as CEO, OpenAI employees began referring to those turbulent few days as “The Blip”, a reference to the Blip in the Marvel Cinematic Universe, when the supervillain Thanos made half of all life in the universe disappear for five years.

According to the New Yorker report, which cites interviews with dozens of people close to the events, including Altman himself, the OpenAI executive was ousted because his own board members did not find him trustworthy enough to “put his finger on the button” of artificial superintelligence, a theoretical and hotly contested future AI system that could outperform human intelligence on all fronts. The term is sometimes used interchangeably with artificial general intelligence (AGI), though it describes a step beyond that.

Following secret memos that OpenAI’s then-chief scientist Ilya Sutskever sent to fellow board members, the board compiled a nearly seventy-page document reportedly evidencing a “consistent pattern” of lying by Altman, including about internal safety protocols.

The report says Altman’s alleged history of lying even predates OpenAI. According to the investigation, senior employees at Altman’s previous startup, the now-defunct location-sharing service Loopt, asked the board to remove him as CEO due to concerns about a lack of transparency.

According to sources cited in the article, the allegations followed him to the startup accelerator Y Combinator, which Altman led for five years before being pushed out. Y Combinator leadership has stated that he was not fired, but was merely asked to choose between the accelerator and OpenAI. The late hacktivist and Reddit co-founder Aaron Swartz, who was in Altman’s Y Combinator batch when Altman first joined as an entrepreneur with Loopt, reportedly described him as “a sociopath” who “can never be trusted.”

At OpenAI, Altman was accused of lying to executives and even government officials. The report details an example in which Altman told US intelligence officials that China had launched a major AGI development project and sought government funding to launch a countermeasure, but failed to show any evidence when asked.

The report also details instances of Altman allegedly gaslighting Anthropic co-founder and then-OpenAI employee Dario Amodei over a provision in the billion-dollar Microsoft deal OpenAI signed in 2019 that would have overridden philanthropic clauses Amodei had included in the company’s charter. The section in question concerned AGI, and stated that if another company found a way to build it safely, OpenAI would “stop competing with and start assisting” that project, consistent with its safety-first, non-profit mission. OpenAI has since restructured itself as a for-profit corporation.

Even some senior executives at Microsoft, with whom OpenAI has had a long partnership since its 2019 deal, described Altman as someone who “misrepresented, distorted, renegotiated, reneged on agreements.” A senior executive also said candidly about Altman: “I think there’s a small but real possibility that he’ll eventually be remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

These are worrying words to read about the executive in charge of a company as large and consequential as OpenAI, and their significance is even greater given that OpenAI is the leading company building a technology that many, including its early employees, have described as a potential existential threat to humanity.

Under Sam Altman’s leadership, OpenAI’s technology has worked its way into nearly every aspect of modern life. OpenAI’s AI is used by millions of people around the world for health advice, and by many others for everything from automating work across industries to completing students’ homework and even providing a semblance of companionship to lonely users who want it. ChatGPT is also used throughout the federal government, and Altman recently sold the technology to the Pentagon.

Much of this is driven by Altman’s salesmanship. He has sold the potential and perceived realities of ChatGPT to many, setting off an unprecedented and potentially fragile dealmaking spree that has attracted so much investment that some experts say it is propping up the entire US economy right now.

The New Yorker report also claimed that Altman assured the board that GPT-4 had been approved by a safety panel, which was revealed to be a misrepresentation when a board member requested documentation of the approval. Sutskever claimed in his memo that Altman also told former OpenAI CTO Mira Murati that the safety-approval requirement had been relaxed, citing the company’s general counsel. But when Murati asked the general counsel about it, he said he was “confused as to where Sam got this notion from.”

The allegations surrounding ChatGPT’s safety processes are particularly damaging given the behavior of GPT-4o, the model that followed GPT-4. The model’s habit of sycophancy reportedly contributed to cases of “AI psychosis” in vulnerable users, some of which resulted in deaths.

Some of Altman’s inconsistencies are also well documented publicly. The OpenAI chief has repeatedly published contradictory statements on topics such as inserting ads into AI chatbots, the need for AI regulation, and whether ChatGPT’s voice feature, unveiled in 2024, was inspired by Scarlett Johansson’s performance in the film “Her.” Altman also recently drew scrutiny over the massive $100 billion Nvidia deal, which did not go through as initially announced.

The report also details how the company’s culture changed drastically following Altman’s reinstatement as CEO. Before “The Blip”, the company had cautiously embraced the concept of AGI; afterward, AGI reportedly became the company’s North Star, with slogans such as “Feel the AGI” appearing on merchandise around its offices. The shift was also visible in practice, as OpenAI disbanded key teams focused on chatbot safety, including the existential AI risk team and the Superalignment team, which Sutskever co-led.

The report comes as Altman’s leadership is put under the microscope as the company begins preparations for a potential IPO.

According to a recent report from The Information, Altman appears to be at odds with executives once again, this time over OpenAI’s preparations for an IPO. Altman reportedly wants to go public in the fourth quarter of this year and has committed to spending $600 billion over the next five years, despite expectations that OpenAI will spend more than $200 billion before it starts making money. Meanwhile, the report claims that OpenAI CFO Sarah Friar doesn’t believe the company is ready to go public this year because of its risky spending commitments. Unlike Altman, Friar reportedly does not yet believe that OpenAI’s revenue growth can support its financial commitments, nor is she certain that the company even needs to pour so much money into AI servers.


