OpenAI made economic proposals — here’s what DC thinks of them

Happy Armistice Day, and welcome to Regulator, a newsletter for Verge subscribers about Big Tech's difficult journey into the world of politics. If you're not a subscriber yet, you can sign up here. My only request is that you do so before Donald Trump revisits his previous threats toward Iran and decides to start World War III.

I'm back after suffering last week's lethal combination of a moderate cold and the start of pollen season. (Twenty-one percent of the district's area is public green space, and DC is consistently ranked the best city park system in America. Unfortunately, I'm allergic to every tree and grass.) If you have tips on anything I missed or something I should know about in the coming weeks, send them to tina.nguyen+tips@theverge.com.

Do you really believe anything OpenAI says?

On Monday, OpenAI published a 13-page policy paper addressing the impact of artificial intelligence on the US workforce, along with what it considered a solution: imposing higher capital gains taxes on corporations that replace their workers with AI, and using that money to build a larger public safety net. Its proposals include a public wealth fund, a four-day workweek funded by an "efficiency dividend," and government programs to help workers transition to "human-centric" work, all financed by the abundance artificial intelligence is supposed to provide.

Unfortunately, it was released on the same day The New Yorker's Ronan Farrow and Andrew Marantz published a carefully reported article of over 17,000 words on Sam Altman's history of lying to everyone around him, including his Silicon Valley backers, his employees, his board, and, relevant in this case, lawmakers trying to regulate AI. The New Yorker article reinforced, in detail, a long-standing story about Altman and OpenAI: they may preach idealistic values, but they will quickly discard them for financial and political gain.

People I spoke to said the paper itself was a positive for AI governance overall, introducing new ideas into the policy conversation around the emerging technology. But unless the company's lobbying and political conduct live up to those promises, OpenAI's critics said, it may remain just a piece of paper.

"My guess is that there are people on the team who care about stuff, who have really thought a lot about this document and are proud of it, and have done a good job, even if it's not addressing all the questions I want it to address," Malo Bourgon, the CEO of the Machine Intelligence Research Institute (MIRI), told me. "And there's still this question: Will those people find themselves in the position that many previous people at OpenAI were in, where they thought the company had certain values or was aligned with things they cared about, and then they found out that wasn't the case, they became disillusioned and left?"

Given OpenAI's proposed policies, it is worth taking a look at its history with the government, which The New Yorker piece describes in depth. Altman was one of the first major CEOs to publicly advocate federal oversight of AI, even proposing a federal agency to oversee advanced models in 2023, but privately he worked to suppress laws containing his own safety proposals. A California state legislative aide accused OpenAI of engaging in "increasingly cunning, deceptive behavior" to derail the 2023 AI safety bill it was publicly supporting. In 2025, the company subpoenaed supporters of a California state-level AI bill in an effort, as one such supporter put it to The New Yorker, to "basically scare them into quieting down." And although Altman once worked extensively with the Biden administration to create AI safety standards, the moment Donald Trump became president, Altman successfully persuaded him to end the very initiatives he once advocated.

Nathan Calvin, the general counsel of Encode, an AI policy nonprofit where he focuses on state legislative initiatives, received one of those subpoenas. "What I have seen of the company's policy and government affairs work is extremely disappointing," he told me. Although he believed the team that wrote OpenAI's proposal, largely people from the technical safety research side, was working in good faith, he reserved judgment. "Will those people remain engaged as we move from general policy principles to the many other ways in which lobbying and government influence actually occur? Part of me is hopeful, but part of me is also very skeptical about whether that will happen." (OpenAI did not return requests for comment.)

A minor, not-at-all-desperate request:

Next week I'm planning to run an issue of Regulator listing the weirdest events that happen during Nerd Prom, aka the White House Correspondents' Dinner party circuit. If you're a tech founder, a tech company, or someone who does something technology-related and you're hosting an event during WHCD week, please let me know what you're doing! From what I've heard so far, the tech world is going to shake up the usual social dynamics this week: I've already learned of the Grindr party in Georgetown and the Substack party, which the famous Luxmaxer clavicle is attending, and I'm very excited to put together the most fascinating "Spotted" column Washington has ever experienced.

(Again, all of this depends on whether we are at war with Iran by the end of April, in which case I suspect no one will be in the mood for trivialities.)

Speaking of DC journalists, this is absolutely true for all of us:

Screenshot via @jakewilkns/X.
