OpenAI Accused of Hiding Bad AI News, Losing Staff


According to Gizmodo, a new report from WIRED alleges that OpenAI has become “more guarded” about publishing negative findings from its economic research team, such as data on jobs AI might replace. This alleged self-censorship has led to the departure of at least two employees, including data scientist Tom Cunningham, who now works at safety nonprofit METR. In an internal message, Cunningham reportedly stated the economic research team was functioning as OpenAI’s “advocacy arm.” The team is now managed by chief economist Aaron Chatterji, who reports to “master of disaster” Chris Lehane, the chief global affairs officer. Under this new structure, the team recently published a finding that AI could save workers 40 to 60 minutes a day. This follows a New York Times report last month accusing OpenAI of knowingly pursuing addictive AI chatbot designs despite mental health risks.


The Advocacy Shift

Here’s the thing: this isn’t a huge shock, but it’s a stark marker of how far OpenAI has traveled. It started as a research lab, a non-profit aiming to build safe AGI for humanity. Now? It’s a commercial juggernaut with products like ChatGPT that are basically printing money. When your primary identity shifts from “lab” to “product company,” the incentives change dramatically. Suddenly, research that could spook regulators, scare off enterprise clients, or just generate bad PR isn’t merely inconvenient—it’s a potential threat to the bottom line. So you hire a chief economist and a “master of disaster” communications veteran, and the research output starts to look more like, well, advocacy.

A Pattern of Pressure

And Cunningham isn’t the first to bail over this. Last year, the company’s former head of policy research, Miles Brundage, left and wrote that publishing constraints had “become too much,” saying OpenAI was now so high-profile that it was hard to publish on important topics. That’s the real cost of this approach: you lose the critical, independent-minded researchers. They go to places like METR or academia, and you’re left with a team whose findings conveniently align with the corporate narrative. That creates a dangerous echo chamber inside the company that’s arguably shaping the world’s most influential technology.

Why This Matters Beyond OpenAI

This is bigger than one company’s internal drama. AI’s economic impact is colossal and poorly understood. Early research suggests it’s already squeezing the entry-level job market, and even Fed Chair Jerome Powell has said it’s “probably a factor” in unemployment. If the leading AI company is filtering out the scary data, who are we supposed to believe? Policymakers, educators, and workers are trying to prepare for a seismic shift with one hand tied behind their back. It also feeds right into the industry’s nasty political fight: OpenAI President Greg Brockman backs a super-PAC that treats safety rules as innovation blockers, and the company’s own lobbying efforts are intense. How can we have a sane debate about regulating a technology when the biggest player might be hiding the evidence of its downsides?

The Trust Trap

So what’s the endgame? OpenAI is betting its reputation—and a huge chunk of the public’s trust in AI—on a tightly controlled message. They want to be the responsible stewards. But stories like this chip away at that foundation. If they’re curating economic research, what else are they soft-pedaling? The safety of their next model? The environmental cost of projects like the rumored “Stargate” data center? Once you start managing the science for PR purposes, it’s a slippery slope. Basically, they’re trying to have it both ways: be the fearless pioneer and the cautious corporate giant. I’m not sure you can do both forever. And if the people who know the tech best keep leaving because they can’t speak freely, that should tell us everything we need to know.
