Longtime policy researcher Miles Brundage leaves OpenAI



Miles Brundage, a longtime policy researcher at OpenAI and senior advisor to the company’s AGI readiness team, has left.

In a post on X today and an essay on his personal blog, Brundage said that he believes he’ll have more impact as a policy researcher and advocate in the nonprofit sector, where he’ll have “more of an ability to publish freely and more independence.”

“Part of what made this a hard decision is that working at OpenAI is an incredibly high impact opportunity, now more than ever,” Brundage said. “OpenAI needs employees who care deeply about the mission and who are committed to sustaining a culture of rigorous decision-making about development and deployment (including internal deployment, which will become increasingly important over time).”

With Brundage’s departure, OpenAI’s economic research division, which until recently was a sub-team of AGI readiness, will move under OpenAI’s new chief economist Ronnie Chatterji. The remainder of the AGI readiness team will be distributed among other OpenAI divisions, Brundage says; Joshua Achiam, head of mission alignment at OpenAI, will take on some of AGI readiness’ ongoing projects.

We’ve reached out to OpenAI for comment and will update this post if we hear back.

Brundage joined OpenAI in 2018 as a research scientist and later became the company’s head of policy research. Prior to OpenAI, he was a research fellow at the University of Oxford’s Future of Humanity Institute.

On OpenAI’s AGI readiness team, Brundage had a particular focus on the responsible deployment of language generation systems like ChatGPT. In recent years, OpenAI has been accused by several former employees — and board members — of prioritizing commercial products at the expense of AI safety.

In his post on X, Brundage urged OpenAI employees to “speak their minds” about how the company can do better.

“Some people have said to me that they are sad that I’m leaving and appreciated that I have often been willing to raise concerns or questions while I’m here … OpenAI has a lot of difficult decisions ahead, and won’t make the right decisions if we succumb to groupthink,” he wrote.

OpenAI has been shedding high-profile executives in recent weeks amid disagreements over the company’s direction. CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph announced their resignations in late September. Prominent research scientist Andrej Karpathy left OpenAI in February; months later, OpenAI co-founder and former chief scientist Ilya Sutskever quit, along with ex-safety leader Jan Leike. In August, co-founder John Schulman said he would leave OpenAI. And Greg Brockman, the company’s president, is on sabbatical.

It’s been a rather unflattering day for OpenAI.

This morning, The New York Times published a profile of former OpenAI researcher Suchir Balaji, who said that he left the company because he no longer wanted to contribute to technologies that he believed would bring society more harm than good. Balaji also accused OpenAI of violating copyright by training its models on IP-protected data without permission — an allegation others have made against the company in lawsuits.



