OpenAI CEO Sam Altman has said humanity is only years away from developing artificial general intelligence that could automate most human labor. If that’s true, then humanity also deserves to understand and have a say in the people and mechanics behind such an incredible and destabilizing force.
That is the guiding purpose behind “The OpenAI Files,” an archival project from the Midas Project and the Tech Oversight Project, two nonprofit tech watchdog organizations. The Files are a “collection of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.” Beyond raising awareness, the goal of the Files is to propose a path forward for OpenAI and other AI leaders that focuses on responsible governance, ethical leadership, and shared benefits.
“The governance structures and leadership integrity guiding a project as important as this must reflect the magnitude and severity of the mission,” reads the website’s Vision for Change. “The companies leading the race to AGI must be held to, and must hold themselves to, exceptionally high standards.”
So far, the race for AI dominance has been defined by raw scaling, a growth-at-all-costs mindset that has led companies like OpenAI to hoover up content without consent for training purposes and to build massive data centers that cause power outages and drive up electricity costs for local consumers. The rush to commercialize, amid mounting investor pressure to turn a profit, has also led companies to ship products before putting necessary safeguards in place.
That investor pressure has reshaped OpenAI’s core structure. The OpenAI Files detail how, in its early nonprofit days, OpenAI capped investor returns at 100x so that any proceeds from achieving AGI would go to humanity. The company has since announced plans to remove that cap, acknowledging that the change was made to appease investors who had made funding conditional on structural reforms.
The Files highlight issues like OpenAI’s rushed safety evaluation processes and “culture of recklessness,” as well as potential conflicts of interest involving OpenAI’s board members and Altman himself. They include a list of startups, possibly in Altman’s own investment portfolio, whose businesses overlap with OpenAI’s.
The Files also call into question Altman’s integrity, which has been a topic of speculation since senior employees tried to oust him in 2023 over “deceptive and chaotic behavior.”
“I don’t think Sam is the guy who should have the finger on the button for AGI,” Ilya Sutskever, OpenAI’s former chief scientist, reportedly said at the time.
The questions the OpenAI Files raise, and the solutions they propose, remind us that enormous power rests in the hands of a few, with little transparency and limited oversight. The Files provide a glimpse into that black box and aim to shift the conversation from inevitability to accountability.