Meta built its AI reputation on openness — that may be changing



Top members of Meta’s new Superintelligence Lab discussed pivoting away from the company’s powerful open-source AI model, Behemoth, and instead developing a closed model, The New York Times reports.

Sources told The Times that Meta had completed training on Behemoth, but delayed its release due to underwhelming internal performance. When the new Superintelligence Lab launched, testing on the model reportedly halted. 

The discussions are just that – discussions. Meta CEO Mark Zuckerberg would still need to sign off on any changes, and a company spokesperson told TechCrunch that Meta’s position on open source AI is “unchanged.”

“We plan to continue releasing leading open source models,” the spokesperson said. “We haven’t released everything we’ve developed historically and we expect to continue training a mix of open and closed models going forward.”

The spokesperson did not comment on Meta’s potential shift away from Behemoth. If Meta does abandon the model in order to prioritize closed-source development, it would mark a major philosophical change for the company.

While Meta deploys more advanced closed-source models internally, like those powering its Meta AI assistant, Zuckerberg had made open source a central part of the company’s external AI strategy — a way to keep AI development moving faster. He loudly positioned the Llama family’s openness as a differentiator from competitors like OpenAI, which Zuckerberg publicly criticized for becoming more closed after partnering with Microsoft. But Meta is under pressure to monetize beyond ads as it pours billions into AI. 

That includes paying massive signing bonuses and nine-figure salaries to poach top researchers, building out new data centers, and covering the enormous costs of developing artificial general intelligence (AGI), or “superintelligence.”

Despite having one of the top AI research labs in the world, Meta still lags behind rivals like OpenAI, Anthropic, Google DeepMind, and xAI when it comes to commercializing its AI work.

If Meta prioritizes closed models, it could suggest that openness was a strategic play, not an ideological one. Past comments from Zuckerberg hint at ambivalence about committing to open-sourcing Meta’s models. On a podcast last summer, he said:

“We’re obviously very pro open source, but I haven’t committed to releasing every single thing that we do. I’m basically very inclined to think that open sourcing is going to be good for the community and also good for us because we’ll benefit from the innovations. If at some point, however, there’s some qualitative change in what the thing is capable of, and we feel like it’s not responsible to open source it, then we won’t. It’s all very difficult to predict.”

Closed models would give Meta more control and more ways to monetize – especially if it believes the talent it has acquired can deliver competitive, best-in-class performance. 

Such a shift could also reshape the AI landscape. Open-source momentum, largely driven by Meta and models like Llama, could slow, even as OpenAI gears up to release its still-delayed open model. Power could swing back toward the major players with closed ecosystems, while open-source development might remain a product of grassroots efforts. The ripple effects would continue across the startup ecosystem, especially for smaller companies focused on fine-tuning, safety, and model alignment that rely on access to open foundation models.

On the world stage, Meta’s retreat from open source could cede ground to China, which has embraced open-source AI — through companies like DeepSeek and Moonshot AI — as a way to build domestic capability and global influence.
