AI Safety

Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks

In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies. The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models, […]

Eric Schmidt argues against a ‘Manhattan Project for AGI’

In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with “superhuman” intelligence, also known as AGI. The paper, titled “Superintelligence Strategy,” asserts that an aggressive […]

UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic

The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it’s pivoting an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the […]

Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test

Anthropic’s CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China. In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said DeepSeek generated […]

Andrew Ng is ‘very glad’ Google dropped its AI weapons pledge

Andrew Ng, the founder and former leader of Google Brain, supports Google’s recent decision to drop its pledge not to build AI systems for weapons. “I’m very glad that Google has changed its stance,” Ng said during an on-stage interview Thursday evening with TechCrunch at the Military Veteran Startup Conference in San Francisco. Earlier this […]

Google removes pledge to not use AI for weapons from website

Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week. Asked for […]

Sam Altman’s ousting from OpenAI has entered the cultural zeitgeist

The lights dimmed as five actors took their places around a table on a makeshift stage in a New York City art gallery turned theater for the night. Wine and water flowed through the intimate space as the house — packed with media — sat to witness the premiere of “Doomers,” Matthew Gasda’s latest play […]

The Pentagon says AI is speeding up its ‘kill chain’

Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking, […]