Anthropic

Big Tech lands an early win in legal battles against publishers

This week, two major AI companies scored early wins in court, with federal judges siding with Meta and Anthropic in separate lawsuits over how their models were trained on copyrighted material. The decisions represent the first real legal validation of AI companies’ argument that training models on books, images, and other creative works can be […]

People use AI for companionship much less than we’re led to think

The overabundance of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think such behavior is commonplace. A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: In fact, people rarely seek out companionship from Claude […]

A federal judge sides with Anthropic in lawsuit over training AI on books without authors’ permission

Federal judge William Alsup ruled that it was legal for Anthropic to train its AI models on published books without the authors’ permission. This marks the first time that the courts have given credence to AI companies’ claim that the fair use doctrine can absolve them of fault when they use copyrighted materials to train […]

Anthropic appoints a national security expert to its governing trust

A day after announcing new AI models designed for U.S. national security applications, Anthropic has appointed a national security expert, Richard Fontaine, to its long-term benefit trust. Anthropic’s long-term benefit trust is a governance mechanism that Anthropic claims helps it promote safety over profit, and which has the power to elect some of the company’s […]
