Hallucinations

Anthropic’s new Citations feature aims to reduce AI errors

In an announcement perhaps timed to divert attention away from OpenAI’s Operator, Anthropic on Thursday unveiled a new feature for its developer API called Citations, which lets devs “ground” answers from its Claude family of AI models in source documents such as emails. Anthropic says Citations allows its AI models to provide detailed references to “the exact […]
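
Anthropic’s docs describe Citations as an option set on document content blocks in the Messages API. Below is a minimal sketch of that flow with the anthropic Python SDK; the model name, document text, and question are placeholders, and the exact block shape follows the docs as of launch, so treat it as an assumption rather than a definitive recipe.

```python
# Minimal sketch of Anthropic's Citations feature via the Messages API.
# Assumes the `anthropic` Python SDK with ANTHROPIC_API_KEY in the
# environment; the document text and question are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # A source document the model is allowed to cite from.
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "Q3 revenue grew 12% year over year, driven by cloud.",
                    },
                    "title": "Q3 earnings email",  # placeholder title
                    "citations": {"enabled": True},
                },
                {"type": "text", "text": "How did revenue change in Q3?"},
            ],
        }
    ],
)

# With citations enabled, text blocks in the reply carry a `citations`
# list pointing back at the cited passages in the source document.
for block in response.content:
    if block.type == "text":
        print(block.text, getattr(block, "citations", None))
```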

AWS’ new service tackles AI hallucinations

Amazon Web Services (AWS), Amazon’s cloud computing division, is launching a new tool to combat hallucinations, that is, scenarios where an AI model generates inaccurate or unsupported responses. Announced at AWS’ re:Invent 2024 conference in Las Vegas, the service, Automated Reasoning checks, validates a model’s responses by cross-referencing them against customer-supplied information. AWS claims in a press release […]
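
Automated Reasoning checks are delivered as a policy attached to an Amazon Bedrock guardrail, so validation runs through the ApplyGuardrail API. A hedged sketch with boto3 follows; the guardrail identifier, version, and text are placeholders, and the guardrail is assumed to already have an Automated Reasoning policy configured on it.

```python
# Hedged sketch: validating a model answer with Amazon Bedrock Guardrails.
# Assumes a guardrail already exists with an Automated Reasoning checks
# policy attached; the identifier, version, and text are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

result = bedrock.apply_guardrail(
    guardrailIdentifier="my-guardrail-id",  # placeholder
    guardrailVersion="1",                   # placeholder
    source="OUTPUT",                        # check model output, not user input
    content=[
        {"text": {"text": "Employees accrue 30 vacation days per year."}}
    ],
)

# `action` is GUARDRAIL_INTERVENED when any attached policy (including
# Automated Reasoning checks) flags the content; `outputs` holds any
# replacement text the guardrail produced.
print(result["action"])
print(result.get("outputs", []))
```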

Microsoft claims its new tool can correct AI hallucinations, but experts advise caution

AI is a notorious liar, and Microsoft now says it has a fix for that. Understandably, that’s going to raise some eyebrows, but there’s reason to be skeptical. Microsoft today revealed Correction, a service that attempts to automatically revise AI-generated text that’s factually wrong. Correction first flags text that may be erroneous, say, a […]
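
Correction surfaces through the groundedness detection preview in Azure AI Content Safety. The sketch below calls that REST endpoint with Python’s requests library; the api-version, field names, and response keys follow the preview docs as best as can be reconstructed and should be treated as assumptions, with the resource endpoint and key as placeholders.

```python
# Hedged sketch: Microsoft's Correction ships as part of the groundedness
# detection preview in Azure AI Content Safety. The endpoint path,
# api-version, and field names below are assumptions based on the preview
# REST docs; the resource URL and key are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"                                            # placeholder

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={
        "domain": "Generic",
        "task": "Summarization",
        # The AI-generated text to check against its grounding sources.
        "text": "The company reported a 40% rise in profit.",
        "groundingSources": [
            "The company reported a 4% rise in profit for the quarter."
        ],
        "correction": True,  # ask the service to propose a revised version
    },
)

body = resp.json()
# Expected shape (assumption): an ungrounded flag plus a suggested rewrite.
print(body.get("ungroundedDetected"))
print(body.get("correctionText"))
```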

Study suggests that even the best AI models hallucinate a bunch

All generative AI models hallucinate, from Google’s Gemini to Anthropic’s Claude to the latest stealth release of OpenAI’s GPT-4o. The models are unreliable narrators, in other words, sometimes to hilarious effect, other times problematically so. But not all models make things up at the same rate, and the kinds of mistruths they spout depend […]
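
Studies like this one typically grade each model answer against reference material and report the fraction marked unsupported. The sketch below is purely illustrative rather than the study’s actual method; the toy grader stands in for the human reviewers or LLM judges real benchmarks use.

```python
# Illustrative sketch of how a hallucination benchmark tallies per-model
# error rates. `is_supported` stands in for a real grader (human review
# or an LLM judge in actual studies); everything here is a toy example.
from typing import Callable

def hallucination_rate(
    answers: list[str],
    references: list[str],
    is_supported: Callable[[str, str], bool],
) -> float:
    """Fraction of answers the grader marks as unsupported by the reference."""
    assert len(answers) == len(references)
    unsupported = sum(
        not is_supported(ans, ref) for ans, ref in zip(answers, references)
    )
    return unsupported / len(answers)

# Toy grader: treat an answer as supported if it appears in the reference.
toy_grader = lambda ans, ref: ans.lower() in ref.lower()

refs = ["Paris is the capital of France.", "Water boils at 100 C at sea level."]
model_a = ["paris", "water boils at 90 c"]
print(hallucination_rate(model_a, refs, toy_grader))  # 0.5
```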
