White House extracts voluntary commitments from AI vendors to combat deepfake nudes



The White House says several major AI vendors have committed to taking steps to combat nonconsensual deepfakes and child sexual abuse material.

Adobe, Cohere, Microsoft, Anthropic, OpenAI, and data provider Common Crawl said that they'll "responsibly" source and safeguard the datasets they create and use to train AI, keeping them free of image-based sexual abuse. These organizations — minus Common Crawl — also said that they'll incorporate "feedback loops" and strategies into their development processes to guard against AI generating sexual abuse imagery. And they (again, minus Common Crawl) committed to removing nude images from AI training datasets "when appropriate and depending on the purpose of the model."

It's worth noting that the commitments are self-policed. Many AI vendors, including Midjourney and Stability AI, opted not to participate. And OpenAI's pledges in particular raise questions, given that CEO Sam Altman said in May that the company would explore how to "responsibly" generate AI porn.

The White House nonetheless touted the commitments as a win in its broader effort to identify and reduce the harm of deepfake nudes.
