Google’s new SynthID Detector can help spot AI slop
Google is launching a way to quickly check whether an image, video, audio file, or snippet of text was created using one of its AI tools.

SynthID Detector, announced Tuesday at Google I/O 2025, is a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Users can upload a file, and SynthID Detector will determine whether the whole sample — or just a part of it — is AI-created.

The debut of SynthID Detector comes as AI-generated media floods the web. The number of deepfake videos alone skyrocketed 550% from 2019 to 2024, according to one estimate. Per The Times, of the top 20 most-viewed posts on Facebook in the U.S. last fall, four were “obviously created by AI.”

DeepMind SynthID
Image Credits: DeepMind

Of course, SynthID Detector has its limitations. It only detects media created with tools that use Google’s SynthID specification — mainly Google products. Microsoft has its own content watermarking technologies, as do Meta and OpenAI.

SynthID also isn’t a perfect technology. Google admits that it can be circumvented, particularly in the case of text.

To the first point, Google argues that its SynthID standard is already used at massive scale. According to the company, more than 10 billion pieces of media have been watermarked with SynthID since the technology launched in 2023.
