Google Now Detects Origin of AI-Manipulated Images in Search Results
Google is leading adoption of the C2PA's new labeling standard for AI-generated content, adding it to Google Search and eventually Google Ads in a move intended to make image provenance metadata more trustworthy and to curb misinformation.
Google is stepping up its efforts to accurately label AI-generated content. The company has updated its “About This Image” tool to incorporate an industry-wide standard for identifying images that have been created or edited with AI. The new label grew out of Google's work with the global Coalition for Content Provenance and Authenticity (C2PA).
The C2PA aims to create and promote a standard process for detecting and certifying AI-generated content, enabled by a verification technology known as “content credentials.” Amazon, Meta, and OpenAI are also C2PA members, but they have not yet implemented the authentication standard.
Google is the first major player to adopt the C2PA’s new 2.1 standard, which is being built into Google Search and will eventually come to Google Ads. (To see the “About This Image” prompt, click the three vertical dots above a photo in search results.) The standard includes an official “Trust List” of devices and technologies whose signatures can be used to check a photo or video’s metadata and verify where it came from.
Google’s vice president of trust and safety, Laurie Richardson, told The Verge: “For instance, if the data indicates that a particular camera model took a picture, the trust list ensures the accuracy of this information. Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.”
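The “content credentials” Richardson describes are serialized, per the C2PA specification, as JUMBF boxes that in JPEG files are carried in APP11 marker segments. As a rough illustration of where that metadata physically sits in a file (this is not Google’s implementation, and real verification requires a C2PA SDK to parse and cryptographically validate the manifest against the Trust List), here is a minimal Python sketch that locates APP11 payloads in a JPEG byte stream; the function name and demo bytes are illustrative:

```python
def find_app11_segments(data: bytes) -> list[bytes]:
    """Scan a JPEG byte stream and collect APP11 segment payloads,
    the marker segment where C2PA embeds its JUMBF manifest boxes."""
    if data[:2] != b"\xff\xd8":  # SOI marker opens every JPEG
        raise ValueError("not a JPEG stream")
    payloads = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: header segments end here
            break
        # Segment length is big-endian and includes its own two length bytes.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            payloads.append(data[i + 4:i + 2 + length])
        i += 2 + length
    return payloads

# Demo with a synthetic JPEG carrying a fake APP11 payload:
payload = b"JP\x00\x01" + b"fake-c2pa-manifest"
jpeg = (b"\xff\xd8" + b"\xff\xeb"
        + (2 + len(payload)).to_bytes(2, "big") + payload + b"\xff\xd9")
print(find_app11_segments(jpeg))  # one segment containing the payload
```

Extracting the bytes is only the first step; the manifest inside must still be parsed and its signature chain checked, which is the part the Trust List makes possible.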
Google Expands AI Content Labeling with C2PA on YouTube
TikTok was the first video platform to use the C2PA’s content credentials after joining the coalition in May, rolling out an automatic labeling system that reads a video’s metadata and flags AI-generated content. With content credentials now live on Google’s platforms, YouTube is set to do the same.
Google has spoken out in favor of labeling and regulating AI at scale, especially as a way to stop the spread of false information. In 2023, Google introduced SynthID, a digital watermarking tool, to help identify and track content generated by Imagen, Google DeepMind’s text-to-image model.
It began requiring (limited) AI labeling on YouTube videos earlier this year and has pledged to address AI-generated deepfake content in Google Search. The company joined the C2PA steering committee in February, a group that also includes other well-known industry names and news organizations such as the BBC.