OpenAI improves transparency of AI-generated content

OpenAI has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA). To make generated content easier to identify, it will add the open standard’s metadata to the output of its generative AI models.

The C2PA standard lets digital content be certified with metadata that records its provenance: whether it was created entirely by AI, edited with AI tools, or captured in the traditional way. OpenAI has already begun adding C2PA metadata to images from its newest model, DALL-E 3, in ChatGPT and the OpenAI API, and it will incorporate the metadata into its upcoming video generation model, Sora, when that launches broadly.
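To illustrate where this metadata lives, the sketch below checks whether a JPEG file contains the APP11 marker segments in which C2PA embeds its JUMBF manifest. This is a hedged, assumption-laden example rather than OpenAI code: real verification requires a full C2PA library (such as the open-source c2patool), since the signatures in the manifest must actually be validated, not merely found.

```python
# Minimal sketch: detect whether a JPEG carries C2PA (JUMBF) metadata.
# C2PA manifests are embedded in JPEG APP11 (0xFFEB) marker segments
# whose payload begins with the common identifier "JP" (per JPEG XT's
# box-based extensions). Presence is only a hint, not verified provenance.
# Assumes a well-formed baseline JPEG; not a general-purpose parser.
import struct

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":          # SOI marker: otherwise not a JPEG
            raise ValueError("not a JPEG file")
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                   # truncated or malformed stream
            if marker[1] == 0xDA:              # SOS: compressed image data begins
                return False
            (length,) = struct.unpack(">H", f.read(2))
            payload = f.read(length - 2)
            if marker[1] == 0xEB and payload[:2] == b"JP":
                return True                    # APP11 segment hosting JUMBF/C2PA
```

Calling this on a hypothetical DALL-E 3 output, e.g. `has_c2pa_segment("dalle3_output.jpg")`, would return True only if the segment survived intermediaries; many platforms still strip such metadata on upload, which is why OpenAI stresses ecosystem cooperation below.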

OpenAI said, “People can still make false content without this information (or can get rid of it), but they can’t easily fake or change this information, which makes it an important tool for building trust.”

The move comes amid growing concern that AI-generated content could mislead voters ahead of this year’s major elections in the US, the UK, and other countries. Verifying the provenance of AI-generated media could help combat deepfakes and other manipulated content that spreads misinformation.

OpenAI acknowledges that for content authenticity to work in practice, platforms, content creators, and everyone else who handles content must work together to preserve metadata all the way to end users.

Alongside the C2PA integration, OpenAI is also developing new provenance methods, such as tamper-resistant watermarking for audio and image detection classifiers that assess how likely an image is to be AI-generated.

Under its Researcher Access Programme, OpenAI is now accepting applications to use its DALL-E 3 image detection classifier. The tool predicts the likelihood that an image was produced by one of OpenAI’s models.

The company’s goal is to enable independent research that evaluates the classifier’s effectiveness, analyses its real-world application, identifies relevant considerations for such use, and investigates the characteristics of AI-generated content.

Internal testing shows the classifier is good at distinguishing DALL-E images from non-AI images, correctly identifying about 98% of DALL-E images while incorrectly flagging less than 0.5% of non-AI images. However, it has a harder time telling images made by DALL-E apart from those made by other generative AI models.
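Those two headline figures (a roughly 98% true positive rate and a sub-0.5% false positive rate) do not by themselves say how trustworthy a “flagged as DALL-E” verdict is; that depends on how common AI images are in the pool being scanned. The Bayes-rule sketch below makes this concrete; the prevalence values are illustrative assumptions, not OpenAI figures.

```python
# Precision of the classifier at different assumed prevalences of
# DALL-E images in the scanned pool, via Bayes' rule.
TPR = 0.98    # true positive rate reported by OpenAI (~98%)
FPR = 0.005   # false positive rate (< 0.5%), taken as an upper bound

for prevalence in (0.50, 0.10, 0.01):        # assumed share of DALL-E images
    flagged_true = TPR * prevalence           # DALL-E images correctly flagged
    flagged_false = FPR * (1 - prevalence)    # non-AI images wrongly flagged
    precision = flagged_true / (flagged_true + flagged_false)
    print(f"prevalence {prevalence:5.0%} -> precision {precision:.1%}")
```

At 50% prevalence, precision is about 99.5%, but if only 1% of scanned images are DALL-E outputs, roughly a third of flags would be false alarms, a standard base-rate effect worth keeping in mind when evaluating such classifiers.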

OpenAI has also added watermarking to Voice Engine, its custom voice model, which is currently available only in a limited preview.

The company believes that more people using provenance standards will mean that metadata will follow content throughout its entire lifecycle, filling “a critical gap in digital content authenticity practices.”

Separately, OpenAI and Microsoft are launching a $2 million fund to support AI education and understanding, to be deployed through groups such as AARP, International IDEA, and the Partnership on AI.

“While technical solutions like the ones above give us active tools for our defences, it will take everyone working together to make content authenticity work in real life,” OpenAI says.

OpenAI describes its provenance work as only a small part of a broader industry effort: “Many of our peer research labs and generative AI companies are also making progress in this area.” The company applauds these efforts and says the industry must collaborate and share learnings to deepen understanding and keep advancing transparency online.
