Google Play takes action against AI apps amid deepfake nude concerns
Google’s new guidelines for AI app developers on Google Play, issued in response to a rise in deepfake nude apps and incidents of AI-generated bullying in schools, aim to combat inappropriate and illegal content by requiring rigorous testing to protect user privacy and safety and by cracking down on ads that suggest harmful uses.
Google is releasing new guidelines today for developers creating AI apps for Google Play. The goal is to cut down on content that is inappropriate or otherwise illegal.
The company says apps with AI features will have to prevent the generation of restricted content, such as sexual content and violence, and give users a way to report offensive content they see.
Developers should “rigorously test” their AI tools and models, according to Google, to ensure they protect user privacy and safety.
The company is also cracking down on apps whose ads promote inappropriate uses, such as undressing people in photos or creating nonconsensual nude imagery.
Google Play may ban some apps if their ads suggest they can perform certain actions, even though the apps themselves cannot.
The rules come after a rise in AI undressing apps, which have been advertising themselves on social media in recent months. According to an April report from 404 Media, Instagram was showing ads for apps that claimed to use AI to make deepfake nudes.
One app advertised itself with a picture of Kim Kardashian and the slogan “undress any girl for free.” Both Apple and Google removed the apps from their app stores, but similar apps continue to circulate.
Schools across the U.S. are grappling with students passing around inappropriate AI content, including deepfake nudes of other students and sometimes teachers, as a way to bully and harass them. Last month, a racist AI-generated audio clip fabricated to sound like a school principal led to an arrest in Baltimore. Making matters worse, the problem is now reaching even middle school students.
Guidelines for App Approval on Google Play
Google says its rules will help keep apps with inappropriate or harmful AI-generated content off Google Play. It points to its existing AI-Generated Content policy as the place to find the requirements for getting an app approved on Google Play.
The company asserts that AI apps must not create any prohibited content and must provide a mechanism for users to report offensive or inappropriate content. The company emphasizes the importance of tracking and prioritizing user feedback.
This is particularly important, according to Google, in apps where users’ actions “shape the content and experience,” such as those where popular models are ranked higher or displayed more prominently.
Google’s rules for app promotion also prohibit developers from advertising that their app can be used in ways that violate Google Play’s rules. If an app’s ads promote an inappropriate use case, Google could remove it from the app store.
In addition, developers are responsible for safeguarding their apps against prompts that could manipulate AI features into generating harmful or offensive content. Google notes that developers can share early versions of their apps with users to gather feedback.
The company also advises developers not only to test before releasing but to document those tests, since Google may ask to review that documentation in the future.