NIX Solutions: Google’s New AI App Moderation Rules

Starting early next year, Google will introduce new content moderation requirements for Android apps that use artificial intelligence (AI). The primary goal of these changes is to curb the spread of offensive or controversial material, including deepfakes and disinformation.

Scope of the New Policy

The new policy applies to a specific set of AI applications: chatbots, apps that generate visual and audio content, and apps with features for reporting offensive or controversial material directly in the user interface. However, applications that merely store AI-generated content, or that use AI only to summarize text, will remain unaffected by the new rules.

Targeted Content Categories

Google’s focus is on content categories that require strict moderation. These include fake videos or images created without the consent of the people depicted, as well as recordings of real people used for fraudulent purposes. The company will also be particularly vigilant about combating the spread of false or misleading information, especially in the context of elections. Stringent controls likewise apply to applications where generative AI is used primarily to produce sexual content or malicious code.

In its announcement, Google emphasized the rapid growth of generative AI applications, hinting at the possibility of revising its AI policies as the technology evolves. In tandem with these content moderation rules, Google has expanded its Play Store photo and video permissions policy, with the aim of enhancing user privacy and security by limiting Android apps’ access to personal data, notes NIX Solutions.

“Photos and videos on users’ devices are their personal and confidential information, requiring the utmost responsibility,” the company stated. “Such data creates potential vulnerability for users in the event of leaks or other forms of exploitation. Restricting access not only increases user security but also eases the burden on developers in processing sensitive information.”