Campaigners Demand Apple and Google Remove Grok App Over AI-Generated Abuse Content
#AI

Regulation Reporter

A coalition of 28 digital rights organizations has called on Apple and Google to remove X and its Grok AI chatbot from their app stores, citing the AI's ability to generate non-consensual intimate images and child sexual abuse material. The demand adds to mounting pressure on the Elon Musk-owned platform, with UK communications regulator Ofcom continuing its formal investigation despite X's recent policy changes.

A coalition of 28 digital rights organizations has delivered formal demands to Apple CEO Tim Cook and Google CEO Sundar Pichai, calling for the immediate removal of X and its Grok AI chatbot from their respective app stores. The campaign, led by advocacy group UltraViolet and branded as "Get Grok Gone," accuses both tech giants of profiting from the proliferation of non-consensual intimate images (NCII) and child sexual abuse material (CSAM) generated using Grok's image-editing capabilities.

The Core Allegations

The organizations argue that Apple and Google are violating their own app store policies by hosting applications that facilitate abusive content. In their open letter to Cook, the coalition states: "As it stands, Apple is not just enabling NCII and CSAM, but profiting off of it." The letters, delivered Wednesday, maintain that both companies' policies explicitly prohibit apps that facilitate criminal activity or distribute sexual exploitation material, yet both companies continue to host X's app.

The controversy centers on Grok's image-editing feature, which was introduced in late 2024 and quickly became a tool for generating sexually explicit edits of real people from uploaded photos. The feature's abuse became widespread after security researchers and journalists demonstrated how easily Grok could be manipulated to create sexualized content, including images that appeared to involve minors. The rapid proliferation of such content prompted immediate backlash from child-safety organizations and regulators worldwide.

Regulatory Escalation

The campaign letters arrive amid mounting regulatory scrutiny. On Thursday, the UK's communications regulator Ofcom announced it would continue its formal investigation into X under the Online Safety Act, despite the platform's recent damage-control measures. Ofcom's probe specifically examines whether Grok's use in creating and sharing intimate, potentially illegal images violates X's legal obligations to protect UK users.

Ofcom made clear that X's recent policy changes, implemented after the initial scandal broke, have not halted the investigation. Earlier this month, X announced measures intended to prevent Grok from being misused to "digitally undress" people, but the regulator remains unconvinced these steps are sufficient.

X's Reactive Measures

Following the initial wave of abuse, X implemented several restrictive measures:

  1. Subscriber-only access: Initially restricted Grok's image-editing capabilities to paying subscribers
  2. Geographic blocking: Implemented geoblocking for certain image manipulations in countries where such content is illegal
  3. Content policy updates: Stated that Grok will no longer produce sexualized edits of real people

However, advocates behind the "Get Grok Gone" campaign argue these changes are insufficient. The organizations contend that simply restricting access or implementing geoblocking does not address the fundamental problem: that the app continues to facilitate the creation and distribution of harmful content.

App Store Policy Violations

The coalition's legal argument rests on specific provisions in both Apple's and Google's app store guidelines. Both platforms maintain policies that prohibit apps from:

  • Facilitating the creation of illegal content
  • Distributing material that exploits minors
  • Enabling non-consensual intimate image sharing
  • Profiting from abusive content

By hosting X's app, the advocates argue, both companies are actively enabling these prohibited activities. The letters emphasize that both Apple and Google profit from X's app through their respective app store revenue models, taking a percentage of subscription fees and in-app purchases.

Broader Context of AI Abuse

The Grok controversy is part of a broader pattern of AI tools being misused to create intimate images without consent. Similar incidents have occurred with other AI platforms, though Grok's integration directly into X's social network made the abuse particularly widespread and rapid.

The incident highlights the challenges facing AI developers and platform operators in preventing misuse while maintaining functionality. It also raises questions about the effectiveness of reactive content moderation versus proactive safety measures built into AI systems from the ground up.

What Comes Next

Neither Apple nor Google has publicly responded to the demands as of publication. The companies now face three options:

  1. Remove X's app entirely from their platforms
  2. Force X to implement more stringent safety measures
  3. Do neither, and risk being seen as complicit in facilitating abusive content

The outcome of this campaign could set important precedents for how app stores handle AI-powered applications that can be misused. It may also influence future regulatory approaches to AI safety and platform responsibility.

For users concerned about AI-generated abuse, digital rights organizations recommend:

  • Reporting non-consensual intimate images immediately to platforms and law enforcement
  • Supporting legislation that criminalizes the creation and distribution of NCII
  • Advocating for stronger app store policies regarding AI tools

The "Get Grok Gone" campaign represents one of the most direct challenges yet to a major AI application's availability on mainstream platforms, and its resolution will likely influence how tech companies approach AI safety moving forward.

The Register has contacted Apple and Google for comment and will update this article with their responses.
