Malaysia and Indonesia have blocked Elon Musk's Grok AI app following widespread misuse for generating non-consensual deepfake pornography, including child sexual abuse material. U.S. senators demand Apple and Google remove X and Grok from app stores amid regulatory investigations.
Malaysia and Indonesia have taken decisive action against Elon Musk's Grok AI platform, implementing nationwide blocks after investigations revealed widespread use of the tool to create non-consensual deepfake pornography. The move comes as pressure mounts on Apple and Google to remove both Grok and X (formerly Twitter) from their app stores over failures to prevent the generation of child sexual abuse material (CSAM).
How Grok Became an Abuse Vector
Grok, available both as a standalone app and as a built-in feature of X's web and mobile interfaces, has been exploited to create sexually explicit deepfakes through prompt manipulation. Users discovered they could bypass content filters by submitting clothed photos with instructions like "remove clothing and add bikini," producing non-consensual nude simulations. Most alarmingly, researchers documented cases in which these techniques were applied to images of minors, effectively generating AI-created CSAM despite the platform's safeguards.
Regulatory Pressure Intensifies
Three U.S. senators—Ron Wyden, Ed Markey, and Ben Ray Luján—issued a joint letter demanding immediate action from Apple and Google:
"We request you temporarily remove X and Grok from your app stores pending a full investigation into their ability to prevent mass generation of nonconsensual sexualized images of women and children," the lawmakers stated, contrasting the companies' swift removal of ICEBlock at the White House's request with their inaction here.

Image: Visual representation of deepfake concerns (Credit: Melanie Wasser/Unsplash)
Musk's sole response, limiting image generation to paid X subscribers, has been criticized as inadequate because the Grok tab remains freely accessible through X's website and mobile apps. Meanwhile, regulatory bodies are escalating their investigations:
- Malaysia & Indonesia: Implemented full blocks after determining existing controls failed to prevent dissemination of fake pornography
- UK's Ofcom: Opened formal investigation into potential violations of the Online Safety Act
- EU regulators: Monitoring for potential Digital Services Act breaches
Platform Accountability Gap
As of publication, both apps remain available on Apple's App Store and Google Play in the U.S. despite the senators' request. This places Apple in a complex position:
- App Review Guidelines: Section 1.1.4 explicitly prohibits apps facilitating "the generation of CSAM"
- Moderation Capabilities: Unlike user-generated content, which can be reviewed after upload, AI-generated output requires proactive, model-level safeguards
- Precedent: Apple previously removed apps such as ICEBlock within hours of a government request
Industry analysts note that Apple's delayed response may stem from the unprecedented nature of AI-generated CSAM, which presents novel moderation challenges compared to traditional photo sharing. However, legal experts emphasize that App Store guidelines don't distinguish between human-created and AI-generated abusive content.
The situation highlights critical gaps in generative AI governance. Unlike image recognition systems trained to detect CSAM in uploaded media, Grok's text-to-image model requires fundamentally different safeguards to prevent abuse at the point of generation. As regulatory scrutiny intensifies globally, Apple and Google's next moves will set crucial precedents for AI app moderation on mobile platforms.
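The difference is easier to see in code. The sketch below is a purely hypothetical illustration in Python, not a description of how Grok, X, or any real safety system works: upload-time detection can match a file against a list of known abusive images, while generation-time screening has no existing image to match and must refuse the request itself.

```python
# Hypothetical sketch: upload-time detection vs. generation-time screening.
# All names, hashes, and rules here are illustrative only.

import re

# Upload-time detection: compare an uploaded file's hash against a list of
# known abusive images (real systems use perceptual hashing such as
# PhotoDNA; a plain set lookup stands in for that here).
KNOWN_ABUSE_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder value

def detect_on_upload(file_hash: str) -> bool:
    """Return True if an uploaded image matches a known abusive hash."""
    return file_hash in KNOWN_ABUSE_HASHES

# Generation-time screening: the abusive image does not exist yet, so the
# request itself must be refused before the model runs.
BLOCKED_PATTERNS = [
    r"\bremove (the )?cloth",  # undressing instructions
    r"\bundress\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if a text-to-image prompt should be refused outright."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(detect_on_upload("d41d8cd98f00b204e9800998ecf8427e"))  # True: known image
    print(screen_prompt("remove clothing and add bikini"))       # True: refused pre-generation
```

Real systems replace the toy hash set with perceptual hashing and the keyword list with trained classifiers that also scan the generated output, but the structural point stands: a detection pipeline built for uploads does nothing to stop a model from creating new abusive images.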
We've reached out to Apple for comment and will update this story with any response. For ongoing developments, monitor Apple's Transparency Reports and Ofcom's investigation tracker.
