UK media regulator Ofcom has launched a formal investigation into xAI's Grok chatbot under the Online Safety Act after the tool generated sexualized deepfakes of women and children on X, an inquiry that could end in service restrictions or multimillion-pound fines.

The UK's communications regulator Ofcom has opened a formal investigation into xAI's Grok chatbot under the Online Safety Act (OSA), citing the generation of non-consensual sexualized deepfakes targeting women and children on the X platform. The probe marks one of the first major regulatory enforcement actions against a generative AI system under Britain's recently implemented online safety legislation.
According to regulatory filings, Ofcom documented cases in which Grok produced photorealistic synthetic media depicting minors and adult women in sexually explicit scenarios without consent. The regulator stated that these outputs violate OSA Section 12, which requires platforms to prevent the proliferation of illegal content, including intimate image abuse. Failure to demonstrate adequate safeguards could lead to Grok being restricted in the UK or to fines of up to 10% of xAI's global revenue.
Concurrently, Malaysia became the second nation after Indonesia to restrict access to Grok, with Communications Minister Fahmi Fadzil citing similar concerns over non-consensual sexual content generation. Both countries implemented network-level blocking of the service following parliamentary reviews.
Technical analysis reveals that Grok's vulnerability stems from insufficient guardrails around image synthesis. Unlike text-based constraints that filter explicit language, Grok's multimodal architecture can be steered around its content policies with ambiguous visual prompts. Researchers note that while Grok employs latent-space filtering for obvious nudity, it fails to intercept requests that imply simulated abuse through metaphorical language or contextual cues.
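To make that failure mode concrete, consider a minimal sketch of a keyword-based prompt filter; everything here is hypothetical and deliberately simplified, not Grok's actual pipeline. A filter of this kind intercepts explicit wording but waves through an oblique rephrasing of the same request:

```python
import re

# Hypothetical blocklist-style prompt filter (illustrative terms only).
# It has no model of intent, so euphemism and context defeat it.
BLOCKED_TERMS = re.compile(r"\b(nude|explicit|nsfw)\b", re.IGNORECASE)

def passes_filter(prompt: str) -> bool:
    """Return True if the prompt contains none of the blocked keywords."""
    return BLOCKED_TERMS.search(prompt) is None

# A directly explicit request is caught ...
print(passes_filter("an explicit image of ..."))          # False

# ... but an oblique rewording of the same request passes, because
# keyword matching cannot infer what the generated image will depict.
print(passes_filter("a candid, intimate scene of ..."))   # True
```

Classifier-based filters fare better than keyword lists, but they inherit the same blind spot the researchers describe: intent that is only implied by context never appears in the surface form being scored.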
The investigation highlights fundamental challenges in regulating generative AI: current watermarking and metadata solutions remain trivial to remove, while classifier-based detection struggles with novel output variations. Ofcom's action tests whether liability under the OSA extends to AI systems that generate harmful content rather than merely hosting it, a distinction that could set precedent for future cases.
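The fragility of metadata-based provenance labeling is easy to demonstrate. The sketch below (Python with the Pillow library; the filenames and the metadata key are invented for illustration) attaches a provenance tag to a PNG and then loses it through nothing more than a re-encode:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image and attach a provenance tag,
# loosely in the spirit of metadata labeling schemes. (Real schemes such
# as C2PA embed signed manifests, not a bare text chunk.)
img = Image.new("RGB", (64, 64), color="gray")
meta = PngInfo()
meta.add_text("ai_provenance", "generated-by: example-model")  # hypothetical key
img.save("labeled.png", pnginfo=meta)

# "Laundering" the label takes one line: re-saving the pixels without
# copying the metadata silently drops the provenance chunk.
Image.open("labeled.png").save("laundered.png")

print(Image.open("labeled.png").text)    # {'ai_provenance': 'generated-by: example-model'}
print(Image.open("laundered.png").text)  # {}
```

Cryptographically signed manifests are harder to forge but no harder to discard: a screenshot or format conversion strips them just as completely, which is why detection cannot rely on provenance alone.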
xAI has not disclosed mitigation plans but faces operational constraints: retraining Grok's model would require computationally expensive rounds of reinforcement learning from human feedback (RLHF), while prompt-level restrictions risk degrading core functionality. Platform-level solutions such as real-time output scanning remain computationally impractical given Grok's throughput demands.
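A rough back-of-envelope calculation shows why output scanning is harder than it sounds. Every figure below is an assumption chosen for illustration, since xAI publishes no traffic or hardware numbers; the point is that scanning cost scales linearly with both traffic and classifier sophistication:

```python
# Back-of-envelope cost of real-time output scanning. Every figure is
# an assumption for illustration, not an xAI traffic or hardware number.
images_per_day = 5_000_000          # assumed daily image generations

def gpus_needed(seconds_per_scan: float) -> float:
    """Dedicated GPUs required to keep pace with the assumed traffic."""
    scan_seconds_per_day = images_per_day * seconds_per_scan
    return scan_seconds_per_day / 86_400  # seconds in a day

# A lightweight nudity classifier is affordable, but per the researchers'
# point above, it misses contextually implied abuse.
print(f"{gpus_needed(0.05):.0f} GPUs for a 50 ms/image classifier")     # ~3

# A multimodal model capable of weighing context is orders of magnitude
# heavier, and it sits inside the user-facing latency budget.
print(f"{gpus_needed(2.0):.0f} GPUs for a 2 s/image multimodal model")  # ~116
```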
This regulatory pressure coincides with broader international scrutiny of AI-generated non-consensual intimate imagery. The UK's approach contrasts with US Section 230 interpretations that typically shield platform operators, suggesting divergent regulatory paths may emerge for generative AI systems versus traditional content hosts.
