Ofcom Continues X Probe Despite Grok 'Nudify' Fix
#Regulation


Privacy Reporter

The UK communications regulator is maintaining its formal investigation into X (formerly Twitter) after the platform implemented technical blocks to prevent its Grok AI from generating non-consensual nude images. Despite X's claims of compliance, Ofcom states the investigation into potential violations of the Online Safety Act remains active, while California's attorney general has also launched a separate inquiry.

Despite X announcing technical measures to stop its Grok AI chatbot from generating non-consensual nude images of real people, Ofcom is keeping its formal investigation open. The regulator's stance signals that technical fixes alone may not be enough to escape scrutiny, or potential penalties, under the UK's Online Safety Act.

What Happened

In early January 2026, widespread reports emerged that Grok, X's AI chatbot, was being used to digitally "undress" images of real people. The tool could edit photographs to make subjects appear as though they had fewer or no clothes, creating intimate images without consent. The reports primarily involved images of women and children, raising immediate child safety concerns.

Ofcom first contacted X on January 5 regarding these reports. A week later, on January 12, the regulator opened a formal investigation to determine whether X had complied with the Online Safety Act, which requires platforms to protect users from illegal content and harmful material, particularly concerning children.

X's Response and Technical Measures

X's initial response was criticized as inadequate. Rather than removing the nudifying capability, the platform first restricted it to paid subscribers, meaning the feature remained available to anyone willing to pay. This drew sharp criticism from UK Technology Secretary Liz Kendall, who called the move "an insult and totally unacceptable."

Following this backlash, X announced it had implemented broader technical measures. According to the company's Safety account, these include:

  1. Blocking Grok from editing images to make subjects appear nude or partially clothed
  2. Implementing geoblocks on the chatbot's ability to generate images of people in bikinis, underwear, or similarly revealing clothing—internally referred to as "spicy mode"—in jurisdictions where such content is restricted by law
  3. Extending restrictions to all users, including paid subscribers
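For illustration only, the three measures above amount to a layered policy check: a global block on nudity, a per-jurisdiction block on "spicy mode" content, and rules that ignore subscription tier. A minimal sketch of that logic might look like the following, where every category name, jurisdiction code, and rule is hypothetical rather than X's actual implementation:

```python
# Hypothetical sketch of a jurisdiction-aware image-generation policy check.
# Categories, jurisdictions, and rules are illustrative, not xAI's real system.

# Measure 1: categories blocked for everyone, everywhere.
BLOCKED_EVERYWHERE = {"nudity", "partial_nudity"}

# Measure 2: "spicy mode" categories, geoblocked where restricted by law.
GEOBLOCKED = {"revealing_clothing"}
RESTRICTED_JURISDICTIONS = {"GB"}  # illustrative list

def is_generation_allowed(category: str, jurisdiction: str,
                          is_paid_subscriber: bool) -> bool:
    """Return True if an image-edit request may proceed.

    Measure 3: the rules apply to all users, so subscriber
    status is deliberately ignored.
    """
    if category in BLOCKED_EVERYWHERE:
        return False
    if category in GEOBLOCKED and jurisdiction in RESTRICTED_JURISDICTIONS:
        return False
    return True

# The nudity block applies regardless of location or payment tier;
# the geoblock only refuses requests from restricted jurisdictions.
print(is_generation_allowed("nudity", "US", is_paid_subscriber=True))               # False
print(is_generation_allowed("revealing_clothing", "GB", is_paid_subscriber=False))  # False
print(is_generation_allowed("revealing_clothing", "US", is_paid_subscriber=False))  # True
```

The key design point regulators appear to care about is the third measure: making subscriber status irrelevant to the check, so safety rules cannot be bought around.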

X also emphasized its commitment to removing "high-priority violative content," including child sexual abuse material (CSAM) and non-consensual nudity, and reporting accounts seeking such material to law enforcement.

Regulatory Response: "Not Enough"

Despite these changes, Ofcom has not closed its investigation. A spokesperson stated: "X has said it's implemented measures to prevent the Grok account from being used to create intimate images of people. This is a welcome development. However, our formal investigation remains ongoing. We are working around the clock to progress this and get answers into what went wrong and what's being done to fix it."

Technology Secretary Liz Kendall echoed this sentiment, welcoming the move but insisting that "the facts [must] be fully and robustly established by Ofcom's ongoing investigation." She said the Online Safety Act had provided "the tools to hold X to account" in recent days, and vowed not to rest "until all social media platforms meet their legal duties."

The investigation centers on potential violations of the UK's Online Safety Act, which became law in 2023 and whose safety duties have since come into force in phases. The law imposes a duty of care on platforms to:

  • Prevent illegal content, including CSAM and non-consensual intimate images
  • Protect children from harmful content
  • Implement effective age verification and content moderation systems
  • Respond promptly to user reports

Platforms face fines of up to £18 million or 10% of global annual revenue, whichever is higher, for serious breaches. In extreme cases, senior managers could face criminal liability.

The Act specifically addresses "non-consensual sharing of intimate images," which includes digitally manipulated images. Ofcom's investigation will examine whether X's initial lack of safeguards violated these duties and whether the current measures are sufficient to prevent future violations.

Parallel Investigation in California

Regulatory pressure is not limited to the UK. California Attorney General Rob Bonta opened his own investigation this week, citing "an avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks." He described the material as "shocking" and noted it has been "used to harass people across the internet."

Bonta urged xAI (which develops Grok) to "take immediate action to ensure this goes no further" and emphasized California's "zero tolerance for the AI-based creation and dissemination of non-consensual intimate images or of child sexual abuse material."

Impact on Users and Companies

For Users:

  • Immediate protection: The technical blocks should prevent further generation of non-consensual intimate images through Grok
  • Ongoing risk: Images already created and shared remain in circulation, potentially causing lasting harm
  • Legal recourse: Victims may have grounds for civil action under privacy and harassment laws
  • Platform trust: The incident has damaged trust in X's safety measures, particularly among women and parents

For X and xAI:

  • Regulatory exposure: Both UK and US investigations could result in significant fines
  • Reputational damage: The incident has drawn criticism from government officials and safety advocates
  • Operational costs: Implementing and maintaining effective content moderation systems requires substantial investment
  • Legal liability: Potential civil lawsuits from victims whose images were manipulated

For the AI Industry:

  • Precedent setting: This case will establish important regulatory precedents for AI-generated content
  • Development constraints: AI companies may need to build more robust safeguards into their models from the outset
  • Geographic fragmentation: Different jurisdictions may require different technical approaches
  • Increased scrutiny: Regulators worldwide are likely to increase monitoring of AI tools capable of generating intimate content

What Changes Going Forward

For X:

  1. Technical implementation: The platform must maintain and potentially enhance its blocking measures
  2. Monitoring and reporting: Regular compliance reports to Ofcom may be required
  3. User education: Clear communication about what content is prohibited and how to report violations
  4. Third-party audits: Independent verification of safety measures may be mandated

For Regulatory Enforcement:

  1. Ongoing monitoring: Ofcom will likely require periodic updates on X's compliance
  2. Broader investigation: The regulator may examine other platforms with similar AI capabilities
  3. Guidance development: Ofcom may issue specific guidance on AI-generated content under the Online Safety Act

For the AI Development Community:

  1. Safety-by-design: AI models capable of generating images may need built-in content filters
  2. Geographic compliance: Tools may need jurisdiction-specific restrictions
  3. Transparency requirements: Companies may need to disclose how their systems prevent misuse
  4. Industry standards: Potential development of industry-wide best practices for AI safety

The Broader Context

This incident highlights the tension between AI innovation and regulatory compliance. While AI tools like Grok offer creative capabilities, their potential for misuse—particularly in generating non-consensual intimate images—poses significant societal risks. The case also illustrates the growing power of regulators to enforce digital safety standards, even against major technology platforms.

The Online Safety Act represents one of the most comprehensive digital safety frameworks globally, and its enforcement against X will set important precedents. Similarly, California's investigation demonstrates that US states are increasingly willing to take independent action against tech companies, even in the absence of federal AI legislation.

For victims of non-consensual intimate image generation, these investigations offer hope for accountability and potential compensation. For the AI industry, they serve as a warning that technical capabilities must be balanced with robust safety measures and regulatory compliance.

The outcome of Ofcom's investigation will likely influence how AI companies develop and deploy image-generation tools, potentially leading to more restrictive safeguards across the industry. As regulators worldwide grapple with AI governance, this case will be closely watched as a test of whether existing legal frameworks can effectively address AI-specific harms.

Ofcom's investigation into X remains ongoing. The Register will continue to follow developments as they unfold.
