Ireland joins global crackdown on X's Grok AI over non-consensual intimate image generation
#Regulation

Privacy Reporter

Ireland's DPC launches GDPR investigation into X's Grok AI chatbot for creating non-consensual intimate images, joining regulators worldwide demanding accountability for harmful AI-generated content.

Ireland's Data Protection Commission (DPC) has launched a formal investigation into X (formerly Twitter) over its Grok AI chatbot's ability to generate non-consensual intimate images, marking the latest regulatory action against Elon Musk's social media platform for harmful AI-generated content.

Investigation launched under GDPR framework

The Irish regulator confirmed it is opening the probe under section 110 of the Data Protection Act 2018, focusing on Grok AI's image generation capabilities that allowed users to create nude or nearly nude images from photographs of real people. The investigation will examine whether X violated multiple GDPR provisions, including Articles 5, 6, 25, and 35, which cover data processing principles, lawfulness requirements, data protection by design and default, and data protection impact assessments.

"The inquiry concerns the apparent creation, and publication on the X platform, of potentially harmful, non-consensual intimate and/or sexualized images, containing or otherwise involving the processing of personal data of EU/EEA data subjects, including children, using generative artificial intelligence functionality," the DPC stated.

Global regulatory pressure mounts

The Irish investigation joins a growing list of regulatory actions against X across multiple jurisdictions. The European Commission, the UK's Information Commissioner's Office (ICO) and Ofcom, and authorities in Australia, Canada, India, Indonesia, Malaysia, and France have all opened investigations into the platform's AI image generation capabilities.

France's investigation, which began in January, has been particularly broad, with authorities widening their scope as more issues emerge. The UK's dual regulatory approach sees the ICO examining data protection compliance while Ofcom investigates under the Online Safety Act, demonstrating how different regulatory frameworks can apply to the same AI functionality.

X's response and technical restrictions

X's safety team responded to initial regulatory pressure in January by implementing technological measures to prevent Grok from editing images of real people in revealing clothing. The company subsequently revoked Grok's image-generation capabilities for free users, restricting the feature to paid subscribers before ultimately disabling it for all users.

However, regulators appear unconvinced by these measures. The DPC emphasized it had been engaging with X since media reports first emerged several weeks ago about users prompting Grok to generate sexualized images of real people, including children.

The multiple investigations create significant legal challenges for X, as the company must defend itself against different regulatory frameworks simultaneously. While both the DPC and the European Commission operate under EU law, they are investigating the same activity through different lenses: the DPC under GDPR data protection rules and the Commission under the Digital Services Act.

This regulatory fragmentation means X faces overlapping investigations that could result in substantial fines and mandatory compliance measures. The company's legal team will need to navigate varying national laws while maintaining a coherent defense strategy across jurisdictions.

Implications for AI governance

The investigation highlights growing concerns about generative AI tools and their potential for misuse. Non-consensual intimate image generation represents one of the most harmful applications of AI technology, particularly when it involves images of children.

Regulators are increasingly focusing on the design and deployment of AI systems, examining whether companies have implemented adequate safeguards and impact assessments before releasing powerful generative tools to the public. The DPC's investigation will likely examine whether X conducted proper data protection impact assessments before enabling Grok's image generation capabilities.

The case also raises questions about platform liability for AI-generated content. As social media platforms integrate more AI features, regulators are grappling with how to apply existing data protection and online safety laws to these new technologies.

What's at stake

For X, the investigation could result in significant GDPR fines of up to 4% of global annual turnover for serious violations. Beyond financial penalties, the company faces potential mandatory changes to its AI systems and increased regulatory scrutiny of future product launches.

For users, particularly in the EU, the investigation represents an important test of whether data protection authorities will effectively police AI systems that process personal data in harmful ways. The outcome could set precedents for how other platforms deploy generative AI tools.

For the broader tech industry, the investigation signals that regulators are taking a serious approach to AI governance, particularly when it comes to protecting vulnerable individuals from harmful content generation. Companies developing similar AI capabilities will be watching closely to understand the compliance requirements and potential liabilities.

The DPC's investigation is expected to be "large-scale," examining X's compliance with fundamental GDPR obligations. As the lead supervisory authority for X across the EU/EEA, the Irish regulator's findings could have far-reaching implications for how social media platforms implement and govern AI-powered features.
