#Regulation

Ireland Joins Growing List of Regulators Investigating X Over Grok AI-Generated Sexual Content

Security Reporter
4 min read

Ireland's Data Protection Commission has launched a formal GDPR investigation into X over Grok AI's generation of non-consensual sexual images, joining UK, EU, and US authorities already examining the platform's alleged compliance failures.

Ireland's Data Protection Commission (DPC) has opened a formal investigation into X over the use of the platform's Grok artificial intelligence tool to generate non-consensual sexual images of real people, including children.


The Irish investigation joins a growing multinational enforcement effort targeting X's Grok AI operations. The UK Information Commissioner's Office (ICO) launched its own formal investigation on February 3, while the European Commission opened proceedings in January to examine whether X properly assessed risks under the Digital Services Act before deploying Grok.

Ireland's DPC, which serves as the lead European Union privacy regulator for X due to the company's Irish headquarters, said the inquiry will examine whether X Internet Unlimited Company complied with core GDPR obligations, including the principles of lawful processing, data protection by design, and the requirement to conduct data protection impact assessments.

"The DPC has been engaging with XIUC since media reports first emerged a number of weeks ago concerning the alleged ability of X users to prompt the @Grok account on X to generate sexualised images of real people, including children," said Deputy Commissioner Graham Doyle on Tuesday. "As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry which will examine XIUC's compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand."

California Attorney General Rob Bonta and UK online safety regulator Ofcom are also investigating X over non-consensual sexually explicit content generated through Grok. French prosecutors raided X's Paris offices two weeks ago as part of a separate criminal probe into whether Grok generated child sexual abuse material and Holocaust denial content. French authorities have also summoned Elon Musk, former CEO Linda Yaccarino, and several X employees for interviews in April.

As X's lead EU supervisory authority, the DPC carries particular weight: its findings could result in fines of up to €20 million or 4% of a company's worldwide annual turnover, whichever is higher, enforceable across all 27 EU member states and the three non-EU European Economic Area countries (Iceland, Liechtenstein, and Norway). The ICO, the UK's independent data protection regulator, can separately impose fines of up to £17.5 million or 4% of worldwide annual turnover, whichever is higher.
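For a rough sense of how those turnover-based caps scale, the sketch below computes the UK ceiling as the greater of £17.5 million and 4% of worldwide annual turnover; the turnover figure used is purely hypothetical and is not X's actual revenue.

```python
# Rough illustration of the UK GDPR maximum-fine rule cited above:
# the greater of GBP 17.5 million or 4% of worldwide annual turnover.
# The turnover figure below is a hypothetical example only.

FIXED_CAP_GBP = 17_500_000
TURNOVER_RATE = 0.04


def max_fine_gbp(worldwide_annual_turnover_gbp: float) -> float:
    """Return the statutory ceiling: whichever of the two limbs is higher."""
    return max(FIXED_CAP_GBP, TURNOVER_RATE * worldwide_annual_turnover_gbp)


if __name__ == "__main__":
    hypothetical_turnover = 2_000_000_000  # GBP 2 billion, illustrative only
    print(f"Maximum fine: GBP {max_fine_gbp(hypothetical_turnover):,.0f}")
```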

The Growing Regulatory Pressure on AI-Generated Content

The investigation highlights the mounting regulatory scrutiny facing AI platforms that enable the creation of harmful synthetic media. Grok, developed by xAI, has come under fire for what regulators describe as insufficient safeguards against generating explicit content featuring real individuals without their consent.

This case represents one of the first major enforcement actions specifically targeting AI-generated sexual content under GDPR, potentially setting precedents for how European regulators approach synthetic media regulation. The investigation's focus on "data protection by design" principles suggests regulators are examining whether X adequately considered privacy risks during Grok's development and deployment.

The allegations center on Grok's ability to generate realistic sexual images of real people when prompted by users. This capability raises complex questions about platform liability, content moderation, and the technical feasibility of preventing misuse of generative AI systems.
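The kind of safeguard at issue typically sits at the prompt and output layer, in front of the image-generation model. The sketch below shows a deliberately naive prompt-screening gate for illustration only: the blocked-term list and the generate_image stand-in are hypothetical, not X's or xAI's actual implementation, and production systems rely on trained safety classifiers rather than keyword matching, but it conveys where such a control would sit in the request path.

```python
# Illustrative only: a naive prompt-screening gate of the kind regulators
# expect platforms to place in front of an image-generation model.
# Production systems use trained safety classifiers, not keyword lists.

BLOCKED_TERMS = {"nude", "undress", "sexualised", "explicit"}  # hypothetical list


def generate_image(prompt: str) -> str:
    """Hypothetical stand-in for a call to an image-generation model."""
    return f"<image generated for: {prompt}>"


def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def handle_request(prompt: str) -> str:
    """Screen the prompt before it ever reaches the generator."""
    if not is_prompt_allowed(prompt):
        return "Refused: prompt violates the non-consensual imagery policy."
    return generate_image(prompt)


if __name__ == "__main__":
    print(handle_request("a landscape at sunset"))
    print(handle_request("an explicit image of a named celebrity"))
```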

From a GDPR perspective, the investigation will likely examine whether X conducted proper data protection impact assessments before deploying Grok, particularly given the high risks associated with processing biometric and sensitive personal data. The "lawful processing" requirement will be scrutinized to determine if X had valid legal grounds for processing personal data in this manner.

International Coordination in AI Regulation

The coordinated nature of these investigations demonstrates how regulators across jurisdictions are aligning their approaches to AI governance. The European Commission's separate Digital Services Act investigation adds another layer of potential liability, examining whether X properly assessed systemic risks before deploying Grok at scale.

This multi-pronged regulatory approach reflects growing international consensus that AI platforms must implement robust safeguards against generating harmful content, particularly content that could cause real-world harm to individuals through non-consensual sexual imagery or other forms of abuse.

The outcome of these investigations could have far-reaching implications for how AI companies design, deploy, and moderate generative AI systems globally, potentially forcing significant changes to how platforms like X handle user prompts and content generation capabilities.
