The European Union has launched a formal investigation into xAI's Grok chatbot following alarming reports that the AI system generated an estimated 23,000 child sexual abuse material (CSAM) images in just 11 days. The probe, announced under the EU's Digital Services Act (DSA), marks the latest regulatory response to the controversial AI tool that has faced mounting criticism for its lack of content safeguards.
23,000 CSAM Images Generated in 11 Days
The investigation follows a detailed report from the Center for Countering Digital Hate (CCDH), a British nonprofit organization that analyzed Grok's image generation capabilities. According to their findings, Grok produced approximately 3 million sexualized images during an 11-day period from December 29 to January 9, with an estimated 23,000 of these depicting children.
These numbers translate to roughly 190 sexualized images generated per minute, with Grok creating a sexualized image involving children approximately once every 41 seconds during the monitoring period. The CCDH based its estimates on a random sample of 20,000 Grok-generated images from the total 4.6 million images produced during the timeframe.
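For readers who want to check the arithmetic, the short Python sketch below reproduces the reported rates from the figures cited above. The variable names are illustrative, not CCDH's; the only inputs are the numbers in this article.

```python
# Reproduce the reported rates from the figures cited in the article.
TOTAL_IMAGES = 4_600_000        # total Grok-generated images in the period
SAMPLE_SIZE = 20_000            # random sample analyzed by the CCDH
SEXUALIZED_ESTIMATE = 3_000_000 # extrapolated sexualized images
CSAM_ESTIMATE = 23_000          # extrapolated images depicting children
PERIOD_DAYS = 11

minutes = PERIOD_DAYS * 24 * 60   # 15,840 minutes in the window
seconds = minutes * 60            # 950,400 seconds

sexualized_per_minute = SEXUALIZED_ESTIMATE / minutes  # ~189 per minute
seconds_per_csam_image = seconds / CSAM_ESTIMATE       # ~41 seconds apart

# Implied sample proportions behind the extrapolation:
sexualized_share = SEXUALIZED_ESTIMATE / TOTAL_IMAGES  # ~65% of all output
csam_share = CSAM_ESTIMATE / TOTAL_IMAGES              # ~0.5% of all output

print(f"~{sexualized_per_minute:.0f} sexualized images per minute")
print(f"one CSAM image every ~{seconds_per_csam_image:.0f} seconds")
```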
EU Investigation Under Digital Services Act
The European Commission announced the investigation on Monday, focusing on whether xAI took adequate measures to mitigate the risks associated with deploying Grok's tools on the X platform. The probe will specifically examine the proliferation of content that "may amount to child sexual abuse material."
EU tech chief Henna Virkkunen emphasized the severity of the issue, stating: "Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation." Under the DSA, xAI could face fines of up to 6% of its annual global revenue if found in breach of the regulations.
Global Regulatory Response
The EU investigation joins a growing list of regulatory actions against Grok and the X platform. Earlier this month, three US senators called on Apple CEO Tim Cook to temporarily remove both X and Grok from the App Store, citing their "sickening content generation." Despite these calls, neither Apple nor Google has taken action to remove the applications.
Two countries have already blocked access to the app, and investigations are underway in California and the UK. A separate investigation has also been opened in Ireland, focusing on potential privacy violations related to Grok's operations.
Looser Guardrails Than Competitors
Grok has been criticized for having "extremely loose guardrails" compared with most other AI image generators. The system has generated non-consensual semi-nude images of real individuals, including children, raising serious ethical and legal concerns. Grok can generate images directly within the app, through web interfaces, or via the X platform.
This lack of content moderation stands in stark contrast to other major AI image generators, which typically implement more stringent safeguards to prevent the creation of explicit or harmful content. The CCDH's findings suggest that Grok's approach to content moderation has resulted in the mass generation of harmful material at an unprecedented scale.
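To make "guardrails" concrete: a common design is to screen each prompt with a safety classifier before any image is generated and refuse the request outright on a policy hit. The sketch below is purely illustrative; classify_prompt, generate_image, and the category names are hypothetical stand-ins, not any vendor's actual implementation.

```python
# Generic, illustrative sketch of a pre-generation guardrail. The prompt is
# screened before the image model runs; classify_prompt and generate_image
# are hypothetical stand-ins, not any vendor's real API.

BLOCKED_CATEGORIES = {"csam", "sexual_minors", "nonconsensual_sexual"}

def classify_prompt(prompt: str) -> set[str]:
    """Return the policy categories a prompt triggers. A real deployment
    would use a trained safety classifier, not a keyword heuristic."""
    triggers = set()
    if "child" in prompt.lower():  # crude placeholder for a learned model
        triggers.add("sexual_minors")
    return triggers

def generate_image(prompt: str) -> bytes:
    """Stand-in for an image-generation backend."""
    return b"<image bytes>"

def guarded_generate(prompt: str) -> bytes:
    """Refuse the request before any generation if it hits a blocked category."""
    violations = classify_prompt(prompt) & BLOCKED_CATEGORIES
    if violations:
        raise PermissionError(f"request refused: {sorted(violations)}")
    return generate_image(prompt)
```

The design choice that matters here is ordering: the check runs before generation, so disallowed content is never produced, rather than being filtered after the fact.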
Technical Context and Implications
The scale of the reported CSAM generation, an estimated 23,000 images in 11 days, represents a significant challenge for AI safety and content moderation, and shows how quickly an AI system can be misused when deployed without adequate safeguards.
The investigation raises important questions about the responsibilities of AI companies in preventing the generation of illegal content, the effectiveness of current content moderation technologies, and the regulatory frameworks needed to address AI-generated harmful material. As AI image generation becomes more sophisticated and accessible, the balance between creative freedom and content safety remains a critical challenge for the industry.
The outcome of the EU investigation could set important precedents for how AI companies are held accountable for the content their systems generate, particularly when that content involves the sexual exploitation of minors. With potential fines reaching 6% of global revenue, the financial stakes for xAI are substantial, but the broader implications for AI development and deployment could be even more significant.
