The AI chatbot Grok is causing an international stir. Within a very short time, the system is alleged to have generated tens of thousands of images depicting child sexual abuse. The allegations are serious and have triggered investigations in several countries. The EU has now also launched an official investigation. The case illustrates how problematic the lack of safety mechanisms in generative AI can be.
Generative AI has long been part of everyday digital life. Text and image generators are used millions of times, often with little attention paid to how strict their safeguards actually are. The Grok case shows what happens when a system's capabilities are not adequately restricted. The latest reports provide concrete figures, timeframes, and responses from authorities and companies.
Grok and the generation of CSAM images
Grok is an AI chatbot from the company xAI. Like many other AI systems, Grok can generate images from text. This feature is available directly in the app, on the web, and via the X platform.
Unlike other AI services, Grok had very lax safeguards. This made it possible to generate semi-nude or sexualized images of real people without their consent. Not only adults but also children were affected.
According to a report by Engadget, Grok allegedly generated around 23,000 CSAM images in just eleven days. CSAM stands for "Child Sexual Abuse Material."
Results from the Center for Countering Digital Hate
The figures come from a study by the Center for Countering Digital Hate. The British non-profit organization first analyzed a random sample of 20,000 Grok images created between December 29 and January 9.
Based on this sample, the results were extrapolated to a total of approximately 4.6 million images that Grok is alleged to have generated during this period. This yields the following estimates:
- Approximately 3 million sexualized images within eleven days
- Of these, approximately 23,000 sexualized images of children
This translates to Grok generating approximately 190 sexualized images per minute during this period. Statistically speaking, one of these images depicted a child every 41 seconds.
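As a rough plausibility check, the sketch below recomputes these rates from the reported totals. The constants simply restate the CCDH extrapolation cited above (3 million sexualized images and 23,000 images of children over eleven days); they are estimates, not independently verified counts.

```python
# Back-of-envelope check of the reported rates, based on the
# CCDH extrapolation cited above (estimates, not exact counts).

DAYS = 11
SEXUALIZED_IMAGES = 3_000_000   # extrapolated sexualized images
CHILD_IMAGES = 23_000           # extrapolated sexualized images of children

minutes = DAYS * 24 * 60        # 15,840 minutes in eleven days
seconds = minutes * 60          # 950,400 seconds

images_per_minute = SEXUALIZED_IMAGES / minutes   # ~189 per minute
seconds_per_child_image = seconds / CHILD_IMAGES  # ~41 seconds

print(f"~{images_per_minute:.0f} sexualized images per minute")
print(f"one image of a child roughly every {seconds_per_child_image:.0f} seconds")
```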
Reactions from companies and authorities
Earlier this month, three US senators called on Apple CEO Tim Cook to temporarily remove both X and Grok from the App Store due to "repulsive content." Apple has not yet complied, and Google has likewise not removed the app from its store.
Two countries have now blocked the app, and investigations are underway in California and the United Kingdom. The case has thus already taken on an international dimension.
EU launches investigation under the Digital Services Act
Now the European Union has also reacted. According to the Financial Times, the EU has opened a formal investigation into xAI. The legal basis is the EU's Digital Services Act (DSA).
The investigation aims to determine whether xAI made sufficient efforts to limit the risks associated with using Grok on X. The central question is whether the company took measures to prevent the distribution of content that could be considered child sexual abuse material.
EU technology chief Henna Virkkunen described non-consensual sexual deepfakes of women and children as a violent and unacceptable form of humiliation.
Possible consequences for xAI
Should the EU conclude that xAI has violated the Digital Services Act, the company faces severe penalties, including a possible fine of up to 6 percent of its global annual revenue.
The Grok case as a warning signal for AI providers
The Grok case illustrates how quickly missing or inadequate safety mechanisms in AI systems can lead to massive legal and ethical problems. The ongoing investigations in several countries and at the EU level show that authorities are increasingly willing to intervene decisively. The outcome of this case could set a precedent for the future handling and regulation of generative AI. (Image: Shutterstock / babar ali 1233)