The AI tool Grok is estimated to have generated approximately 3 million sexualised images, including 23,000 that appear to depict children. The images were created following the launch of a new image editing feature by Elon Musk's company on 29 December 2025.
Research undertaken by the Center for Countering Digital Hate (CCDH) in the US also noted that 29% of the sexualised images of children identified in its sample of 20,000 remained on X as of 15 January.
The research identified approximately 23,000 sexualised images of children and 3 million sexualised images overall, meaning an image was created every 1 minute and 41 seconds between the feature's launch and 15 January.
Elon Musk, owner of the company behind Grok, first denied knowledge of the images, then defended the site by blaming users and invoking free speech. Grok finally implemented technical measures to prevent users from editing images of real people into revealing clothing, and also limited image generation to paid subscribers to add a layer of accountability.
Grok is now under investigation by Ofcom in the UK.
I would have thought that anyone intelligent enough to create an artificial intelligence tool would realise that it could be misused, and would have dealt with that potential before launching the product. We all know that the major tech companies are in constant competition to grab users for their platforms and then monetise that audience.
Read the full story here.
Read the full research report here.