Introduction to the Case Study Collection

The Cyber Trust
Part of The Cyber Trust Family Internet Monitoring Project

NEW: FAMILY MONITORING PROJECT VIDEOS

The Cyber Trust has released three videos in a series covering different products that families can use to monitor activity. To access them, visit the Trust's YouTube channel here.

This collection of case studies explores real-world news stories highlighting how children and young people can be placed at risk through their online activities.

The collection is drawn from real cases investigated by the Cyber Choices team at the National Crime Agency and stories reported in the press.

All of these cases could have been prevented had parents been able to monitor their child's online activity and intervene.




Why fake AI videos of UK urban decline are taking over social media

Source: BBC News

 



This news story is the latest example of how AI can be used to undermine democracy. If you can convince the casual news reader that things are really bad in their country or locality, you can inject dissatisfaction into that community. This is an insidious attempt to manipulate people and build up a negative view of the world around them. When you are feeling that way, you are easy prey to manipulation, and that is the agenda here.

The BBC story describes an AI-generated video showing a crowd of young, mostly black, men wearing balaclavas and padded jackets, slipping down a water slide into a dirty swimming pool with litter bobbing on the surface. The caption describes the scene as a taxpayer-funded water park in Croydon, implying that the area is in steep decline. The video is totally fake.

Our young people need to be able to test such stories by checking other news sources to see if the information can be confirmed as true. Checking takes time and effort, and this is what the publishers of such material bank on, knowing that it may trigger rumours that will spread.

As adults we need to engage with what children are reading online and challenge the narrative that is being promoted. This is one of the important strands of our Family Monitoring Project.

Read the full story here


 

UK Social Media Ban for Under-16s: Implications and Implementation Challenges

Source: Bloomsbury Intelligence and Security Institute


 

The House of Lords voted 261 to 150 on 21 January 2026 to amend the Children's Wellbeing and Schools Bill, requiring platforms to implement effective age assurance measures blocking under-16s within 12 months. This is part of the process by which the UK Government will decide whether new legislation is brought into law later this year.

The report takes a deep look at the various aspects of the issue, many of which readers will already be aware of. It follows on from Australia's decision to ban social media for under-16-year-olds and examines the technical issues and challenges that such bans uncover.

An aspect that most arguments do not cover is the potential downsides of such legislation.

One section, headed 'Digital Preparedness and the Voting Age Paradox', makes very interesting reading. The introductory paragraph reads:

"A blanket ban risks leaving young people unprepared for digital environments they will inevitably encounter. Bans will likely deprive teenagers of opportunities to develop digital literacy skills by navigating online environments gradually and with guidance. Shielding children entirely from social media can delay essential conversations about online risks while hampering their ability to build competencies early."

Such considerations are a vital part of the debate. The last thing we want is a large number of teenagers who lack experience of the online world. They need to develop the skills of communicating online with friends while they are still likely to be receptive to the advice and guidance of adult role models.

Read the full report here.






Police arresting 1,000 paedophile suspects a month across UK

Source: The Guardian

 

The National Crime Agency (NCA) reports a significant rise in child sexual abuse, driven by technology and online forums. While the UK has laws protecting children from accessing unsuitable material and from grooming by paedophiles, the report suggests that things are getting worse.

One of the major issues is that responsibility is largely placed on tech companies to establish the necessary guardrails; this does not mean that everything will be intercepted by their monitoring solutions.

The NCA said the growth in offending across the UK was driven by technology and linked to the radicalisation of offenders in online forums, encouraging people to view images of child sexual abuse by reassuring them it was normal.

Most contact with children happened on mainstream social media platforms, with algorithms pushing paedophilic material to people who have shown a previous interest in it.

This is one further reason that parental monitoring can add a significant layer of support for young people.

Read the full story here.

 


 

Parents in the UAE now have a legal obligation to monitor children’s digital usage, experts say

Source: The National

 


Experts have said the UAE government’s new digital safety law is a step in the right direction to better online safety for children.

The new law establishes a national Child Digital Safety Council, to be governed by the Ministry of Family. It applies to internet service providers and digital platforms, whether operating within the UAE or targeting users there.

One unique feature is that it brings in the family, or anyone responsible for the care of children: this is the first time that the responsibilities of parents and carers are recognised in such legislation.

Read the full story from the UAE here.

 


 

Children bombarded with weight loss drug ads online, says commissioner

Source: BBC News


 

Children are routinely exposed to adverts for weight loss injections and pills online, according to a report by the children's commissioner for England.

It found young people were "bombarded" with ads for products which claimed to change their bodies and appearance, despite this kind of advertising being banned.

Dame Rachel de Souza said the posts were "immensely damaging" to young people's self-esteem and called for a ban on social media advertising to children.

Read the full story here

 


 

Despite new curbs, Elon Musk’s Grok at times produces sexualized images - even when told subjects didn’t consent

Source: Reuters


 

  • Nine Reuters reporters uploaded photos to Elon Musk's artificial intelligence tool, Grok, with instructions to alter them to generate such things as images of naked children and other material we have all heard about in news bulletins.

  • This was after an earlier statement that Grok had dealt with the matter. What is worse, the AI tool was told that the subjects of the image alterations had not consented to their images being used in this way.

  • You would think that telling the tool that consent had not been given would have immediately resulted in a rejection of the request, but it appears not. Watch the Reuters video report here.






Concerns rise over online harm after data reveals scale of sexualised images created

Source: SWL Londoner

 

The AI tool Grok is estimated to have generated approximately 3 million sexualised images, including 23,000 that appear to depict children. The images were created following the launch of a new image-editing feature by Elon Musk's company on 29 December 2025.

Research undertaken by the CCDH (Center for Countering Digital Hate in the US) also noted that 29% of the sexualised images of children identified in their sample of 20,000 remained on X as of 15 January.

The research identified approximately 23,000 sexualised images of children and 3 million sexualised images overall, and found that an image was created every 1 minute and 41 seconds during the period from launch to 15 January.

Elon Musk, owner and creator of Grok, first denied knowledge of the images and then defended the site, initially blaming users and invoking free speech. Grok finally implemented technical measures to prevent users from editing images of real people into revealing clothing. It also limited image-generation capabilities to paid subscribers to add a layer of accountability.

Grok is now under investigation by Ofcom in the UK.

I would have thought that anyone creating an artificial intelligence tool would be intelligent enough to realise that their tool could be misused, and would have dealt with that potential in advance of launching the product. We all know that the major tech companies are in constant competition to grab users for their platforms and then monetise that audience.

Read the full story here.

Read the full research report here.