Introduction to the Case Study Collection

The Cyber Trust
Part of the Family Internet Monitoring Project

This collection of case studies explores real-world news stories highlighting how children and young people can be placed at risk through their online activities.

The collection is drawn from real cases investigated by the Cyber Choices team at the National Crime Agency and stories reported in the press.

All of these cases could have been prevented had parents been able to monitor their child's online activity and intervene.




Married lawyer gave Scots boy Roblox vouchers in return for explicit videos

Source: Scottish Sun

Gaming and private messaging are familiar to many youngsters, who are attracted by the excitement of online games and, after meeting people there, can end up becoming 'friends' who communicate outside the gaming platform.

This was certainly the situation when a married lawyer engaged with a vulnerable Scottish teenager. The communication between them led to the teen being groomed online and then persuaded to record sexual videos of himself, in return for which he was sent codes to make purchases on the gaming platform.

Read the full story here.

Story: Met special constable found guilty of child rape

Source: BBC News


This story is one where the intentions of an individual, who worked for the Metropolitan Police as a special constable, led inevitably to behaviour that resulted in the rape of a child.

The Crown Court trial was told that the individual met his first victim on the online chat site Omegle in 2018, before meeting in person for the first time at a Christian festival a few months later (arranging those meetings would have required a series of communications). Prosecutors said that the officer was a volunteer steward at the festival and that the victim was wearing a colour-coded child's wristband that was clearly on show.

The officer sexually assaulted the girl in public shortly before her 13th birthday.

This is yet another example of where monitoring could have picked up such behaviour, captured it and provided evidence that might have prevented further crimes.

Read the full story here

Story: Meta’s flirty AI chatbot invited a retiree to New York

Source: Reuters


This article is not a UK-based story, but it contains a shocking insight into the world of the creators of social media and their agenda. There have been many stories in the press about the way social media companies target young people to ensure that they become long-term users of their services.

The story here is about a Meta AI avatar on Facebook's Messenger service engaging in 'romantic' talk, which resulted in a man with mental health issues being lured to meet 'her'. In his rush to catch a train he fell and hit his head, resulting in his death.

Even more worrying, the report contains information from an internal Meta policy document and discussions with people involved in the training of these chatbots. These services are available to anyone over 13, although it is well known that younger children have access to Facebook and other Meta products.

Reuters reports that Meta's policy document included the following statement:

“It is acceptable to engage a child in conversations that are romantic or sensual,” 

The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.” 

Read the full story here


Story: Deepfake nudity and sexual abuse becoming a major concern

Source: BBC, The Sun, Children's Commissioner


Deepfake – Audio-visual media that has been generated or manipulated using AI, which misrepresents someone or something.

Another example of deepfakes used in political misinformation attacks

The use of AI to alter existing images or video material has become a rapidly growing area of concern for law enforcement, regulators, parents and young people. To some young people it may seem clever to create what they regard as a funny image of a friend and pass it on to their contacts online.

Once these media items are published online they can take on a life of their own and cause real pain and anguish for the children and families who become the target of more damaging deepfake material.

There are many news stories about this issue, as a search on Google will demonstrate. This one from the BBC in 2024: "I was deepfaked by my best friend". Another report, published on 14th August 2025 in the Sun newspaper and entitled "My son, 16, killed himself over terrifying realistic deepfake... as sick 'nudifying' apps sweep YOUR child's classroom", makes it clear just how dangerous this use of AI technology is. (This report is accessible to Sun subscribers only.)

Using AI tools to create deepfake media isn't illegal, but the distribution of it is.

More information is provided by the UK Police here

A major report from the Children's Commissioner in the UK, "One day this could happen to me": Children, nudification tools and sexually explicit deepfakes, sets out the issues and provides valuable information for anyone dealing with children.


Story: Three teenagers and a woman arrested after cyber attacks on M&S, Co-op and Harrods

Source: ITVX News


Three teenagers and a woman have been arrested in the UK as part of an investigation into cyber attacks targeting Marks & Spencer, Co-op and Harrods. One of the three teenagers is a 17-year-old from the UK who was working with other teenagers in two other countries.

These young people appear to have been delving into the dark world of hacking for some years without being detected, and this has led them into a life of crime.

Read the full story here


Story: FBI and NSPCC alarmed at ‘shocking’ rise in online sextortion of children

Source: The Guardian


The Guardian report explores the rapid growth in grooming attacks on young people. It reports that Snapchat logged about 20,000 cases last year of adults grooming children online, more than all other social media platforms combined.

Law enforcement agencies, including the FBI and the UK’s National Crime Agency (NCA), have grown increasingly alarmed about the growing threat from sextortion and other crimes targeting teenagers.

Read the full story here

Note: Sextortion can refer to a variety of offences committed online. It is most often used to describe online blackmail, where criminals threaten to release sexual/indecent images of you, unless you pay money or carry out their demands.


Story: FBI swoop on schoolboy Scots hacker accused of breaking into their top-secret computer system

Source: Fortune.com


FBI agents sat in as Police Scotland officers interviewed the 15-year-old, who could face extradition and imprisonment in the United States.

“Following a search of a property in the Glasgow area on Tuesday, February 16, a 15-year-old male was arrested in connection with alleged offenses under the Computer Misuse Act 1990,” a spokesman for Police Scotland told Fortune on Friday. “He has since been released from custody.”

The Computer Misuse Act has three sections, including sentencing guidelines for intentionally obtaining unauthorized access to computers and modifying their settings. It acts as the U.K.’s central hacker-fighting law and can send people to jail for several years depending on the number of charges they face. 

The fact that this 15-year-old is accused of accessing FBI systems suggests that he had probably been building up his knowledge of how to hack systems for some time.

For the full story click here

Story: Girl, 3, abused online after dog attack

Source: BBC News



A man whose three-year-old daughter was attacked by a dog has criticised the abuse she has since received online.


The girl, who needed reconstructive surgery to her arm, was later targeted by "vicious" abuse on social media.
 
Read the full story here.

What screen time does to children's brains is more complicated than it seems

Source: BBC News

The issue of how much time children spend online has been a long-held concern for many of us. Children seem to vanish into their online activities and need to be nudged to communicate with the world around them, yet the research into the issue is less clear about its long-term impact.

Zoe Kleinman, BBC Technology editor, looks at the latest research but keeps her feet firmly fixed in the reality of the sort of behaviour her own child exhibits when denied access to the technology.

Many parents manage screen time with the tools provided on their devices, but many do not know how to set such controls up.

Read her article here.  

Story: Extremism online 'contributing to violence' as teen jailed over mass shooting plot

Source: STV News

Extremist views online may be contributing to young people wanting to carry out violence, a psychologist has warned.
It comes after an Edinburgh teenager who wanted to carry out a mass shooting at his own school was jailed for six years. The STV News article goes on to describe the possible sequence teenagers can go through that leaves them radicalised enough to undertake extreme acts.

For the full story visit the STV News site here.

Story: 'What the Online Safety Act does not cover'

Source: BBC News

This story is in two parts. Part 1 is an article by Laura Kuenssberg. The BBC news item describes some of the issues arising from the new Online Safety Act that the Act does not deal with. It identifies some fairly important aspects of a child's online life that are not addressed and which could put them at risk. The article can be read here.

Part 2 is an analysis of the Act, using an AI analysis tool to identify what is not covered and then checking the list against the Act itself:

The UK's Online Safety Act, while comprehensive, does not cover everything related to online safety, particularly for adults. It primarily focuses on illegal content and activity, but some areas of concern remain, including legal but harmful content for adults, the effectiveness of age verification methods, and the potential impact of AI chatbots. 

Here's a more detailed breakdown:

Areas not covered or with limitations:

Legal but harmful content for adults:
The original draft of the bill included provisions to address content that, while legal, could cause significant harm to adults. However, these provisions were removed due to concerns about free speech and potential overreach. 

Age verification methods:
While the act mandates age verification for certain content, the specific methods are not dictated by the regulator, and there are ongoing concerns about the effectiveness and potential privacy implications of various approaches, including the use of selfies or bank details. 

AI chatbots:
The rapid advancement of AI and the increasing use of chatbots, particularly by children, are not fully addressed by the current legislation. 

Private messaging apps:
The act doesn't fully cover private messaging apps, especially those with end-to-end encryption, which can pose risks to children. 

Content shared between children:
The act doesn't directly regulate content shared between children on platforms like messaging apps, even if that content is harmful. 

Harmful but legal online challenges:
The act does not directly address risky online challenges, stunts, or in-app purchases, like loot boxes, that can lead to harm for some users. 

General online abuse:
While the act addresses hate speech related to protected characteristics, it doesn't fully cover the widespread online abuse and harassment that many individuals, including sports participants, experience. 

Future-proofing against emerging technologies:
Concerns have been raised about whether the act is adequately future-proofed to address technologies like VPNs and DNS over HTTPS. 

Story: Primary school pupils referred to online crime unit.

Source: BBC News

Fifty young people, including children of primary school age, have been referred to a specialist policing team that tackles online crime across the East Midlands.

The East Midlands Special Operations Unit (EMSOU) says this was due to the behaviour of some youngsters, who have hacked school websites to post rude messages or change their grades.

For the full story click here

Story: AI is a growing concern for many parents and carers.

Source: The Cyber Trust

Our supporter Annie Benzie, a researcher at Cardiff University focusing on disinformation on social media platforms, has put together a useful guide to AI and online safety, setting out the risks associated with this powerful technology. The PDF document can be downloaded here.

AI & Online Safety – Guide for Parents, Guardians, and Teachers

This guide outlines some of the risks associated with generative AI (GAI) and provides resources for parents, guardians, and teachers to best protect children.

What are the risks?

GAI tools can be beneficial for young users. For example, some tools can be used to tailor lessons or homework to the child's educational needs. Using chatbots can also be a great way to practise social interactions before they happen, which may be particularly useful for neurodivergent users.

However, using GAI tools comes with risks. Some of these are discussed below.
Exposure to harmful material: AI may be used to create deepfake content (images, video, and even audio), which can be used in cases of bullying. Misinformation is often spread on social media platforms, and algorithmic biases may create an echo chamber where stereotypes are reinforced and users radicalised. Software such as AI companions is often unmoderated and may expose users to age-inappropriate conversations and even advice on topics such as health, sex, and self-harm.

Extortion: Recent reports have identified a rise in the use of AI-generated indecent images in sextortion cases among young people. This involves gathering ordinary images of the victim, usually taken from a social media platform, and using AI to create fake explicit content. Worryingly, given advancements in technology, this content is often very realistic and may be used to extort victims even if they have never shared an indecent image of themselves. The availability of tools which enable users to remove clothes from individuals in an image means they are unfortunately used in some cases of peer bullying.

User Dependency: Software such as AI companions is designed to encourage high engagement, which may lead to addiction and user dependency. Young users are more vulnerable to this, as they may struggle to understand the differences between interacting with AI and humans.
Real-Life Instances of AI Harm

Case 1: In February 2025, the Center for Countering Digital Hate (CCDH) published research indicating that YouTube's algorithm recommends eating disorder content to young girls. The study found that 1 in 3 YouTube recommendations for 13-year-olds displayed harmful content related to eating disorders, violating YouTube's own policies by presenting a risk to public health.

Case 2: In February 2024, a mother filed a lawsuit against Character AI, following the death of her 14-year-old son. The teen had been frequently interacting with a lifelike chatbot, designed to simulate human conversations, many of which were reportedly inappropriate. The lawsuit claims that the chatbot failed to notify anyone of his suicidal tendencies, while also emotionally and sexually exploiting him. The case serves as a tragic reminder of the potential dangers of AI.
What can you do?

Parents and teachers should familiarise themselves with reporting conducted by organisations such as the Internet Watch Foundation (IWF) to better understand the risks associated with AI. Improved understanding of the AI landscape allows for better communication with young users.

Regularly talking with young users about the limitations of AI and how it works, and providing an open line for safe, judgement-free communication to discuss their experiences online, can allow them to build a healthy relationship with technology.

It is important to educate children on the difference between real-life relationships and those with AI companions. Although companions provide comforting responses, they do not have real feelings or understanding.

Limiting screen time and implementing controls is a great way to manage the type of content your child can access. Online interactions should be monitored regularly.

Children should be reminded of the support available in schools and how to report issues, including those related to online activity. For example, the Report Remove service introduced by the IWF allows minors to anonymously report images of themselves and have them removed from the internet (linked in the resources section below).
AI Safety Resources

AI & Child Safety Online: A Guide for Educators
AI Safety for Kids: Parental Guide to Online Protection - Secure Children's Network (SCN)
AI chatbots and companions – risks to children and young people | eSafety Commissioner
Me-Myself-AI-Report.pdf
Safeguarding and keeping pupils safe from AI deepfakes
Sharing nudes and semi-nudes: advice for education settings working with children and young people (updated March 2024) - GOV.UK
Report Remove from Childline and IWF
Copyright: The Cyber Trust

Author: Annie Benzie - Researcher at Cardiff University