Introduction to the Case Study Collection

The Cyber Trust
Part of the Family Internet Monitoring Project

This collection of case studies explores real-world news stories highlighting how children and young people can be placed at risk through their online activities.

The collection is drawn from real cases investigated by the Cyber Choices team at the National Crime Agency and stories reported in the press.

All of these cases could have been prevented had parents been able to monitor their child's online activity and intervene.




Story: Extremism online 'contributing to violence' as teen jailed over mass shooting plot

Source: STV News

 


Extremist views online may be contributing to young people wanting to carry out violence, a psychologist has warned.
It comes after an Edinburgh teenager who wanted to carry out a mass shooting at his own school was jailed for six years. The STV News article goes on to describe the process by which teenagers can become radicalised to the point of carrying out extreme acts.

For the full story visit the STV News site here.

 



Story: 'What the Online Safety Act does not cover'

Source: BBC News


This story is in two parts. Part 1 is an article by Laura Kuenssberg describing some of the issues arising from the new Online Safety Act that the Act does not deal with. It identifies some important aspects of a child's online life that are not addressed and which could put them at risk. The article can be read here.

 

Part 2 is an analysis of the Act using an AI analysis tool to identify what is not covered, with the resulting list then checked against the Act itself:

The UK's Online Safety Act, while comprehensive, does not cover everything related to online safety, particularly for adults. It primarily focuses on illegal content and activity, but some areas of concern remain, including legal but harmful content for adults, the effectiveness of age verification methods, and the potential impact of AI chatbots. 

Here's a more detailed breakdown:

Areas not covered or with limitations:

Legal but harmful content for adults:
The original draft of the bill included provisions to address content that, while legal, could cause significant harm to adults. However, these provisions were removed due to concerns about free speech and potential overreach. 

Age verification methods:
While the Act mandates age verification for certain content, the specific methods are not dictated by the regulator, and there are ongoing concerns about the effectiveness and potential privacy implications of various approaches, including the use of selfies or bank details.

AI chatbots:
The rapid advancement of AI and the increasing use of chatbots, particularly by children, are not fully addressed by the current legislation. 

Private messaging apps:
The Act does not fully cover private messaging apps, especially those with end-to-end encryption, which can pose risks to children.

Content shared between children:
The Act does not directly regulate content shared between children on platforms like messaging apps, even if that content is harmful.

Harmful but legal online challenges:
The Act does not directly address risky online challenges, stunts, or in-app purchases such as loot boxes, which can lead to harm for some users.

General online abuse:
While the Act addresses hate speech related to protected characteristics, it does not fully cover the widespread online abuse and harassment that many individuals, including sports participants, experience.

Future-proofing against emerging technologies:
Concerns have been raised about whether the Act is adequately future-proofed to address technologies such as VPNs and DNS over HTTPS.

Story: Primary school pupils referred to online crime unit

Source: BBC News

 


Fifty young people, including children of primary school age, have been referred to a specialist policing team that tackles online crime across the East Midlands.

The East Midlands Special Operations Unit (EMSOU) says this was due to the behaviour of some youngsters who have hacked school websites to post rude messages or change their grades.

For the full story click here.

Story: AI is a growing concern for many parents and carers

Source: The Cyber Trust



Our supporter Annie Benzie (a researcher at Cardiff University focusing on disinformation on social media platforms) has put together a useful guide to AI and online safety, setting out the risks associated with this powerful technology. The PDF document can be downloaded here.

AI & Online Safety – Guide for Parents, Guardians, and Teachers

This guide outlines some of the risks associated with generative AI (GAI) and provides resources for parents, guardians, and teachers to best protect children.
What are the risks?

GAI tools can be beneficial for young users. For example, some tools can be used to tailor lessons or homework to the child's educational needs. Using chatbots can also be a great way to practise social interactions before they happen, which may be particularly useful for neurodivergent users. However, using GAI tools comes with risks. Some of these are discussed below.
Exposure to harmful material: AI may be used to create deepfake content (images, video, and even audio), which can be used in cases of bullying. Misinformation is often spread on social media platforms, and algorithmic biases may create an echo chamber in which stereotypes are reinforced and users radicalised. AI companions are often unmoderated and may expose users to age-inappropriate conversations and even advice on topics such as health, sex, and self-harm.
Extortion: Recent reports have identified a rise in the use of AI-generated indecent images in sextortion cases among young people. This involves gathering ordinary images of the victim, usually taken from a social media platform, and using AI to create fake explicit content. Worryingly, given advancements in technology, this content is often very realistic and may be used to extort victims even if they have never shared an indecent image of themselves. The availability of tools that enable users to remove clothes from individuals in an image means such tools are, unfortunately, also used in some cases of peer bullying.
User Dependency: AI companions are designed to encourage high engagement, which may lead to addiction and user dependency. Young users are more vulnerable to this, as they may struggle to understand the differences between interacting with AI and interacting with humans.
Real-Life Instances of AI Harm

Case 1: In February 2025, the Center for Countering Digital Hate (CCDH) published research indicating that YouTube's algorithm recommends eating disorder content to young girls. The study found that 1 in 3 YouTube recommendations for 13-year-olds displayed harmful content related to eating disorders, violating YouTube's own policies by presenting a risk to public health.
Case 2: In February 2024, a 14-year-old boy died after frequently interacting with a lifelike Character AI chatbot designed to simulate human conversations, many of which were reportedly inappropriate. His mother subsequently filed a lawsuit claiming that the chatbot failed to notify anyone of his suicidal tendencies, while also emotionally and sexually exploiting him. The case serves as a tragic reminder of the potential dangers of AI.
What can you do?

Parents and teachers should familiarise themselves with reporting by organisations such as the Internet Watch Foundation (IWF) to better understand the risks associated with AI. An improved understanding of the AI landscape makes it easier to talk with young users about what they encounter.

Regularly talking with young users about the limitations of AI and how it works, and providing an open, judgement-free line of communication to discuss their experiences online, can help them build a healthy relationship with technology.

It is important to educate children on the difference between real-life relationships and those with AI companions. Although AI companions provide comforting responses, they do not have real feelings or understanding.

Limiting screen time and implementing parental controls is a great way to manage the type of content your child can access. Online interactions should be monitored regularly.

Children should be reminded of the support available in schools and how to report issues, including those related to online activity. For example, the Report Remove service introduced by the IWF allows minors to anonymously report images of themselves and have them removed from the internet (linked in the resources section below).
AI Safety Resources

AI & Child Safety Online: A Guide for Educators
AI Safety for Kids: Parental Guide to Online Protection - Secure Children's Network (SCN)
AI chatbots and companions – risks to children and young people | eSafety Commissioner
Me-Myself-AI-Report.pdf
Safeguarding and keeping pupils safe from AI deepfakes
Sharing nudes and semi-nudes: advice for education settings working with children and young people (updated March 2024) - GOV.UK
Report Remove from Childline and IWF
Copyright: The Cyber Trust

Author: Annie Benzie, Researcher at Cardiff University