Introduction to the Case Study Collection

The Cyber Trust
Part of The Cyber Trust Family Internet Monitoring Project

NEW: FAMILY MONITORING PROJECT VIDEOS

The Cyber Trust has released a series of three videos covering different products that families can use to monitor activity. To access them, visit the Trust's YouTube channel here.

This collection of case studies explores real-world news stories highlighting how children and young people can be placed at risk through their online activities.

The collection is drawn from real cases investigated by the Cyber Choices team at the National Crime Agency and stories reported in the press.

All of these cases could have been prevented had parents been able to monitor their child's online activity and intervene.




Midlands parents asked 'are your kids safe' as police warn of new risk online

Source: Birmingham Live

Birmingham Live reports that police have issued a warning to parents over the online dangers facing their children every day, from cyberbullying to sexual exploitation.

In 2026, AI is only worsening these issues by introducing "content based on their searches," West Mercia Police said.

We are aware of the power of AI and that it can be used in very productive ways by professionals and children, but it also has the power to carry out searches that would take hours if you were to undertake them on your own. As its learning algorithms improve, this will become an even more powerful tool.

The police report advises parents to ensure that safety settings are in place, including parental controls on all devices, browsers and apps, to "filter out inappropriate material". It also advises parents to "set strong privacy settings to make sure personal information is only visible to trusted individuals i.e. “Friends only”". The police also recommend introducing your child to smartphones and other devices gradually, in a monitored way, before giving them fuller access.

Our Cyber Trust Family Internet Monitoring Project aims to support parents in putting such controls in place and making best use of them within the family.

Children being 'failed by tech companies' amid rise in online sex abuse images

Source: ITVX


Efforts to protect children from some of the worst aspects of online abuse and dangerous online content have made some progress through legislation passed by the UK government last year.

Even so, reports continue to appear about the rise in child sexual abuse image crimes logged by police forces in the UK. Such crimes have risen by nearly 10% in the past year, the children's charity NSPCC has said.

The NSPCC said that of the 10,811 crimes where police forces recorded which social media platforms perpetrators used in relation to child sex abuse image crimes, 43%, or a total of 4,615, took place on Snapchat.

Meanwhile, Meta, which owns Facebook, Instagram and WhatsApp, accounted for almost a quarter of all offences (24%), the charity said.

In other reports it is clear that restrictions on access to video pornography, through approaches such as age verification, are still not fully effective. Many of the lesser-known pornography sites have not implemented age verification, and these are not being blocked by service providers.

Read the full story here.


Regulator contacts Meta over workers watching intimate AI glasses videos

Source: BBC News

Digital glasses have been around for some time. Early attempts aimed to make them almost a replacement for other screens, delivering media directly to the eye rather than to a mobile phone or other device. Early AI developments produced ideas such as wearing the glasses in an unfamiliar city, with maps and directions appearing as you moved around. Advances in AI are producing much more complex functions.

This story results from concerns raised by the UK data watchdog, which has approached Meta following a "concerning" report claiming that outsourced workers were able to view sensitive content filmed by the company's AI smart glasses.

Meta said subcontracted workers might sometimes review content, including films and images, captured by its AI smart glasses for the purpose of improving the "experience".

Videos, including of glasses-wearers using the toilet or having sex, are sometimes reviewed by a Kenya-based Meta subcontractor, according to an investigation by the Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten (GP).

You might ask why this would be of interest to anyone, but if you wear such devices at work or while reading documents, all of that data can be reviewed by Meta or by anyone it may sell the data to. At a minimum, privacy issues arise, but the capture of commercially sensitive information, or even national security information, could lead to very dangerous outcomes.

Read the full story here.

Father claims Google's AI product fuelled son's delusional spiral

Source: BBC News

This news story from the BBC begins with the following warning:

 Warning - this story contains distressing content and discussion of suicide

The story concerns the suicide of a 36-year-old man, which his father claims was due to excessive use of Google's AI tool.

The father of a Florida man is suing Google in the first wrongful death case in the US against the tech giant over alleged harms caused by its artificial intelligence (AI) tool Gemini.

The father said that Google's flagship AI product had fuelled a delusional spiral that prompted his 36-year-old son, Jonathan, to kill himself last year.

The claim alleges that Google made design choices that ensured Gemini would "never break character" so that the firm could "maximise engagement through emotional dependency."

Other examples of the potential dangers of AI tools have appeared in many articles in recent months, and The Cyber Trust urges parents to take note of their child's use of AI. AI will pervade gaming, online dialogue, fake news stories and many other aspects of life over the coming years.

Read the full story here.

Instagram boosts privacy and parental control on teen accounts

Source: BBC News

In September 2024, Instagram changed the way the site worked for teenagers by introducing new “teen accounts”. These accounts, which apply up to the age of 18, were initially made available to users in the UK, US, Canada and Australia. Children aged 13 to 15 can only adjust the settings by adding a parent or guardian to their account. See the original story here.

Research undertaken towards the end of 2025 on the effectiveness of the changes indicated that up to 64% of the safety tools are ineffective or missing.

Key Findings on Performance:
  • Mixed Effectiveness: Independent testing by child safety groups found that 30 out of 47 safety tools were "substantially ineffective or no longer exist".
  • Content and Contact Risks: Researchers reported that accounts still accessed harmful content, including suicide/self-harm, and were exposed to sexualized comments from adults.
  • Algorithmic Issues: Studies suggest algorithms still recommend inappropriate content or accounts, contradicting safety promises.
  • Parental Control Usage: While parental supervision features exist, reports indicate they are underutilized, with some parents unaware of how to leverage them.

In February 2026, Instagram announced that parents using its child supervision tools would soon receive alerts if their teen repeatedly searches for suicide or self-harm related terms on the platform. Whilst this sounds like a good step forward, it is questionable whether it will prove effective, and it still leaves a lot of potential dangers to get through. Time will tell.

A more concerning issue for parents must be that if they have to implement this for Instagram, they will then have to set up similar protections for each product as it becomes available, which is a daunting prospect if you add gaming into the app soup of mobile devices.

More worrying still is the view expressed by the UK regulator Ofcom that a major problem is the actual willingness of parents to intervene to keep their children safe online. Sir Nick Clegg, speaking for Meta, said: “One of the things we do find… is that even when we build these controls, parents don’t use them.”

Read the latest news story on this issue here.

There are two options: help parents to implement these controls, through evidence of their effectiveness and through support; or put all their trust in governments regulating the tech industries alone. Whilst the latter will have a significant impact, it ignores the key dynamic of promoting the parent-child relationship to ensure that children's online activities are monitored effectively.

This whole argument underpins The Cyber Trust's current Family Internet Monitoring Project, which can be found here.