Introduction to the Case Study Collection

The Cyber Trust
Part of The Cyber Trust Family Internet Monitoring Project

NEW: FAMILY MONITORING PROJECT VIDEOS

The Cyber Trust has released three videos in a series covering different products that families can use to monitor activity. To access them, visit the Trust's YouTube channel here.

This collection of case studies explores real-world news stories highlighting how children and young people can be placed at risk through their online activities.

The collection is drawn from real cases investigated by the Cyber Choices team at the National Crime Agency and stories reported in the press.

All of these cases could have been prevented had parents been able to monitor their child's online activity and intervene.




Screen time for under-fives should be limited to one hour a day, parents told

Source: BBC News

 

Children under the age of five should be limited to one hour of screen time a day, while under-twos should not be watching screens alone, new government guidance says.

This is the headline from the BBC Family news section, published on 26th March 2026.

It advises parents to steer clear of fast-paced videos and use screens together where possible. The guidance also suggests "screen swaps" - taking screens away to read stories together or playing simple games at mealtimes.

We all know how 'useful' it can be to have a screen handy to occupy a young child when we have other things to do, but there are clearly downsides to unlimited screen time. The use of digital entertainment is a wholly passive process. Colouring, drawing, making things, playing games with each other and the many other activities that build a rounded personality are missing if a child's diet is limited to screens and gaming.

Parents need to create a digital plan for their families. We know that young children live in a hi-tech world and need to become familiar with it, but this doesn't mean total uncontrolled access to devices. There are screen time controls for all devices which remove the need for direct intervention and instead create a timetable of access that children soon become familiar with. Make the restrictions the norm and they will become part of their daily routine.
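For readers who like to see the timetable idea concretely, it can be sketched in a few lines of Python. This is a hypothetical illustration of how an access schedule works, not a real parental-control product; the days, time windows and function name are invented for the example.

```python
from datetime import time

# Hypothetical daily screen-time timetable: allowed windows per day.
# Built-in parental controls on phones and tablets apply this kind of
# schedule for you; this sketch only shows the underlying idea.
TIMETABLE = {
    "saturday": [(time(9, 0), time(10, 0)), (time(16, 0), time(17, 0))],
    "sunday":   [(time(9, 0), time(10, 0))],
    "monday":   [(time(16, 30), time(17, 30))],  # one hour after school
}

def screen_allowed(day: str, now: time) -> bool:
    """Return True if 'now' falls inside an allowed window for 'day'."""
    for start, end in TIMETABLE.get(day.lower(), []):
        if start <= now <= end:
            return True
    return False

print(screen_allowed("saturday", time(9, 30)))  # True: inside the morning window
print(screen_allowed("monday", time(20, 0)))    # False: outside any window
```

In practice, built-in tools such as Apple's Screen Time or Google's Family Link apply exactly this kind of schedule without any coding, which is why making the timetable the norm early on is so effective.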

Read the full story here


Meta and YouTube designed addictive products that harmed young people, jury finds

Source: The Guardian

 

 

This was a significant outcome in the US trial of Meta and YouTube brought by an unnamed plaintiff using the pseudonym KGM. KGM claimed that she had become addicted to YouTube at age six and Instagram at nine, with deleterious effects on her wellbeing. By age 10, she said, she had become depressed and was self-harming as a result. Her social media use allegedly strained her relationships with her family and at school. When she was 13, KGM’s therapist diagnosed her with body dysmorphic disorder and social phobia, which KGM attributes to her use of Instagram and YouTube.

Meta and YouTube have been found liable for deliberately designing addictive products that hooked the young user and led to her being harmed, a jury ruled on Wednesday. Jurors found the tech companies both negligent and liable for failing to provide adequate warnings about the potential dangers of their products.

The jury awarded the plaintiff in the case compensatory damages of $3m, with Meta to pay 70% and YouTube the remainder. Deliberations over punitive damages, also awarded, will begin later on Wednesday.

 Read the full story here

Six families sue TikTok after their kids die trying viral ‘choking challenge’

Source: The Independent

  

 
 
This story is incredibly sad. Young people were given a challenge that might sound like the sort of thing any kid would try: 'choke yourself for as long as you can'. Six families have lost children this way, as they blacked out and never recovered.
 
Six families have sued TikTok, which hosted the 'Blackout Challenge' on its platform. One family from the USA and five from the UK are leading the legal action. The mother of one UK teenager has led a campaign to force social media platforms to release data about a child's social media activity in the event of the child's death, but access to that data has been denied by the companies, citing privacy laws that prevent them releasing the information. They have also stated that the data is deleted after a short period and is no longer available to clarify just what the children were actually watching.
 
Ellen Roome has led the campaign in the UK for 'Jools' Law', named after her son, which would require social media companies to retain data for a set period so that it can be accessed by parents and law enforcement.
 
This is one more reason why monitoring what your child is accessing, with their agreement, is so important. Adults can make judgements about such challenges that might elude young people, and building a digital trust relationship through monitoring could have saved some of these families from such awful consequences.

 
Read the full story here 
 
 

Midlands parents asked 'are your kids safe' as police warn of new risk online

Source: Birmingham Live

 

 

Birmingham Live reports that police have issued a warning to parents over the online dangers facing their children every day, from cyberbullying to sexual exploitation.

And here in 2026, AI is only worsening the issues by introducing "content based on their searches," West Mercia Police said.

We are aware of the power of AI and that it can be used in very productive ways by professionals and children alike, but it also has the power to carry out searches that would take hours if you undertook them on your own. As its learning algorithms improve, it will become an even more powerful tool.

The police report advises parents to ensure that safety settings are in place, including parental controls on all devices, browsers and apps to "filter out inappropriate material". It also advises setting strong privacy settings to make sure personal information is only visible to trusted individuals, i.e. "Friends only". The police also recommend introducing your child to smartphones and other devices gradually, in a monitored way, before giving them fuller access.

Our Cybertrust Internet Monitoring Project aims to support parents to put such controls in place and to make best use of them within the family.  

  



 

Children being 'failed by tech companies' amid rise in online sex abuse images

Source: ITVX

 


Efforts to protect children from some of the worst aspects of online abuse and dangerous online content have made some progress through legislation passed by the UK government last year.

Even so, reports continue to appear about the continuing rise in child sex abuse image crimes logged by police forces in the UK. Such reports have risen by nearly 10% in the past year, the children's charity NSPCC has said.

The NSPCC said that of the 10,811 crimes where police forces recorded which social media platforms perpetrators used in relation to child sex abuse image crimes, 43%, or a total of 4,615, took place on Snapchat.

Meanwhile, Meta, which owns Facebook, Instagram, and WhatsApp accounted for almost a quarter of all offences (24%), the charity said. 

Other reports make it clear that restrictions on access to video pornography, through approaches such as age verification, are still not fully effective. Many of the lesser-known pornography sites have not implemented age verification, and these are not being blocked by service providers.

Read the full story here.  

 


Regulator contacts Meta over workers watching intimate AI glasses videos

Source: BBC News

 

Digital glasses have been around for some time. Early attempts tried to make them almost a replacement for other devices, delivering media directly to your eye rather than to a mobile phone screen. Early AI developments produced ideas such as wearing the glasses in an unfamiliar city, with maps and directions appearing as you moved around. Advances in AI are producing much more complex functions.

This story results from concerns raised by the UK data watchdog, which has approached Meta following a "concerning" report claiming that outsourced workers were able to view sensitive content filmed by the company's AI smart glasses.

Meta said subcontracted workers might sometimes review content, including films and images, captured by its AI smart glasses for the purpose of improving the "experience".

Videos, including of glasses-wearers using the toilet or having sex, are sometimes reviewed by a Kenya-based Meta subcontractor, according to an investigation by the Swedish newspapers Svenska Dagbladet (SvD) and Goteborgs-Posten (GP).

You might ask why this would be of any interest to anyone, but if you wear such devices at work or while reading documents, all of that data can be reviewed by Meta or by those it may sell the data to. At a minimum, privacy issues arise, but the capture of commercially sensitive or even national security information could lead to very dangerous outcomes.

 Read the full story here

Father claims Google's AI product fuelled son's delusional spiral

Source: BBC News

This news story from the BBC begins with the following warning:

 Warning - this story contains distressing content and discussion of suicide

The story concerns the suicide of a 36-year-old man, which his father claims was due to excessive use of Google's AI tool.

The father of a Florida man is suing Google in the first wrongful death case in the US against the tech giant over alleged harms caused by its artificial intelligence (AI) tool Gemini.

The father said that Google's flagship AI product had fuelled a delusional spiral that prompted his 36-year-old son, Jonathan, to kill himself last year.

The claim alleges that Google made design choices that ensured Gemini would "never break character" so that the firm could "maximise engagement through emotional dependency."

Other examples of the potential dangers of AI tools have appeared in many articles in recent months, and The Cyber Trust would urge parents to take note of their child's use of AI. AI will pervade gaming, online dialogue, fake news stories and many other aspects of life over the coming years.

Read the full story here

 

Instagram boosts privacy and parental control on teen accounts

Source: BBC News

 

 

In September 2024 Instagram changed the way the site worked for teenagers by introducing new “teen accounts”. These accounts, which apply up to the age of 18, were initially made available to users in the UK, US, Canada and Australia. Children aged 13 to 15 can only adjust the settings by adding a parent or guardian to their account. See the original story here.

Research undertaken on the effectiveness of the changes towards the end of 2025 indicated that up to 64% of safety tools are ineffective or missing.

Key Findings on Performance:
  • Mixed Effectiveness: Independent testing by child safety groups found that 30 out of 47 safety tools were "substantially ineffective or no longer exist".
  • Content and Contact Risks: Researchers reported that accounts still accessed harmful content, including suicide/self-harm, and were exposed to sexualized comments from adults.
  • Algorithmic Issues: Studies suggest algorithms still recommend inappropriate content or accounts, contradicting safety promises.
  • Parental Control Usage: While parental supervision features exist, reports indicate they are underutilised, with some parents unaware of how to use them.

In February 2026 Instagram announced that parents using its child supervision tools would soon receive alerts if their teen repeatedly searches for suicide or self-harm related terms on the platform. Whilst this sounds like a good step forward, it is questionable whether it will prove effective, and it still leaves a lot of potential dangers to get through. Time will tell.

A more concerning issue for parents is that if they have to implement this for Instagram, they will then have to set up similar protections for each product as it becomes available, a daunting prospect if you add gaming into the soup of mobile device apps.

More worrying still is the view expressed by the UK regulator Ofcom that a major problem is the actual willingness of parents to intervene to keep their children safe online. Sir Nick Clegg, speaking for Meta, said: “One of the things we do find… is that even when we build these controls, parents don’t use them.”

Read the latest news story on this issue here

There are two options: help parents to implement these controls through evidence of their effectiveness and ongoing support, or put all our trust in governments regulating the tech industry alone. Whilst the latter will have a significant impact, it ignores the key dynamic of promoting the parent-child relationship to ensure that children's online activities are monitored effectively.

This whole argument underpins The CyberTrust's Internet Family Monitoring Project, which can be found here.


 

Why fake AI videos of UK urban decline are taking over social media

Source: BBC News

 



This news story is the latest example of how AI can be used to undermine democracy. If you can convince the casual news reader that things are really bad in their country or locality, you can inject dissatisfaction into that community. This is an insidious attempt to manipulate people and build up a negative view of the world around them. When you are feeling that way you are easy prey to manipulation, and that is the agenda here.

The BBC story refers to an AI-generated video showing a crowd of young, mostly black, men wearing balaclavas and padded jackets, slipping down a water slide into a dirty swimming pool with litter bobbing on the surface. The caption describes the scene as a taxpayer-funded water park in Croydon. The video implies that the area is in steep decline, and it is totally fake.

Our young people need to be able to test such stories by checking other news sources to see if the information can be confirmed. Checking takes time and effort, which is exactly what the publishers of such material bank on, knowing it might trigger rumours that will spread.

As adults we need to engage with what children are reading online and challenge the narrative that is being promoted.  This is one of the important strands of our Family Monitoring Project.

Read the full story here


 

UK Social Media Ban for Under-16s: Implications and Implementation Challenges

Source: Bloomsbury Intelligence and Security Institute


 

The House of Lords voted 261 to 150 on 21 January 2026 to amend the Children's Wellbeing and Schools Bill, requiring platforms to implement effective age assurance measures blocking under-16s within 12 months. This is part of the UK government's process of deciding whether new legislation will be brought into law later this year.

The report takes a deep look at aspects of the issue that many will already be aware of. It follows on from Australia's decision to ban social media for under-16-year-olds and examines the technical issues and challenges that such bans uncover.

An aspect that most arguments do not cover is the potential downsides of such legislation.

One section, headed 'Digital Preparedness and the Voting Age Paradox', makes very interesting reading. The introductory paragraph reads:

"A blanket ban risks leaving young people unprepared for digital environments they will inevitably encounter. Bans will likely deprive teenagers of opportunities to develop digital literacy skills by navigating online environments gradually and with guidance. Shielding children entirely from social media can delay essential conversations about online risks while hampering their ability to build competencies early."

Such considerations are a vital part of the debate. The last thing we want is a large number of teenagers who lack experience of the online world. They need to develop the skills of communicating online with friends whilst they are still likely to be receptive to the advice and guidance of adult role models.

Read the full report here.






Police arresting 1,000 paedophile suspects a month across UK

Source: The Guardian

 

The National Crime Agency (NCA) reports a significant rise in child sexual abuse, driven by technology and online forums. Despite UK laws protecting children from accessing unsuitable material and from grooming by paedophiles, the report suggests that things are getting worse.

One of the major issues is that most of the obligations are placed on tech companies to establish the necessary guardrails, but this does not mean that everything will be intercepted by their monitoring solutions.

The NCA said the growth in offending across the UK was driven by technology and linked to the radicalisation of offenders in online forums, encouraging people to view images of child sexual abuse by reassuring them it was normal.

Most contact with children happened on mainstream social media platforms, with algorithms pushing paedophilic material to people who have shown a previous interest in it.

This is one further reason why parental monitoring can add a significant layer of support for young people.

Read the full story  here.

 


 

Parents in the UAE now have a legal obligation to monitor children’s digital usage, experts say

Source: The National

 


Experts have said the UAE government’s new digital safety law is a step in the right direction to better online safety for children.

The new law establishes a national Child Digital Safety Council, to be governed by the Ministry of Family, and applies to internet service providers and digital platforms, whether operating within or targeting users in the UAE.

At least one unique feature is that it brings in the family, or anyone responsible for the care of children. This is the first time that parents' and carers' responsibilities have been recognised in such legislation.

Read the full story from the UAE here.

 


 

Children bombarded with weight loss drug ads online, says commissioner

Source: BBC News


 

Children are routinely exposed to adverts for weight loss injections and pills online, according to a report by the children's commissioner for England.

It found young people were "bombarded" with ads for products which claimed to change their bodies and appearance, despite this kind of advertising being banned.

Dame Rachel de Souza said the posts were "immensely damaging" to young people's self-esteem and called for a ban on social media advertising to children.

Read the full story here

 


 

Despite new curbs, Elon Musk’s Grok at times produces sexualized images - even when told subjects didn’t consent

Source: Reuters


 

  • Nine Reuters reporters uploaded photos to Elon Musk's artificial intelligence tool, Grok, with instructions to alter them to generate such things as images of naked children and other material we have all heard about in news bulletins.

  • This was after Grok's earlier statement that it had dealt with the matter. What is worse is that the AI tool was told that the subjects of the image alterations had not consented to their images being used in this way.

  • You would think that telling the tool that consent had not been given would immediately result in the request being rejected, but it appears not. Watch the Reuters video report here.






Concerns rise over online harm after data reveals scale of sexualised images created

Source: SWL Londoner

 

The AI tool Grok is estimated to have generated approximately 3 million sexualised images, including 23,000 that appear to depict children. The images were created following the launch of a new image editing feature by Elon Musk's company on 29th December 2025.

Research undertaken by the CCDH (Centre for Countering Digital Hate in the US) also noted that 29% of the sexualised images of children identified in its sample of 20,000 remained on X as of 15 January.

The research identified approximately 23,000 sexualised images of children and 3 million sexualised images overall, with an image created every 1 minute and 41 seconds between the launch and 15th January.

Elon Musk, owner and creator of Grok, first denied knowledge of the images, then defended the site by blaming users and invoking free speech. Grok finally implemented technical measures to prevent users from editing images of real people into revealing clothing, and limited image generation to paid subscribers to add a layer of accountability.

Grok is now under investigation by OFCOM in the UK.

I would have thought that anyone creating an artificial intelligence tool would be intelligent enough to realise that it could be misused, and would have dealt with that potential before launching the product. We all know that the major tech companies are in constant competition to grab users for their platforms and then monetise that audience.

Read the full story here.

Read the full research report here.

 


 

Tech companies have treated children as data to be mined for far too long - our plan ensures this will never happen again

Source: LBC News


This article by Munira Wilson, the Liberal Democrat Spokesperson for Education, Children and Families, sets out the party's political views, but it also describes their approach to issues surrounding social media, online content and the age-appropriateness of online material.

We may or may not agree with their approach but clearly the issue of online safety of children is unlikely to fade away. 

The headline does raise a major issue. The data mining undertaken by social media companies is potentially as dangerous as the content itself, if not more so. If these companies have their own political agendas (we know they do), and they recognise the importance of capturing users while they are young, we could see the emergence of an Orwellian Ministry of Truth. They become influencers rather than supporters, and it is their agenda they are pushing, whatever that might be.

Manipulating young minds through the use of responsive algorithms is a crime worthy of the name. How we deal with such threats is important. We need our young people to grow and become fully functional in their technologically rich communities. Banning access, or preventing them from accessing the technology, may well make them more vulnerable in the long run.

Open, non-partisan political debate about these issues is vital if we are to steer our children towards understanding the world around them.

Read the full article here

 


 

Story: UK to consult on social media ban for under-16s

Source: BBC News

 

The clamour regarding restricting children's access to social media is spreading around the world. Australia has recently approved legislation banning young people under 16 from accessing social media. Now the UK is to consult on the same issue.

This BBC story follows an announcement by the UK government that it will study the views of parents, schools and young people, in addition to social media companies and experts in the field, resulting in a decision on whether and how such a ban could be implemented in the UK.

The report also points to giving Ofsted (the schools inspectorate in the UK) the power to check policies on phone use when it inspects schools, with schools expected to be "phone-free by default" as a result of the announcement. This is a major challenge: while many schools have strict rules regarding the use of phones, enforcing them is a constant struggle.

It will be interesting to see how this government investigation goes and what is decided regarding legislation.

Read the full story here


 

Story: Google accused of ‘grooming’ 13-year-old by telling them to ditch parental controls on their birthday

Source: The Independent

 

Google set its age of independence for children at 13, and this story shows that the company believes its view overrides the opinions of parents and carers.

The news story reports that Google has been accused of “grooming” teenagers by emailing under-13s and outlining steps to turn off parental controls on their accounts.

A mother accused the tech giant of “asserting authority” over its teenage users by contacting them and outlining the steps they can take to update their account so that they “get access to more Google apps and services” once they turn 13.

Until children are 13, or the applicable age in their home country, their Google accounts must be managed by their parents - what it calls a “supervised account”. This allows parents to block certain content, control their child’s screen time and view their browsing history.

Their decision, made knowing nothing about the children, their vulnerabilities or their parents' wishes, has caused something of an outcry. No doubt this story will run for a while, but it raises an important issue about who sets such limits and the reasons for choosing a particular age.

Read the full story here


 

Story: AI becoming ‘child sexual abuse machine’ adding to ‘dangerous’ record levels of online abuse, IWF warns

Source: Internet Watch Foundation


New data from the IWF raises serious concerns about AI. The report starts with this unnerving statement:

 "AI tools will become “child sexual abuse machines” without urgent action, as “extreme” AI videos fuel record levels of child sexual abuse material found online by the Internet Watch Foundation (IWF)."

The data, published on January 16 shows 2025 was the worst year on record for online child sexual abuse material found by its analysts, with increasing levels of photo-realistic AI material contributing to the “dangerous” levels.

The UK government, along with other governments, is undertaking urgent discussions with law enforcement, the tech industry and AI specialists to determine what can be done. The platform X (formerly Twitter) has already fallen foul of regulators after it was discovered that its AI tools were being used to remove clothing from images of children, allowing them to be published online.

Read the full report here.




Story: High screen time limits vocabulary in toddlers, research finds

Source: BBC News


Parents of under-fives in England are to be offered official advice on how long their children should spend watching TV or looking at computer screens.

Government research  shows that about 98% of children under two were watching screens on a daily basis - with parents, teachers and nursery staff saying youngsters were finding it harder to hold conversations or concentrate on learning.

Children with the highest screen time - around five hours a day - reportedly could say significantly fewer words than those at the other end of the scale who watched for around 44 minutes.

The full report is available from the news item on the BBC website.

The Cybertrust's Family Internet Monitoring Project is clear that discussions with children about online use and managing their time should start early, so that they become part of their normal day.

Giving a child a phone to keep them occupied, or to avoid parenting obligations, puts children at serious risk, and that risk grows over time unless a more moderate usage habit is formed early in life. Our project has published four videos about products that help with monitoring screen time and setting limits.

Read the full news item here

 


 

Story: Elon Musk’s Grok AI is used to digitally undress images of women and children

Source: The Guardian

 

Degrading images of children and women with their clothes digitally removed by Grok AI continue to be shared on Elon Musk’s X, despite the platform’s commitment to suspend users who generate them.

This is just one of the many issues beginning to emerge about abuses of AI technology. It's easy to get a photograph of anyone and alter it using AI so that it conveys a totally different meaning from the original. Children sharing images of themselves online present an opportunity for scammers and evil-minded people to take those images and turn them into weapons, extracting anything from money to involvement in criminal acts through threats of attacks on the children themselves, their families or even their pets.

In this case Musk's AI tool was allowing clothing to be digitally removed to create semi-nude images, and over 20,000 of these images were generated between 25th December 2025 and 1st January 2026.

The trouble with such developments is that the developers do not seem to understand the impact of what their clever AI tools may decide to allow. It's also possible that some don't know what their AI will generate at any given moment, as the whole idea is to create independent thinking systems that make their own decisions. If the rules that limit their level of autonomy are insufficient, we are heading for a future where many more of these situations will occur.

To read the full story click here 

 


 

Story: More than 800,000 young children seeing social media content 'designed to hook adults', figures show

Source: SKY News

 

 

Research evidence collected by the Centre for Social Justice (CSJ) found that almost four in 10 parents of three to five-year-olds reported that their child uses at least one social media app or site.

With roughly 2.2 million children in this age group as of 2024, the CSJ said this suggests there could be 814,000 users of social media between three and five years old, a rise of around 220,000 users from the year before.

As of 10th December 2025, the UK government requires social media platforms to take reasonable steps to prevent under-16s from having a social media account, in effect blocking them from platforms such as Meta's Instagram, TikTok and Snap's Snapchat.

The report suggests that parents need to be made aware of the risks and how to deal with them. There is plenty of evidence that a large number of parents sign their children up to WhatsApp for family communications, and to TikTok because they think the site is all about dancing and cannot perceive what risks there might be.

Monitoring what children are doing online, and what topics they are searching for, provides an opportunity to trigger the important conversations that need to take place between parents and children. Open discussion and sharing of concerns, both ways, will help to keep everyone safe.

Read the full story here.