Over three quarters of the UK public want child safety checks on new generative AI products

We’re calling on the Government to adopt specific safeguards for children in its legislation.


  • Polling shows that 78% of the public would opt for child safety checks on new generative AI products, even if this delays their release.
  • This comes as our new research identifies seven key safety risks to children, including sexual grooming and harassment, bullying, sextortion and the proliferation of harmful content.
  • We're calling on the government to slow down its artificial intelligence action plans until it has embedded a statutory duty of care for children.



Safety concerns around generative AI

New research commissioned by us highlights the different ways that generative artificial intelligence (AI) is being used to groom, harass and manipulate children and young people.

This comes as polling shows that the UK public are concerned about the rollout of AI. Savanta surveyed 3,356 people from across the UK and found that most of the public (89%) have some level of concern that “this type of technology may be unsafe for children”.1

The majority of the public (78%) said they would prefer safety checks on new generative AI products, even if this delayed their release, over a speedy roll-out without safety checks.

Our new paper shares key findings from research conducted by AWO, a legal and technology consultancy. The research, Viewing Generative AI and children’s safety in the round,2 identifies seven key safety risks associated with generative AI: sexual grooming, sexual harassment, bullying, financially motivated extortion, child sexual abuse and exploitation material, harmful content, and harmful ads and recommendations.

Generative AI is currently being used to generate sexual abuse images of children, to enable perpetrators to commit sexual extortion more effectively, to groom children, and to provide misinformation or harmful advice to young people.

Since as early as 2019, we have been receiving contacts about AI from children via Childline.

One boy, aged 14, told the service:3

“I’m so ashamed of what I’ve done, I didn’t mean for it to go this far. A girl I was talking to was asking for pictures and I didn’t want to share my true identity, so I sent a picture of my friend’s face on an AI body. Now she’s put that face on a naked body and is saying she’ll post it online if I don’t pay her £50. I don’t even have a way to send money online, I can’t tell my parents, I don’t know what to do.”

One girl, aged 12, asked Childline:

“Can I ask questions about ChatGPT? Like how accurate is it? I was having a conversation with it and asking questions, and it told me I might have anxiety or depression. It’s made me start thinking that I might?”

Solutions and urgent actions

Our paper outlines a range of solutions to address these concerns, including stripping child sexual abuse material out of AI training data and conducting robust risk assessments on models to ensure they are safe before roll-out.

A member of the NSPCC Voice of Online Youth, a group of young people aged 13–17 from across the UK, said:

“A lot of the problems with Generative AI could potentially be solved if the information [that] tech companies and inventors give [to] the Gen AI was filtered and known to be correct.”

The government is currently considering new legislation to help regulate AI, and a global summit will be held in Paris this February, where policymakers, tech companies and third sector organisations, including the NSPCC and its Voice of Online Youth, will come together to discuss the benefits and risks of using AI.4

We’re calling on the government to adopt specific safeguards for children in its legislation. The government must take four urgent actions to ensure generative AI is safe for children:

  1. Adopt a duty of care for children’s safety
    Generative AI companies must prioritise the safety, protection, and rights of children in the design and development of their products and services.

  2. Embed a duty of care in legislation
    It is imperative that the government enacts legislation that places a statutory duty of care on generative AI companies, ensuring that they are held accountable for the safety of children.

  3. Place children at the heart of generative AI decisions
    The needs and experiences of children and young people must be central to the design, development, and deployment of generative AI technologies.

  4. Develop the research and evidence base on generative AI and child safety
    The government, academia, and relevant regulatory bodies should invest in building capacity to study these risks and support the development of evidence-based policies.

Chris Sherwood, CEO at the NSPCC, said:


“Generative AI is a double-edged sword. On the one hand it provides opportunities for innovation, creativity and productivity that young people can benefit from; on the other it is having a devastating and corrosive impact on their lives.

“We can’t continue with the status quo where tech platforms 'move fast and break things' instead of prioritising children’s safety. For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented. Now, the government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives.

“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety. We have the blueprints needed to ensure this technology has children’s wellbeing at its heart, now both government and tech companies must take the urgent action needed to make Generative AI safe for children and young people.”


References

  1. Savanta interviewed 3,356 people from across the UK aged 18+ online between 22 and 30 June 2024. Data were weighted to be representative of UK adults by region and gender. Savanta is a member of the British Polling Council and abides by its rules.

  2. Viewing Generative AI and children’s safety in the round combines analysis from research commissioned by the NSPCC with publicly available data and the views of children and young people to outline the current risks to children’s safety posed by Gen AI. It outlines the potential solutions to these risks and the necessary policy response. We commissioned AWO, a legal and technology consultancy, to consult experts from a wide range of sectors and identify evidenced and hypothetical risks to children’s safety and how they may be mitigated. A panel of 11 young people aged 13–16 from the NSPCC’s Voice of Online Youth were asked to give their perspectives on Gen AI risks and who they felt was responsible for addressing these risks. We also gathered relevant insights from Childline, ensuring children’s voices were central to our policy development.

  3. Snapshots are based on real Childline service users but are not necessarily direct quotes.

  4. The NSPCC, LEGO and Common Sense Media will hold a fringe meeting in Paris on the eve of the French Government’s Artificial Intelligence Summit to discuss the risks posed by AI to child safety online.