Meet the Voice of Online Youth
Elodie, 14, County Down
Emily, 14, West Midlands
Finn, 14, East Dunbartonshire
James J, 14, North Somerset
James T, 17, Armagh
Leo, 15, London
Liidia, 14, Glasgow
Malia, 15, West Midlands
Mika, 16, Midlands
Rayhaan, 17, Leicestershire
Shalom, 14, Bolton
Tiffany, 16, Devon
Will, 14, Nottinghamshire
Zara, 14, Birmingham
Our Manifesto for Change
We’re the NSPCC’s Voice of Online Youth, a group of 14 young people aged 14-17 from across the UK. We’re here to create a future where every child’s experience online is a positive one. Our generation’s voices must be heard.
Since April 2024, we have been developing our innovative manifesto for change, which sets out our 5 priorities.
These will:
- Mean we can press for action in areas such as safety, privacy, and education.
- Make sure young people are represented.
- Give us a prominent role in decision making, awareness raising, and regulation of the online world.
- Make sure that the internet remains an uplifting and positive space for young people.
We chose these 5 priorities because we believe they are the most relevant concerns for our generation, and the areas where things really need to change.
We’ll work by meeting with key decision makers, supported by the NSPCC and the people they work with. Together, they’ll help us achieve our priorities and create the changes we need to see.
The problems we see:
Online safety education is often outdated and irrelevant, failing to address the real challenges children face. And resources for parents are built around what parents see as the problems, not the problems children are actually facing.
The solutions we want:
We want all adults to be better educated to support young people with the issues they are facing online. This support needs to be informed by children and their experiences. It needs to be relevant and age-appropriate, highlighting child-focused support. Young people need to have a significant role in shaping this.
The problems we see:
AI tools aren’t yet regulated, leading to unchecked development without accountability. AI chatbots can be unreliable and have the potential to spread misinformation, which can lead to serious harm. Additionally, generative AI tools for voice, image and video can be used by anyone to create whatever they want, without consequence.
The solutions we want:
Introduce strict AI regulations to make sure AI is developed responsibly, with accountability for creators and clear ethical standards. Implement rigorous testing processes for AI systems to prevent misinformation, and restrict what AI will give advice on. There should be limits on the type of content AI tools can create, especially when it comes to sexualised content.
The problems we see:
Online advertising has become the norm in the content young people see, which risks having a negative impact on their self-esteem and behaviour. And when influencers advertise to young people, they may not always be truthful about what they’re selling. Ultimately, tech companies are prioritising advertising over young people’s safety and wellbeing.
The solutions we want:
Stricter regulations on online advertising that young people could see, and better signposting of paid endorsements. AI and filters should be regulated in advertising and in content that endorses products. And educational resources should explore the impact of online advertising on young people and help them take more control over what they see in their feeds.
The problems we see:
The tools available for reporting harmful behaviour on social media do not protect children and young people. They’re often complicated and unclear, making children and young people less likely to report issues. Reporting tools on major platforms are ineffective, and young people often feel that nothing happens when they do report, and that there isn’t effective support afterwards.
The solutions we want:
Easier reporting tools should be put in place, with clear guidance for younger users. When young people make a report, a clear, quick response should let them know that it is being dealt with and where to go for additional support. This response and support should be moderated by real people, who should direct children and young people to youth-focused services like Childline.
The problems we see:
Data sharing is often switched on by default, and young people are unclear about what information is being collected about them and why. This exposes them to privacy risks and to their data being used without their informed consent. Additionally, AI models are being trained on data taken without consent, which adds to young people’s fears about AI misinformation.
The solutions we want:
To address privacy risks for young people, platforms should turn data sharing off by default so that young people actively choose to opt in, use age-based consent frameworks, and be clearer about how their data is used. Safety updates and privacy rollouts should be young-person friendly, so they are easy to read and understand.
Find out more about how to keep children safe online
Supported by Vodafone.