Safety concerns around generative AI
New research commissioned by us highlights the different ways that generative artificial intelligence (AI) is being used to groom, harass and manipulate children and young people.
This comes as polling shows that the UK public are concerned about the rollout of AI. Savanta surveyed 3,356 people from across the UK and found that most of the public (89%) have some level of concern that “this type of technology may be unsafe for children”.1
The majority of the public (78%) said they would prefer safety checks on new generative AI products, even if this delayed their release, over a speedy roll-out without safety checks.
Our new paper shares key findings from research conducted by AWO, a legal and technology consultancy. The research, Viewing Generative AI and children’s safety in the round,2 identifies seven key safety risks associated with generative AI: sexual grooming, sexual harassment, bullying, financially motivated extortion, child sexual abuse and exploitation material, harmful content, and harmful ads and recommendations.
Generative AI is currently being used to generate sexual abuse images of children, enable perpetrators to commit sexual extortion more effectively, groom children, and provide misinformation or harmful advice to young people.
As early as 2019, we began receiving contacts from children via Childline about AI.