05/09/2024 | News release | Distributed by Public on 05/09/2024 03:56
Self-generated child sexual abuse material (CSAM) continues to rise at an alarming rate, according to the not-for-profit organisation Internet Matters. CSAM includes both material that has been shared voluntarily between peers (and then re-shared without consent) and 'self-generated' imagery coerced through grooming, pressure or manipulation. Regardless of origin, there is a significant risk that this material will land in the hands of adult offenders, who then distribute it within offender networks.
At Nominet, we support a number of research projects, initiatives and programmes that prevent, respond to and eradicate practices that put young people at significant risk online. We've found that the creation and distribution of this material is a growing concern, one echoed by our Countering Online Harms Innovation Fund grantees. In 2023, 92% of the content removed from the internet by the Internet Watch Foundation (IWF) contained "self-generated" child sexual abuse material.
It was therefore important that we funded work that tackled the growing problem of self-generated CSAM. In 2023, we partnered with Internet Matters to conduct focused research into how to prevent the creation and distribution of this material.
Now, Internet Matters has published the full findings of its research into self-generated CSAM online. The report, 'Shifting the dial: methods to prevent self-generated child sexual abuse among 11-13-year olds', is funded by Nominet and aims to inform not only the work of Internet Matters, but also that of the wider online safety sector in combating this issue.
How widespread is the issue of self-generated CSAM?
The volume of self-generated sexual images involving 11-to-13-year-olds continues to rise, with a 14% increase from 2022 (199,363) to 2023 (254,071) alone. According to a recent Internet Matters national survey, 14% of teenagers under 16 say they have experienced a form of image-based sexual abuse. This equates to over 400,000 children in the UK. In the same survey, a quarter of teenagers under 16 said they were aware of a form of image-based abuse being perpetrated against another young person.
'Shifting the Dial' demonstrates the lack of programmes tailored by gender to prevent CSAM - despite girls being overwhelmingly the victims of online sexual abuse. There is also a lack of evidence about what works to deter children from sharing sexual images online. Some existing resources have been criticised as simplistic or victim-blaming, particularly when it comes to young women and girls.
What did Internet Matters find in its research?
The research involved speaking with focus groups totalling 111 children (58 girls, 53 boys) on multiple occasions, asking which educational messages would be most effective in dissuading 11-to-13-year-olds from sharing sexual images.
Can self-generated CSAM be prevented, and if so, how?
Efforts to tackle self-generated abuse content typically focus on removing it once it is already in circulation. While this is valuable, there needs to be a greater emphasis on preventing sexual content from being shared in the first place.
Internet Matters trialled two digital methods of prevention - an interactive game and an 'in-the-moment' nudge technique - both of which could be made available on digital devices. Both showed promise and received an enthusiastic response from the children's panels. Internet Matters will develop these further, following feedback from children, parents and professionals, to make them available to a greater number of children.
The children in the Internet Matters panels also appreciated the single-sex RSHE lessons that Internet Matters designed, and felt that learning in smaller groups split by gender worked well. Moving forward, this could be a more effective approach to preventing self-generated CSAM - enabling young people to feel more comfortable and have their voices heard.