Stop victim blaming: there is no such thing as ‘self-generated’ child sexual abuse material

TrustElevate Team · Jun 22, 2022

Dr Rachel O’Connell and Aebha Curtis

The Internet Watch Foundation’s 2021 report revealed that “self-generated” child sexual abuse material (CSAM) is now the predominant type of child sexual abuse material it sees online. “Self-generated” material is defined as material produced when a child records sexually explicit imagery of themselves, by way of a phone or webcam, in many cases while alone in their bedroom. Around 70% of reports of child sexual abuse material online include this kind of material.

The report also highlighted that 60% of actioned reports “specifically show the sexual abuse of an 11–13 year old girl who has been groomed, coerced or encouraged into sexual activities via a webcam.” It also found that children as young as three to six were abused in this way.

The fact is that the internet and digital platforms have made it shockingly easy for adults with a sexual interest in children to interact with, and ultimately abuse, children online. Critical to this facilitation is the widespread and indiscriminate use of recommendation systems.

Platforms’ features and business models actively promote CSAM generation

Recommendation systems seamlessly connect adults with a sexual interest in children with potential victims. In the same way that someone who searches for a VW camper van will find their For You and Explore pages on TikTok and Instagram populated by ‘van life’ and ‘tiny home’ videos and by people who share that interest, adults with a sexual interest in children who search for and engage with images and videos of children are automatically connected to children and presented with more and more of the videos and live streams children create. These same systems also establish shared-interest groups, enabling adults with a sexual interest in children to hunt as a pack.

And despite the severity of the consequences and the scale of the problem, platforms have not deployed measures to mitigate the risks created by their systems. Those systems are designed to facilitate access to content and to grow communities of interest, in this case communities of paedophiles, and the result has been an exponential increase in online child sexual abuse.

Platforms also indiscriminately deploy features that modify user behaviour by gamifying online interaction and engagement. For example, quantifying likes, followers and viewers is one of the central features of Instagram. Displaying those counts prominently, in close association with each user, leads users to attach value to those numbers. Children and young people, in particular, are more likely to ascribe a lot of value to these counts.

There have been a number of studies showing the negative impact of likes, for example, and Frances Haugen’s whistleblowing unveiled Facebook’s own knowledge of the impact of its services on children and young people. It is understood that high numbers of likes, followers and viewers of live streams are often tied to children’s sense of self-worth. Children and young people also understand that high counts of any of these are monetisable.

For example, TikTok has enabled a feature whereby viewers of live streams can ‘gift’ creators virtual items, which the creator can then in turn redeem for real money. Although their policies state that only those over 16 can live stream and only those over 18 can redeem gifts, it is widely documented that these policies have not been adequately implemented — as Forbes puts it, TikTok Live has become “A Strip Club Filled With 15-Year-Olds”.

Adults with a sexual interest in children leverage service design to exploit children

Adults with a sexual interest in children take advantage of this double whammy, in which both self-worth and actual money are tied to digital viewership and validation. A classic tactic of such predators is to act as a group: when a child starts a live stream, adults join in large numbers. These adults praise and encourage the child to engage in increasingly sexually explicit acts.

When the child doesn’t respond to their requests, large numbers of adults drop off the live stream. The child sees the audience numbers decrease and becomes more willing to comply with the audience’s requests, at which point the adults rejoin. This orchestrated join-leave-rejoin pattern occurs as the adults communicate with one another via other channels, and it coincides with shifts between praise, cajoling, threats and coercive control, then back to praise and appreciation.

These interactions have driven the spike in the amount of this child abuse material online. Much of this content is produced through orchestrated coercion, harassment and threats from individual adults or, most often, from groups of adults with a sexual interest in children acting in concert. Note that it is not only the children directly abused in this manner who are harmed, but also the children in the live stream audience viewing these abusive interactions, which has the effect of normalising these activities.

So, targeted and sustained abuse is systematically facilitated by digital platforms, which are themselves complicit in the creation of groups of abusers who can then use simple psychology at scale to generate child sexual abuse material and, via sexual exploitation, create a cycle of abuse that generates yet more material. It is this material that has been labelled by the Internet Watch Foundation and others as “self-generated” child sexual abuse material.

To refer to the captured images and videos of these children as “self-generated” is a deeply unfair mischaracterisation that amounts to little more than victim blaming. To say that these images and videos are self-generated is to ignore the targeted and sustained abuse committed against these children by highly manipulative sexual predators, to say nothing of the power exerted over them by algorithms and systems designed to capture users’ attention and shape their self-image.

The label ‘self-generated’ obscures the fact that these children are coerced into engaging in sexually explicit acts online, and that the legacy of that abuse is child sexual abuse material that remains accessible online and will be circulated amongst adults with a sexual interest in children for decades. In addition to minimising the role of the adults, the label also hides the extent of the continued role of companies, and in particular social media platforms, in enabling child sexual abuse.

Corruption or Coercion of a Minor

For an alternative to the label ‘self-generated’ child sexual abuse material, we may look to the law for guidance. Under US law, there is a criminal offence of inducing people below the age of consent to engage in sexual activity, known as Corruption of a Minor (COAM). This is a more accurate label. The terminology of COAM aligns with sections 10 and 13 of the UK Sexual Offences Act, which concern the ‘corruption’ or coercion of a child by another person to engage in sexual activity.

It is critical to examine the role, and by extension the liability, of companies in the context of digitally facilitated COAM. At one point, back in the 2000s, it may have been possible to argue that the unprecedented amount of COAM material being generated and circulated online was an unintended consequence of recommendation engines and manipulative service design. However, it would not have been a difficult consequence to anticipate had a risk assessment been conducted.

Furthermore, there is a growing body of evidence of COAM gathered by the FBI and the IWF. It should be untenable for platforms to continue to operate in this way, knowingly facilitating the activities of adults with a sexual interest in children and the victimisation of our children and young people.

Today, in the continued absence of completed risk assessments, companies can rely on their own community guidelines and enforcement reports to account for the action they take against abuse on their platforms.

However, we must question whether it is appropriate to allow platforms to ‘mark their own homework’ as it were. And what undertaking has been made, if any, by these platforms to understand the true scale of the problem, as opposed to the scale of enforcement action?

The Online Safety Bill is an opportunity to hold platforms to account

The fact that what we have described above is knowingly allowed to continue in the absence of legal safeguards and regulatory oversight underscores the necessity of the Online Safety Bill (OSB) and the importance of getting it right. Clearly, under a self-regulatory regime, we cannot rely on companies that knowingly enable child sexual abuse and do little to stop it.

The OSB calls for age verification so that platforms know the ages of their users and can better protect them, for example by not seamlessly connecting adults with a sexual interest in children to potential victims. Age verification on its own will not suffice: companies must be mandated to design products and services that align with Safety by Design principles and to document their decision-making with regard to risk minimisation for a regulator to review.

Companies have a duty of care toward children. Alongside new legislation, age verification, education, children’s rights impact assessments and greater transparency with respect to platforms’ handling of reports of abuse are just some of the measures that the OSB should require companies to implement.

The OSB is a vital opportunity to enhance the safeguarding of children online. Companies must be held to account with respect to the underlying recommendation engines, AI and manipulative service design that shape the operation of platforms and drive the commercial model of the attention economy.

And we must reframe the narrative around the sexual exploitation of children so that it does not further victimise children or inadvertently hide the dynamics between the perpetrators and the platforms that facilitate this abuse. It is important to recognise this content as evidence of the commission of a crime, and to recognise that the responsible party is not, in fact, the child. Using a legal definition is also helpful in the context of both criminal proceedings and future class actions.

It is absolutely vital that we cease to use the term ‘self-generated’, refer to this category of online child sexual exploitation as COAM, and incorporate this shift in language into legislation and regulatory frameworks, including the OSB. And, finally, we must engage in a legal review of liability in relation to platforms, young users/content creators and adults with a sexual interest in children, to identify other policy or regulatory controls needed under other laws such as COAM offences and the UK Sexual Offences Act.
