Children’s lives and rights at stake: TikTok algorithms legally challenged by grieving families

TrustElevate Team
10 min read · Jul 8, 2022

A new future for algorithmic responsibility

By Aebha Curtis and Dr Rachel O’Connell

Grieving parents are suing TikTok over the role the platform’s algorithms played in their children’s wrongful deaths. The text of the EU’s Digital Services Act provides a legal basis for regulating the algorithms underpinning recommender systems deployed by social media platforms such as TikTok and Instagram. And a new legal precedent set in Japan upends the notion that proprietary algorithms can operate under the protection of trade secrets.

Each of these represents an important part of the movement to empower users and regulators to challenge platforms’ proprietary algorithms. We are on the cusp of a new era of algorithmic transparency and accountability.

Major platforms are coming under greater scrutiny from lawmakers and the courts this month in the wake of a number of high-profile cases challenging the systems underpinning their operations. Among the most high-profile and distressing are those brought by the families of Lalani Erika Walton, 8, and Arriani Jaileen Arroyo, 9, who died by hanging after attempting the ‘blackout challenge’, which, the lawsuit alleges, had been repeatedly promoted to the girls in videos on TikTok.

Another case was brought by the family of Nylah Anderson, 10, who also died after attempting the blackout challenge, asphyxiating with the strap of one of her mother’s bags. The lawsuit brought by her family states that “TikTok has invested billions of dollars to intentionally design and develop its product to encourage, enable, and push content to teens and children that the defendant knows to be problematic and highly detrimental to its minor users’ mental health.”

While this ‘challenge’ existed and spread by word of mouth before social media, the issues that arise here relate both to the acceleration of that spread and to the repeated promotion, and thereby amplification, of the challenge. This case, or a similar one, might uncover the risk assessments, if any, conducted by the content moderation team, as well as the emails exchanged between internal teams about how to handle the situation, which would provide insights into how much decision makers knew about the issue, the company’s risk appetite and the trade-offs it accepted.

Codified in instruments such as the Digital Services Act (DSA), the Online Safety Bill and General Comment 25 on the UNCRC is the principle that platforms have a duty of care to prevent or mitigate risks to users, young or old. The key point, though, is perhaps that the platform has not done more to prevent the youngest and most vulnerable users from seeing this kind of content and being implicitly encouraged to recreate it.

In 2021, the Italian data privacy watchdog imposed a ban on TikTok accounts whose holders’ ages could not be verified after a 10-year-old girl from Palermo also died after trying to take part in the blackout challenge. Between February and April, following the intervention, more than 12.5 million Italian users were subject to age assurance measures to prove they were over 13.

One of the most immediately available and effective measures for mitigating at least some of the major risks on the platform is age verification. Arguably, what this case will examine is the degree to which these platforms are knowingly facilitating harm to children. It is also worth considering, in the wake of the Italian government’s decision and enforcement action, to what degree (if any) platforms such as TikTok are working toward rolling out consistent age verification across the world. The Italian action raises the question of whether a concerted response is needed at a supranational level: children in every country around the world are deserving of protection. These platforms are transnational, and while each country may want the autonomy to take action against companies, that should not preclude enforcement across whole regions.

In looking at these issues of algorithmic accountability and corporate responsibility, another recent case may well affect the level of scrutiny platforms face and the kinds of outcomes courts are prepared to order. Just last month, a restaurant group in Japan argued that Kakaku.com, the operator of a restaurant review platform, had changed the way it tallied user scores in a manner that damaged the group’s sales. Kakaku.com was ordered to pay ¥38.4mn ($284,000) in damages and, interestingly, to disclose part of its algorithms.

This disclosure is constrained such that the restaurant group cannot reveal what it is shown, but the ruling sets a remarkable precedent in declining to uphold the understanding that digital platforms’ algorithms are protected trade secrets. While this case relates to competition law, it may well become an important touchstone in future cases where the disclosure of algorithms would be a significant and valuable outcome.

It also mirrors, in some ways, the emerging attitudes around algorithmic accountability in the EU. Major platforms have been coming under increasing pressure from regulators to be more transparent about their internal systems. The text of the new EU Digital Services Act, agreed upon just this week by a huge majority, requires that online platforms “consistently ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed, and can influence how information is presented to them.

“They should clearly present the parameters for such recommender systems in an easily comprehensible manner to ensure that the recipients of the service understand how information is prioritised for them. These parameters should include at least the most important criteria in determining the information suggested to the recipient of the service and the reasons for their respective importance, including where information is prioritised based on profiling and their online behaviour.”

Benchmarks need to be set with regard to the age-appropriateness of, and exposure to, these systems. Children want to interact with people of their own age, to play and have a good time. They have a right to an age-appropriate space and a right not to be manipulated by data operations and algorithmic parameters that neither they nor their parents have any opportunity to influence. The new legislation also requires very large online platforms to consistently ensure that users enjoy, in relation to their recommender systems, alternative options which are not based on user profiling and are directly accessible from the place where recommendations are presented to the user.
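
To make that transparency requirement concrete, here is a minimal, hypothetical sketch of a “why am I seeing this?” explanation for a recommender that ranks items by a weighted sum of a few signals. The signal names and weights (predicted_watch_time, similarity_to_liked_videos, overall_popularity) are assumptions for illustration, not any platform’s actual parameters.

```python
# Hypothetical sketch: disclosing the "most important criteria" behind a
# recommendation. Signal names and weights are invented for illustration.

RANKING_PARAMETERS = {
    "predicted_watch_time": 0.6,        # profiling-derived signal
    "similarity_to_liked_videos": 0.3,  # also profiling-based
    "overall_popularity": 0.1,          # not based on the individual user
}

def explain_ranking(signals: dict) -> list:
    """Return the disclosed criteria that pushed an item up the feed, most influential first."""
    contributions = {
        name: RANKING_PARAMETERS[name] * value
        for name, value in signals.items()
        if name in RANKING_PARAMETERS
    }
    ordered = sorted(contributions, key=contributions.get, reverse=True)
    return [f"{name} (weight {RANKING_PARAMETERS[name]})" for name in ordered]

if __name__ == "__main__":
    # Example: a clip surfaced mostly because the viewer watched similar videos to the end.
    for reason in explain_ranking({
        "predicted_watch_time": 0.9,
        "similarity_to_liked_videos": 0.8,
        "overall_popularity": 0.2,
    }):
        print(reason)
```

Disclosing even something this simple alongside each recommendation would go some way towards the “easily comprehensible” presentation of parameters the DSA describes.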

This is a powerful step forward in empowering users to shape their own online experiences and move away from potentially dangerous and damaging recommender systems which can warp what users see online. It will undoubtedly also be most significant for children and young people, who are deserving of greater protections against potentially harmful digital practices such as profiling. Indeed, this greater degree of protection for children and young people against profiling specifically is mentioned in Recital 38 of the General Data Protection Regulation. Recital 71 also states that decisions based on profiling should not concern a child.

To reliably demonstrate compliance with these requirements, platforms must know the ages of their users in order to identify which users can be subject to certain functionalities of the platform and to particular data processing activities. Age verification must be deployed to effectively mitigate harms to the youngest and most vulnerable online users and to stop services being abused in the way TikTok LIVE has been, to create ‘a strip club filled with 15-year-olds’. This latter instance is particularly important in conveying the role of recommender systems in promoting not only certain kinds of content but also contact between children and adults with a sexual interest in them.

The removal of friction in connecting adults with children means that the child doesn’t have a choice in who views their content, which is especially problematic when children live-stream. During the pandemic, we witnessed a massive spike in the amount of content referred to as ‘self-generated’ child abuse images and videos. To learn more about the seamless connection of adults and children by digital platforms and the problematic use of the term ‘self-generated’, read our blog post here.

Platforms should also implement systems and processes that equip them to respond to a broader swathe of known, new and emerging risks and harms in a way that is sensitive to the varied needs of children and young people across different age bands. Indeed, age verification is not a silver bullet, and a single age gate distinguishing between, say, those under and over 13 is insufficient to truly promote the best interests and development of children and young people. An important instrument in the risk-mitigation toolkit is the operationalisation of safety-by-design principles, which involves conducting a Child Rights Impact Assessment.
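
To illustrate that point, the sketch below assumes a verified age is available and gates functionality by age band rather than with a single over/under-13 check. The bands and feature names are hypothetical and purely illustrative, not any platform’s actual policy.

```python
# Minimal sketch of age-band-based feature gating, assuming ages are verified.
# Bands and feature names are illustrative assumptions only.

AGE_BANDS = [
    (0, 12,   {"live_streaming": False, "direct_messages": False,           "profiled_feed": False}),
    (13, 15,  {"live_streaming": False, "direct_messages": "contacts_only", "profiled_feed": False}),
    (16, 17,  {"live_streaming": True,  "direct_messages": "contacts_only", "profiled_feed": False}),
    (18, 120, {"live_streaming": True,  "direct_messages": True,            "profiled_feed": True}),
]

def features_for_age(age: int) -> dict:
    """Return the functionality permitted for a given verified age."""
    for low, high, features in AGE_BANDS:
        if low <= age <= high:
            return features
    raise ValueError(f"age out of supported range: {age}")

if __name__ == "__main__":
    print(features_for_age(9))   # everything sensitive switched off
    print(features_for_age(14))  # limited messaging, no live streaming or profiled feed
```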

This kind of assessment not only operationalises the principles laid out in law but is also iterative and responsive to the changing landscape of risks and harms. It is vital in establishing standard methods of assessing risk, to the benefit of service providers, users and regulators alike, while providing auditable trails of action and transparency. Child rights impact assessments also pave the way for a more nuanced approach to the ‘self-regulation’ of content, to the degree that this remains possible given the spate of emerging regulations that seek to bring the tried and tested safety and risk management processes applied offline into the digital space. Under such an approach, distinctions can be drawn between professionally produced content and user-generated content in terms of moderation, risk mitigation, managing virality and so on.

Virality is, crucially, facilitated by ranking and recommender systems, which are designed to maximise the time spent on an individual service provider’s platform or app as well as the time spent on an individual piece of content, also called engagement. Content that shocks or induces anger tends to grab and hold attention. In the attention economy, platforms offer services for free, make money by selling advertising, and are thereby incentivised to maximise the attention paid to and time spent on the platform. The result is a system that algorithmically curates and prioritises attention-grabbing content that is often polarising or harmful and has little or no value in terms of being informative, educational, developmentally sensitive or comforting.
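
The dynamic can be seen in a deliberately simplified sketch: if the only objective a ranker optimises is predicted engagement, the most provocative item rises to the top. The feature names, weights and items below are invented for illustration, not drawn from any real system.

```python
# Deliberately simplified sketch of engagement-maximising ranking.
# Feature names, weights and items are made up for illustration only.

def engagement_score(item: dict) -> float:
    # The only objective here is predicted time-on-content and re-shares.
    return 0.7 * item["predicted_watch_seconds"] + 0.3 * item["predicted_shares"]

items = [
    {"title": "informative science explainer", "predicted_watch_seconds": 35, "predicted_shares": 4},
    {"title": "shocking 'challenge' clip", "predicted_watch_seconds": 80, "predicted_shares": 40},
]

# Ranking purely by engagement puts the most provocative clip first, because
# shock and outrage reliably hold attention; nothing in the objective asks
# whether the content is informative, age-appropriate or safe.
for item in sorted(items, key=engagement_score, reverse=True):
    print(item["title"], round(engagement_score(item), 1))
```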

This algorithmically curated system is propped up by addiction. It is designed to capture our attention and keep it there: Chamath Palihapitiya, former Vice President of User Growth at Facebook, has spoken about the power of social media in making us dependent on it.

“Platforms like Facebook, Snapchat, and Instagram leverage the very same neural circuitry used by slot machines and cocaine to keep us using their products as much as possible.” Knowing this problematises the suggestion that parents simply need to ‘watch what their kids do online’, and it further implicates platforms as knowingly exploiting their users, including children, while refusing to exercise a duty of care.

Engagement, or addiction, does not depend on supporting children’s safe and healthy growth, education and recreation. Our society, and our children themselves, do. There is a clear and present need for child-centric, trustworthy systems to govern major platforms’ content delivery and, in particular, the seamless connections they create between children and strangers, including adults with a sexual interest in children. Whether that means child-centric, trustworthy systems underpin a new wave of responsibly deployed recommender systems or are used to support a new way of doing things is for regulators and civil society to assess and determine.

That said, online platforms are now at least beginning to consider age verification and to take action, and issues such as the need for child-centric approaches are being discussed. Just recently, the Family Online Safety Institute’s 2022 European Forum was hosted by Google. The Forum was opened with an important speech by Baroness Beeban Kidron, architect of the Information Commissioner’s Office’s Age Appropriate Design Code, or Children’s Code, about how we must do better by children.

The parents of children including Lalani, Arriani and Nylah are speeding up the process of holding companies to account and leveraging new legislation. Parents’ combined voices are stronger together and can effect significant change. For too long, parents have felt browbeaten and guilt-ridden about not knowing how to keep their children safe online. These cases highlight that platforms’ operations were stacked against parents and their efforts to keep children safe online.

Clear steps are being taken towards putting in place the legal protections for algorithmic transparency and accountability that will truly improve children’s lives. Cases such as these may set precedents and establish where responsibility lies, as well as the nature, scale and extent of the duty of care that companies must exercise when catering for children and adults. It is also clear that enforcement of these new laws and future legal precedents is required, as are more concerted and coordinated national and transnational efforts. Companies spend phenomenal amounts of money lobbying against efforts to create a more equitable social contract between platforms and society. However, lawsuits launched by parents send a clear message to other parents worldwide: change is possible if you combine your voices and call for it.

We might look to real-world examples of child-appropriate spaces to find a model for how to improve children’s digital playgrounds and experiences. For example, when you take your children to an amusement park, you know that the play areas have been built to an agreed specification and that small children are not allowed on the adult rides but are perfectly happy on age-appropriate rides.

Parental oversight is still a must, but the amusement park is legally required to exercise a duty of care and faces tough penalties if its practices result in harm, or in a lack of care so great that it demonstrates reckless disregard for the safety or lives of others and appears to be a conscious violation of their right to safety. It will be interesting to see where along this spectrum TikTok is found.

___________________________________________________________________

Aebha Curtis: Senior Policy and Regulatory Affairs Manager, TrustElevate

Aebha is an experienced research analyst and policy and regulatory affairs professional having worked on multiple digital accountability and policy-oriented projects, including the Government-funded Verification of Children Online. She continues to commit herself to the advancement of digital social responsibility and children’s data rights, contributing to the work of the Internet Commission and supporting TrustElevate by way of ongoing Policy Analysis.

Dr Rachel O’Connell: CEO, TrustElevate

Dr Rachel O’Connell is CEO of TrustElevate and a leading expert on digital identity. She is the author of an identity attribute verification standard, the PAS 1296 Age Checking Code of Practice, which was published by the British Standards Institution (BSI) and is now becoming an ISO standard. Rachel is a member of the BSI technical committee IST/33/5 Identity Management and Privacy Technologies and its panel IST/33/5/5 Age Assurance Standards.
