Executive Summary

From August 25th 2023, Europe’s new Digital Services Act (DSA) rules kick in for the world’s largest digital platforms, shaping the design and functioning of their key services. For the nineteen services designated as “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs), there will be many new requirements, from the obligation to undergo independent audits and share relevant data in their transparency reports, to the responsibility to assess and mitigate “systemic risks” in the design and implementation of their products and services. Article 34 of the DSA defines “systemic risks” by reference to the dissemination of illegal content and to “actual or foreseeable negative effects” on the exercise of fundamental rights, on civic discourse and electoral processes, on public security, in relation to gender-based violence, and on the protection of public health, of minors, and of people’s physical and mental well-being.

One of the major areas where platform design decisions contribute to “systemic risks” is their recommender systems – the algorithmic systems used to rank, filter and target individual pieces of content to users. By determining how users find information and how they interact with all types of commercial and non-commercial content, recommender systems have become a crucial design layer of the VLOPs regulated by the DSA. Shadowing their rise is a growing body of research and evidence indicating that certain design features of popular recommender systems contribute to the amplification and virality of harmful content (such as hate speech, misinformation and disinformation), to addictive personalisation and to discriminatory targeting, in ways that harm fundamental rights, particularly the rights of minors. As such, social media recommender systems warrant urgent and special attention from the Regulator.

VLOPs and VLOSEs are due to submit their first risk assessments (RAs) to the European Commission in late August 2023. Without official guidelines from the Commission on the exact scope, structure and format of the RAs, it is up to each large platform to interpret what “systemic risks” mean in the context of its services – and to choose its own metrics and methodologies for assessing specific risks.

In order to assist the Commission in reviewing the RAs, we have compiled a list of hypotheses that indicate which design features used in recommender systems may be contributing to what the DSA calls “systemic risks”. Our hypotheses are accompanied by a list of detailed questions to VLOPs and VLOSEs, which can serve as a “technical checklist” for risk assessments as well as for auditing recommender systems.

Based on independent research and available evidence, we have identified six mechanisms by which recommender systems may be contributing to “systemic risks”:

  1. amplification of “borderline” content (content that the platform has classified as being at higher risk of violating its terms of service) because such content drives “user engagement” (see the illustrative sketch after this list);
  2. rewarding users who provoke the strongest engagement from others (whether positive or negative) with greater reach, further skewing the publicly available inventory towards divisive and controversial content;
  3. making editorial choices that boost, protect or suppress some users over others, which can lead to censorship of certain voices;
  4. exploiting people’s data to personalise content in a way that harms their health and wellbeing, especially for minors and vulnerable adults;
  5. building in features that are designed to be addictive at the expense of people’s health and wellbeing, especially minors;
  6. using people’s data to personalise content in ways that lead to discrimination.
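
To make the first two mechanisms concrete, the sketch below shows in simplified Python how a feed ranker that orders candidate posts purely by predicted engagement can systematically favour borderline and divisive content. It is a minimal illustration only: the feature names, weights and scoring function are hypothetical assumptions made for this brief and are not drawn from any platform’s actual system.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        """A candidate post considered for a user's feed (hypothetical fields)."""
        post_id: str
        p_click: float      # predicted probability that the user clicks
        p_comment: float    # predicted probability that the user comments
        p_reshare: float    # predicted probability that the user reshares
        borderline: bool    # classifier flag: close to violating the terms of service

    def engagement_score(c: Candidate) -> float:
        # Hypothetical weighted sum of predicted engagement signals. Angry comments
        # and outraged reshares count the same as approving ones, so divisive
        # content is not penalised by this objective.
        return 1.0 * c.p_click + 3.0 * c.p_comment + 5.0 * c.p_reshare

    def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
        # Pure engagement optimisation: nothing in the objective demotes borderline
        # content, so whenever such content attracts more predicted engagement it
        # rises to the top of the feed.
        return sorted(candidates, key=engagement_score, reverse=True)

    # A borderline post with high predicted engagement outranks a benign post.
    feed = rank_feed([
        Candidate("benign-news", p_click=0.30, p_comment=0.02, p_reshare=0.01, borderline=False),
        Candidate("outrage-bait", p_click=0.25, p_comment=0.15, p_reshare=0.10, borderline=True),
    ])
    print([c.post_id for c in feed])  # ['outrage-bait', 'benign-news']

In a design like this, the ranking objective itself, rather than any individual moderation decision, determines how much reach borderline or divisive content receives, which is why our hypotheses focus on design choices rather than on individual pieces of content.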


For each hypothesis, we provide highlights from available research which support our understanding of how design features used in recommender systems contribute to harms experienced by their users. However, it is important to note that researchers’ attempts to verify causal relationships between specific features of recommender systems and observed harms have been constrained by the data made available to them, whether by online platforms or by platforms’ users. Because of these limitations, external audits have spurred debate about the extent to which observed harms are caused by recommender system design decisions or by natural patterns in human behaviour.

It is our hope that the risk assessments carried out by VLOPs and VLOSEs, followed by independent audits and investigations led by DG CONNECT, will end this speculation by providing data for scientific research and by revealing the specific features of social media recommender systems that directly or indirectly contribute to “systemic risks” as defined by Article 34 of the DSA.

In the second part of this brief (page 14) we provide a list of the technical information that platforms should disclose to the Regulator, independent researchers and auditors to ensure that the results of the risk assessments can be verified. This includes a high-level architectural description of the algorithmic stack as well as specifications of the different algorithmic modules used in the recommender systems (type of algorithm and its hyperparameters; input features; loss function of the model; performance documentation; training data; labelling process, etc.).
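
As a purely illustrative sketch of what such module-level disclosure could look like in a structured, machine-readable form, the example below defines a hypothetical documentation record covering the items listed above. The field names and example values are assumptions made for illustration; they are not a format prescribed by the DSA or used by any platform.

    from dataclasses import dataclass

    @dataclass
    class ModuleDisclosure:
        """Hypothetical documentation record for one module of a recommender system."""
        module_name: str                 # e.g. the feed ranker or a content classifier
        algorithm_type: str              # family of model used by the module
        hyperparameters: dict            # key settings chosen by the platform
        input_features: list             # signals the module consumes
        loss_function: str               # objective the model is trained to optimise
        performance_documentation: str   # reference to offline/online evaluation reports
        training_data: str               # description of the dataset used for training
        labelling_process: str           # how ground-truth labels were produced

    # Example record; all values are invented for illustration.
    ranker_disclosure = ModuleDisclosure(
        module_name="feed_ranker",
        algorithm_type="multi-task neural network",
        hyperparameters={"learning_rate": 0.001, "hidden_layers": 4},
        input_features=["predicted_click", "predicted_comment", "watch_time", "user_history"],
        loss_function="weighted cross-entropy over engagement events",
        performance_documentation="internal evaluation report (offline and A/B test results)",
        training_data="logged user interactions, rolling 90-day window",
        labelling_process="implicit labels derived from recorded engagement events",
    )
    print(ranker_disclosure.module_name, ranker_disclosure.loss_function)

Structured records of this kind would make it easier for the Regulator, auditors and vetted researchers to compare design choices across platforms and to check the claims made in risk assessments against the systems as actually deployed.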

Revealing key choices made by VLOPs and VLOSEs when designing their recommender systems would provide a “technical bedrock” for better design choices and policy decisions aimed at safeguarding the rights of European citizens online.

You can find a full glossary of technical terms used in this briefing on page 16 of the full report.


Read the full report in the attached PDF.

ACKNOWLEDGEMENTS

This brief was drafted by Katarzyna Szymielewicz (Senior Advisor at the Irish Council for Civil Liberties) and Dorota Głowacka (Panoptykon Foundation), with notable contributions from Alexander Hohlfeld (independent researcher), Bhargav Srinivasa Desikan (Knowledge Lab, University of Chicago), Marc Faddoul (AI Forensics) and Tanya O’Carroll (independent expert).

In addition, we are grateful to the following civil society experts for their contributions:

Anna-Katharina Meßmer (Stiftung Neue Verantwortung, SNV), Asha Allen (Centre for Democracy and Technology, Europe Office), Belen Luna (HateAid), Josephine Ballon (HateAid), Claire Pershan (Mozilla Foundation), David Nolan (Amnesty International), Fernando Hortal Foronda (European Partnership for Democracy), Jesse McCrosky (Mozilla Foundation/Thoughtworks), John Albert (AlgorithmWatch), Lisa Dittmer (Amnesty International), Martin Degeling (Stiftung Neue Verantwortung, SNV), Pat de Brún (Amnesty International), Ramak Molavi Vasse’i (Mozilla Foundation), Richard Woods (Global Disinformation Index).

Fixing Recommender Systems_Briefing for the European Commission (pdf)