Safe by Default
Moving away from engagement-based rankings towards safe, rights-respecting, and human-centric recommender systems

Executive Summary
Over the last decade, social media platforms have too often fallen short of their promise to connect and empower people, and have instead become tools optimised to engage, enrage and addict them. The business model of the dominant platforms creates a profit incentive to prioritise user engagement over safety: algorithmic recommender systems are focused on keeping people clicking and scrolling for as long as possible, which allows the companies to sell more ad space and thereby generate revenue.
There is mounting evidence of the harms caused when the ranking and recommendation of content are optimised for engagement. Engagement-optimised ranking algorithms select emotive and extreme content and show it to the people predicted to be most likely to engage with it (where “engage with” means they will stop scrolling to view or watch, click, reply, retweet, and so on). Meta's own internal research disclosed that a significant proportion (64%) of joins to extremist groups were driven by its own recommender systems. Even more alarmingly, in November 2023 Amnesty International found that TikTok’s algorithms exposed multiple accounts registered as 13-year-old children to videos glorifying suicide within less than an hour of the account being created.
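To make the mechanism concrete, the sketch below shows in schematic form what “optimised for engagement” means in practice: each candidate post is scored purely by the predicted probability that a given user will click, reshare or keep watching, and the feed is sorted by that score. This is a simplified illustration; all names and weights are hypothetical, not drawn from any platform’s actual code.

```python
# Illustrative sketch only: a minimal caricature of engagement-optimised ranking.
# All names and weights are hypothetical; no platform's real code is reproduced here.
from dataclasses import dataclass

@dataclass
class CandidatePost:
    post_id: str
    predicted_click: float    # model estimate that this user will click (0-1)
    predicted_reshare: float  # model estimate that this user will reshare (0-1)
    predicted_dwell: float    # model estimate of normalised watch/read time (0-1)

def engagement_score(post: CandidatePost) -> float:
    """Score a post purely by predicted engagement, with no regard for content quality."""
    return 0.4 * post.predicted_click + 0.3 * post.predicted_reshare + 0.3 * post.predicted_dwell

def rank_feed(candidates: list[CandidatePost]) -> list[CandidatePost]:
    """Put the items the user is predicted to engage with most at the top of the feed."""
    return sorted(candidates, key=engagement_score, reverse=True)
```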
By determining how users find information and how they interact with all types of commercial and non-commercial content, recommender systems are a crucial design layer of Very Large Online Platforms (VLOPs)1 regulated by the Digital Services Act (DSA).2 Because of the specific risks they pose, recommender systems warrant urgent and special attention from regulators to ensure that platforms mitigate “systemic risks”. Article 34 of the DSA defines “systemic risks” by reference to the dissemination of illegal content and to “actual or foreseeable negative effects” on the exercise of fundamental rights, on civic discourse, electoral processes and public security, and in relation to gender-based violence, the protection of public health and minors, and physical and mental well-being.
As shown in our previous briefing, “Prototyping User Empowerment”, there are many ways for companies to mitigate systemic risks, including by providing features that encourage individuals to make conscious choices about content curation and by promoting safer online behaviours and healthier habits. This transition towards authentic personalisation (i.e. an experience actively shaped by users) must start with VLOPs making their platforms safe by default. Unfortunately, this cannot be achieved with one quick switch: it will involve re-designing many elements of the platform, including new features that actively promote conscious user choice, opening up the social network infrastructure to third-party content curation services, and measures that protect users from addictive and predatory design features.
In this briefing, we outline five categories of changes to the default settings of today’s dominant social media platforms that will make them safer, rights-respecting and human-centric:
- Profiling off by default
In their default version, VLOPs' recommender systems should not be based on behavioural profiling, i.e. observing and passively collecting data about how users behave and interact on the platform in order to infer their interests. Instead, the default feed should only use as input signals data actively provided by the user for this very purpose (e.g. interests declared by the user when building their profile), as well as explicit user feedback on specific content (e.g. a “show me more/show me less” signal sent by clicking a relevant button).
- Optimising for values other than engagement
When designing their recommender systems, VLOPs should move away from signals and metrics that correlate with user engagement (especially short-term engagement) and prioritise signals that correlate with the (subjective) relevance and (objective) credibility of the recommended content. This includes: prioritising explicit user feedback and preferences; bridging signals (e.g. the diversity of the users who engaged with a given piece of content, and positive explicit feedback coming from users who are very different from one another); and signals that correlate with the legitimacy, credibility and transparency of the source, especially for recommendations and search results on sensitive topics. An illustrative sketch of ranking based on such signals follows this list.
- Prompting conscious user choice, including opening up content curation to third-party services
Platforms should create new features that facilitate conscious, authentic personalisation of the feed by their users and protect their wellbeing. This includes a range of measures such as sliders to set different optimisation goals for recommendations (e.g. more long-form vs short-form content, local vs global relevance, etc.), a ‘hard stop’ button to stop unwanted categories of content from appearing altogether, a button to ‘reset’ an individual’s feed, prompts to share declared interests, and settings that allow users to explore how their feed changes based on their choices and interactions. A further promising avenue for user empowerment would be to oblige VLOPs to open up their infrastructure to independent, third-party content curation and moderation services.
- Positive friction to disrupt compulsive behaviour and trigger reflection
Platforms should introduce positive friction aimed at slowing down posting and user interactions, giving users a chance to think before sharing. This includes ‘think before you share’ messages and limits on resharing, as well as a series of practical recommendations aimed at countering platform ‘stickiness’, so that users are nudged towards disconnecting from social media rather than compulsively engaging, and are prompted to be more intentional about what they want to get out of a given social media session.
- No addictive design features
Based on a growing body of research on the nature and impact of addictive design features on social media, we call on platforms to stop using certain design features altogether. These include: notifications turned on by default, infinite scroll, video autoplay, and misleading buttons that give users a false sense of control over content curation while not producing the results they advertise (such as “do not show content like this” buttons that do not prevent similar content from appearing again).
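To illustrate the first two recommendations above (profiling off by default, and optimising for values other than engagement), the sketch below ranks content using only signals users have actively provided: declared interests, explicit “show me more/show me less” feedback, and a simple bridging signal based on how many different user groups gave positive explicit feedback. All data structures, names and weights are assumptions made for the purpose of illustration, not a prescribed design.

```python
# Illustrative sketch only: ranking without behavioural profiling.
# Inputs are limited to data users actively provide: declared interests,
# explicit "show me more" / "show me less" feedback, and a simple bridging
# signal (positive explicit feedback spread across diverse user groups).
# All names and weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topics: set[str]                                                   # topics assigned to the post
    more_votes_by_group: dict[str, int] = field(default_factory=dict)  # "show me more" clicks per user group
    less_votes: int = 0                                                # total "show me less" clicks

def declared_interest_score(post: Post, declared_interests: set[str]) -> float:
    """Overlap between the post's topics and the interests the user declared themselves."""
    if not post.topics:
        return 0.0
    return len(post.topics & declared_interests) / len(post.topics)

def bridging_score(post: Post) -> float:
    """Reward posts whose positive explicit feedback comes from many different user groups."""
    groups_in_favour = sum(1 for votes in post.more_votes_by_group.values() if votes > 0)
    if groups_in_favour == 0:
        return 0.0
    return groups_in_favour / (1 + groups_in_favour)  # grows with cross-group agreement

def rank_feed(candidates: list[Post], declared_interests: set[str]) -> list[Post]:
    """Order the default feed by declared interests and bridging, penalising explicit negative feedback."""
    def score(post: Post) -> float:
        return (declared_interest_score(post, declared_interests)
                + bridging_score(post)
                - 0.5 * post.less_votes)
    return sorted(candidates, key=score, reverse=True)
```

Unlike the engagement-optimised sketch earlier in this summary, none of these inputs require the platform to observe or infer anything the user has not explicitly chosen to share.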
We appreciate that recommender systems are complex machines and that any experimentation comes with a risk of causing new harms. The measures recommended in this briefing should therefore be tested and refined before implementation; this is a task for VLOPs, guided by the European Centre for Algorithmic Transparency and the European Commission. Our hope is that, at the end of this process, (very large) social media platforms will have strong incentives to join a race to the top: competing with each other on default settings that prioritise safety and quality of user experience, and prototyping advanced features that allow for independent curation of recommended content.


This briefing was drafted by Katarzyna Szymielewicz (Panoptykon Foundation), with input from Tanya O’Carroll (independent expert), Marc Faddoul (AI Forensics), Dorota Głowacka (Panoptykon Foundation), and Oliver Marsh (AlgorithmWatch).
We would like to acknowledge valuable contributions and inspiration from the following experts:
Abagail Lawson, Integrity Institute
Claire Pershan, Mozilla Foundation
Jeff Allen, Integrity Institute
Johnny Ryan, Irish Council for Civil Liberties
Julian Jaursch, Stiftung Neue Verantwortung (SNV)
Kasper Drazewski, BEUC, The European Consumer Organisation
Lisa Dittmer, Amnesty International
Margaux Vitre, École Normale Supérieure
Pat de Brún, Amnesty International
Rosie Morgan-Stuart, People vs Big Tech
Stanisław Burdziej, Nicolaus Copernicus University
Xavier Brandao, #jesuislà