
Big Tech’s Assault on Women

Ahead of the DSA vote, online platform priorities continue to enable rampant abuse, sexism, and racism.

As European leaders shape the final provisions of the Digital Services Act (DSA) and the Digital Markets Act (DMA), it is essential that they pay attention to the particular ways in which Big Tech products harm the women and gender non-conforming people who use their platforms. Considering how Facebook began -- as a way for male Harvard students to rate and rank women on their “hotness” -- it’s perhaps not surprising that these platforms continue to perpetuate and enable misogyny. But it is absolutely unacceptable going forward.

Big Tech is an industry still largely dominated by men. While there have been efforts to increase diversity at tech companies in recent years, women remain massively underrepresented -- especially in leadership and tech roles -- making up just a quarter of the entire workforce. And though Big Tech companies have pledged time and time again that they will work harder to eradicate online misogyny and disinformation, we have yet to see any meaningful results. To the contrary, we have all too often been met with further studies and internal documents revealing just how bad these issues truly are. It’s plain to see that the parties who write the rules must change.

In honour of International Day for the Elimination of Violence Against Women, this article surveys the wealth of existing research that demonstrates how women -- and especially women of colour and from LGBTQ communities -- face an increased risk of harm and abuse while they engage on online platforms. From vile hate speech and threats to rampant disinformation designed to exploit sexist and racist tropes, the facts are clear: Big Tech companies are incapable of regulating themselves. To avert further damage, decisive action must be taken by civic leaders to create an online world that is safe for all users. With a majority of young women and girls experiencing online abuse, and 87% reporting the problem is getting worse, there is simply no time to waste. European legislators must seize the golden opportunity now before them.

A troubling rise in online abuse, hate speech, and revenge porn

Academic and civil society research reveals that not only is online gendered abuse widespread -- it is on the rise. Globally, 38% of women have personally experienced online violence -- and 65% know women from within their networks who have experienced it.

During the Covid-19 pandemic, this type of abuse increased even further. A study by the UK charity Glitch into the impact of the UK’s national lockdown on online abuse against women and non-binary individuals found that 46% of respondents had experienced online abuse since the beginning of Covid-19, with 29% reporting it had gotten worse during the pandemic (for women of colour and non-binary people, this figure rose to 38%). A sharp increase in online violence against women coincided with the rise in people staying at home and spending more time online. With many workers transitioning to remote offices, online abuse alarmingly began to include the actions of colleagues (9%) as well.

For women in high profile positions, such as journalists, politicians and influencers, online abuse and threats are common. These threats make many women feel unsafe in offline spaces, too, forcing them to take additional measures to protect their safety. For some that means hiring private security or moving locations; for others it means removing themselves from online spaces and networks, censoring their actions and speech, or taking other similar precautions that unjustly inhibit their ability to express themselves.

A study by Amnesty International examining the Twitter presence of women journalists and politicians in the US and UK found that 7.1% of tweets they received were abusive or problematic. Black women in the study were 84% more likely to be the targets of online abuse than white women. The study estimated that “of the 14.5 million tweets mentioning the women, 1.1 million were abusive or problematic. That’s a problematic or abusive tweet every 30 seconds.”

Unlike the type of abuse men may receive online, the nature of gendered abuse means that the kinds of messages women receive are more violent, and often involve threats of sexual or other physical violence.

Women are also disproportionately at risk of image-based online abuse -- commonly known as revenge porn -- where private photos are leaked to online platforms or porn sites without their consent. There is little a person can do once their photo is circulating on these platforms -- while it can be reported, the onus lies with the platform to remove it, and police often have few powers or resources to follow this up properly. This reality can cause anxiety: a survey by HateAid found that 30% of women fear their photos will be stolen or leaked online.

Troublingly, some platforms are deliberately designed to turn women’s pictures into pornography without their consent, offering easy-to-use interfaces where a woman’s picture or video can be uploaded in a couple of clicks. Some research estimates that between 90% and 95% of all online deepfake videos are non-consensual porn, and around 90% of those feature women. The emotional impact this can have on survivors is immense.

Disinformation designed to perpetuate sexism

While the above examples of gender-based violence focus on instances where women receive comments or messages that are targeted at them, another form of gender-based violence online is gendered disinformation, i.e., abuse about women. These kinds of disinformation campaigns are designed to exploit existing gender narratives, language, and discrimination in order to “maintain the status quo of gender equality or creating a more polarised electorate”.

Gendered disinformation is often used to discredit female politicians running for office. For example, in the US, once Kamala Harris was named as President Biden’s running mate, false claims about her were being shared 3,000 times an hour on Twitter. These kinds of disinformation campaigns work to promote the narrative that women are not good political leaders and aim to undermine female candidates by spreading disinformation about their qualifications and experience, or implying they are “too emotional” for the task -- with the ultimate aim to keep women out of politics altogether and ultimately harm democratic processes.

Notably, these coordinated attacks on women are often orchestrated by far-right groups (as in the US) or groups aligned with government authorities (as in the Philippines). One impact of these campaigns is that they shift the narrative away from the political to the personal, meaning that women are forced to spend time refuting personal attacks and thus have less time to talk about substantive issues. This disinformation also creates barriers for other women wanting to get involved in politics, or dissuades them from standing for office at all.

Platforms have also played a significant role in facilitating the spread of disinformation targeted at transgender people, including false claims and hateful rhetoric about bathrooms, gender dysphoria, puberty blockers, "detransitioning," and mental illness. The impact of these disinformation campaigns on the trans community in particular cannot be overstated.

Young women and girls at particular risk of abuse and mental health damage

A 2020 survey by the World Wide Web Foundation found that 52% of young women and girls have experienced online abuse, and 87% think the problem is getting worse. Of those who have experienced it, 51% said it affected their emotional wellbeing.

Even for those not directly under attack online, image-based platforms such as Instagram subject young women and girls to a constant stream of problematic content. Recent research by SumOfUs showed how quickly and far too easily users can find content promoting eating disorders or extreme dieting on Instagram -- despite the platform banning certain hashtags related to these topics. Those promoting their products know they can easily get around such restrictions by using alternative hashtags. Content promoting plastic surgery was also rampant, with promoters targeting young people and collaborating with influencers to convince girls and young women to spend money on altering their bodies.

Though Facebook, which owns Instagram, has long promised to curb this type of harmful content, we now know, thanks to Frances Haugen’s disclosures, that the company has turned a blind eye to the toxic impact its platform has on young people, and particularly teenage girls. Facebook’s own research from 2019 confirms that “32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” While Facebook continues to downplay these negative impacts, its failure to address this problem allows the company to continue profiting from the ad revenue related to this kind of harmful content.

Online harms can also affect young women in other ways. For example, what message does it send to them if they see that the few women who are in the public eye are regularly targeted by vile and vicious digital violence? If Big Tech platforms aren’t properly regulated and forced to adequately tackle digital violence, young women could be deterred from taking up positions that would make them a target -- thus limiting their career or education choices.

A strong Digital Services Act could help to tackle this

Women’s rights organisations have appealed to EU lawmakers to address these harms by supporting the key accountability tools on risk assessment, risk mitigation and mandatory audits in the European Commission’s proposal for the Digital Services Act.

They say: “Since some of this abusive behaviour is facilitated and indirectly encouraged by platforms’ design features, there needs to be a clear obligation imposed on very large platforms in particular to identify, prevent and mitigate the risk of gender-based violence taking place on and being amplified by their products. Through Article 26.1 of the DSA, platforms should be forced to take into account the ways in which design choices and operational approaches can influence and increase these risks, especially as defined in article 26.1.a-c.”

The DSA provides a unique opportunity to act on and prevent further harms against women, and all users, of Big Tech platforms. EU leaders must embrace this opportunity and pass a strong DSA that puts people ahead of Big Tech profits. “The status quo is in no way supporting freedom of expression,” says Lucina Di Meco, a co-founder of #ShePersisted Global. “It’s in reality supporting censorship of women online.”

Read more about the People’s Declaration and our movement’s demands to EU leaders.