Fueled by Social Media, Calls for Violence Against Muslims Reach Fever Pitch in India
New research documents the dangerous degree to which hate speech and disinformation on Facebook are thriving. Genocide Watch warns that the “early warning signs of genocide” are present in India.
A recent study by The London Story (TLS), a diaspora-led foundation in the Netherlands working to combat disinformation and hate speech online, reveals the shockingly pervasive nature of hate speech on Facebook in India. The foundation’s report, Face of Hatebook, finds that the world’s most popular social media platform hosts and promotes “a disturbing volume of direct hate speech, disinformation, and calls to violence against minorities” – particularly those of the Muslim faith. The prevalence of this type of content is all the more dangerous in a country like India, TLS notes: India has more Facebook users than any other country, the platform serves as a de facto “public square” for news and propaganda, and violent far-right voices have become more mainstream through amplification on Facebook and its associated apps.
The nature of the hate speech, the protection of perpetrators of violence, and the complicity of elected and law enforcement officials have all prompted genocide experts to sound the alarm that India’s minorities, particularly Muslims, are at grave risk. Genocide Watch, which issued the alert, is careful to note that this does not mean genocide is underway – only that the signs of danger are present – and that if or when genocidal violence is unleashed, it will come from mobs rather than directly from the State. Civil society activists have been warning for years that Silicon Valley’s tools have emboldened and enabled these violent mobs, and they are demanding that Facebook release the full human rights impact assessment it commissioned on India.
In the wake of Frances Haugen’s disclosures, Facebook’s non-English content moderation failures are well documented. Although roughly two-thirds of Facebook users engage with the platform in a language other than English, internal company documents show that dangerous content and hate speech are common in regions where Facebook lacks moderators and AI systems capable of understanding and detecting threats in local languages. This allocation of resources leaves a gap in protection for at-risk populations in non-English-speaking countries such as India, where unaddressed online hate speech and disinformation can be weaponised to prompt and accelerate real-world attacks.
The widespread violence that occurred against Rohingya Muslims in Myanmar is one of the most acute examples of how dangerous this environment can be. A U.N. fact-finding mission examining the Myanmar crisis found Facebook was “a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet.” The mission further noted the company’s response to the ethnic cleansing was “slow and ineffective.” A subsequent independent report commissioned by the Big Tech giant itself confirmed these issues, finding the platform had created an “enabling environment” for human rights abuse and concluding: “Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence.”
A similarly explosive situation is now brewing in India. Documents disclosed last year by Haugen to the U.S. Securities and Exchange Commission reveal that Facebook knows it is struggling with a similar problem in its largest user market, home to nearly 400 million accounts. Documents from March 2021 show company employees discussing whether they would be able to address the “fear mongering, anti-Muslim narratives” being broadcast on the platform by a far-right Hindu nationalist group with ties to Indian Prime Minister Narendra Modi. Another internal document shows that significant portions of India-based Islamophobic content were “never flagged or actioned” because Facebook cannot effectively moderate content in Hindi and Bengali. Facebook’s content policies in India faced additional scrutiny in 2020, when it came to light that the company’s former Public Policy Director for India, South & Central Asia, Ankhi Das, had opposed applying hate-speech rules to politicians associated with Modi’s Bharatiya Janata Party, telling staff that doing so “would damage the company’s business prospects in the country.”
Aware of these glaring shortcomings, and pursuant to its mission to investigate human rights violations and abuses, TLS conducted its own research into hate speech on Facebook in India. TLS researchers used targeted keywords to identify more than 607 active Facebook groups that post anti-Muslim and pro-“Hindu Rashtra” (Hindu nation) narratives on the platform. The team compared the content of these groups’ posts against Facebook’s Community Standards and found the following:
- Violence and Incitement: “There are several examples of threats to kill, raze and demolish persons, properties and communities on the Facebook. These include direct threats and calls to action, as well as indirect subtle commenting that some people, religion, etc. should perish. Both direct and indirect threats have resulted in real-time violence across the world. These threats have also led to imminent harm, such as lynching, assembling of armed groups with machetes and firearms.”
- Hate Speech: “Hateful content dehumanizing Indian Muslims, attacking them as inferior, morally bankrupt, alleging them to be violent and sexual predators, and calling for their exclusion and segregation continues to get a platform on Facebook. By allowing such massive amount of hate and vitriol, Facebook is not just complicit in dehumanizing Indian Muslims, it is also shares responsibility for creating an atmosphere of fear in India.”
- Dangerous Individuals and Organizations: “Facebook continues to host ultra-right wing Hindu outfits like RSS, Vishwa Hindu Parishad, Hindu Swayam Sewak, Bajrang Dal, and its support system of fan pages, despite the violent proclamations against Indian Muslims of these groups and pages. Facebook continues to allow not only these organizations, but also their millions of supporters to praise and incite violence against Indian Muslims.”
Throughout the study, TLS reported its troubling findings to Facebook. Yet, consistent with the moderation failures described in the internal documents disclosed above, the platform’s automated processes repeatedly responded that the content did not violate any of Facebook’s Community Standards. Several of the most problematic posts were also submitted for human review; all of them remained on the platform.
Excerpt from the Face of Hatebook report: a post calling for violence against Muslims and stating “They all deserve to be kept in camps like China keeps Uyghur Muslim.”
One such post, a 2019 video of a speech in which influential Hindu religious leader Yati Narsinghanand calls for the “extermination” of Islam “from the face of the Earth,” has been viewed over 32 million times. The speech, and the bulk of its 144,000 comments, are in Hindi. The TLS team referred this post to Facebook’s Oversight Board, but the case was not selected for review.
The continued public availability of posts like this video beggars belief, given that Narsinghanand was ultimately arrested over his subsequent actions: at a December 2021 event in Haridwar, he called for violence against India’s Muslims and encouraged an “ethnic cleansing” similar to the attacks on Rohingya Muslims in Myanmar. Video of the event went viral, elevating hate-filled and violence-tinged rhetoric in the country to dangerous levels.
TLS’s additional research revealed that segments of Narsinghanand’s December speech remained publicly available on Facebook at the time of this post’s publication. Other similarly inciting videos remain on the platform as well, such as one from last year, viewed nearly six million times, in which Narsinghanand describes a fifteen-year-old Muslim boy who was beaten in his temple as a “poisonous snake.”
In light of Facebook’s inability to contain hate speech and inciting calls to violence on its platform, TLS is calling for Facebook to be shut down in India in order to protect millions of Muslims and other minorities from hate speech and dehumanisation on social media. The foundation is also urging Facebook shareholders not to turn a blind eye to these harms, but to consciously divest from Facebook and its businesses. A public petition demanding the release of the human rights impact assessment on India that Facebook commissioned in 2020 is available for signature here. Given the volatility of the situation, and the fact that these online harms are increasingly translating into offline violence, the time for action is now.
To help raise awareness of this pressing issue, TLS is hosting several conversations on hate speech and digital propaganda at the India on the Brink: Preventing Genocide summit, running February 26-28, 2022. The virtual event will bring together a variety of expert speakers to commemorate the 20th anniversary of the 2002 Gujarat pogrom, share insights into the warning signs of what may be to come, and put forth possibilities for a way forward that prevents genocide from occurring in India. Speakers include former UN Special Adviser on the Prevention of Genocide Adama Dieng, Executive Director of Genocide Watch Dr. Gregory Stanton, and international genocide experts such as Elisa von Joeden-Forgey. All are welcome to attend and encouraged to help amplify the risks facing Indian minorities. More details and sign-up information are available here.