This briefing was authored by Global Coalition for Tech Justice members India Civil Watch International, The London Story and Digital Action.
The world’s biggest general election kicked off on 19 April 2024 in India, where nearly one billion people were eligible to cast their ballots. The vote was preceded by a polarising campaign, during which the ruling Hindu nationalist Bharatiya Janata Party (BJP) sought to secure a third consecutive term. Spread over a seven-phase voting period, it concluded on 1 June 2024.
In power for a decade, the BJP, led by Prime Minister Narendra Modi, hoped to win by a landslide, buoyed by welfare payouts to the most needy and a campaign steeped in violent rhetoric against Muslims. But the election results, announced on 4 June 2024, shocked the world as the BJP lost its single-party majority in the Lok Sabha (the lower house of Parliament).
Months after the ballot, our analysis of evidence from myriad investigations and media reports reveals that, despite the election being lauded as a “successful” exercise in democracy in which the BJP failed to win its expected landslide, Muslim citizens lost their lives and election laws were broken amid Big Tech compliance and platform safety failures.
India’s ballot and the post-election period were steeped in offline and online hate speech and accompanied by violence against Muslim voters. Such violence, which social media platforms have helped normalise, is not a hallmark of democracies.
Neither India’s electoral laws nor the tech companies’ own policies appear to have mattered throughout. Even though India’s laws prohibit references to religion and the creation of “communal disharmony” during election campaigning, these rules were flatly ignored by politicians and Big Tech companies alike. Social media platforms have operated with impunity in deciding which democratic processes they want to follow and which they want to ignore – it is clear India’s election wasn’t a priority.
Platforms also failed to meaningfully engage with civil society groups throughout the election period and made next to no effort at transparency – there were no announcements on how election policies were being implemented and no stocktaking after the ballot concluded.
These systemic failures are at the heart of tech harms to democracy and human rights across the globe. They’re also at the core of the platforms’ toxic business model, which incentivises harmful and hate-filled content and prioritises ad revenue over democratic integrity and human lives. Unless and until that model changes, the already devastating impacts of social media platforms on people and elections will only become more severe.
Social media landscape
Home to over 460 million social media users, India is among the largest and most significant social media markets globally. Platforms have played a key role in political campaigning since at least 2019, when a third of the country’s population had access to social media for the first time.
Meta-owned WhatsApp is the most used social network and messaging app in the country, followed by YouTube, Facebook and Instagram. With almost half a billion active users in India in 2024, the country is the encrypted messaging app’s largest market. While all platforms have been exploited by bad actors to spread falsehoods and incite violence, WhatsApp and YouTube were at the forefront of electoral disinformation and hate speech.
Some ascribe YouTube’s prominence in India to the authorities’ 2020 ban on TikTok, which until then had 200 million users in the country. The ban prompted many younger social media users to turn to YouTube and Instagram.
India has also emerged as a petri dish for tech harms and their often life-ruining consequences – the social media trends and adverse impacts of tech platforms’ failure to secure information integrity in India have been harbingers of things to come elsewhere in the world. Several factors have contributed to this: 1) India’s ever-growing online population; 2) immense investment in ‘digital’ by the Indian government and by both domestic and foreign companies; 3) India being a low-rights jurisdiction with poor on-the-ground protection of human rights; and 4) the absence of robust regulatory controls such as a data protection law, online harms regulation, and health-tech and financial credit regulations.
Social media trends: Online hate, real-world violence
BJP politicians and other Hindu nationalist hardliners have been inciting violence against India’s Muslims offline and online for years, but their hate speech intensified in the lead-up to and during the election. Online hate in India predates the 2024 elections and has in some cases led to real-life violence. In 2013, for example, a misleading video incited riots in northern India, and five years later a spate of mob lynchings was linked to messages circulating in WhatsApp groups.
Additionally, Big Tech companies have been part of India’s political landscape for almost a decade – long enough to understand the track record of the ruling BJP, a supremacist party with a documented history of anti-Muslim hate speech, in stoking social polarisation and weaponising vitriol to win votes.
Despite this knowledge, tech platforms routinely failed to enforce their own content policies. They neither stymied nor removed posts fomenting enmity or hatred between different classes of citizens on grounds of religion, race, caste, community or language – posts that violated India’s election laws. These failures have meant that videos, ads and other social media posts attacking India’s Muslims and calling for violence against them have become a fixture, fostering an environment of hate and societal approval of dehumanisation. This in turn has led to Muslim men and women being violently attacked, and in some cases tortured and killed.
The impacts of Big Tech failures have been long-lasting, continuing well after the election concluded on 1 June 2024, including in states ruled by the opposition Congress party, as part of a purported retaliation for the BJP’s loss of its majority. Families had their homes razed to the ground, and at least three men were beaten to death after being tortured. As with past attacks against Muslims, the violence was often recorded and shared on social media, where it went viral.
While much of the anti-Muslim violence appears to be a result of a years-long environment of hate, offline and online, there is a growing body of evidence linking social media to specific harms perpetrated by those espousing supremacist Hindutva views. The attacks are often documented by perpetrators or onlookers and shared online, where they cause further distress and suffering to members of already marginalised communities.
In August 2024, vigilantes from the right-wing group Hindu Raksha Dal (HRD) assaulted Muslim families in two separate incidents in northern India – both recorded and shared on HRD’s social media accounts. “They were so angry, went to everyone’s house, and asked if they were Muslims or Hindus,” a witness of one of the attacks reportedly said. “Hindus were spared, and Muslims were beaten brutally.”
The violence took place after BJP politicians and other hardliners called on social media for revenge for violence against Hindus in Bangladesh, which followed the flight of Bangladesh’s Prime Minister from the capital, Dhaka, on 5 August 2024 amid a wave of deadly anti-government protests.
BJP’s Nitish Rane reportedly called for outright murder, tweeting: “If Hindus are targeted and killed in Bangladesh, why should we allow even one Bangladeshi to breathe here. We will also target and kill.” At the time of writing, the tweet appears to have been deleted.
Another BJP hardliner, Kangana Ranaut, drew comparisons between India and Israel in an X post, saying that “now we are also covered by extremists. We must be ready to protect our people and our land.” “Peace is not air or sunlight that you think is your birthright and will come to you for free…Pick your swords and keep them sharp, practise some combat form everyday,” read Ranaut’s post, which had 1 million views at the time of writing.
Five days later, HRD thugs led by Daksh Chaudhary were seen abusing Muslim residents of Delhi’s Shastri Nagar district. “Go to your Bangladesh!” Chaudhary shouted in a video shared on social media. (Chaudhary is known for hate speech and violence. In June 2024, after the ballot, he was arrested for abusing residents of the northern city of Ayodhya, where the BJP had lost.) The second assault by an HRD mob took place on 9 August 2024, when thirteen men attacked Muslim families in a shantytown in Ghaziabad, a city north of the capital Delhi. This attack was also filmed and reportedly first shared on the group’s social media accounts, then by others. One video of the assault shared on X garnered over 1 million views.
Other examples include offline and online calls for the eradication of mosques by vigilante influencers like Preet Sirohi, who repeatedly files complaints with local authorities claiming that mosques he wants demolished were built illegally. In June 2024, he was behind a campaign that saw two mosques in Delhi demolished in a single week.
Sirohi’s hateful campaign was continuing at the time of writing. In October 2024, he vowed to keep demolishing mosques, claiming they are illegal, and shared inflammatory posts calling for their destruction. “Those who remain silent, what face will you show to God, where were you busy when the tyrannical encroachers were occupying the revered motherland?” reads his October 2024 tweet.
The details: How platforms helped foster a violent environment
Content moderation and policy enforcement
Modi peddled an Islamophobic conspiracy theory around the so-called “vote jihad”, which echoed the alt-right “great replacement theory” – implying Muslims were in India to replace Hindus – further disenfranchising India’s Muslims and dissuading them from voting. The narrative echoed across all social media platforms, with pro-BJP Facebook pages becoming a “ubiquitous part of Indian election campaigns as political parties and their leaders seek to directly connect with their followers,” according to Usha M. Rodrigues, a professor at Charles Sturt University.
Facebook was at the heart of a countrywide hate-filled campaign. Researchers examined posts shared across 812 Facebook pages and 15 Facebook groups between 1 March and 10 May 2024. In that sample alone, they found over 50 posts inciting hostility between Hindus and Muslims, with a large share of the content boosting Prime Minister Modi’s speeches.
“In most cases, it was only after large and blatant policy failures were made public through reports and press articles that companies took action. In multiple cases, that action came too late to protect the election,” the diaspora organisation India Civil Watch International (ICWI) said.
While Meta’s blog post boasted that content moderators covered 20 local languages in India – out of the country’s 780 – this obscured the fact that the measures were neither new nor sufficient to protect the integrity of the elections. In fact, the tech companies’ election measures have remained largely the same over the years, “despite mounting evidence that they must be adjusted”, said corporate accountability watchdog The London Story.
Similarly, WhatsApp appears to have failed to implement its content moderation policies, allowing BJP-affiliated accounts to spread conspiracy theories, anti-Muslim rhetoric and hate speech free from public scrutiny. (The party reportedly operates at least 5 million WhatsApp groups in the country.) This included messages glorifying police brutality, which spread like wildfire within minutes.
Failures of the tech platforms also resulted in electoral misinformation, with conspiracy theories and falsehoods about politicians and opposition candidates circulating on Facebook, WhatsApp, YouTube and X.
YouTube failed to remove policy-violating content, according to a report by ICWI, Dalit Solidarity Forum, Indian American Muslim Council, Hindus for Human Rights and Tech Justice Law Project.
The groups reported to YouTube that 26 videos, published between April and May 2024 by the far-right Sudarshan News YouTube channel, violated the platform’s hate speech and misinformation policies – they contained conspiracy theories, hate speech dehumanising Muslims and misinformation about the opposition Congress party. Instead of taking the videos down, YouTube shared ad revenue with the channel.
The lack of meaningful engagement with civil society groups has been a hallmark of the 2024 election season in India and beyond. Multiple members of the Global Coalition for Tech Justice who monitored the social media space in the lead-up to and during the ballot reached out to YouTube and Meta to act on violating and harmful content and take it down. But in most cases, they said, the platforms failed to take any action, and when they did, the hate speech or disinformation in question had already spread far and wide.
X’s crowdsourced fact-checking programme Community Notes, which allows X contributors to write context notes for misleading tweets, also proved to be a complete fiasco, leaving tweets containing outright falsehoods without any notes or disclaimers. In one case, a verified X user falsely claimed that a non-existent Dubai-based Association of Sunni Muslims was financially supporting Muslims travelling to the southwestern state of Karnataka to unseat the BJP and Modi. No Community Note accompanied the tweet, even though fact-checkers had flagged it as containing false information.
Failure to enforce ad policies
Social media platforms, including YouTube, also failed to implement their own ad policies. Corporate accountability groups tested YouTube’s ad review and approval systems mere days before the election kicked off, only to find that they did not work.
Global Witness and Access Now submitted 48 ads in English, Hindi and Telugu, all breaching YouTube’s advertising and election misinformation policies – the ads contained disinformation meant to suppress voter turnout among youth and women, as well as content inciting violence against the Muslim minority, all based on falsehoods already circulating in India. Although YouTube purportedly reviews all ads prior to publication, it approved every one of them; the groups withdrew the ads after approval so that they would never run.
YouTube was at pains to defend itself, saying the ads would have been reviewed by a moderator before going live. Yet when Global Witness tested YouTube’s ad approval system in English and Spanish ahead of the 2022 US elections, the video-sharing platform rejected all the ads at the first stage and suspended the host channel. Such glaring disparities in YouTube’s treatment of harmful content prove that the company has the means to enforce its policies but chooses not to do so in Global Majority countries, even in major markets like India – something civil society groups have been flagging for years.
Breaches of campaign financing laws
India’s campaign financing laws may have been violated due to a lack of transparency across social media platforms.
Political advertising in India, including on social media, is subject to a number of content and financing regulations. To skirt the rules and evade the scrutiny of the Election Commission, political parties and actors allegedly resorted to surrogate and shadow advertising, undermining the integrity of India’s ballot.
Surrogate and shadow advertising involve an individual or a group placing social media ads on behalf of a candidate or a political party without disclosing that the ads are directly funded by that party. Surrogate advertisers are typically identifiable by their tax number, but shadow advertisers are less transparent and further obfuscate who is behind purchased advertisements.
Corporate accountability groups Ekō, The London Story and ICWI reviewed Meta’s Ad Library in the lead-up to India’s vote and uncovered a network of 22 far-right shadow advertisers who purchased ads with memes and videos attacking the opposition, fomenting violence against the Muslim minority and promoting the BJP. The adverts amassed more than 10 million interactions over 90 days and appeared to breach Meta’s own content and transparency policies.
The 22 shadow pages spent over $1 million on advertisements promoting the BJP, and Prime Minister Modi in particular, accounting for almost 22% of total election advertising spend during the 90-day period before the ballot.
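By implication – assuming the reported figures are accurate – total election advertising spend on Meta’s platforms over that 90-day window came to roughly $4.5 million ($1 million ÷ 0.22 ≈ $4.5 million), meaning more than a fifth of it flowed through pages whose owners could not be identified.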
The researchers could not establish who owned the pages due to a lack of verifiable information: the pages listed no active phone lines, and emails to the addresses indicated on them went unanswered.
Meta’s failures to implement its own policies on political advertising go beyond the national election. A group of rights organisations, including Global Coalition for Tech Justice member ICWI, documented how Meta allowed the BJP and its shadow pages to violate India’s electoral law and Meta’s own policies ahead of the November 2024 assembly elections in the eastern state of Jharkhand. The researchers identified at least 87 shadow pages purchasing ads spreading BJP narratives on Meta’s platforms. These pages have been churning out nearly five times more ads and garnering four times more impressions than the official BJP Jharkhand page.
Generative AI content and system failures
Despite widespread fears that AI-generated content would inundate social media with falsehoods, deepfakes in India were predominantly used to troll rather than to wage information warfare.
This was a blessing for the tech platforms, which, despite their pledges to prioritise the fight against generative AI misinformation, failed a simple stress test by civil society groups – proof that some platforms cannot detect AI-manipulated content.
In May 2024, corporate accountability groups Ekō and ICWI created inflammatory and Islamophobic political ads and submitted them to Meta’s Ad Library – the repository for ads published on Facebook and Instagram. Some of the ads contained AI-manipulated images, and all were created on the basis of real hate speech and disinformation prevalent in India. One ad, for example, used inflammatory language targeting Muslims and read: “Hindu blood is spilling, these invaders must be burned”.
Of the 22 ads the groups submitted in English, Hindi, Bengali, Gujarati and Kannada, Meta approved 14 – all containing generative AI images. The ads were approved despite containing harmful content that violated the platform’s policies, and despite the Big Tech giants’ pledges to prioritise tackling harmful generative AI content. (The adverts were submitted halfway through the six-week voting period and withdrawn after approval so that they would not circulate.)
Researchers monitoring social media throughout the six-week voting period also raised concerns about the lack of clarity around the fact-checking of political advertising across Meta’s platforms. Some ads flagged by fact-checkers, they said, remained on the platforms, while others carried no labels indicating they had been fact-checked, creating further confusion.