This series of case studies illustrates what’s at stake in global majority countries if Big Tech companies fail to protect people and elections in 2024. Brazil is due to hold municipal elections between 6 and 29 October 2024.
On 8 January 2023, exactly one week after Luiz Inácio Lula da Silva’s third inauguration as President of Brazil, supporters of the far-right ex-president Jair Bolsonaro stormed government buildings in Brasilia. Yelling “God, homeland, family and liberty,” the rioters live streamed the mass destruction of buildings housing democratic institutions on platforms like TikTok, Instagram and YouTube.
Footage from the attacks showed a sea of people sporting the yellow, green and blue of Brazil’s flag overrunning and vandalising the National Congress and Supreme Federal Court buildings, among others. Brazilian authorities arrested more than one thousand people, and prosecutors filed criminal charges against 39 rioters within days.
But this didn’t need to happen. “Everyone saw Brazil violence coming. Except social media giants,” POLITICO’s Mark Scott wrote the day after the insurrection, positing that “Silicon Valley’s biggest names have again been caught asleep at the wheel….”
For months before the attack, experts had warned that the far right had been using encrypted platforms like WhatsApp and Telegram to organise and spread falsehoods – echoing the narrative of the US-based “Stop the Steal” campaign, which alleged that the 2020 US presidential vote had been rigged.
They also warned Big Tech about the risk of a US Capitol-style insurrection taking place in Brasilia and urged the companies to keep election-related safety measures in place through the first weeks of the new government. Many made these requests – which the companies ignored – because they had been monitoring disinformation campaigns targeting, among other things, the voting machines and the credibility of democratic institutions.
Just as in the US, rioters in Brazil planned the mayhem in the open, resorting to thinly coded messages across a plethora of platforms, including Telegram and Google Maps. Days before the violence, a map showing where buses from different cities could be boarded to reach the “party” in Brasilia on 8 January was shared in a public Telegram group with over 18,000 members.
“Only adults willing to participate in all the games, including target shooting of police and robbers, musical chairs, indigenous dancing, tag, and others” were allowed to attend, the post reportedly read.
Before a Brazilian judge ordered social media platforms to remove violent and misleading content, rioters live streamed their attack on YouTube. Misinformation about the vote and the uprising also continued to circulate on Twitter [since renamed X] and Facebook, further deepening the public’s distrust of state institutions and the electoral process.
The authorities were quick to act in the aftermath. President Lula called for a law that would hold social media platforms accountable for what their users share, including hate speech. The Superior Electoral Court ordered platforms to block a list of profiles carrying what was deemed terrorist content, under penalty of a daily fine of one hundred thousand reais. Bolsonaro became the subject of criminal investigations, and in June 2023 the Superior Electoral Court barred him from running for office for eight years.
Accused of undermining the country’s democracy by falsely claiming that Brazil’s electronic voting system was vulnerable to hacking, he was found guilty of abusing his political power in the lead-up to the 2022 election.
But despite state action, concerns echoing those from last year are being raised again. This time, experts say, the threat of misinformation campaigns and violence looms large over the October 2024 municipal elections.
Though strides have been made in the relationship between platforms and local authorities, the majority of Big Tech companies still don’t have rules in place that could prevent another 8 January-style insurrection, a 2023 study by the Brazilian think tank InternetLab found.
While most platforms, except Telegram, have electoral integrity policies that prohibit misleading information about, among other things, the dates, times and locations of the ballot, they lack rules specific enough to deal with post-election incitement to violence or attacks on the peaceful transfer of power – the kind of content that led to the January attack.
Companies also lack specific policies for ads that contain disinformation or question electoral integrity, the study found.
“The far-right now has a common enemy and they are organising themselves as never before to gain votes in those local municipalities, gain strength for the next national election,” said Nina Santos, researcher at the Brazilian National Institute of Science & Technology in Digital Democracy and director of Aláfia Lab.
With little evidence showing that Big Tech companies have changed their game plan, Santos wonders how the deluge of municipal election-related falsehoods can be stymied. “Nothing has changed, so we’re facing the same problems. And because we’re talking about municipal elections, this will gather less international attention.”
Santos also underlined that the structural and news-consumption issues that facilitated the January attacks still persist in Brazil today.
Those with limited means, she said, often purchase internet data plans under which encrypted platforms like WhatsApp and Telegram remain accessible even after they’ve used up their data allowance. With trust in mainstream media waning year on year, Brazilians have increasingly relied on these messaging apps for their news. As a result, the poorest individuals end up with only snapshots of information shared across these platforms, which are home to far-right misinformation.
“Another thing is the lack of attention in platform policies to local contexts – political violence, racism, misogyny are not taken into account. All of these policies are just translated into local languages from English; there is no understanding of the local context,” Santos said.