This series of case studies illustrates what’s at stake in global majority countries if Big Tech companies fail to protect people and elections in 2024. South Africa is due to hold national and provincial elections between May and August 2024.
In many ways, social media has made Yasmin Rajah’s work and that of the organisation she heads, Refugee Social Services, much easier. It facilitated contact with clients and helped the organisation reach a wider audience. But to many of the migrant workers, asylum seekers and refugees from low-income African countries who use the group’s services, social media platforms have become a means of oppression.
“People are scared, they’re not sure whether they should go to work. A lot of people are working on the streets and are worried they’re not safe,” Yasmin said of the social media posts people she’s been working with have been exposed to. “What we see is the targeting of people, it has come from people with power. In recent years, migration has come up as leverage before elections. Politicians are tapping into this now before the elections.”
Social media platforms have become a cesspit of hate speech, with hashtags like #PutSouthAfricansFirst and #ZimbabweansMustFall trending frequently and Facebook, WhatsApp and X (formerly Twitter) posts blaming migrants for South Africa’s socio-economic ills, including lack of access to services, poverty, unemployment and crime.
“People write that we should go home, that this is not our country, that we are bringing crime … the messages spread so fast,” Nora, a domestic worker from Zimbabwe living in South Africa, told the Thomson Reuters Foundation in 2022, adding: “These messages can lead to violence.” The woman reportedly asked to remain anonymous, fearing for her safety.
While xenophobic sentiments have been simmering in the country for years, they came to a boiling point in 2022 with the launch of Operation Dudula. Meaning to “force out” in the Zulu language, this social media hate campaign spilled over into the streets of Johannesburg and elsewhere, unleashing violent protests, arson of migrant-owned businesses and leading to the murder of a Zimbabwean national.
Rights watchdogs have been sounding the alarm ever since, with the UN warning that the country was “on the precipice of explosive violence”. And as the 2024 general election approaches, “there is a great potential for and concern that the targeting of immigrants as a political tactic will be mobilised to garner support from dissatisfied South Africans”, a recent report by the Centre for Analytics and Behavioural Change said.
But to date, neither South Africa’s politicians nor social media companies like Meta, YouTube and TikTok have done enough to curb the spread of inciting language and violence, according to Sherylle Dass of public interest law firm Legal Resources Centre.
“The gains we have made post-apartheid have come undone by the escalating levels of xenophobic violence against migrants, asylum seekers and refugees predominantly fuelled by targeted political online campaigns on social media platforms,” Dass said. “In the run-up to our 2024 elections it is imperative that social media platforms take proactive steps to protect our democracy and not be complicit in amplifying xenophobic hate against migrants, asylum seekers and refugees.”
Activists and rights groups have been systematically criticising Big Tech companies for allowing rights abuses in global majority countries and underinvesting in content moderation in non-English languages.
Teams reviewing content in non-English languages are typically understaffed and have little to no understanding of local contexts, allowing hate speech to spread like wildfire across social media platforms. Non-English languages have also proved a stumbling block for automated detection systems used by the companies to moderate content and approve online ads.
South Africa is a case in point, as revealed in an investigation by Legal Resources Centre and international non-governmental organisation Global Witness.
In June 2023, the groups sought to test the platforms’ online safety efforts by submitting extreme and hate-filled adverts for approval by Facebook, TikTok and YouTube.
They prepared ten adverts – which were withdrawn after approval and never published – based on real-life content in English and translated into Afrikaans, Xhosa and Zulu. The ads called on the South African police to kill foreigners and encouraged violence through “force” against migrants. All three social media platforms approved the ads, with the exception of a single ad rejected by Facebook.
“This isn’t the first time the platforms have failed to enforce their own policies on hate speech. Since 2021, we’ve conducted the same investigation more than 10 times in Brazil, Ethiopia, Ireland, Kenya, Myanmar, Norway, and the USA. The results uncovered stark differences in the platforms’ abilities to detect content and large divergences in how users around the world are treated,” Global Witness wrote in their recent blog post about the investigation.
Company inaction in South Africa could have tragic consequences ahead of the 2024 ballot, according to a representative of a local civil society organisation who asked that their name be withheld for fear of reprisals.
“The failure of Big Tech companies to address and curb the spread of anti-migrant hate speech poses significant risks to both people and democracy in South Africa. If left unchecked, these hate campaigns can further polarise society, erode trust in democratic institutions, and undermine the credibility of elections. It can also lead to the marginalisation and disenfranchisement of vulnerable populations, hindering their participation in the democratic process,” they said.