White Paper
November 2023
Introduction
By Mona Shtaya, Campaigns & Partnerships Manager (MENA) and Corporate Engagement Lead, Digital Action
The volatile situation in the Middle East and North Africa (MENA) region, coupled with online platforms’ lack of prioritization of human and democratic rights, has shrunk spaces for expression and assembly and intensified digital censorship.
Since the start of the democratic revolutions in 2010, events in MENA have underscored the profound impact of tech harms on people’s lives, prompting human rights organizations to step in and urge social media platforms to safeguard citizens and their democratic rights, including protecting their fundamental right to life. These harms have escalated over the past couple of years and intensified since 7 October 2023, when Israel started its offensive in the Gaza Strip in response to Hamas’s attack on the Israeli population. Social media platforms have since been inundated with misinformation, incitement to violence, and hate speech, particularly targeting Palestinians.
On 26 January 2024, the International Court of Justice issued an order establishing provisional measures requiring Israel to prevent and punish direct and public incitement to genocide against Palestinians, much of which has occurred on social media. However, Meta, among other social media companies, has not taken any significant steps to stop the flow of harmful and illegal content.
Social media platforms keep censoring Palestinian voices, while activists globally are silenced when raising the alarm about the situation in Palestine, especially when highlighting the Israeli military’s indiscriminate attacks on civilian areas and targets, as well as the mass starvation, killings, and displacement of Palestinians throughout Gaza. Censorship includes content takedowns, such as the removal of evidence of human rights violations, and shadow-banning, which restricts the reach and visibility of content that conforms to community standards, effectively obstructing users from accessing vital information. Additionally, some accounts face temporary or permanent suspension, further impeding users from sharing updates on developments in the region.
Tech harms extend far beyond Gaza and are resulting in severe oppression and human rights abuses in the wider region. For example, LGBTQIA+ individuals in the MENA region endure online harassment, doxxing, outing, extortion, hacked accounts, and entrapment by both security forces and private individuals on social media platforms, especially those owned by Meta, thereby jeopardizing their safety.
Discrimination against LGBTQIA+ people is often disseminated on social media platforms. Discriminatory and threatening social media campaigns can force some to flee their home countries, lose their jobs, endure domestic violence, and severely affect their mental health.
Social media failures have a long history of enabling and intensifying the persecution of protesters and opposition figures, as in Iraq and Tunisia. Meta’s lax content moderation policies facilitated hateful content, doxxing, and disinformation during the 2021 protests in Tunisia, where the emerging dictator defamed his critics. Moreover, during the 2018 Iraqi elections, defamatory content, gender-based violence, and disinformation videos led to the withdrawal of female candidates.
Against this backdrop of crisis and human rights abuse in many parts of the Middle East and North Africa, not only are companies failing to make additional investments to protect people’s lives and democratic rights, they are actually slashing jobs, including many safeguarding roles. For example, over the past year, Meta, Twitter, and YouTube collectively fired around 40,750 employees, many of whom were crucial contributors to trust and safety, ethical engineering, responsible innovation, and content moderation. As a result, tech companies have further undermined people’s safety and democratic rights at a time when these safeguards were desperately needed. Moreover, some tech companies, like Meta, have permitted paid advertisements endorsing atrocities in Gaza, including posts inciting the displacement of Palestinians and calling for the assassination of activists. This not only associates Meta with the spread of genocidal discourse but also implicates the company in profiting financially from it.
Tech companies collude with governments by responding to their requests to censor the documentation of war crimes or criticism of regimes. For instance, on 14 November 2023, Forbes Magazine reported that the Israeli Public Prosecutor submitted 9,500 removal requests to social media platforms, resulting in content removal in 94% of the cases. However, the companies have not provided transparent reporting on these removals nor on the nature of the requests received.
As a result of choices that consistently prioritise profit over human safety, tech giants have become tools for digital authoritarianism, enhancing authoritarian regimes’ mission to stifle dissent and activists’ voices. They continue to undermine people’s democratic rights and threaten their lives and safety, leaving persecuted populations invisible and unheard.
The situation in the MENA region underscores the urgent need for tech companies to address tech harms, including disinformation, incitement, and harassment. To safeguard people’s democratic rights, platforms must establish augmentation plans and independent oversight mechanisms, and allocate additional financial and human resources. Moreover, platforms should consider the local context and apply international human rights standards across their services. Transparency is key: platforms must provide users and other parties with clear information about resources, government requests (whether or not permitted by law), and other relevant matters. By understanding the complex landscape of online violations, platforms can allocate resources effectively and play a more responsible role in safeguarding people in the global majority.
Executive Summary
With over 65 elections scheduled around the world, 2024 will be the most significant year for elections in our lifetime. In the Middle East and North Africa (MENA) region, these include, but are not limited to, legislative elections in Syria, Jordan, and Kuwait.
The region has undergone significant changes in the twelve years since the democratic revolutions. Protesters took to the streets in 2011 to demand freedom, justice, and accountability in countries including Tunisia, Egypt, Libya, Bahrain, Yemen, and Syria. Later waves of protests, with a particular emphasis on corruption and governance, unfolded again in 2019 in countries including Algeria, Lebanon, Iraq, and Sudan. Though some of these contexts saw limited social and political openings, others were ravaged by civil war and mass atrocities, and authoritarianism became re-entrenched. Changes did not unfold on the ground alone, but also in the online space, as more and more actors moved onto social media platforms and hacking, phishing, and surveillance technologies developed.
Across this period, elections have remained highly contentious moments, whether or not they have been free and fair. They have presented opportunities for authoritarian leaders to consolidate control; they have created vacuums capitalised on by militias and other violent actors; they have created space for opposition figures and civil society to come together and coalesce around shared values; and they have presented an opportunity for discourse and debate among the everyday public. With the proliferation of technology and social media use specifically, much of this contestation has happened online. And so, with elections upcoming, it becomes particularly important to understand the nature of what is unfolding online in the lead-up to, during, and after elections. Understanding this empowers civil society, human rights organisations, and the digital rights movement to protect the integrity of the vote and, most importantly, to protect the citizens on the ground choosing to engage or disengage with elections.
Looking at a number of examples from elections that have taken place in the MENA region since 2019, this paper will first unpack the types of online harms that have accompanied elections, categorising them and identifying key perpetrators. It will then investigate and present the offline harms that are correlated with or caused by these online harms. Finally, it will make tangible recommendations for interventions that platforms can take in support of elections and social media users.
Online election violations and harms
During recent election periods in the MENA region, human rights groups, election observers, activists, journalists, and researchers have documented a range of online violations and harms on social media platforms. The most common of these threats have been misinformation and disinformation, foreign meddling in elections, defamation campaigns, violent threats, hate speech, gender-based violence, and doxxing, which have forced some candidates to withdraw their candidacy. Technology platforms have often done little to prevent these harms and violations or to adequately counter their impacts.
Disinformation and gender-based violence have been systematically used to smear female politicians and undermine their candidacy in elections. For instance, during the 2018 parliamentary elections in Iraq, candidate Intisar Ahmed Jassim withdrew from the electoral race after a fabricated sexual video was maliciously attributed to her and circulated widely on social media.
Perpetrators of online harms during election periods include state, non-state, and semi-state actors.
State actors include governments working within their borders and foreign governments with political and economic interests in the region. For example, in 2019, during Algeria’s popular protest movement, there was a significant crackdown on freedom of speech, leading to the arrest of more than 140 individuals between June and December of that year. Among those affected was cartoonist Abdelhamid Amine (Nime), who was imprisoned for sharing a satirical drawing on Instagram that portrayed Abdelmadjid Tebboune as the “chosen one” and depicted the Army Chief of Staff, Ahmed Gaïd Salah, placing a golden slipper on his foot. Nime’s one-year sentence was widely criticized as a severe violation of freedom of expression.
Non-state actors include armed groups and militias, political parties, election candidates, politicians and their supporters. For example, during the 2022 Lebanese legislative election campaign, Lebanese Forces activists and supporters of the Free Patriotic Movement and Hezbollah were responsible for some of the most popular hashtag campaigns on Twitter, including those targeting politicians and political parties during the election period, according to analysis by Maharat Foundation.
Semi-state actors have also been involved in committing online harms. These are actors that are close to the state but are not (officially) part of the state and its institutions.
Mis- and disinformation
Misinformation and disinformation pose a challenge to free public debate, particularly during major political events such as elections.
During the 2019 presidential and legislative elections in Tunisia, political mis- and disinformation spread on social media, particularly Facebook. A report by Atide and Democracy Reporting International, which monitored 291 political Facebook pages with high levels of political engagement during the election campaign, found that while “the official Facebook pages of political parties and candidates mostly complied with electoral regulations (no use of hate speech, respect of spending regulations during the campaign, respect of electoral silence period), unofficial pages and networks largely ignored them, spreading defamation and disinformation.” Many of these pages also ran targeted ad campaigns in support of certain candidates.
Examples of mis- and disinformation spread online included reports about presidential candidates withdrawing from the race to support other candidates, reports about political figures or celebrities endorsing certain candidates, fake polls, and rumors targeting the credibility of the electoral process (one rumor that circulated online was that erasable pens were used at polling stations).
Facebook, which has since rebranded as Meta, took certain measures to counter election misinformation and disinformation. For example, it announced the removal of 265 Facebook and Instagram accounts, Facebook Pages, groups, and events involved in “coordinated inauthentic behavior” targeting multiple African countries, including Tunisia, and originating from Israel. Using data made available by Facebook about the activities of these pages, the independent media organisation Inkyfada investigated 11 pages targeting Tunisia. It found that while a number of pages and entities were targeted, one candidate in particular, media mogul Nabil Karoui, was spared the pages’ criticism and even benefited from positive content.
Yet the company’s actions were insufficient. For example, the platform provided limited information in its ad library about political ads during the election campaign, making it impossible for the election regulator and election monitoring groups to adequately track election ad spending, which at the time was capped at 10,000 TND for presidential candidates and banned entirely for parliamentary candidates.
Online defamation and harassment
During election periods, a range of actors such as candidates, activists, journalists, and human rights defenders are targeted with defamation campaigns and threats.
This was, for instance, the case during the 2022 Lebanese election where independent candidates, in particular, were subjected to defamation and harassment on social media. Basel Salah, a university teacher and independent political activist, was subjected to a campaign threatening him on social media for criticising Hezbollah and supporting independent activists. His work and home addresses were doxxed. Candidates of the independent list “Together for Change” were subjected to defamation campaigns by Hezbollah supporters on social media aimed at discrediting them. For instance, independent candidates faced false accusations of obtaining financial support from the US embassy to “serve” foreign and Israeli interests.
These types of violations can be particularly dangerous in contexts of political repression, sectarian politics, or conflict, where online threats can materialize into actual violence and even murder. In Iraq, for example, political violence against journalists, political activists, protesters, and civil society can lead to kidnappings and killings. On 19 August 2020, Dr. Reham Yacoub, a prominent Iraqi female activist deeply involved in the local protest movement since 2018, was killed in Basra. Her death followed a sustained campaign of defamation and incitement against her on Facebook and other social media platforms, compounded by a death threat she reportedly received in October 2019 on her mobile device from an unidentified phone number.
Furthermore, in June 2022, Kais Saied escalated an online campaign on Facebook and Twitter to defame Bochra Bel Haj Hamida, a female judge, by alleging her involvement in an extramarital affair, as part of a broader effort to suppress judicial independence.
Discrimination: Gender-based violence and racist speech
In general, women, minorities, and gender non-conforming individuals active in politics and civil society are at increased risk of hate speech and online violence. These threats increase during election periods and tend to disproportionately affect women politicians and women running for office.
This is, for example, the case for Iraqi women, where “gendered hate speech and misogynistic disinformation have resulted in a number of female candidates withdrawing from their campaigns” in previous election periods, according to a 2022 report titled “Online Violence Towards Women in Iraq.” The report documented the use of deepfake videos depicting women candidates in fabricated sexual situations, misogynistic terms like “whore,” the doxxing of private photos and videos, and attacks on women for how they choose to dress; this content was spread via Facebook and Instagram. In ultra-conservative societies such as Iraq and others in the region, these tactics are deeply detrimental to women and their participation in public life, including elections, resulting in exclusion and, in some cases, violence and even murder.
In another case combining gender-based discrimination and anti-Black racism, Dalia Ahmad, a TV host working with Aljadeed TV in Lebanon, was subjected to a racist social media campaign based on her race and Sudanese origin after criticizing Lebanese politicians, including the Hezbollah leader and the country’s president, on her TV show during the 2022 election period and describing them as “crocodiles.” Some of the racist and sexist terms used to target her on Facebook and Twitter included “black witch,” “black dog,” “prostitute,” and “black bastard.”
Furthermore, in Tunis, human rights activist Rania Amdouni, a feminist, queer activist, and artist, faced a traumatic ordeal in 2021 during anti-government protests triggered by rising unemployment. Targeted by supporters of President Kais Saied’s authoritarian regime, Amdouni was subjected to a hateful social media campaign on Facebook that included derogatory comments, threats, and doxxing. Despite seeking help from the police, she was arrested and wrongfully charged with insulting a public officer. Amdouni’s life was in danger, and she eventually fled to Paris after multiple suicide attempts.
The lack of adequately resourced and trained content moderation teams at social media platforms is blamed for exacerbating the spread of this type of content and its harms.
Privacy violations
Privacy violations during previous election periods have included doxxing and the use of personal information without data subjects’ consent.
For example, months before the 2022 elections in Lebanon, WhatsApp users received unwanted campaign messages and were added to groups without their consent. In an example of gender-based doxxing, a private video of Rebwar Heshu Ali, a Kurdistan Democratic Party candidate in 2018, attending a birthday party in a dress her detractors deemed too short, was accessed and shared on social media without her consent. In a conservative country like Iraq, where women’s behavior and dress are policed by society, the video’s circulation during the election period was seen as a move aimed at defaming and shaming her as a woman candidate.
How are offline harms during election cycles reinforced online and vice versa?
Many elections in the region are hardly democratic processes, often lacking fairness and transparency. In some cases, elections are used as a tool by those in power to lend legitimacy (or the illusion of it) to their rule and political programs. Journalists, activists, election observers, and civil society often find it difficult to call out election fraud and the lack of fairness and transparency in the process, as their work and activities can be met with hate speech, rumors, conspiracy theories, and threats aimed at discrediting and silencing them. Misinformation, disinformation, and malinformation also make it difficult to challenge pro-government narratives around elections.
Online violations, campaigns, and influence operations are particularly harmful and worrisome in the context of elections held during political transitions. The 2022 Tunisian legislative election and constitutional referendum are a good example. Both were held in a political climate characterized by diminishing freedoms following Kais Saied’s power grab in July 2021 and his dissolution of the parliament. As Kais Saied was undoing Tunisia’s democratic process and paving the way for an “ultra-presidential” system that expanded his powers and weakened the parliament, his supporters on social media resorted to harassing his critics, including judges, citizens, journalists, politicians, and activists. This in itself contributed to an environment that was not conducive to free and fair elections.
Supporters of Kais Saied have escalated their harassment tactics against his critics, resulting in instances of doxxing that not only jeopardize the lives of activists but also take a toll on their mental health, pushing some to attempt suicide.
Internet shutdowns in the MENA region also create fertile ground for the spread of disinformation. These shutdowns deny people their right to access information and prevent journalists from doing their job of fact-checking misleading information. They occur particularly during election periods, as in the case of Iraq, or during pro-democracy protests, as seen in Algeria. This is especially dangerous because such countries lack a robust media landscape: the few existing independent media outlets may face severe restrictions from the regime, while traditional outlets are controlled by the government itself.
Some of the online harms and violations listed above disproportionately target and affect candidates and players that are already marginalized by the political and electoral systems such as independent candidates, smaller parties, and women.
For example, as mentioned previously, independent candidates in the 2022 Lebanese elections were particularly targeted in online propaganda, defamation, and hate speech campaigns by the supporters of the political establishment.
Gender-based violence is also reinforced online, with women candidates in particular facing the brunt of harassment, including sexist and misogynistic speech. These tactics are deeply detrimental to women and their participation in public life, including elections, resulting in exclusion and, in some cases, violence and even murder.
Recommendations and Conclusion
The upcoming elections in the Middle East and North Africa (MENA) region present a critical juncture for social media platforms to address tech harms occurring during these electoral processes. The complex landscape, marked by previous instances of misinformation, disinformation, harassment, and privacy breaches, necessitates proactive measures to safeguard the democratic process. Our recommendations aim to empower platforms to play a crucial role in supporting election integrity and protecting users.
Additionally, to effectively counter online harms during elections, platforms must understand the diverse range of threats, including misinformation, disinformation, harassment, and privacy violations. An in-depth analysis of previous election cycles in the MENA region reveals common patterns and tactics used by state, non-state, and semi-state actors.
In light of the aforementioned, and based on the Year of Democracy campaign asks, this paper makes a series of recommendations to social media companies to combat digital violations taking place in the lead-up to, during, and after MENA elections and to mitigate the online and offline harms occurring as a result. These recommendations intend to empower platforms to play a solutions-oriented, responsive role in supporting the integrity of elections and in protecting everyday citizens who come under increasing threat during these contested moments.
First, it is critical for platforms to establish augmentation plans six months ahead of a scheduled election. This includes designating an increased number of human content moderators with cultural competency; providing transparent corporate reporting on political campaign financing; and increasing the number of trusted partners who can weigh in and provide critical information on the political, social, and economic situation of the country, as well as on key stakeholders engaging in elections and/or election-related discourse and activity.
Second, social media platforms should establish independent, technical oversight mechanisms that address the particular online harms taking place on platforms. For example, with misinformation and disinformation spreading disproportionately through the mass-forwarding of messages, WhatsApp should place restrictions on the ability of users to automatically mass-forward content in the lead-up to, during, and shortly after elections.
Third, social media platforms should designate financial and in-kind resources that shore up civil society’s capacity to document, report on, and analyze online and offline harms in relation to elections. In doing so, these platforms will have access to an updated understanding of how online violations take place and in turn, will be able to develop updated protocols that more effectively address these violations.
Fourth, platforms should increase transparency by allowing users to see the ads that politicians and campaigns run on their platforms. Around elections, these platforms should give users greater visibility into their ad libraries and allow them to opt to see fewer political and social ads. In doing so, platforms will empower social media users to understand who is trying to influence the vote, as well as to decide whether or not they would like to see or engage with these campaigns.
In closing, elections in the MENA region present an unparalleled moment for social media companies to understand how their platforms are being used, the types of violations occurring and the perpetrators involved, and the relationship between these online harms and the offline harms experienced by users. This knowledge can be game-changing for how social media companies assign resources; hire staff, contractors, and consultants; and understand their role in the global community.
Contact: If you’d like to find out more, get in touch at [email protected]