How AI systems are dehumanising Palestinians

By Mona Shtaya – Campaigns and Partnerships Manager (MENA) and Corporate Engagement Lead


Artificial intelligence (AI), machine learning systems and other automated decision-making tools often reflect inherent biases found in the world around us. This happens when AI models ingest already biased datasets or have discriminatory algorithms baked into them, amplifying existing inequalities and eroding the fairness and effectiveness of AI applications.

The dehumanisation of Palestinians by AI systems is a case in point. Palestinians have been living under Israeli apartheid, which undermines their freedom, rights and dignity. At the beginning of the ongoing genocide in Gaza, activists asked ChatGPT whether Palestinians deserve to be free. The AI chatbot responded that “the question of whether Palestinians deserve to be free is a matter of perspective and is deeply rooted in a complex and contentious political conflict. Different people, governments, and organisations have varying opinions on the issue.” However, when ChatGPT was asked the same question about Israelis, it answered: “Yes, Israelis, like any other group of people, deserve to live in a free and secure environment.”

AI-generated images have also played a role in stereotyping Palestinians as people living exclusively in a state of war or tied to terrorism. For example, since Israel’s attack on Gaza began in October 2023, the US software company Adobe has been selling AI-generated images depicting the current genocide in the Gaza Strip. The images, which users have been sharing across social media platforms, often lack clear labels indicating they’re AI-generated, potentially spreading mis- and disinformation. A search for “Israel Hamas war” on Adobe Stock generates numerous war-related images, but some are only marked as AI-generated in fine print, potentially misleading viewers into believing these are actual photographs from the war.

Other AI image generators also contribute to portraying Palestinians as people living only under war. One such case was discussed during a June 2024 session on “Generative AI: Dehumanization & Bias Against Palestinians & Marginalized Communities”, part of the Palestine Digital Activism Forum organised by Global Coalition for Tech Justice member 7amleh. Speaker and artist Amira Kaawash showed search results for “Palestinian child in a city” and “Palestinian person in a city” on AI-powered platforms. The search terms turned up images depicting individuals only in war scenarios, suggesting that life in Palestine is solely defined by conflict. Such results play a critical role in propagating narratives that dehumanise Palestinians and in spreading further disinformation, because they often lack clear labelling.

The same bias appears to be baked into social media platforms, apps and their algorithms. For example, during the ongoing genocide in the Gaza Strip, Meta has systematically discriminated against Palestinians by taking down their content, even when it documents war crimes, and by deprioritising their posts and accounts – a practice known as shadow-banning. Meanwhile, Meta has, in breach of its own policies, allowed content that incites violence and hate speech against Palestinians across its platforms.

In fact, journalists, activists and rights groups documented an escalation in Meta’s discriminatory treatment of Palestinians and Palestine-related content during Israel’s genocidal campaign in Gaza. For instance, when Meta’s WhatsApp AI sticker generator was prompted with terms like “Palestinian,” “Palestine,” or “Muslim boy Palestinian,” it often produced images portraying guns or boys with guns. However, prompts for “Israeli boy” generated innocuous images of children playing soccer or reading.

AI’s adverse impact on the lives of Palestinians is but one example of the many ways in which AI systems exacerbate discrimination and further dehumanise already marginalised communities. Throughout the 2024 elections megacycle, minorities in Global Majority countries have been exposed to similar AI threats. Unchecked and mostly unregulated, AI technologies have particularly affected people in Global Majority countries, whose rights and safety are an afterthought for Big Tech companies.

