Trust, authenticity and the representation of suffering in the age of images produced by artificial intelligence: how humanitarian actors are adapting

Maria Gabrielsen Jumbert
Maria Gabrielsen Jumbert is a Senior Researcher at the Peace Research Institute Oslo (PRIO) and a Senior Research Associate at the Centre for International Studies (CERI) at Sciences Po, Paris. She received her PhD in 2010 from the Institut d'Études Politiques de Paris (Sciences Po), and served as Director and Co-Director of the Norwegian Centre for Humanitarian Studies (NCHS) from 2016 to 2023. She is specialised in International Relations, drawing on international political sociology and critical security studies. Her research has focused on Europe's humanitarian borders. Maria has a long-standing interest in media narratives of humanitarian crises, which has more recently led her to explore humanitarian communication as an issue in itself, reshaped by new technologies and evolving expectations. (Updated in July 2025)

How can we maintain trust in humanitarian images in the age of AI? Between new ethical standards and image generation tools, organisations are navigating a transformed visual landscape. Faced with growing mistrust and the risk of disinformation, they need to rethink their strategies while preserving the authenticity of their testimony.


Images are central to humanitarian communication, serving to inform, move and raise awareness and, through that, also to prompt engagement and funding. Their visual imprint on our collective consciousness not only tells us which crises to care about – as the absence of images also leads to virtual ignorance of crises we don't see – but also shapes our understanding of them and of what is at stake. Images are also key to reaching through in ever more competitive and oversaturated digital landscapes. Images have always been central to, and even constitutive of, the humanitarian field as we know it today: from some of the first photographs used for fundraising, emerging in the aftermath of the Armenian genocide and World War I,[1] to the founding myth of Médecins Sans Frontières (MSF) speaking out on camera about the war in Biafra in the early 1970s,[2] to US and European anti-war protests fuelled by images coming out of the war in Vietnam. Yet the conditions for retaining attention and establishing trust in the information conveyed have fundamentally changed in today's communication landscape.[3] As I will outline in this article, three separate but inter-related shifts have together contributed to this: first, a shift in the norms and ethics around the choice of image frames and content; second, a shift in the technologies and tools available to produce images; and, third, a shift in the communication landscape, with increased polarisation and a fear of disinformation and misinformation.

One illustration of how these shifts are intertwined: humanitarian organisations have become increasingly aware that the way in which they portray crisis-affected people matters, from respect for their privacy in their most vulnerable moments to the broader constructs that images may contribute to, whether such images are used for documenting and awareness-raising or for fundraising campaigns.

“Most humanitarian actors are, so far, reluctant to use images generated by artificial intelligence (AI) in their public communications.”

Most humanitarian actors are, so far, reluctant to use images generated by artificial intelligence (AI) in their public communications, invoking their need to be authentic in order to be credible. The case of Amnesty International illustrating its 2023 report on police violence in Colombia with AI-generated images, which was met with a massive backlash of criticism, seems to have set a precedent for the time being.[4] Yet adjacent uses are emerging, such as opinion leaders using AI images to accompany their social media outcries,[5] a grassroots non-governmental organisation (NGO) experimenting with AI images for fundraising, or the first AI photo exhibition depicting detainees in the Australian detention camps on Manus and Nauru islands.[6] When such images are used, they appear as solutions to a normative dilemma: they depict situations that reflect reality, but without exposing real individuals. Alongside this, “fake”, artificial and real images circulate side by side in the visual landscape that surrounds us, some clearly marked as “AI-generated”, others pretending to be real, and others still that are very real but suspected of being fabricated. Herein lies another challenge for organisations that rely on being credible when communicating about ongoing crises: trust in very real photographs is rapidly deteriorating. This article will outline and discuss these current challenges, addressing the following key question: how can humanitarian actors navigate this changing landscape?

A changing communication landscape

That information and images coming out of crisis contexts are contested, and often also should be scrutinised, is nothing new – nor is the awareness that images circulate faster and that multiple sources of information compete for our attention. Along with the “digital transformation” of aid,[7] humanitarian actors have adapted to a digital communication landscape, adopting social media communication tools and techniques. What creates new conditions for this communication, and for the role of images in it, can be seen through three intersecting shifts.

The first shift is a change in the norms and ethics of what can and should be depicted to best represent humanitarian subjects. While traditional forms of witnessing through photographic images have been problematised for the spectatorship they create of others’ suffering,[8] norms about what should not be depicted have grown stronger in recent decades. Humanitarian organisations are today much more concerned with not showing bodily harm, and much more careful about depicting children or people in vulnerable situations, than was the case a few decades ago.[9] This shift has emerged gradually since the 1990s, but has come more strongly to the fore in the past decade, in line with the sector’s wish to free itself from the colonial heritage and thinking of the past and its efforts to localise aid. To reach audiences with credibility, it is no longer acceptable to reproduce worn-out clichés or to overly expose individuals in vulnerable situations. Concerns not to reproduce images reminiscent of the “white saviour” visuals of earlier times are also prevalent. For instance, MSF has, in recent years, been both accused of misusing photos to portray victims and patients, and praised for leading the way in raising awareness and encouraging self-reflection about the use of images from humanitarian contexts, as part of its broader efforts to tackle institutional racism and discrimination.[10]

The second shift concerns how images are produced and disseminated, and by whom: from handheld smartphones allowing images to be shared widely and instantly, to computer-based programmes allowing them to be rapidly edited and tweaked. Social media platforms offer ever newer tools to wrap messages in the most enticing ways to draw attention in a competitive landscape, and images and visuals play a key role in creating engaging posts.[11] As AI-powered tools for image generation have become available to virtually everyone, sophisticated photography-like images are already circulating widely, for serious and less serious purposes. While NGOs remain fiercely reluctant to use such images in their public communication, exceptions are slowly emerging: for example, where actual photographs cannot be taken due to security restrictions, so that AI images at least allow the reality to be depicted; or where real photos would be problematic to use, so that AI images are used with a reference to this protecting “the real refugees”.[12] At the same time, AI is used to generate illustrations: these leave little doubt that they are artificial and are mainly used for advocacy purposes, but they are widely shared when they hit the right message at the right time. A case in point is the “All Eyes on Rafah” image, which reportedly became the most-shared image on the internet.[13]

The third shift is in the communicative landscape itself, where the idea that we are surrounded by disinformation and misinformation generates general mistrust of any source of information, but where there are also risks of actual disinformation targeting humanitarian operations. The visual communication landscape we are surrounded by is no longer made up of journalistic visual communication on the one hand and humanitarian organisations’ images on the other, but also includes the production of “news” by others about humanitarian responses. Advanced by opinion leaders on social media, these communications promote, to varying degrees, personal opinions, sometimes relaying reliable information and sometimes repackaging pieces of information that may be misleading at best. Whether fully reliable, partially true or wholly fabricated, they are part of the humanitarian information ecosystem, and thereby also shape the space in which humanitarian actors communicate and operate. Furthermore, in a context of rife polarisation, humanitarian actors’ life-saving missions are questioned more often, with images used in support. For instance, in a contested political climate on migration, activists can create video montages making it look like search-and-rescue operations are effectively cooperating with migrant smugglers, some of which have then spread rapidly.[14] The other side of that coin is when images of children in Gaza are accused of being “fake”, where a mix of very real images, but also some artificially generated ones, circulate.[15] While suspicion of “staging” suffering is not a new phenomenon,[16] the way new software and AI allow real-looking images to be fabricated puts trust in images actually shot by someone “on the ground” at stake.

Key challenges for humanitarian actors now in terms of visual communication

So, how can humanitarian actors navigate such a landscape, between new ways of representing ongoing realities and a possibly rapid decline in trust in images? From my initial exchanges with humanitarian actors over the past eighteen months, most declare that they intend never to use artificial images, as the authenticity of the images they use is key to their credibility and to the trust placed in them.

“Each image used by humanitarian organisations is part of a larger ensemble of what constitutes our collective consciousness.”

Although their own image use is what they can control, I will argue, in the following points, that images of ongoing crises cannot be considered simply as single and separate instances of visual representation. Each image used by humanitarian organisations is part of a larger ensemble of what constitutes our collective consciousness, and the misuse of some fake images may affect trust in images overall.

Maintaining trust in an increasingly mistrustful environment

A first question humanitarian organisations need to grapple with now is how to protect the trust that is left in images meant to tell us about ongoing atrocities, violations and suffering. While many large humanitarian organisations seem to lean on the trust they already enjoy as large, well-known organisations delivering aid, it would only take their brand being misused, or an image being republished that should not be, for overall trust in the images they use to be affected. This trust is crucial at a time when, for instance, proving severe violations of international humanitarian law relies heavily on visual proof – one only has to look at the discussions on hospital bombings or attacks on Red Cross ambulances in Gaza to see how important photos and footage are in establishing one account or another of what has happened. Another question is how humanitarian organisations should respond when they are themselves targeted by misinformation or disinformation campaigns: is sitting still while trusting their own reputation sometimes an option, or is it worth reacting and pushing back, with the other risks that involves?

Seeing images as a collective of visuals

Thinking about how images are used by humanitarian actors warrants a collective approach rather than an image-by-image approach in which each organisation decides on its own take. Public trust in images, especially images of ongoing crises, has the potential to affect all humanitarian actors, even though they each rely on their respective brand and reputation. While images are stand-alone artefacts that can tell their own story, they also interact with other visual representations and with our contextual knowledge. For example, the AI image of a refugee girl in Greece may tell us something because we have seen many other real images of refugee camps and their conditions. Similarly, a stereotyped image of a victim of famine may, in itself, convey an important story, but it may become problematic in how it is shared if it only reproduces long-held views and stereotypes. Seeing images as part of a greater collective of visuals seems important at a time when humanitarian organisations are discussing the ethical use of images, and notably their positions on using AI-generated images.

A blurred boundary between the real and the artificial

While the difference between a photographer on the ground and an image made on a computer will always be clear – in the sense that one photographs something actually happening while the other merely represents it artificially – some creative uses of AI images raise the question of where to draw the line between the real and the artificial, between misleading uses and acceptable ones. Examples range from AI images allowing the depiction of a situation of which there are no other photographs, where those affected may provide feedback on how the image should be composed, to AI allowing the creation of aesthetically beautiful images of a paediatrician surrounded by her nine children, easier to share on social media than any image of the aftermath of a fatal airstrike. Such uses are perhaps closer to artistic representations of war – from drawings and paintings to cartoons – but their photo-realism plays on our confusion between real and fake.

“NGOs have also re-created scenes to depict situations that were otherwise not documented.”

NGOs have also re-created scenes to depict situations that were otherwise not documented, or experimented with virtual reality (VR) films,[17] always clearly communicating their staged and virtual character. The availability of AI applications, however, may make such re-creations and representations much easier to deploy, labelled to varying degrees, playing on their photo-realism, and recognisable only by their well-composed character. While this underscores the importance of communicating clearly about what is not real, it also raises the question of how to preserve trust in what is, de facto, very real. As strategies are now being adopted, the guideline advanced by NetHope in 2024[18] is worth taking note of: it states the signatories’ agreement not to use photo-realistic AI images of vulnerable groups for publication. Trust in real images is already being put into question, and for all the discursive and narrative tools that AI computer programs allow, it is also the very trust in humanitarian testimony that is at stake.

Picture Credit: © K. Nateel / HI

References

1 Keith David Watenpaugh, Bread from Stones. The Middle East and the Making of Modern Humanitarianism, University of California Press, 2015.
2 Marie-Luce Desgrandchamps, « Revenir sur le mythe fondateur de Médecins sans frontières : les relations entre les médecins français et le CICR pendant la guerre du Biafra (1967-1970) », Relations internationales, vol. 2, n° 146, 2011, p. 95-108.
3 Mette Mortensen, Stuart Allan and Chris Peters, “The iconic image in a digital age”, Nordicom Review, vol. 38, no. 2, November 2017, pp. 71–86.
4 Felix Marlowe, “‘It was a mistake’: Amnesty International apologizes after releasing AI-generated images”, Belles & Gals, 17 December 2024.
5 An AI image circulated on Facebook and elsewhere of the Gaza surgeon who lost nine of her children in an Israeli airstrike in May 2025: the image shows her surrounded by all her children, in an aesthetically good-looking family photo set-up, with her clinic in the background.
6 Maria Gabrielsen Jumbert, “The AI dilemma: Can artificial images of war and suffering stir empathy?”, Global Policy, 15 March 2024, https://www.globalpolicyjournal.com/blog/15/03/2024/ai-dilemma-can-artificial-images-war-and-suffering-stir-empathy
7 Kristin Bergtora Sandvik, Humanitarian Extractivism: The Digital Transformation of Aid, Manchester University Press, 2023 and Framing Humanitarian AI Conversations: What Do We Talk About When We Talk About Ethics?, PRIO, 2024.
8 Lilie Chouliaraki, The Ironic Spectator: Solidarity in the Age of Post-Humanitarianism, John Wiley & Sons, 2013.
9 Valérie Gorin, “The place of the child in humanitarian action and communication: moving away from the cliché of childhood as icon”, Humanitarian Alternatives, Issue 19, March 2022, pp. 1–5, https://www.alternatives-humanitaires.org/en/2022/03/22/the-place-of-the-child-in-humanitarian-action-and-communication-moving-away-from-the-cliche-of-childhood-as-icon
10 MSF, Énoncés au sujet du racisme, 5 juin 2020, https://www.medecinssansfrontieres.ca/enonces-au-sujet-du-racisme
11 Moran Yarchi and Lillian Boxman-Shabtai, “The image war moves to TikTok: evidence from the May 2021 round of the Israeli-Palestinian conflict”, Digital Journalism, vol. 13, no. 1, 13 December 2023, pp. 115–135.
12 Maria Gabrielsen Jumbert, “The AI dilemma: Can Artificial Images …”, art. cit.
13 Bobby Allyn, “‘All eyes on Rafah’ is the Internet’s most viral AI image. Two artists are claiming credit”, NPR, 3 June 2024.
14 Sean Healy and Victoria Russel, “The critical risk of disinformation for humanitarians – The case of the MV Aquarius”, Journal of Humanitarian Affairs, vol. 3, no. 1, 2021, pp. 28–39.
15 Vedika Bahl, “What we know about the viral video of a Palestinian girl carrying her wounded sister”, France 24, Truth or Fake, 24 October 2024, https://www.france24.com/en/tv-shows/truth-or-fake/20241024-what-we-know-about-the-viral-video-of-a-displaced-palestinian-girl-carrying-wounded-sister
16 Susan Sontag, Regarding the Pain of Others, Picador, 2004.
17 Valérie Gorin, “From empathy to shame: The use of virtual reality by humanitarian organisations”, in Brenda Lynn Edgar, Valérie Gorin and Dolores Martín-Moruno (eds.), Making Humanitarian Crises. Emotions and Images in History, Palgrave Macmillan, Cham, 2022, pp. 147–170.
18 NetHope Centre for the Digital Nonprofit, Humanitarian AI Code of Conduct, Elizabeth Shaughnessy, 2024. The full formulation is: “Where there are sector-specific high-risk concerns, we agree: 4. Not to use AI to generate photo-realistic images or videos of vulnerable groups, including children and program participants, for the purposes of publication, including campaigning and fundraising.”
