Generative artificial intelligence and humanitarian representations: mirror or solution?

Valérie Gorin
Valérie Gorin has been a lecturer and researcher at the Geneva Centre for Humanitarian Studies, a joint centre of the University of Geneva and the Graduate Institute of International and Development Studies, since 2011. Since 2020, she has been responsible for the Master’s programme in continuing education in humanitarian action. A historian by training, she obtained her doctorate in Communication Sciences from the University of Geneva in 2013. Her research interests focus on humanitarian history and communication, with an interest in the visual culture of international solidarity organisations and advocacy practices. Her recent publications focus on the links between emotions and images in communication (Making Humanitarian Crises: Emotions and Images in History, 2022) and inequalities in media representations of humanitarianism (Media Representations of Humanitarianism, in Handbook on Humanitarianism and Inequality, 2024). (biography updated in June 2024)

Is generative artificial intelligence really revolutionising humanitarian communication? At a conference recounted by the author, researchers and practitioners agreed that AI reproduces and amplifies age-old practices of visual manipulation. From the missionary photos of the nineteenth century to today’s campaigns, the stereotypes of the “white saviour” and the “passive victim” persist. Algorithms freeze these biased representations in an automated “stereotype bank”. The challenge is not technical but ethical: to rethink the governance of humanitarian images.


Many societal changes have foreshadowed the upheavals artificial intelligence (AI) has brought to the visual economy and have played their part in distancing us from the claim to truth conveyed by images. This was one of the key points made by the organisers of the symposium held at the Brocher Foundation, in Geneva, from 4 to 6 June 2025.[1] Bringing together historians, ethicists, anthropologists, image professionals, health practitioners and communication experts, the discussions aimed to examine visual manipulation in, and before, the age of AI. In doing so, they highlighted the issues that have shaken humanitarian communication since the advent of generative AI and the possibility of creating relatively realistic images, as discussed elsewhere in this issue.

The emergence of social media and the return of amateur photography through the widespread use of mobile phones have helped popularise the circulation of images in the public domain since the 2000s, as witnessed by images of the 11 September attacks and the 2004 tsunami.[2] The advent of automated security cameras and other facial-recognition systems has also shifted the boundaries between human beings and machines, in what is now known as post-human vision.[3]

Does this mean that humanitarian organisations, from local non-governmental organisations (NGOs) to international institutions, are increasingly using AI-generated images to illustrate situations of distress? If so, to what extent are these artificially created images transforming the way these humanitarian representations are produced and disseminated? This article looks at some of the tensions and paradoxes explored at the conference, which raised many questions about the different types of manipulation, the biases of generative AI, the risks of stereotyping and the ethical responsibility of image creators.

New “fake images”?

Questioning the use of generative AI primarily involves placing it in the context of the communication practices already established in the sector. It therefore involves examining the very notions of manipulation, distortion, falsehood and artificiality in images. It is the supposed fidelity to reality that is being renegotiated by the advent of generative AI and the automation of the creative process, a renegotiation that nevertheless risks making us forget that every image is still a social and political construct.[4]

“It is the supposed fidelity to reality that is being renegotiated by the advent of generative AI and the automation of the creative process.”

While the history of visual manipulation has focused on war and political propaganda,[5] this analysis remains patchy with regard to humanitarian imagery. Since the communication practices of NGOs have largely been fuelled by photographic genres such as missionary photography or photojournalism, there is no doubt that the same methods of deception can be found in representations of suffering.

The first part of the discussions therefore compared, historically, problematic practices over time, such as staging or photographic retouching. Humanitarian iconography is full of images whose authenticity must be questioned. Among the examples given by the speakers, many images featured posed scenes and the selective framing of episodes of colonial violence or famines dating back to the late nineteenth century.[6] While the limitations of cameras may have justified the posed images at the time, these visuals now make up a good proportion of images generated by AI.

We also find repeated, recycled uses of images far removed from the context in which they were taken. The most striking example of this is the images of African famines, where the central figure – that of the starving, lone child – has featured prominently in fundraising campaigns since the famine in Biafra. Context removal, cropping and composition – some of the best-known manipulation techniques[7] – have been seen on many posters, from Save the Children to national Red Cross and Red Crescent societies, since the 1960s. This type of totem image nurtures the misery-laden aesthetic created by AI, which provides a profusion of images of racialised people, ultimately allowing the spatio-temporal framework and historical reference to be erased.

Another well-known trick is the visual “before and after” effect, for which one particular campaign paid a heavy price: Action Against Hunger’s “Leila” poster. In January 1994, this NGO published a poster featuring Leila, an emaciated woman (“before”) and then a much healthier woman (“after”), with the caption “Leila 100F plus tard” [Leila 100 francs later].[8] This technique works by contrasting an image of imminent death with life-saving action. The incentive effect for the viewer was developed in the advertising sector (notably by agri-food multinationals) and then reproduced in humanitarian visuals. This technique raises questions, however, about the temporal exaggeration it seems to suggest, and especially about the authenticity of the images used (are the people shown really the same before and after?). As Bruno David suggests, is this process, which borrows from misleading advertising, “applicable to NGOs without the risk of deception”?[9]

The question of authenticity also arises in instances where images have had to be reproduced after the event, either because there were no live visuals, or because the situation was deemed too delicate to use real photos. This was the case with the poster series “It’s a matter of life and death”, produced for the launch of the “Health Care in Danger” campaign, organised by the International Committee of the Red Cross (ICRC), in 2011. Designed to raise awareness among the parties to the conflict of the need to respect medical staff and facilities, these posters were artificially produced in a studio in partnership with Getty Images and photographer Tom Stoddart.[10] As all the posters showed people who had died (combatants and civilians), it was deemed impossible to use real images out of respect for the dignity of the dead. In line with its policy of neutrality, the ICRC also took great care not to reproduce any features that would identify specific uniforms or locations – although there are enough visual references to discern a refugee camp, a checkpoint or a bombed city. The aesthetic reproduced in the studio at the time is very similar to what AI achieves today.

A bank of stereotypes

The examples cited above highlight how generative AI perpetuates, rather than renews, the old practices of staging, stereotyping and erasing local subjectivity. The numerous studies carried out over the last twenty years on how humanitarian issues are represented have demonstrated how such representations have historically been produced by and for the Global North. They highlight profound inequalities between the communities portrayed and the audiences, as well as a tendency to make suffering intrinsic to the wider picture. These representations are based on visual and narrative stereotypes that undermine the complexity of the context and reinforce neo-colonial dynamics.[11]

The second part of the conference questioned the structural biases of generative AI, particularly the persistence of the gender, class and race biases raised in several recent studies. This is largely due to the inherent flaws of the databases used to train AI, and to the algorithms, but also to the lack of feedback from users. In this way, generative AI holds up a mirror to us: by tapping into databases that are not neutral, it reflects back the political, ideological and historical logic inherent in our view of the world – a view AI internalises in the way it classifies and labels information in order to reproduce it.[12]

“Automated reading using algorithms identifies and enhances these visual tropes.”

With regard to humanitarian imagery, automated reading using algorithms identifies and enhances these visual tropes. Among the most prevalent stereotypes are the figure of the child icon and the over-representation of white saviours. The people of the Global South are often reduced to their status of victim – passive and deprived – thereby preventing them from being seen as active agents of their own destiny. Minorities remain invisible or stereotyped, reduced to subordinate roles. In a practical session involving experimentation with a range of image-generating AI, conference participants were able to reproduce the same representations as those discussed in the research of Arsenii Alenichev and his colleagues.[13] In 2023, this research used Midjourney (version 5.1) to generate over 300 images based on prompts seeking to invert classic tropes, such as “Black African doctors providing care for White suffering children”. Despite numerous attempts, the AI almost systematically failed to invert these tropes, perpetuating the problematic pairing of white saviour/black suffering, or caricaturing and exaggerating cultural markers of “Africanness”.

On the one hand, these experiments highlight the extent to which “racism, sexism and coloniality are embedded social processes manifesting in everyday scenarios, including AI”.[14] On the other, they highlight the lack of transparency inherent in the procedures of machine-learning systems: which databases are being used, and what is being done to ensure more complex and fairer representations? How do humans encode images in order to train AI? How does AI recognise and label images? What prompts are used?

Which ethics apply?

In this context, a lack of transparency is just one of the ethical issues raised by the use of AI-generated images. The cases discussed above are just a few examples that highlight the urgent need to redefine a responsible and inclusive set of visual ethics against a backdrop of digital transformation. This was the focus of the third day of the conference, when codes of conduct and other ethical charters in humanitarian communication were addressed. High-profile cases in recent years have shown that the rights of the people portrayed,[15] including the very sensitive issue of consent to use an image,[16] are often problematic.

In this case, the use of AI offers a solution by allowing the problem of consent to be overcome, but it fails to consider the image models used to feed AI – and the rights of the people shown. What are we to think, for example, about some of the artificial images seen at the conference, which showed refugee girls whose pose and green eyes are clearly reminiscent of Sharbat Gula, the iconic “Afghan Girl” photographed by Steve McCurry in 1984? Does she know that her image is being used deceptively to aestheticise a representation of migration generated by algorithms? Here we have a double issue of consent, because a controversy emerged in 2020 criticising McCurry’s forced production method and the fetishisation of the little girl.[17]

This case questions the disproportionate power relations in humanitarian contexts, and the difficulty of establishing what constitutes informed and iterative consent, especially when the use of the image spans several decades and when it moves from the analogue to the digital sphere. Should we demand greater transparency from companies that agree to allow access to their image banks? What are we to think about humanitarian aid agencies that could use their visual archives to provide input for the algorithms: is this a pledge of morality, of a responsibility to avert the biases of AI? These are some of the many unresolved questions that demonstrate the need to rethink visual pathways and constructions. Failure to make explicit reference to the artificial nature of the images used in campaigns is a form of deception.

Relatively widespread use of AI cannot replace humanitarian organisations’ duty of accountability. In an ethical relationship, consent is more than just authorising an NGO to use an image. It is primarily a relationship established with a beneficiary to whom various rights are granted: the right to be informed, the right to delete and correct information, and the right to be forgotten. These legal aspects, which are consistent with the General Data Protection Regulation (GDPR), are notably defended by Fairpicture, an organisation of photographers present at the conference.[18] Initiatives like these also highlight the need to rethink how we collaborate with the creators of authentic images. In the same way that we need to question AI prompts, we also need to think about the instructions given to photographers and the pressure they are under to meet marketing objectives whilst keeping costs down. Despite the “localisation” expressed in NGOs’ communication practices, feedback from the photographers invited to the conference showed that organisations want to retain editorial control while strengthening existing hierarchies, as the conference’s organisers point out in another article.[19]

“In the polysemic world of images, AI cannot be expected to comprehend extremely complex and, above all, very subjective issues, especially when it does not take context into account.”

AI-generated images are therefore caught in a paradox between those who want to use them as a way of saving time and money (techno-solutionism) and those who fear their mass use for disinformation (deepfakes). These determinist visions cannot do away with the need to think about the visual policies reproduced by algorithms and how AI entrenches social hierarchies. In the era of analogue images, some researchers were already saying we need to be “less concerned about the presence of [photographic] icons and more concerned about the absence of alternative, critical visualisations”.[20] In the polysemic world of images, AI cannot be expected to comprehend extremely complex and, above all, very subjective issues, especially when it does not take context into account. Diversifying the databases used to feed AI could help enrich visuals, but this approach cannot make up for the ethical shortcomings of automated learning systems. What is also needed is a radical rethink of AI governance.

Translated from the French by Derek Scoins

Picture credit: Daria Glakteeva


References

1 Symposium “Artificial images in global health: Fakery before and in the era of AI”, organised by Arsenii Alenichev (Institute of Tropical Medicine, Antwerp, Belgium), Sonya de Laat (McMaster University, Canada), Lisa Schwartz (McMaster University), Nassisse Solomon (Western University, Canada), Lulwama Mulalu (McMaster University), Carlos Rosas (McMaster University).
2 André Gunthert, « L’image parasite. Après le journalisme citoyen », Études photographiques, n° 20, 2007, p. 174-187.
3 Bolette Blaagaard, “Post-human viewing: a discussion of the ethics of mobile phone imagery”, Visual Communication, vol. 12, no. 3, 2013, pp. 359–374.
4 Susan Sontag, On Photography, New York: Farrar, Straus and Giroux, 1977.
5 Fabrice d’Almeida, Images et Propagande, Éditions Casterman, 1995.
6 Christina Twomey, “Framing atrocity: Photography and humanitarianism”, History of photography, vol. 36, no. 3, 2012, pp. 255–264.
7 Dino A. Brugioni, Photo Fakery: The History and Techniques of Photographic Deception and Manipulation, Brassey’s, 1999.
8 Thomas Pontiroli, « Action contre la faim, le coeur sur l’humain », Stratégies, 14 mars 2017, https://www.strategies.fr/actualites/marques/1059051W/action-contre-la-faim-le-c-ur-sur-l-humain.html
9 Bruno David, « Action Contre La Faim : Leila, le retour ? », Grotius International, 2 novembre 2012, https://grotius.fr/action-contre-la-faim-leila-le-retour
10 See the “behind the scenes” film of the campaign by Getty Images: “Behind the scenes of the Red Cross shoot”, https://www.youtube.com/watch?v=Yu629TmPEXA
11 Valérie Gorin, « Media representations of humanitarianism », in Silke Roth, Bandana Purkayastha and Tobias Denskus (eds.), Handbook on Humanitarianism and Inequality, Edward Elgar Publishing, 2024, pp. 281–294.
12 Kate Crawford and Trevor Paglen, “Excavating AI: the politics of images in machine learning training sets”, AI & SOCIETY, vol. 36, no. 4, 2021, pp. 1105–1116.
13 Arsenii Alenichev, Patricia Kingori and Koen Peeters Grietens, “Reflections before the storm: the AI reproduction of biased imagery in global health visuals”, The Lancet Global Health, vol. 11, no. 10, 2023, pp. 1496–1498. This work is part of the post-doc research he is conducting on global health.
14 Ibid., p. 1497.
15 See the investigation conducted by journalist Arnaud Robert on Rosius Fleuranvil, a Haitian man who received aid from the Swiss Red Cross, and whose image was still circulating in 2019 despite the fact that he had died in 2016. https://labs.letemps.ch/interactive/2020/longread-haiti-aide-humanitaire-rosius
16 Consider Médecins Sans Frontières (MSF), which was widely criticised on social media in 2022 for a case involving a young woman in the Democratic Republic of the Congo who was a survivor of sexual violence and who had given her consent for her photo to be used by MSF. The controversy was analysed in this review: “Ethical considerations around the use of humanitarian imagery”, Humanitarian Alternatives, issue no. 21, November 2022, pp. 92–105, https://www.alternatives-humanitaires.org/en/2022/11/29/ethical-considerations-around-the-use-of-humanitarian-imagery/
17 André Gunthert, “La peur dans les yeux”, L’image Sociale, 14 mars 2020, https://imagesociale.fr/8217
18 FairPicture, The Platform for Ethical and Compliant Content Production, https://fairpicture.org
19 Arsenii Alenichev, Koen Peeters Grietens, Jonathan Shaffer et al., “Global health photography behind the façade of empowerment and decolonisation”, Global Public Health, vol. 19, no. 1, 2024.
20 David Campbell, “The iconography of famine”, in Geoffrey Batchen, Mick Gidley, Nancy Miller et al. (eds.), Picturing Atrocity: Photography in Crisis, Reaktion Books, 2012, p. 89.
