Is generative artificial intelligence really revolutionising humanitarian communication? At a symposium recounted by the author, researchers and practitioners agreed that AI reproduces and amplifies age-old practices of visual manipulation. From nineteenth-century missionary photographs to today's campaigns, the stereotypes of the "white saviour" and the "passive victim" persist. Algorithms freeze these biased representations into an automated bank of stereotypes. The challenge is not technical but ethical: to rethink the governance of humanitarian images.
Many societal changes foreshadowed the upheavals artificial intelligence (AI) has brought to the visual economy, and they have played their part in distancing us from the claim to truth conveyed by images. This was one of the key points made by the organisers of the symposium held at the Brocher Foundation, in Geneva, from 4 to 6 June 2025.[1] Symposium "Artificial images in global health: Fakery before and in the era of AI", organised by Arsenii Alenichev (Institute of Tropical Medicine, Antwerp, Belgium), Sonya de Laat (McMaster…). Bringing together historians, ethicists, anthropologists, image professionals, health practitioners and communication experts, the discussions examined visual manipulation in, and before, the age of AI. In doing so, they highlighted the issues that have shaken humanitarian communication since the advent of generative AI and the possibility of creating relatively realistic images, as discussed elsewhere in this issue.
The emergence of social media and the return of amateur photography through widespread use of mobile phones have helped popularise the circulation of images in the public domain since the 2000s, as witnessed by images of the 11 September attacks and the 2004 tsunami.[2] André Gunthert, « L'image parasite. Après le journalisme citoyen », Études photographiques, no. 20, 2007, pp. 174–187. The advent of automated security cameras and other facial-recognition systems has also shifted the boundaries between human beings and machines, in what is now known as post-human vision.[3] Bolette Blaagaard, "Post-human viewing: a discussion of the ethics of mobile phone imagery", Visual Communication, vol. 12, no. 3, 2013, pp. 359–374.
Does this mean that humanitarian organisations, from local non-governmental organisations (NGOs) to international institutions, are increasingly using AI-generated images to illustrate situations of distress? If so, to what extent are these artificially created images transforming the way these humanitarian representations are produced and disseminated? This article looks at some of the tensions and paradoxes explored at the conference, which raised many questions about the different types of manipulation, the biases of generative AI, the risks of stereotyping and the ethical responsibility of image creators.
New “fake images”?
Questioning the use of generative AI means first placing it in the context of the communication practices already established in the sector. It therefore means examining the very notions of manipulation, distortion, falsehood and artificiality in images. It is the supposed fidelity to reality that is being renegotiated by the advent of generative AI and the automation of the creative process; yet this renegotiation should not make us forget that every image is still a social and political construct.[4] Susan Sontag, On Photography, New York: Farrar, Straus and Giroux, 1977.
“It is the supposed fidelity to reality that is being renegotiated by the advent of generative AI and the automation of the creative process.”
While the history of visual manipulation has mostly been written around war and political propaganda,[5] Fabrice d'Almeida, Images et Propagande, Éditions Casterman, 1995. analysis of humanitarian imagery remains patchy. Since the communication practices of NGOs have largely been fuelled by photographic genres such as missionary photography and photojournalism, there is no doubt that the same methods of deception can be found in representations of suffering.
The first part of the discussions therefore traced problematic practices, such as staging and photographic retouching, across the history of the sector. Humanitarian iconography is full of images whose authenticity must be questioned. Among the examples given by the speakers, many featured posed scenes and the selective framing of episodes of colonial violence or famine dating back to the late nineteenth century.[6] Christina Twomey, "Framing atrocity: Photography and humanitarianism", History of Photography, vol. 36, no. 3, 2012, pp. 255–264. While the technical limitations of cameras may have justified posed images at the time, such visuals now make up a good proportion of those generated by AI.
We also find repeated, recycled uses of images far removed from the context in which they were taken. The most striking example is that of images of African famines, where the central figure – the starving, lone child – has featured prominently in fundraising campaigns since the famine in Biafra. Context removal, cropping and composition – some of the best-known manipulation techniques[7] Dino A. Brugioni, Photo Fakery: The History and Techniques of Photographic Deception and Manipulation, Brassey's, 1999. – have appeared on many posters, from Save the Children to national Red Cross and Red Crescent societies, since the 1960s. This type of totemic image feeds the misery-laden aesthetic created by AI, which produces a profusion of images of racialised people while erasing any spatio-temporal framework or historical reference.
Another well-known trick is the visual "before and after" effect, for which one particular campaign paid a heavy price: Action Against Hunger's "Leila" poster. In January 1994, the NGO published a poster featuring Leila, an emaciated woman ("before") and then a much healthier woman ("after"), with the caption "Leila 100F plus tard" [Leila 100 francs later].[8] Thomas Pontiroli, « Action contre la faim, le cœur sur l'humain », Stratégies, 14 March 2017… This technique works by contrasting an image of imminent death with life-saving action. Its incentive effect on the viewer was developed in the advertising sector (notably by agri-food multinationals) and then reproduced in humanitarian visuals. The technique raises questions, however, about the temporal exaggeration it seems to suggest, and especially about the authenticity of the images used (are the people shown really the same before and after?). As Bruno David asks, is this process, which borrows from misleading advertising, "applicable to NGOs without the risk of deception"?[9] Bruno David, « Action Contre La Faim : Leila, le retour ? », Grotius International, 2 November 2012, https://grotius.fr/action-contre-la-faim-leila-le-retour
The question of authenticity also arises in instances where images have had to be reproduced after the event, either because there were no live visuals, or because the situation was deemed too delicate to use real photos. This was the case with the poster series "It's a matter of life and death", produced for the launch of the "Health Care in Danger" campaign organised by the International Committee of the Red Cross (ICRC) in 2011. Designed to raise awareness among the parties to conflict of the need to respect medical staff and facilities, these posters were produced artificially in a studio, in partnership with Getty Images and photographer Tom Stoddart.[10] See the "behind the scenes" film of the campaign by Getty Images: "Behind the scenes of the Red Cross shoot", https://www.youtube.com/watch?v=Yu629TmPEXA. As all the posters showed people who had died (combatants and civilians), it was deemed impossible to use real images out of respect for the dignity of the dead. In line with its policy of neutrality, the ICRC also took great care not to reproduce any features that would identify specific uniforms or locations, although there are enough visual references to discern a refugee camp, a checkpoint or a bombed city. The aesthetic reproduced in the studio at the time is very similar to what AI achieves today.
A bank of stereotypes
The examples cited above highlight how generative AI perpetuates, rather than renews, the old practices of staging, stereotyping and erasing local subjectivity. The numerous studies carried out over the last twenty years on how humanitarian issues are represented have demonstrated how such representations have historically been produced by and for the Global North. They highlight profound inequalities between the communities portrayed and the audiences, as well as a tendency to present suffering as intrinsic to the places depicted. These representations rest on visual and narrative stereotypes that flatten the complexity of the context and reinforce neo-colonial dynamics.[11] Valérie Gorin, "Media representations of humanitarianism", in Silke Roth, Bandana Purkayastha and Tobias Denskus (eds.), Handbook on Humanitarianism and Inequality, Edward Elgar Publishing…
The second part of the conference questioned the structural biases of generative AI, particularly the persistence of the gender, class and race biases identified in several recent studies. This owes much to the inherent flaws of the databases used to train AI and to the algorithms themselves, but also to the lack of feedback from users. In this way, generative AI holds up a mirror to us: by drawing on databases that are not neutral, it reflects back the political, ideological and historical logic inherent in our view of the world, a view AI reproduces in the way it classifies and interprets information.[12] Kate Crawford and Trevor Paglen, "Excavating AI: the politics of images in machine learning training sets", AI & SOCIETY, vol. 36, no. 4, 2021, pp. 1105–1116.
“Automated reading using algorithms identifies and enhances these visual tropes.”
With regard to humanitarian imagery, automated reading using algorithms identifies and enhances these visual tropes. Among the most prevalent stereotypes are the figure of the child icon and the over-representation of white saviours. People from the Global South are often reduced to the status of passive, deprived victims, thereby preventing them from being seen as active agents of their own destiny. Minorities remain invisible or stereotyped, confined to subordinate roles. In a practical session experimenting with a range of image-generating AI tools, conference participants were able to reproduce the same representations as those discussed in the research of Arsenii Alenichev and his colleagues.[13] Arsenii Alenichev, Patricia Kingori and Koen Peeters Grietens, "Reflections before the storm: the AI reproduction of biased imagery in global health visuals", The Lancet Global Health, vol. 11… In 2023, this research used Midjourney (version 5.1) to generate over 300 images from prompts seeking to invert classic tropes, such as "Black African doctors providing care for White suffering children". Despite numerous attempts, the AI almost systematically failed to invert these tropes, perpetuating the problematic white saviour/Black suffering pairing, or caricaturing and exaggerating cultural markers of "Africanness".
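To make the experimental protocol concrete, here is a minimal sketch in Python. It is not the study's actual setup: Midjourney, used by Alenichev and his colleagues, has no public API, so the open-source Stable Diffusion v1.5 model, accessed through Hugging Face's diffusers library, stands in for it; the second prompt and the number of samples are illustrative choices, not taken from the study, and outputs will differ from the published results.

import torch
from diffusers import StableDiffusionPipeline

# The first prompt is quoted from the study; the second is an
# illustrative variant in the same spirit (an assumption, not a
# prompt reported by the authors).
INVERTED_PROMPTS = [
    "Black African doctors providing care for White suffering children",
    "African surgeons operating in a modern, well-equipped hospital",
]

# Stand-in model: the study used Midjourney 5.1, which cannot be
# scripted; Stable Diffusion v1.5 is an open-source substitute.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Generate several images per prompt with fixed seeds, so that a
# recurring failure to invert the trope shows up across samples
# rather than being judged from a single output.
for p_idx, prompt in enumerate(INVERTED_PROMPTS):
    for seed in range(5):
        generator = torch.Generator(device="cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"prompt{p_idx}_seed{seed}.png")

# The saved images are then inspected manually, as in the study, to
# check whether the requested inversion appears or whether the model
# reinstates the white saviour/Black suffering pairing.

The point of scripting the loop is methodological rather than practical: fixing the prompts and seeds makes the experiment repeatable, so that failures to invert the trope can be attributed to the model rather than to chance.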
On the one hand, these experiments highlight the extent to which "racism, sexism and coloniality are embedded social processes manifesting in everyday scenarios, including AI".[14] Ibid., p. 1497. On the other, they highlight the lack of transparency inherent in machine-learning systems: which databases are used, and what is being done to ensure more complex and fairer representations? How do humans encode images in order to train AI? How does AI recognise and label images? What prompts are used?
Which ethics apply?
In this context, the lack of transparency is just one of the ethical issues raised by the use of AI-generated images. The cases discussed above are just a few examples that highlight the urgent need to redefine a responsible and inclusive visual ethics against a backdrop of digital transformation. This was the focus of the third day of the conference, which addressed codes of conduct and other ethical charters in humanitarian communication. High-profile cases in recent years have shown that the rights of the people portrayed,[15] See the investigation conducted by journalist Arnaud Robert on Rosius Fleuranvil, a Haitian man who received aid from the Swiss Red Cross, and whose image was still circulating in 2019 despite the… including the very sensitive issue of consent to the use of an image,[16] Consider Médecins Sans Frontières (MSF), which was widely criticised on social media in 2022 for a case involving a young woman in the Democratic Republic of the Congo who was a survivor of sexual… are often problematic.
Here, the use of AI appears to offer a solution by sidestepping the problem of consent, but it fails to consider the image models used to feed the AI, and the rights of the people they show. What are we to make, for example, of some of the artificial images seen at the conference, showing refugee girls whose pose and green eyes are clearly reminiscent of Sharbat Gula, the iconic "Afghan Girl" photographed by Steve McCurry in 1984? Does she know that her image is being used deceptively to aestheticise an algorithmically generated representation of migration? The issue of consent is doubled here, because a controversy that emerged in 2020 criticised McCurry's coercive production methods and the fetishisation of the young girl.[17] André Gunthert, « La peur dans les yeux », L'image sociale, 14 March 2020, https://imagesociale.fr/8217
This case raises questions about the disproportionate power relations in humanitarian contexts, and about the difficulty of establishing what constitutes informed and iterative consent, especially when the use of an image spans several decades and moves from the analogue to the digital sphere. Should we demand greater transparency from companies that grant access to their image banks? What are we to think of humanitarian agencies that might use their visual archives to feed the algorithms: is this a guarantee of morality, of a responsibility to avert the biases of AI? These are some of the many unresolved questions that demonstrate the need to rethink visual pathways and constructions. Failing to state explicitly that the images used in campaigns are artificial is itself a form of deception.
Even widespread use of AI cannot replace humanitarian organisations' duty of accountability. In an ethical relationship, consent is more than just authorising an NGO to use an image. It is primarily a relationship established with a beneficiary to whom various rights are granted: the right to be informed, the right to have information deleted or corrected, and the right to be forgotten. These legal aspects, consistent with the General Data Protection Regulation (GDPR), are notably defended by Fairpicture, an organisation of photographers present at the conference.[18] Fairpicture, The Platform for Ethical and Compliant Content Production, https://fairpicture.org. Initiatives like these also highlight the need to rethink how we collaborate with the creators of authentic images. Just as we need to question AI prompts, we also need to examine the instructions given to photographers and the pressure they are under to meet marketing objectives while keeping costs down. Despite the "localisation" professed in NGOs' communication practices, feedback from the photographers invited to the conference showed that organisations want to retain editorial control while reinforcing existing hierarchies, as the conference organisers point out in another article.[19] Arsenii Alenichev, Koen Peeters Grietens, Jonathan Shaffer et al., "Global health photography behind the façade of empowerment and decolonisation", Global Public Health, vol. 19, no. 1, 2024.
“In the polysemic world of images, AI cannot be expected to comprehend extremely complex and, above all, very subjective issues, especially when it does not take context into account.”
AI-generated images are therefore caught in a paradox between those who want to use them to save time and money (techno-solutionism) and those who fear their mass use for disinformation (deepfakes). Neither of these determinist visions removes the need to think about the visual politics reproduced by algorithms and the way AI entrenches social hierarchies. In the era of analogue images, some researchers were already arguing that we should be "less concerned about the presence of [photographic] icons and more concerned about the absence of alternative, critical visualisations".[20] David Campbell, "The iconography of famine", in Geoffrey Batchen, Mick Gidley, Nancy Miller et al. (eds.), Picturing Atrocity: Photography in Crisis, Reaktion Books, 2012, p. 89. In the polysemic world of images, AI cannot be expected to comprehend extremely complex and, above all, very subjective issues, especially when it does not take context into account. Diversifying the databases used to feed AI could help enrich visuals, but this cannot make up for the ethical shortcomings of automated learning systems. What is also needed is a radical rethink of AI governance.
Translated from the French by Derek Scoins
Picture credit: Daria Glakteeva
