
With a view to publishing its 32nd issue in July 2026, the review Humanitarian Alternatives is issuing a call for papers for the focus of this edition, whose provisional title is “Artificial intelligence: uses, tensions and issues”. If you are a researcher, a player or an observer in the field of humanitarian action and international solidarity and would like to submit a draft article on this theme, please send us a summary of the issue you propose to address, a provisional outline and a short biography of the author or authors (two pages maximum) by 30 January 2026 to the following e-mail address: contact@alternatives-humanitaires.org. You will receive a reply within 15 days of sending your e-mail.

The final article, written in French or English, should be delivered by 26 May 2026 at the latest. The average volume for an article is 15,000 characters, including spaces (which is around 2,400 words in French and 2,200 words in English). Seven to nine articles will be selected for the focus of this issue.

For each edition, we also consider draft articles on themes in humanitarian action and solidarity beyond the edition’s main focus. Such articles are published in our Perspectives, Transitions, Innovations, Ethics, Reportage and Opinion sections. We therefore invite you to put forward draft articles for these sections too.


Artificial intelligence: uses, tensions and issues

The focus of this edition is coordinated by Pierre Gallien, Director of Innovation, Impact and Information at Humanity & Inclusion and member of the editorial board of Humanitarian Alternatives, and by Boris Martin, the review’s Editor-in-Chief.


Artificial intelligence (AI) is no longer knocking at the door of the humanitarian sector. It has already invited itself into its offices, out into the field and thus into the daily practices of aid organisations. Not so long ago, such uses were still discreet. Yet recently they have become widespread at a speed that leaves us with little perspective on the matter today. That is precisely why we need to tackle the issue.

As in most sectors of society, whether profit-driven or not-for-profit, this acceleration raises a question as simple as it is uncomfortable: are we letting ourselves be carried away by a technological wave whose direction and effects we do not control, or can we together develop a responsible use of artificial intelligence? AI promises undeniable gains in efficiency. But it is also transforming the contexts in which players in solidarity work, the way in which they operate and the very foundations of their relationship with reality, decision-making and the people they serve.

This edition of Humanitarian Alternatives focuses on the pivotal period that the emergence of this technology has brought about. The point is neither to embrace technophilic enthusiasm nor to encourage a futile resistance to change. Rather, it is to size up, through critical reflection, a transformation that is already affecting the organisations, jobs, principles and responsibilities specific to the aid sector. We are therefore issuing a call for papers on three underlying topics that together provide a lens for properly understanding this (r)evolution now underway.

Operational contexts transformed: when AI rewrites the rules

AI is not only changing the work tools used by non-governmental organisations (NGOs). It is also transforming the environments in which aid workers operate and, doubtless to a lesser extent, the environments in which players in development, ecology and human rights work. Indeed, AI is already embedded in the weaponry of modern conflicts: it is used to identify targets, to coordinate drone attacks and to flood the internet with falsified content. With their Lavender software, Israeli forces made AI a central tool of their targeting policy in Gaza, claiming a surgical level of precision that reduces civilian casualties.[1] In Ukraine, drones are now fitted with AI, both to strengthen defensive capabilities and to make attacks autonomous, enabling them to reach their ground targets despite electronic jamming that cuts the link with the drone’s pilot.[2]

This militarisation of AI raises crucial questions for humanitarian action. If warfare becomes algorithmic, how can the principles of international humanitarian law, including distinction and proportionality, still apply? How can access for aid be negotiated in environments where decisions to strike are effectively delegated to opaque systems? How can civilians be protected when the enemy becomes a category generated by a predictive model, with its built-in margin of error?

Moreover, a major political issue must be raised here: the imbalance between military innovation and humanitarian innovation. Recent conflicts show a blatant lack of proportion: on the military side, considerable means are being deployed for forecasting, monitoring and designating targets; on the aid side, the means for access, protection and humanitarian response remain limited. The notion of warfare made supposedly cleaner by algorithms is a powerful narrative. Yet aid workers know that technological cleanliness eradicates neither violence, nor ambiguity, nor human damage. What dynamic is driving military innovation, and what model is sustaining humanitarian innovation? Can we hope to see the same trend serving opposing goals? Or must we demand common technological ground: open standards, safeguards and public investment in protection rather than domination?

AI is not only transforming armed conflict. In other contexts, such as earthquakes, floods and landslides, it promises unprecedented forecasting capabilities, including predictive climate models, real-time analysis of satellite images and instant mapping of damage. But this promise of acceleration carries risks too: the risk of hasty decisions based on incomplete data, the risk of excessive trust being placed in models that sound convincing even when wrong, and the risk of populations being marginalised. Indeed, AI amplifies structural biases. What data feeds these systems, and for which organisations, civilian or military? In which languages? And with what representations of the world? There is a real risk of standardised technical responses disconnected from local knowledge, community priorities and unquantifiable social dynamics.

For this first topic, we are asking for papers that document, analyse and question these contextual transformations. In concrete terms, how is AI rewriting the rules of engagement in conflicts and the rules of intervention after disasters? What are the impacts on the protection of civilians, access for aid and the responsibility of the different parties to a conflict? How can aid organisations navigate environments where information itself becomes a weapon? What lessons can be learned from recent experiences in Ukraine and Gaza, but also from other contexts? How can we anticipate excesses, preserve principles and reaffirm the primacy of international humanitarian law in the face of an indecently upheld illusion of supposedly cleaner warfare?

Humanitarian jobs shaken up: skills, hierarchies and new balances

AI is not only transforming contexts. It is also deeply reconfiguring humanitarian jobs and aid work in general. The initial promise is simple: increased efficiency, automated repetitive tasks and time freed up for activities with greater added value. But the reality is more complex. AI is not just speeding things up; it is redistributing skills, reorganising professional hierarchies and calling into question the very value of the humanitarian worker.

Let us take the example of assessment. Traditionally, an assessor has needed solid methodological expertise, good listening skills, a capacity for qualitative analysis and detailed knowledge of local dynamics. Today, however, algorithms can extract trends from thousands of verbatim accounts, spot weak signals and produce elegant summaries. Is a human assessor still indispensable, then? Should the human interpret, contextualise and compare results from the field, providing an ethical judgement? Or is the human now merely a validator of algorithmic output?

In fundraising, AI can write concept notes, align a project with a funder’s strategy and optimise storytelling to maximise the chances of success. Is this a great tool, now available to small organisations that have lacked resources? Or a new competitive factor favouring those who already master the right tools, the right models and the right prompts? In short, we face the following question: does editorial innovation risk replacing social innovation?

In logistics, AI can optimise supply chains, anticipate shortages and simulate alternative scenarios, for example generating several transport options when a route becomes impassable. Yet it can also create an illusion of control: a model optimises what it is fed, not what is true, still less what is true at a given moment. And humanitarian work happens in real time, with routes that disappear, authorisations that are renegotiated, markets that collapse and risks that move. Tools should serve work in the field, not the other way round.

For this second topic, we are asking for papers that explore these professional reconfigurations. What humanitarian jobs are most affected by AI? What skills are becoming obsolete? What skills are becoming stronger? And what skills are emerging? How are organisations managing these transitions? What training courses, what human resources policies and what investments can support these changes?

Interdisciplinary issues: reality, decision-making, inequalities and resilience

Beyond contexts and jobs, AI raises fundamental questions that cut across the whole aid sector. These interdisciplinary issues form the backdrop for our collective reflections.

The first issue concerns our relationship with reality itself. AI is able to produce appearances of reality on a large scale, from texts and voices to images and videos. Yet not only does it create imitations; it sows doubt too. And in humanitarian action, doubt comes at a cost. Doubt delays interventions, weakens negotiations, exposes teams and erodes trust among communities. In a world filled with artificial content, how can we uphold the credibility of real-life witness accounts and analyses?

The second issue concerns decision-making. AI can assist, but it should not arbitrate. The temptation is well known: faced with complexity, delegate the decision to a model. Yet in humanitarian action, decisions are not based purely on technical optimisation, but also on values, dilemmas, rights and priorities that are sometimes debatable. An AI system cannot bear moral responsibility for a choice. So how can we create safeguards? How can we set a simple rule ensuring that the final decision remains explicable, defensible and debatable? How can we stop ourselves from hiding behind automation to avoid the difficulty of deciding?

The third issue concerns inequalities. If we are not careful, AI could widen certain gaps: gaps between the largest organisations and smaller ones, gaps between the most widely used languages and less common ones, and gaps between well-connected urban zones and forgotten rural ones. Efficiency should not become a new generator of inequalities. How can we develop uses that are relevant and inclusive? How can we ensure that gains in productivity are also enjoyed by local players, by all structures in civil society and by populations themselves?

Lastly, there is the often-forgotten issue of resilience. What would happen if the AI bubble burst? What would happen if forms of dependence, beyond those painfully revealed recently, turned out to be critical? How can the aid sector, already weakened by multiple attacks, often orchestrated by certain leaders in AI themselves, keep its traditional expertise: analytical capabilities, robust methods, checking procedures, tried-and-tested logistical know-how and negotiation skills? AI can enhance these forms of knowledge. But it should not replace them.

For this third topic, we are asking for papers that raise questions about these interdisciplinary dimensions. How can AI change our perception of truth and our capacity to document reality? What ethical and legal frameworks could oversee automation of humanitarian decision-making? How can we stop AI from aggravating inequalities between the Global North and the Global South or between small and large structures? What resilience strategies are there in the face of risks of technological dependence? And how can we develop sovereignty of use that is neither a technophobic form of refusal nor submissiveness to the logic of tech giants or military players?

Types of papers sought

As with every issue of our review, this edition seeks to promote a wide range of formats and approaches in order to mix perspectives and enrich our collective reflections. We are asking for papers that combine analytical rigour with a grounding in field realities, papers that document concrete experiences while drawing broader lessons, and papers that ask awkward questions without sacrificing nuance. Cross-disciplinary approaches are especially welcome, for example those combining international humanitarian law, applied ethics, political science, the sociology of organisations and critical studies of technology. Another priority is first-hand accounts from humanitarian organisations that have already experimented with AI. Such documented, thoughtful accounts would be precious, whether they share successes, failures, unresolved tensions or ethical dilemmas. We are asking for papers that accept the complexity of this issue, papers that recognise the limits of AI as much as its benefits, and papers that share real lessons learned rather than slick communication.

We also welcome analyses of regulatory and governance frameworks. How do states, international organisations and funders oversee, or fail to oversee, the use of AI in humanitarian work? What standards are emerging? What shortcomings persist? How can we combine technological innovation with respect for humanitarian principles, personal data protection and accountability?

Lastly, papers that look to the future critically are also welcome. Where are we heading? What kind of future do we want? What kind of future do we want to avoid? How can we develop, together, a form of AI that serves humanity, instead of humanity serving AI? For these reflections to bear fruit, they should be rooted in in-depth knowledge of humanitarian work and they should avoid hasty generalisations or forecasts that are disconnected from reality.



References
1. Elizabeth Dwoskin, “Israel built an ‘AI factory’ for war. It unleashed it in Gaza”, The Washington Post, 29 December 2024.
2. Emmanuel Grynszpan, « Ukraine : sur le champ de bataille, l’IA décuple la précision des frappes », Le Monde, 14 November 2025.
