A fascinating advance currently underway in the field of artificial intelligence is the development of models to assist with peace negotiations. In collaboration with the White House, a think tank in Washington has been developing a simulator that generates potential peace agreements to end the war in Ukraine and scores the likelihood of each deal being accepted by the combatants and other stakeholders. Another project, a partnership between the British Foreign Office and the University of California, Berkeley, aims to advise negotiators by generating a range of possible ‘voices’ or perspectives. The idea is that the AI model will enable officials to anticipate possible responses from various parties, whether superiors back home or hostile interlocutors, preparing them to reframe negotiating positions quickly and to maintain momentum in talks.
AI Hawks, AI Doves
Interestingly, in test runs of different negotiating models, some have proved rather escalatory, opting to use force too readily, while others have been somewhat risk-averse, or overly conciliatory. What does it mean to say that an AI negotiator is too escalatory or excessively conciliatory? Such judgements make sense to us, but in what sense can we attribute these qualities to machines? Obviously, an AI model has no conscious grasp of a conflict situation or of what is at stake for the various parties involved, so there is no sense in which it can really be either risk-averse or hostile. All that we can mean, therefore, is that its coding is such that it produces certain types of output in response to specific types of input. The important point here is that these judgements are human: an AI model can only be escalatory or conciliatory in our opinion, relative to our own perception of a situation and the ends that we wish to achieve. To say that a machine is too ready to resort to force means only that in the same situation – and given the manifold ends in view – we would have made greater efforts, and perhaps been willing to concede more, in order to avoid such an outcome.
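To make the point concrete, consider a deliberately simple sketch. It is purely illustrative, with invented names and numbers, and describes no real negotiation system: whether a proposed action counts as ‘escalatory’ here depends entirely on a threshold that human beings have chosen.

```python
# Purely illustrative: a toy labelling of model proposals, not any real system.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    force_score: float  # 0.0 = wholly conciliatory, 1.0 = maximal use of force (a scale we define)

# A human-chosen threshold: proposals above it are called 'escalatory' by us, not by the machine.
ESCALATION_THRESHOLD = 0.6

def label(action: ProposedAction) -> str:
    """Apply our own threshold to the model's output."""
    return "escalatory" if action.force_score > ESCALATION_THRESHOLD else "conciliatory"

proposals = [
    ProposedAction("offer a phased ceasefire with third-party monitoring", 0.15),
    ProposedAction("threaten to suspend talks unless terms are accepted", 0.55),
    ProposedAction("recommend an immediate military response", 0.85),
]

for p in proposals:
    print(f"{p.description}: {label(p)}")
```

Move the threshold and the same outputs are relabelled; the machine has not changed, only our judgement of it.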
The Human Factor
Artificial intelligence models can of course be trained or programmed to respond differently. Indeed, there are projects underway to improve the responses of AI negotiation models, one of which aims to convert information about appropriate and inappropriate human language and actions into code. The aim, in other words, is to render the model more human, so that it responds in a manner closer to our own. However, it is not clear what this would entail. As has been demonstrated in several contexts recently, negotiation is complex and there is no straightforward ‘human’ response to resolving conflicts. Different parties, while agreeing that they desire peace, will take radically different positions on what its conditions should be.
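The article does not say how such a project would proceed, but one familiar approach, offered here only as a hypothetical sketch with invented examples, is to record human judgements about more and less appropriate responses as labelled pairs, which can then be used in preference-based fine-tuning of a model.

```python
# Hypothetical sketch of preference data: one common way human judgements about appropriate
# and inappropriate responses are turned into training material. Invented examples only.
from dataclasses import dataclass
from typing import List

@dataclass
class PreferenceExample:
    prompt: str    # the negotiating situation put to the model
    chosen: str    # the response a human annotator judged more appropriate
    rejected: str  # the response judged less appropriate

dataset: List[PreferenceExample] = [
    PreferenceExample(
        prompt="The other delegation has rejected the draft ceasefire terms.",
        chosen="Acknowledge their concerns and propose a revised timetable for withdrawal.",
        rejected="Declare that all previous concessions are withdrawn and end the talks.",
    ),
]

def to_training_rows(examples: List[PreferenceExample]) -> List[dict]:
    """Flatten labelled pairs into rows that a preference-based fine-tuning pipeline could consume."""
    return [{"prompt": e.prompt, "chosen": e.chosen, "rejected": e.rejected} for e in examples]

print(to_training_rows(dataset))
```

Even in this form, the point above stands: every record encodes a particular annotator's view of what counts as appropriate, not an objective fact about the right response.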
This, however, is the point: however sophisticated any negotiating model might become, however apparently human, and however capable of foreseeing possible responses and generating alternative proposals, it can only function on the basis of coding that reflects our own preferences and judgements (or those of the parties by whom it has been trained). In matters of politics and morality, complete neutrality is all but impossible. There is no such thing as complete objectivity or an objectively optimal outcome. Any outcome must always have its foundations in the judgement of the parties involved: their weighing of considerations such as national interest, what constitutes a fair settlement, the significance of environmental damage, the value of human life, and the availability and best use of resources, together with the strength of their desire for prosperity and peace rather than conflict.
For this, certain virtues need to be exercised, and we hope for the display of qualities such as prudence, wisdom, justice and moderation. As excellences of character cultivated over time, these are uniquely human and cannot be possessed by machines. Where artificial intelligence can support leaders and negotiators in achieving peace, it is to be welcomed, but decisions can only ever be made by human beings exercising their faculties of judgement, hopefully informed by the requisite virtues – qualities of character that can shape the training of AI models, but never be replicated or replaced.
Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics.
Image courtesy of Freepik (www.freepik.com)