Is Peace a dream or a reality in the AI international landscape?
Updated: Jan 19, 2022
Contribution for The Good AI Community.
Article also available on their website: "Is Peace a dream or a reality in the AI international landscape?" - The Good AI.
This week, we celebrate the International Day of Peace, established by the United Nations General Assembly in 1981:
“Acknowledging (…) that it would be appropriate to devote a specific time to concentrate the efforts of the United Nations and its Member States, as well as of the whole mankind, to promoting the ideals of peace and to giving the positive evidence of their commitment to peace in all viable ways.
[And] considering that, through the declaration and proper celebration of (…) an International Day of Peace, it would be possible to contribute to strengthening such ideals of peace and alleviating the tensions and causes of conflicts, both within and among nations and peoples.”
Since then, peacemakers, citizens, and the international community at large have reaffirmed their commitment to Peace every year.
This year, it has a particular flavor in the international community, as UNESCO will invite its member states to adopt the final draft of its ‘Recommendation on the ethics of artificial intelligence (AI)’ in November. If adopted, these recommendations will be the first international instrument reaffirming the application of international law and its principles to artificial intelligence, including the common objective of promoting and preserving peace.
This matters in a context where the multiplication of national strategies and codes of conduct carries a risk of fragmentation, and in a realm where there is a consensus that the use of artificial intelligence applications at the international level may have unforeseen or serious consequences.
When it comes to Peace, AI can serve it, but it also amplifies the threats to it. For example, algorithms on social media platforms or on the internet can influence citizens and local populations through micro-targeting or misinformation campaigns, destabilizing the political scene and deepening divisions among the population or targeted ethnic groups, thus fueling identity-based conflicts. It is therefore of high importance – at least diplomatically – that 102 member states and 49 observers, including the United States and The Holy See, are willing to collectively reaffirm their commitment to Peace in an international instrument.
At the same time, member states are to reaffirm their commitment to international law, in particular to treaties and protocols such as the International Covenant on Civil and Political Rights and the Optional Protocol to the International Covenant on Economic, Social and Cultural Rights. The draft ‘Recommendation on the ethics of artificial intelligence’ thus reaffirms the human-centered and human rights-centered approach to artificial intelligence chosen by the United Nations since 2005 and restated in its digital cooperation strategy, while member states accept to make it a steering principle within the larger framework of the Sustainable Development Goals (SDGs).
Another way that the ‘Recommendation on the ethics of artificial intelligence’ supports Peace is by adopting a socio-technological definition of Artificial Intelligence that includes the ethical, sociological, psychological, ecological, and legal consequences of AI.
For the first time in an international instrument, the implications for human cognitive capacities are formally and directly addressed:
“Member States, companies and civil society [are also invited to] study the sociological and psychological effects on human beings with regard to their decision-making autonomy”.
This interdisciplinary approach is to be welcomed, as the psychological and cognitive implications of AI applications remain unrecognized and/or underestimated. In research that I conducted in 2018, it was already clear that “echo chambers”, together with personalization and micro-targeting through increasingly precise and powerful algorithms, reduce the development of individuals’ cognitive abilities. Children raised by – and with – algorithms are subject to a ‘digital determinism’, whilst adults who are increasingly assisted in their day-to-day lives (predictive analyses, mapping, etc.) see their decision-making capability diminish, or are so strongly influenced that they lose their critical thinking and their ability to challenge or oppose a preconceived scenario, as in Milgram’s experiment.
Interestingly, when it comes to promoting peace, UNESCO’s project also covers all commonly accepted ethical risks, underlining the importance of cultural diversity in the development of natural language processing (NLP) tools and of the voice to be given to minority groups, such as indigenous populations, and to vulnerable groups.
Why are those latter two points important for Peace?
Because there is a ‘negative peace’, related to preventing and resolving conflicts, for which AI applications already exist, such as mapping or large-scale one-on-one dialogue tools that provide real-time, at-scale surveys of targeted groups and populations.
But there is also a ‘positive peace’, which requires a culture of peace and a renewed commitment to it – including diversity, tolerance, and critical thinking – in order to make deliberate choices towards peace.
This year’s International Day of Peace is another chance to stand symbolically for peace and to ask ourselves what we can do in AI to support it.
Virginie MARTINS de NOBREGA