The digitalization of peacebuilding is now an undeniable reality. From simple online dialogues to complex systems that deliver early warning signals for conflict prevention, digital tools are being used in a myriad of ways to assist peacebuilding practitioners. Such tools promise to provide more accurate data about conflict, increase the capacity to pinpoint changing trends, anticipate future developments, expedite preventive action, and create opportunities for data- and evidence-based conflict resolution.
However, the well-intended use of technology may have many unintended consequences for technology users, peacebuilding actors, peace processes, and project or program objectives. These include the risk of technical malfunction, the manipulation of digital tools by governments or other powerful entities to harm people, and the risk of compromising ongoing peacebuilding operations.
How, then, can we guarantee the ethical use of digital tools in peacebuilding? What is the best way of mitigating harmful effects? Does the principle of ‘do no harm’ suffice, or must peacebuilding actors be held to a higher standard? If so, what standard might that be?
These questions were addressed during our workshop titled “Towards ethics-driven digital peacebuilding: How to promote the good use of technologies beyond ‘do no harm’” organized by the Kofi Annan Foundation with the Geneva Graduate Institute’s Centre on Conflict, Development and Peacebuilding (CCDP), the Peace Research Institute Oslo (PRIO), and Search for Common Ground as part of Geneva Peace Week.
Drawing on classical strands of philosophical ethics, participants were prompted to reflect on various approaches to engaging with the risks and challenges of digital tools, as well as their moral, social, and political implications. Some of the main points that emerged were the following:
Perspective 1: Global standard-setting is not sufficient
In theory, the most obvious way to curtail the risks posed by digital tools would be to formulate and enforce international standards for their proper and ethical use. Establishing good practices that are recognized by all may limit the potential abuse of digital tools and provide guidelines for the actors who engage with them. This approach also emphasizes the importance of preventing risk rather than responding to it.
Does it make sense to formulate standards in a “one-size-fits-all” manner?
In practice, however, global standard-setting raises questions of legitimacy and authority. Who has the legitimacy to establish these standards? Can they be enforced globally, especially given the current geopolitical environment? Does it make sense to formulate standards in a “one-size-fits-all” manner? Workshop participants seemed to agree that the tentative answer to these questions is “no”. At the same time, global ethical standards cannot replace international legal regulation of this field, an admittedly tedious and bureaucratic process that provides little nuanced guidance for specific contexts. Combining general legal rules with more context-specific but non-binding guidance on best practices might therefore be the best way forward.
For standards to be effective, they must be designed, in consultation with key stakeholders, for the contexts in which they will be implemented. For instance, our guest speakers from Sri Lanka showcased the importance of including civil society in multi-stakeholder responses to digital risks. In the context of hate speech and disinformation, civil society can play a crucial role in helping identify harmful behaviour stemming from both platform users and the platforms themselves. As is the case with Hashtag Generation and Search for Common Ground, civil society organizations serve as important cultural intermediaries that help platforms determine what is harmful within a given context.
Perspective 2: Ethical solutions may require pragmatic means
Promoting the ethical use of digital technology in context-specific settings may, however, come at a cost. Workshop participants observed that standards are often enforced with the help of government and private-sector actors who are, at times, themselves responsible for the unethical use of digital tools. Employing digital technologies thus commonly involves tradeoffs.
Is it possible to balance the ethical and unethical use of digital platforms to achieve long-term benefits?
In Sri Lanka, for example, the state has put in place certain measures to mitigate the spread of disinformation online while, at the same time, repeatedly using these same platforms to intimidate the opposition and promote ethno-nationalist discourse. This behaviour went unchecked by social media platforms, which often see no favourable cost-benefit case for investing in contextual monitoring systems. As a result, the burden of regulation and monitoring most often falls on the shoulders of civil society or on-the-ground practitioners, who should not be solely responsible for upholding ethical standards.
Can partnering with harmful actors, then, be justified in the pursuit of peaceful outcomes? Is it possible to balance the ethical and unethical use of digital platforms to achieve long-term benefits? Is there such a thing as a calculated risk when it comes to peacebuilding? More generally, do the ends justify the means?
Perspective 3: Values might be better than hard rules
However standards are conceived, we might also question whether they are truly the answer in a world where rules are regularly dismissed. Examples of the malevolent or irresponsible use of digital technology by states, the private sector, big tech, international organizations, and other key actors show that regulatory frameworks rarely achieve their purpose when there is no willingness to prioritize ethical concerns in the first place. Indeed, it can be argued that context-specific standards cannot even be established without a comprehensive understanding of the values that would underpin them.
What makes certain beliefs ‘ethical’ and others ‘unethical’?
As an alternative, workshop participants discussed complementing efforts to regulate digital approaches with a value-based approach that promotes the ethical use of technology at a personal level. Here, the emphasis would be placed on the core values and beliefs held by key actors in given contexts. In fact, private-sector companies and governments already employ digital technologies on the basis of certain values, sometimes tacitly and sometimes explicitly. This is not all that simple, however, given that values and beliefs are often heterogeneous. Appealing to users’ personal sense of ethics requires a good understanding of their concerns, priorities, and operating methods. Such an approach also comes with a series of unanswered questions: through what means are values identified? How can we determine which values and belief systems to prioritize in polarized settings? What makes certain beliefs ‘ethical’ and others ‘unethical’? Who decides?
Following a value-based approach, stakeholders would strive to negotiate, maintain, cater to, and promote ideas they believe in rather than adhering to a set of hard rules. These may be national values, but also community beliefs that can be drawn on to establish “social contracts” at various scales. As such, engaging with the question of the ethical use of digital spaces also requires going beyond the realm of technology: engaging in public deliberation about what is “right”, but also developing different actors’ capacities to promote a value-based approach.
The workshop marked the beginning of an important discussion and raised crucial considerations. Discussions on the ethics of digital peacebuilding tend to raise more questions than they answer. However, pinpointing the key concerns surrounding this topic is critical to advancing the debate and to raising awareness amongst peacebuilding actors, which, in turn, can lead to a stronger and more profound engagement with the risks and challenges of digital peacebuilding.
Lead author: Amanda Kutch. Editing contributors: Kristoffer Lidén (Peace Research Institute Oslo) and Andreas Hirblinger (Centre on Conflict, Development and Peacebuilding – Graduate Institute)
About the workshop
At least 40 peace practitioners, researchers and representatives from international organizations came together on 2 November 2022 in Geneva to discuss how various ethical perspectives could help in planning, implementing, and evaluating digital peacebuilding interventions. The in-person workshop was part of Geneva Peace Week and was organized by the Kofi Annan Foundation, the Graduate Institute’s Centre on Conflict, Development and Peacebuilding (CCDP), the Peace Research Institute Oslo (PRIO), Search for Common Ground and Hashtag Generation.
The workshop was moderated by Sofia Anton, Program Officer for the Youth, Peace & Trust Program at the Kofi Annan Foundation. It featured expert inputs from Senel Wanniarachchi (Co-Founder and Director of Hashtag Generation) and Kiruthika Thurairajah (Digital Peacebuilding Specialist at Hashtag Generation), who provided critical insights from the digital peacebuilding landscape in Sri Lanka. Dr Andreas Hirblinger (Senior Researcher at the CCDP, Switzerland) and Dr Kristoffer Lidén (Senior Researcher at PRIO, Norway) provided reflections on the risks and challenges of digital peacebuilding and how ethical perspectives can be used to address them. The reflections in this article are drawn directly from the discussion held during the workshop and remarks made by panellists or participants. The views expressed in this piece do not necessarily reflect the views of the Kofi Annan Foundation or individual participants.