Note: This post is under construction.

1. Introduction

With the recent widespread adoption of AI technologies, lively debates have emerged about the associated risks. These discussions span a spectrum from concerns about AI replacing jobs to the threat of autonomous robots harming humans. However, the debates often remain superficial, particularly among the general public, because terms like “AI” and “risk” carry vague meanings. Seeking a deeper understanding of the current landscape of AI risks and effective approaches to address them, I participated in an enlightening educational workshop on Responsible AI for Peace and Security in Malmö, Sweden, held on November 16 and 17, 2023. This post distills the insights gained during the two-day workshop, focusing on AI risks to peace and security, in the hope of contributing to a clearer perspective on the challenges these risks pose.

2. AI risks on peace and security

2.1. Definitions of peace and security

To begin, let’s delve into the definitions of “peace” and “security.” Both terms carry conventional and contemporary interpretations. Here, we present the definitions of peace:

  • Conventional peace (negative peace): Denotes the absence of conflicts.
  • Contemporary peace (positive peace): Encompasses the establishment of sustainable societies where individuals experience dignity, equality, and safety.

Now, let’s explore the definitions of security:

  • Conventional security (objective security): Signifies the state of not being threatened.
  • Contemporary security (subjective security): Involves feeling confident and free from danger.

As is evident, the two concepts overlap significantly, which is why they are commonly referred to together as “peace and security,” or P&S for short. One pivotal distinction between the conventional and contemporary definitions is that the former centers primarily on P&S among nations, whereas the latter extends its scope to individuals. In simpler terms, when we discuss P&S today, it is common to consider peace and security at the level of individual citizens.

More info: Defining the Concept of Peace

2.2. Definitions and categorization of risk

Let’s now turn our attention to the definition of risk. According to the Cambridge Dictionary, a risk is the possibility of something bad happening. A risk comprises two key components: likelihood and magnitude. The higher these two components are, the more severe the risk.
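As a rough illustration, severity is often modeled as the product of likelihood and magnitude. The Python sketch below assumes this multiplicative model, the 0-to-1 likelihood scale, and the arbitrary magnitude units purely for illustration; the workshop did not prescribe a specific formula.

```python
# Toy illustration: scoring a risk from its two components.
# The multiplicative model and the scales are assumptions for illustration,
# not a formula given in the workshop.

def risk_severity(likelihood: float, magnitude: float) -> float:
    """Return a severity score, assuming severity = likelihood * magnitude.

    likelihood: probability of the bad outcome occurring, between 0 and 1
    magnitude:  harm caused if it occurs, in arbitrary units (e.g. 0 to 10)
    """
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0 and 1")
    return likelihood * magnitude

# A rare but severe event can score the same as a frequent but minor one.
print(risk_severity(likelihood=0.01, magnitude=10.0))  # 0.1
print(risk_severity(likelihood=0.10, magnitude=1.0))   # 0.1
```

One consequence of this view is that low-probability, high-magnitude risks deserve attention even though they rarely materialize.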

Concerning the risks associated with AI, there are three primary types: accidental, misuse, and structural. The table below outlines the definition and provides an example for each type.

| Type | Definition | Example |
| --- | --- | --- |
| Accidental | The potential for unexpected negative outcomes arising from a proper use of AI. | An AI recruiting system makes a biased decision that unfairly disadvantages certain groups. |
| Misuse | The potential for negative consequences resulting from an intentional and malicious use of AI. | A deepfake video featuring a politician’s altered speech is disseminated on social media. |
| Structural | The potential for widespread influence that could be disruptive or harmful. | Big corporations dominate the AI market and possess extensive datasets about individuals. |

Note that the examples all connect to P&S: the AI recruiting system undermines equality, the politician’s deepfake video compromises the subject’s dignity and can spark political conflict, and market domination contributes to the concentration of wealth. Numerous situations exist in which AI undermines P&S, and organizing them into these three types makes it easier to reason about AI risks.
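For readers who think in code, here is a minimal sketch that encodes the three risk types and the table’s examples as plain Python data. The class and field names are my own illustrative choices, not terminology from the workshop.

```python
# Minimal sketch: the three AI risk types from the table above, encoded as
# plain Python data. The names and structure are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskType(Enum):
    ACCIDENTAL = "accidental"   # proper use, unexpected negative outcome
    MISUSE = "misuse"           # intentional, malicious use
    STRUCTURAL = "structural"   # widespread, potentially disruptive influence


@dataclass
class AIRisk:
    risk_type: RiskType
    example: str


examples = [
    AIRisk(RiskType.ACCIDENTAL, "A recruiting AI makes a biased, unfair decision"),
    AIRisk(RiskType.MISUSE, "A deepfake of a politician is spread on social media"),
    AIRisk(RiskType.STRUCTURAL, "Big corporations dominate the AI market and data"),
]

for risk in examples:
    print(f"{risk.risk_type.value}: {risk.example}")
```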

2.3. Risk factors

Why, then, are there so many AI risks to P&S? Behind them lies a set of risk factors specific to AI. Below, I enumerate some of them.

  • Dual use: The same AI technology can be utilized in the military and civilian domains. AI technologies typically advance more rapidly in the private sector, where regulation and control are often more challenging than in the military industry. Developments under lax rules in the private sector may subsequently be repurposed for military use, posing risks.

  • Intangibility: An AI model lacks a physical presence, making it easy to copy and disseminate. While this accessibility benefits everyone, it also increases the potential for misuse.

  • Lack of Understandability: Deep Neural Networks (DNNs) are typically black boxes, challenging even for specialists to comprehend. Their probabilistic and opaque nature can occasionally lead to accidental risks (see the toy sketch after this list).

  • Open/Closed Development: Some companies opt to make their models open, while others keep them proprietary and provide access through APIs. Each approach comes with its own set of advantages and disadvantages. Open-sourcing models enhances transparency and availability but also increases the risk of misuse. Closed strategies, on the other hand, mitigate misuse risk to some extent but come at the cost of reduced transparency and availability.
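As a toy illustration of the understandability point above, the sketch below builds a tiny neural network with NumPy: it produces a confident-looking probability, yet inspecting its individual weights reveals nothing human-readable about why. The architecture, weights, and input are random values chosen purely to make the point; this is not a real trained model.

```python
# Toy sketch of why DNNs are hard to interpret: the prediction comes out of
# many anonymous numeric weights, none of which is human-readable on its own.
# Sizes, weights, and input are arbitrary; this is not a trained model.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 2))   # hidden -> 2 output classes

def predict(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())   # softmax over the two classes
    return exp / exp.sum()

x = rng.normal(size=4)          # some input features
probs = predict(x)
print(probs)   # a confident-looking probability pair, but inspecting the
               # 48 weights in W1 and W2 does not reveal *why* it decided this
```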

Understanding these AI-specific risk factors makes it easier to devise countermeasures and can help prevent problems before they arise.

3. How to mitigate AI risks

3.1. International governance in general

We are currently facing various transnational challenges, such as COVID-19 and climate change. International governance refers to the institutions, policies, norms, and related mechanisms designed to address such challenges. It involves many actors and is a multi-stakeholder endeavor. The main actors include:

  • Governmental: United Nations, states
  • Non-governmental: NGOs, academia, companies, individuals.

The involvement of many actors can lead to good governance that takes different perspectives into account. However, it can also slow down decision-making. This delay is a particular concern in the governance of AI; details are discussed in Section 3.3.

Next, let’s explore a classification of international governance mechanisms. There are three main types:

  • Soft law: non-binding or less formally binding principles, guidelines, and standards
  • Hard law: legally binding formal agreements or treaties that are enforceable by legal mechanisms
  • Self-regulation: the ability or process by which individuals, organizations, or industries set and enforce their own standards and guidelines without external imposition.

Each type has its advantages and disadvantages.

| Type | Establishment | Flexibility | Effect | Compliance |
| --- | --- | --- | --- | --- |
| Soft law | Easy | High | Moderate | Low |
| Hard law | Hard | Low | Large | High |
| Self-regulation | Easy | High | Small | Moderate |

Because it is legally enforceable, hard law is difficult to reach a consensus on and establish, and the considerable effort required to amend it makes it inflexible. Once enacted, however, it carries considerable authority, enabling enforcement and compliance. Soft law, on the other hand, is easy to enact or amend but carries the risk of non-compliance by the parties involved. Self-regulation involves only one or a few parties setting and enforcing their own rules, and in that respect it behaves much like soft law.

In the realm of technology governance, two contrasting approaches emerge: enabling and limiting.

| Direction | Definition | Example |
| --- | --- | --- |
| Enabling | Facilitating the advancement of technology | IEEE/ISO standardizations |
| Limiting | Regulating rapidly advancing technology | GDPR, export control, disarmament |

Both approaches are necessary for sound governance.

3.2. Current state of AI governance

3.3. Challenges

4. Responsible innovation of AI

5. Possible scenarios

6. Thoughts

The pharmaceuticals discussion, Japan’s position,