Blog
28.09.2023

Charting a Course for Global AI Governance: Options and Lessons from Nuclear History

David Backovsky and Professor Bryson unpack their recent publication in the journal Horizons, “Going Nuclear? Options and Precedents for Transnational Governance of AI”, in which they analyse the current landscape of global AI governance and reflect on possible lessons to be taken from several decades of global governance of nuclear technology.

In response to the recent proliferation of technological innovations that are in part subsumed under the term "Artificial Intelligence" (AI), policymakers are increasingly exploring regulatory approaches to minimise potential harms. In this article we make two key arguments regarding the global governance of AI. Firstly, we assert that the current landscape of transnational governance of AI is overly fragmented and insufficient for the task at hand, providing a particular critique of one of the frontrunners, the Global Partnership on AI (GPAI). Secondly, we delve into the debate on AI and nuclear governance, arguing that there are critical lessons to be learned from the International Atomic Energy Agency’s (IAEA) history and the role of American strategy in generating a legitimate nuclear governance regime. We argue that these larger structural lessons are currently not being heeded, and that they will ultimately prove far more important than overly scrupulous attention to the less relevant differences between nuclear and AI governance.

The Global Partnership on AI and the Fragmented Terrain of AI Governance

The current terrain of global AI governance remains fragmented, with a series of international organisations and institutions competing for legitimacy and influence. This generates a largely dysfunctional and ineffective global governance regime that is insufficient for the enormity of the task at hand. The key actors in AI global governance include:

  • Global Partnership on AI (GPAI)
  • Organisation for Economic Co-operation and Development (OECD)
  • International Telecommunication Union (ITU)
  • United Nations Educational, Scientific and Cultural Organisation (UNESCO)

We begin by providing a focused critique of the Global Partnership on AI (GPAI), stemming from its special standing among powerful nations and leveraging Professor Bryson's first-hand experience as a member of GPAI's inaugural cohort of experts. Established in 2020 as a Franco-Canadian initiative, GPAI’s membership quickly grew to 29 states. It remains the international organisation involved in AI with perhaps the strongest level of state support, as can be seen in its recent involvement in the G7 Hiroshima AI process.

GPAI was initially rumoured to be an AI equivalent of the Intergovernmental Panel on Climate Change (IPCC). AI in this sense was conceived as natural in kind, like the climate, with a new science required to understand "machine behaviour". This initial understanding was already problematic – unlike the climate, AI is not a natural entity but a set of engineering techniques, and is inseparable from its governance context.

Early critics questioned whether GPAI was intended to counter the rising influence of China in AI or to push back against tech giants. Other early doubts concerned its role and impact on innovation (although current research is clear on regulation’s positive impact on innovation). In any case, GPAI's remit was significantly impeded from the start. This was most prominently seen in the Trump administration's insistence on neutering its ability to do "normative" work, which was later interpreted to mean that the governance of AI was not admissible as a subject of expert study at GPAI. This was an astonishing interpretation to many external observers, who had understood the governance of AI to be GPAI's primary intended role.

While GPAI has accomplished some excellent work, it falls short of its purported goal of advancing AI governance. Thus, GPAI's first three years have left many experts disappointed, questioning whether the organisation can meaningfully impact global AI governance. With AI's growing importance to the global geopolitical and economic balance, the stakes are too high for a global governance mechanism to be so ineffective. We believe that lessons from the history of governing another potentially harmful emerging technology may prove more instructive.

The Atomic Debate

Recent debates have contrasted AI and nuclear governance, with experts in responsible and ethical AI implementation suggesting a similar oversight model. However, nuclear experts, drawing on their experience of the complicated and far-from-perfect history of nuclear governance, have urged caution. Despite clear limitations in the nuclear analogy, we argue that core structural lessons – particularly from the International Atomic Energy Agency (IAEA) and the history of nuclear governance – are applicable.

Whether by design or accident, Eisenhower's decision in 1953 to push the Atoms for Peace initiative helped lay the groundwork for effective nuclear governance around the world. The IAEA's history provides us with critical lessons about developing effective, technologically focused international organisations. The IAEA was originally established in 1957, but safeguards were not its primary concern; it focused instead on the promotional aspects of nuclear energy and on technical coordination in early applications of nuclear technology. It wasn't until the mid-1960s, when the Chinese nuclear tests at Lop Nur capped a worrying rise in nuclear proliferation, that the United States and other states took decisive, cooperative action by establishing the Non-Proliferation Treaty (NPT), which came into force in 1970. The treaty made the IAEA one of the core pillars of the non-proliferation regime, as the watchdog in charge of nuclear safeguards – a legitimate and honest broker in the field. It was over a decade after its founding that the now mature organisation took on the responsibility of a larger international treaty. Later, the Agency would face and address many challenges to maintain its legitimacy; from the tragedy of Chornobyl, to countering US claims about Iraqi weapons of mass destruction, to the crisis in Ukraine, the IAEA has largely been a successful organisation. This historical context emphasises the potential for success when an organisation is centralised and focused on a specific technology. It also illustrates the importance of allowing time for organisations to build the expertise and infrastructure needed to meet emerging international mandates. We argue that, while nuclear technology governance, like AI governance, requires unique approaches, the lessons of a centralised approach with technical capacity are universal.

Today, the IAEA is one of the most essential organisations in managing nuclear technology globally – not only in its watchdog power to audit private and public facilities around the world, but also in its ability to act as a technical coordinator in a wide range of fields, from agriculture to law enforcement and medicine. In fact, many have argued that it is one of the most efficient and effective international organisations in the world, and one that would have to be invented if it didn't already exist. The IAEA employs experts in nuclear physics, engineering, and high-level data analysis, and manages world-class nuclear laboratories across different countries. The Agency also collaborates with organisations at both the national and subnational levels across the world. These decades of work demonstrate just what is possible for an international organisation with a single technological focus.

Certainly there are core differences between AI and nuclear fission technology, among them physicality (although computation is ultimately a physical process), ease of proliferation, and pathways to harm. We agree with critics of nuclear governance as a model for AI on many points – AI and nuclear technology are far from analogous. However, we argue that the IAEA and the non-proliferation regime together constitute one of the most successful models for governing a potentially harmful emerging technology that we have ever seen globally – and their structural lessons are worth heeding. Our key points are as follows:

  • Establish a centralised organisation modelled after the IAEA to combine political and technical functions. This means any such agency needs both a strong and politically insulated secretariat and extensive in-house technical expertise.
  • Ensure the organisation embodies the IAEA's unique blend of political accountability and strong independence to achieve legitimacy on the international stage.
  • Recognise that international organisations with technical mandates require time to mature, and therefore establish such an agency expeditiously. A case in point: the IAEA was established in 1957, but only became the nuclear watchdog in the 1970s with the implementation of the Non-Proliferation Treaty.
  • Once established and matured, the agency should be prepared to assume expanded international responsibilities, in response to whatever treaty and regime the international community agrees upon for AI. This mirrors the IAEA's role in implementing the Non-Proliferation Treaty.
  • Begin by imbuing the agency’s mandate with a technical coordination power, facilitating responsible AI implementation across borders. A legitimate international organisation with the expertise to support the responsible implementation of AI would in and of itself be of great value.
  • Equip the agency with robust capabilities for auditing AI models in both public and private entities, similar to the inspection powers granted to the IAEA under the NPT.
  • Acknowledge that the governance of AI, much like that of nuclear technology, will follow a sui generis trajectory, underlining the urgency of immediate action to permit the development of effective governance.
  • The success of nuclear governance globally has, at least in part, been driven by the United States’ capability to establish legitimate global governance. Given its dominant position in the AI commercial sector, which parallels its early dominance of nuclear technology, the United States should aspire to lead in forming a credible and inclusive governance framework.

A Layered Solution

Finally, we draw inspiration from the Intergovernmental Panel on Climate Change (IPCC), an organisation frequently cited in discussions about AI governance. However, our perspective diverges from the norm: we advocate modelling AI governance on the process through which the IPCC was created rather than on its operational methods. Our argument unfolds in the following sequence. First, the IPCC was established by building upon an existing framework that included the United Nations Environment Programme (UNEP), the World Meteorological Organisation (WMO), and various other partner organisations. Similarly, a stronger AI agency could be constructed by layering it onto the current landscape of GPAI, OECD, UNESCO, and ITU. Second, while the IPCC stands as a successful example of a scientific coordination organisation, its power structure is not robust enough to serve as an ideal blueprint for the transnational governance of AI. Lastly, the IPCC came into existence under politically challenging conditions, specifically during the climate-sceptical Reagan administration. This should remind us that even in seemingly inopportune circumstances, it is possible to establish legitimate and effective global institutions. In sum, we suggest the possibility of a more centralised agency layered onto, and coordinating with, the current actors in the global AI governance space.

Conclusion

The pressing challenge of AI governance demands a coherent, centralised, and transnational approach. This article posits two central arguments: first, that current efforts, particularly those of the Global Partnership on AI (GPAI), are both fragmented and insufficiently ambitious; and second, that the governance of AI could benefit profoundly from structural and historical lessons drawn from the nuclear sector and the International Atomic Energy Agency (IAEA).

Our focused critique of GPAI, drawing on insider experience, reveals that it has not fulfilled its initial promise. Conceived with great expectations but hampered by unclear objectives and limited scope, GPAI demonstrates that a disjointed approach of half-measures is inadequate to address the complexities and global implications of AI. We further contend that the atomic analogy, while not without its limitations, offers invaluable lessons for crafting an international governance framework for AI. Just as the IAEA evolved into a mature and effective institution, a similar trajectory is conceivable for an international body dedicated to AI governance.

Our recommendations outline the essential characteristics this entity must possess, not least among them an independent and accountable political structure, strong auditing capabilities, and a mandate for technical coordination. Additionally, our examination of the Intergovernmental Panel on Climate Change (IPCC) reveals another viable path for global AI governance. By layering upon the existing fragmented institutional frameworks, a new international body can emerge with strengthened capabilities, much like the IPCC, but with traits more comparable to those of the IAEA.

Though AI is often presented as a natural entity, it is in fact a set of engineering techniques with diverse economic and security potential. Nations that wish to lead, including the United States, should carefully examine the successes achieved – not least by the Americans in strategically yet cooperatively driving nuclear governance and the non-proliferation regime – and apply these lessons to the current failings of GPAI and other candidate agencies. Whenever the United States has decided to strategically support global governance, this has served as a catalyst, enabling the international community to form regimes that have been largely effective in governing emerging and potentially dangerous technologies.

The authors would like to extend their heartfelt gratitude to Dr. Ronny Patz for his invaluable contributions during the conceptualisation phase of this piece and his insightful editorial comments that greatly enhanced the overall quality of the article.

Teaser photo by Joshua Kettle on Unsplash.