Generative AI in Diplomacy and WMD Non-Proliferation: Navigating Opportunities and Challenges

24 September 2024 • Event
A VCDNP workshop, led by Dr. Natasha Bajema, delved into the emerging role of generative AI in diplomacy and WMD non-proliferation. The session illuminated AI's potential to revolutionise the field while emphasising the critical need for caution, ethical considerations, and human oversight.

On 4 September 2024, the Vienna Center for Disarmament and Non-Proliferation (VCDNP) hosted a timely workshop to explore the rapidly evolving role of generative artificial intelligence. The interactive workshop provided participants with the foundational knowledge necessary to understand and navigate the complex challenges and opportunities of emerging AI applications in the realm of weapons of mass destruction (WMD) non-proliferation. The training was led by Dr. Natasha Bajema, Senior Research Associate at the James Martin Center for Nonproliferation Studies (CNS), and brought together 21 diplomats and policymakers from Vienna's international community.

The workshop opened with a historical overview of developments in AI dating back to the 1950s and charting its early hype cycles and setbacks. Dr. Bajema noted that, while the field initially inspired great enthusiasm, it faced periods of stagnation—the so‑called "AI winters"—before recent breakthroughs reignited interest. However, Dr. Bajema cautioned against overestimating AI's current capabilities, especially in high-stakes areas like WMD non-proliferation where accuracy and reliability are paramount.

To provide a foundation for participants, Dr. Bajema introduced key AI concepts, including machine learning (ML), neural networks, and the different learning methods used to train AI models (a brief illustrative sketch follows the list below):

  • Supervised Learning: where the model is trained on labelled datasets.
  • Unsupervised Learning: allowing the model to develop its own set of rules and categories by identifying patterns in unlabelled datasets. This method is often used to initially train Large Language Models (LLMs) like ChatGPT.
  • Reinforcement Learning: teaching the model through trial and error to improve its performance and maximise the expected reward in a specific environment.
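
As a rough, illustrative sketch of how these three paradigms differ in practice (not part of the workshop materials, and assuming the NumPy and scikit-learn libraries are available; all data here is invented), the Python snippet below fits a classifier on labelled data, clusters the same data without labels, and lets a simple agent learn a toy reward structure by trial and error:

```python
# Minimal sketch contrasting the three learning paradigms discussed above.
# Assumes numpy and scikit-learn are installed; data and rewards are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Supervised learning: fit a model to labelled examples ---
X = rng.normal(size=(100, 2))                 # features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # labels supplied with the data
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# --- Unsupervised learning: find structure in unlabelled data ---
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))

# --- Reinforcement learning: trial and error to maximise expected reward ---
# Toy two-armed bandit: arm 1 pays off more often; the agent discovers this.
true_payoff = [0.3, 0.7]                      # hidden reward probabilities
estimates, counts = [0.0, 0.0], [0, 0]
for _ in range(1000):
    arm = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_payoff[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running average
print("estimated payoffs after trial and error:", [round(e, 2) for e in estimates])
```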

Dr. Bajema then turned to generative AI (GenAI), which has garnered global attention for its ability to generate novel content, from text to visual art, based on learned patterns. This ability to create novel content is a key difference between GenAI and more traditional AI models, which the speaker referred to as predictive AI. Predictive AI models are "narrow" tools, designed to predict a particular outcome within a specific context. As a result, predictive AI models, such as those used for anomaly detection, can produce evidence-based outcomes with a certain level of accuracy, which could aid decision-makers in identifying illicit nuclear activities. GenAI, on the other hand, is designed to produce novel content every time. While this inherent randomness can be useful in creative fields, extreme caution and careful refinement of models are needed before introducing GenAI into WMD non-proliferation work, where precision is critical and errors could have severe consequences.
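
This contrast between predictive and generative behaviour can be made concrete with a small, hypothetical sketch (again assuming NumPy and scikit-learn; all data is invented): the anomaly detector below returns the same verdict every time for the same input, whereas sampling from even a toy generative distribution produces different output on each call.

```python
# Minimal sketch of the predictive-vs-generative distinction drawn above.
# The anomaly detector is deterministic for a fixed input; the toy "generative"
# sampler is inherently random. All data here is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# --- Predictive AI: anomaly detection gives the same verdict for the same input ---
readings = np.concatenate([np.random.default_rng(1).normal(0, 1, (200, 1)),
                           [[8.0]]])          # one obviously anomalous reading
detector = IsolationForest(random_state=0).fit(readings)
print("anomaly flag for reading 8.0:", detector.predict([[8.0]]))   # -1 means anomaly

# --- Generative AI: sampling from a learned distribution is inherently random ---
vocab = ["the", "treaty", "entered", "into", "force", "today"]
next_token_probs = np.array([0.05, 0.30, 0.20, 0.15, 0.20, 0.10])   # toy "model" output

def sample_next_token(temperature=1.0):
    # Higher temperature flattens the distribution and increases randomness.
    logits = np.log(next_token_probs) / temperature
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.random.choice(vocab, p=probs)

print("three generative samples:", [sample_next_token() for _ in range(3)])
```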

“The nuclear dimension is being impacted by AI in very troubling, indirect ways. Particularly in the nuclear decision-making space, where these models are used as intelligence analysis support tools, providing information to the highest level of decision-makers.”

Dr. Bajema also stressed the intrinsic risks of advanced AI's "black box" problem, in which decisions made by AI models lack transparency and are difficult to explain. Without robust oversight, this opacity could lead to misinterpretations or misguided policy decisions. Furthermore, AI models embedded in key decision-making systems may introduce significant cyber vulnerabilities. Backdoor access to an AI model, gained through targeted hacking, could allow attackers to alter the fundamental functioning of the model itself. Such unauthorised access may persist for some time before being noticed, presenting a significant risk when implementing AI in critical systems.

Ethical concerns were a major theme of the workshop, particularly regarding the potential for AI models to perpetuate bias or plagiarise content. Dr. Bajema pointed out that many AI models are trained predominantly on Western data, and this could inadvertently skew outcomes in politically sensitive contexts like international negotiations. If left unaddressed, this bias could have far-reaching impacts on decision-making. 

In responding to public fears, Dr. Bajema reassured participants that AI is far from achieving Artificial General Intelligence (AGI), a level where machines match human cognitive abilities. Instead, Dr. Bajema emphasised, machine learning remains fundamentally distinct from human cognition. While AI models analyse discrete pieces of information, they do so without the integrated, intuitive understanding humans possess. Given this, Dr. Bajema encouraged participants to view AI as a tool to complement human expertise, rather than replace it.

To conclude the workshop, Dr. Bajema demonstrated practical applications of GenAI, such as using ChatGPT for drafting policy briefs and Google's Gemini for simulating diplomatic scenarios. Participants were particularly interested in Dr. Bajema's advice on prompt engineering, tailored to provide Vienna-based diplomats and policymakers with actionable frameworks for improving their AI-assisted work. While these tools can enhance productivity in day-to-day tasks, Dr. Bajema cautioned that their utility in more nuanced areas, such as drafting negotiation texts, remains limited for now.
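
By way of illustration only (not taken from the workshop materials), the sketch below shows one common prompt-engineering pattern for a drafting task of this kind: a system message fixing the role, audience, and constraints, and a user message carrying the specific request. It assumes the openai Python package (v1.x) and an API key in the OPENAI_API_KEY environment variable; the model name is a placeholder, not an endorsement of any particular tool.

```python
# Illustrative prompt-engineering sketch for an AI-assisted drafting task.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a drafting assistant for a Vienna-based diplomat. "
    "Write in concise, neutral UN-style English, flag any factual claim "
    "you are not certain of, and do not invent treaty citations."
)
user_prompt = (
    "Draft a one-page policy brief (max 400 words) summarising the "
    "opportunities and risks of generative AI for WMD non-proliferation, "
    "structured as: background, key risks, recommendations."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.3,  # lower temperature for more consistent drafting output
)
print(response.choices[0].message.content)
```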

