On 29 February 2024, the VCDNP welcomed Dr. Ian Stewart, Executive Director of the Washington, D.C. office of the James Martin Center for Nonproliferation Studies, to deliver a seminar examining the challenges and opportunities that generative AI presents for the non-proliferation domain. The event was moderated by VCDNP Executive Director Elena K. Sokova.
In his opening remarks, Dr. Stewart underlined the challenge of forecasting the capabilities of this nascent and rapidly evolving technology. Generative AI has caught popular attention thanks to ChatGPT's ability to generate text, and other generative models have since introduced image and video capabilities. While these new tools have the potential to transform many sectors, practical use cases are still emerging.
Dr. Stewart explained that, without a common definition of AI, current discussions about the risks and opportunities it may hold lack a shared frame of reference. Public debates often mix different concepts of AI, ranging from machine learning capabilities that have been used in military and civilian analysis efforts for decades to the dystopian idea of a superhuman consciousness.
Summarising ongoing efforts by the United States and the European Union to delineate and regulate AI, Dr. Stewart concluded that there is a need for greater clarity about which capabilities and impacts of AI models are both important and feasible to control.
Focussing on the role that large language models might play in the production of nuclear, chemical, and biological weapons, Dr. Stewart outlined several use cases. Beyond the already widely discussed potential use of AI in creating pathogens for biological warfare, AI could also help design and produce other weapon parts and components. Additionally, with text-based and, in future, image- and video-based diagnostic capabilities, AI could help to fix machining flaws and enhance production flows in weapons development.
Furthermore, large language models can speed up the coding of computational simulations, for example, modelling the fluid dynamics of ballistic and cruise missiles or the neutron flux in nuclear material and equipment. Widely available AI models could thus replace the need for specialist software for these purposes.
At the same time, AI capabilities could, in future, support non-proliferation efforts. Primarily, AI can speed up data collection and analysis, which is key to investigations of potential CBRN weaponisation attempts, including by the International Atomic Energy Agency (IAEA) and the Organisation for the Prohibition of Chemical Weapons (OPCW). AI systems can also be useful for data extraction, for example, identifying actors, locations, or dates in a given dataset. Additionally, there are ongoing efforts to integrate AI into the analysis of satellite imagery, which is otherwise a time- and resource-intensive process.
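To make the data-extraction use case concrete, the minimal sketch below shows how named entities such as organisations, places, and dates might be pulled from open-source text. It assumes the open-source spaCy library and its small English pipeline; neither tool was named in the seminar, and the sample sentence is invented for illustration.

```python
# Minimal sketch of entity extraction from open-source text.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical sample sentence, for illustration only.
text = (
    "On 12 March, inspectors from the IAEA visited a facility "
    "near Vienna operated by Example Corp."
)

doc = nlp(text)

# Print each detected entity with its label, e.g. ORG, GPE (place), DATE.
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Entities extracted in this way could then feed the broader data collection and analysis workflows described above, for instance by flagging actors and locations in a large document set for an analyst to review.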
During the Q&A session, participants discussed the use of AI for disinformation campaigns, how AI could be used to evade export controls, and the feasibility of controlling the spread of the hardware and technology needed to develop and train large language models.
For more detailed information on the potential impact of generative AI on non-proliferation, read Dr. Stewart’s recent paper, titled “A Framework to Evaluate the Risks of LLMs for Assisting CBRN Production Processes”.