Artificial Intelligence and NC3: P5 Perspectives

3 April 2024 • Event
The VCDNP and the European Leadership Network (ELN) hosted a public hybrid event on the ELN report “AI and nuclear command, control and communications: P5 perspectives” to explore the risks of AI in nuclear decision-making.

On 21 March 2024, the VCDNP and the European Leadership Network (ELN) hosted a hybrid panel discussion on the report “AI and nuclear command, control and communications: P5 perspectives,” which explores the potential benefits and risks of integrating artificial intelligence (AI) in nuclear command, control, and communications (NC3) systems. The speakers provided an overview of the report’s findings and discussed the varying perspectives in four nuclear-weapon states: the United Kingdom, China, France, and Russia.

The panel discussion, moderated by VCDNP Executive Director Elena K. Sokova, featured the following speakers:

  • Alice Saltini, Research Coordinator, European Leadership Network (ELN)
  • Fei Su, Researcher, Stockholm International Peace Research Institute (SIPRI)
  • Héloïse Fayet, Research Fellow, Security Studies Center, French Institute of International Relations (IFRI)
  • Oleg Shakirov, Researcher and PhD Student, Johns Hopkins School of Advanced International Studies

The discussion started with Alice Saltini providing an overview of the report. She explained that the implications of integrating AI in nuclear decision-making and related systems are not yet fully understood. While ongoing technological developments could resolve some current limitations, they could also generate new, unanticipated risks.

Ms. Saltini identified four key challenges associated with advanced AI models in nuclear decision-making. The first is unreliability: advanced models are prone to “hallucinate” by generating false data, which in the nuclear domain could mean “hallucinating” an incoming attack. The second is the inherent “black-box” nature of deep learning models, which makes it difficult to understand the internal decision-making process that leads to a given output. Third, AI can render systems more vulnerable, especially to cyber attacks, which could expose sensitive military data and lead to data manipulation. Finally, there remains the challenge of ensuring that AI systems align with given goals and values.

Ms. Saltini highlighted that, given these shortcomings, it is important that the use of AI be safeguarded by significant checks and redundancies. To this end, she recommended that nuclear-weapon states develop a risk assessment framework when contemplating integrating AI into critical military platforms, such as NC3 systems. Until such a framework is agreed and rigorously evaluated, a moratorium on the integration of AI into critical NC3 functions should be put in place.

Ms. Saltini also provided an overview of the United Kingdom’s perspective on AI integration into military systems. She stressed that the UK has placed significant emphasis on the responsible use of AI in military systems, with a strong focus on safety, legality and ethics. At the same time, the UK views the integration of AI into military systems as important to prevent adversaries from gaining advantages. While the UK has highlighted the use of AI to support decision-making processes and assist in situational awareness, it has stated that it is not interested in fully autonomous functions for decision-making.

Alice Saltini (centre) presenting the ELN report on AI in NC3 systems

Fei Su highlighted the role of AI in China’s decision-making and explained that since 2017 there has been a surge in Chinese literature on AI-enabled NC3, ranging from decision-making support and improved early warning and targeting to enhanced autonomous strike capabilities for nuclear weapons. Despite China’s interest in AI’s potential in this regard, her research showed that China is concerned about security risks related to the accuracy and reliability of AI systems, the risk of strategic miscalculation, and the abuse and proliferation of the technology by non-state actors.

Ms. Su further emphasised that there is consensus among the Chinese expert community that AI should remain in a supportive role in command and control at the strategic level, and that humans should make the ultimate decision. China’s official stance also emphasises the importance of human control, though it does not specifically address nuclear weapons systems in this regard. She recommended that the P5 begin serious dialogue on specific applications of AI in NC3, highlighting that there are currently no benchmarks for assessing to what extent, and in what manner, AI could acceptably be integrated, or what kinds of developments could be perceived as escalatory.

Fei Su (right) explaining the Chinese perspective on AI in NC3 systems

Héloïse Fayet discussed France’s perspective on the use of AI in NC3 systems. She explained that French officials consider the technology too nascent to project its potential challenges and consequences for nuclear decision-making. The French think tank community agrees that France should not pursue AI integration into NC3 systems, and it raises questions about how France could properly protect and strengthen its nuclear weapons systems in response to AI integration in adversaries’ arsenals, which may affect the balance of nuclear deterrence.

Two main concerns about AI integration into NC3 systems are its impact on launch-on-warning postures and the increase in detection capabilities. Ms. Fayet reported that interest in AI and nuclear weapons appears to have grown recently, highlighting a conference held to assess how AI could enhance detection and risk assessment models. However, given the divide between the expert community and the government on the AI-nuclear nexus, she recommended building a stronger community of experts, strengthening dialogue between the public and private sectors, and supporting a P5 initiative to discuss AI and risks related to NC3.

Héloïse Fayet sharing the status quo and potential future of AI in French nuclear weapons

Oleg Shakirov provided insights into the Russian perspective on integrating AI into NC3 systems. He explained that, while no formal document exists on how Russia should or would use AI in the military domain, there is considerable interest in the technology. Although most discussions are not focused on nuclear weapons, there is growing interest in the use of AI for the day-to-day maintenance and security of nuclear forces. The topic has also prompted discussions among practitioners about the balance between automation and human involvement; it is generally agreed that, while early warning systems may use automated processes, the human element remains integral to decision-making.

Mr. Shakirov stressed that, rather than focusing merely on the technology, it is important to account for its geopolitical implications, as the two cannot be decoupled. In this regard, he suggested a closer exchange between the military and the private sector on how AI risks are assessed, especially to better understand issues of interpretability and reliability. He also noted that the impact of conventional AI-enabled weapons on NC3 should be discussed. To assist in these conversations, he recommended that the P5 develop a glossary of AI-relevant terms to avoid misinterpretation and enhance understanding of the specific technologies and issues involved.

Oleg Shakirov speaking about AI integration into NC3 systems in Russia

All four panellists agreed that there is a shared understanding among the P5 that fully automated weapons systems in NC3 are inconceivable and should remain so. Human involvement and judgement will always be an integral part of nuclear decision-making, though it was highlighted that the interpretation of “human-in-the-loop” varies from state to state.

During the Q&A session, participants discussed the idea of risk mapping, with some noting that, while important, it offers only a snapshot of a given point in time. Ms. Saltini addressed these concerns by explaining that risk metrics can be used for granular analysis of both current and future integrations of any AI technology. The discussion emphasised that AI itself is not new, but that current AI technologies are distinctly different from prior technologies and present vastly different risks and concerns. In light of rapid technological advancements, participants stressed the need for political will to establish standards and regulations, and noted that discussions among the P5 in this context would be very welcome.

