
Artificial intelligence (AI) models and systems are advancing rapidly, creating benefits for the nuclear sector as a whole but also potentially posing risks to international nuclear security, as the rapidly increasing capabilities of large frontier AI models could be misused by criminals and other malicious actors.
VCDNP Senior Fellow Dr. Sarah Case Lackner and AI safety researcher Zaheed Kara have authored a new report examining whether frontier AI could assist a malicious actor in compromising the security of nuclear materials and facilities internationally. The report addresses three phases of an adversary's attack (target selection, attack planning and skill building, and execution) and considers how frontier AI could enhance existing adversary capabilities or even provide new ones.
The report considers relevant risks under each of the three phases and offers three key conclusions:
From these conclusions, the report draws two recommendations for the international nuclear security community and frontier AI developers:
The report draws on two virtual meetings, held in July and October 2025, and a series of consultations with experts from both the AI and nuclear security communities, facilitated by the Vienna Center for Disarmament and Non-Proliferation (VCDNP) and the Frontier Model Forum (FMF) over the second half of 2025.
The VCDNP thanks the Frontier Model Forum for its support and collaboration resulting in this report.