Situational Awareness, Command, and Control: The Impact of AI

20 October 2021 • Event

On 13 October 2021, the Vienna Center for Disarmament and Non-Proliferation (VCDNP) held the third webinar in the Deterrence and Emerging Technologies (DET) Webinar Series, devoted to the impact of artificial intelligence (AI) on capabilities related to situational awareness, as well as to nuclear command, control and communications.

The panel of speakers included Dr. Ingvild Bode (Associate Professor of International Relations, Centre for War Studies, University of Southern Denmark) and Dr. James Johnson (Lecturer in Strategic Studies, Department of Politics and International Relations, University of Aberdeen). VCDNP Research Associate Noah Mayhew moderated the webinar.

Mr. Noah Mayhew, Dr. Ingvild Bode and Dr. James Johnson

Dr. Bode set the scene for the session by illustrating the potential risks of introducing AI into decision-making and weapon systems, as well as ways to mitigate those risks. She noted that weaponised AI is not new: militaries around the world have been integrating autonomy and automation into weapon systems for decades. In fact, the integration of forms of AI into targeting functions began as early as the 1960s.

Dr. Ingvild Bode

Dr. Bode identified the quest for situational awareness as one of the main reasons why States pursue military applications of AI. Increasing situational awareness and lifting the fog of war are portrayed as key advantages of weaponised AI, which is supposed to make the battlespace more legible, facilitate the identification of enemies and thereby increase operational control. She explained that AI is perceived as a tool that humans can use to make better decisions, but is not recognised as fundamentally changing human agency and human-machine interaction.

She contrasted this military vision of AI as an unambiguously positive enhancement of situational awareness with what she called the “algorithmic fog of war”. Understanding the algorithmic fog of war means recognising not only that AI introduces new vulnerabilities, such as bias or the potential for adversarial attacks, but also that military applications of AI fundamentally change and circumscribe human agency over the use of force.

Dr. Bode illustrated that AI remains a largely speculative field of technological development. There is an idea that AI can overcome human error, a transition from human weakness to something superior, even though technological development does not follow a clear line of progress. States often point to the inevitability of integrating more and more AI into weapon systems or military applications. In this narrative, weaponising AI becomes part of an overwhelming process that policymakers cannot change, when in fact policymakers must and do have input into the policy decisions that determine whether AI is integrated into weapon systems.

She highlighted that, while one of the reasons for integrating AI into weapon systems is the operational need for fast action and reaction, exceeding human capacities and using speed as a tactical advantage, there are also strong strategic reasons to slow down the decision-making process. Retaining human slowness to decelerate events, including large-scale wars, may be a good thing. Dr. Bode also posited that there is no specific international law or regulation governing weaponised AI or military applications of AI.

In his remarks, Dr. Johnson illustrated how and why the use of AI capabilities might cause or exacerbate escalation risks, complicate deterrence and affect stability dynamics in strategic conflict between nuclear-armed rivals. He focused on the automation of strategic decisions in machines and the risk of accidental or inadvertent nuclear use.

Dr. James Johnson

He argued that a key factor is the degree to which AI disproportionately affects a country’s perception of the current balance of power. On the one hand, mixing various and potentially unknown levels of human-machine interaction can create confusion and uncertainty. On the other, AI reacting to events in non-human ways and at machine speed could increase inadvertent escalation risk. This risk is compounded further by overconfidence in, and over-reliance on, AI-generated decisions, a phenomenon known as automation bias.

He also illustrated how AI tools in the hands of non-State actors might drag nuclear adversaries into conflict. In the digital era, a chain of actions set in motion by non-State actors that takes two or more nuclear-armed rivals to the brink is becoming a plausible scenario. He added that decision-makers are exposed to ever greater volumes of information, coming for example from satellite imagery, 3D modelling and geospatial data, which during a crisis can create novel pathways for manipulation and for campaigns of misinformation and disinformation.

He concluded his presentation by outlining what can be done to mitigate these risks, including: enhancing the safety of nuclear weapons and hardening nuclear systems and processes; upgrading command-and-control protocols and mechanisms; designing robust measures to contain the consequences of errors and accidents; modifying existing arms control and verification approaches, norms and behaviours; and encouraging bilateral and multilateral confidence-building measures and strategic dialogue. He also expressed the view that, given the rapid pace of developments in AI, now is the time for positive interventions and discussions to pre-empt the possible destabilising effects of AI and to direct its use towards enhancing stability as the technology matures.

