Artificial Intelligence in the Military Domain: Technical, Legal, and Ethical Perspectives

2 April 2024 • Event
The VCDNP hosted a webinar exploring the rapidly evolving use of artificial intelligence in the military domain from technical, legal, and ethical perspectives.

On 26 March 2024, the VCDNP hosted a webinar on the use of artificial intelligence (AI) in the military domain. Rapid advances in AI are transforming battlefield tactics, leading to a wide range of efforts to ensure that AI is developed, deployed, and used in a safe and secure manner. This webinar sought to explore the use of AI for military purposes and potential avenues for its regulation through technical, legal, and ethical perspectives.


Moderated by VCDNP Research Associate and Project Manager Mara Zarka, the panel of speakers included:

  • Dr. Thomas Reinhold, Researcher, Peace Research Institute Frankfurt (PRIF)
  • Dr. Elisabeth Hoffberger-Pippan, Senior Researcher, Peace Research Institute Frankfurt (PRIF)
  • Dr. Alexander Blanchard, Senior Researcher, Stockholm International Peace Research Institute (SIPRI)

In her opening remarks, Ms. Zarka reflected on a recent effort to address AI capabilities in the military domain. In February 2023, the Responsible AI in the Military Domain Summit was held in The Hague. Here, the United States launched its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, aiming to build international consensus around responsible behaviour.

Dr. Reinhold provided a technical perspective on AI in military affairs. Described by the aforementioned Political Declaration as “the ability of machines to perform tasks that would otherwise require human intelligence,” AI is an umbrella term for a broad range of capabilities, but in common usage it generally refers to systems built on artificial neural networks. The capabilities of such systems have advanced rapidly in recent years and are expected to evolve further in the coming months and years.


Particularly relevant are ‘explainable AI’ systems, which attempt to give humans insight into how AI models reach their decisions, so that potential biases can be accounted for and the accuracy of outputs assessed. The relevance of AI for military affairs stems from its ability to manage vast amounts of data and pre-process it to assist human decision-making. AI and the autonomy it brings allow for increased operational capacity in uncertain conditions and faster decision-making, both of which are crucial in military contexts.

Dr. Hoffberger-Pippan explained that AI presents unique legal challenges both when it is built into a weapons system and when it is applied externally, e.g. in a decision-support system. Article 36 of Additional Protocol I to the Geneva Conventions requires states to determine whether the use of a new weapon would violate their obligations under international law. Discussions are ongoing on how this requirement should be applied to weapons systems incorporating AI and whether it applies to non-signatories. She highlighted a growing trend to read the Article 36 weapon review obligation as covering AI, including decision-support systems, given the significant impact AI can have on military decision-making processes generally and on offensive capabilities in military operations specifically.

Another challenge is the applicability of human rights law: the opaque decision-making of AI deep learning models makes it difficult to assess whether AI-powered weapons systems can be used adequately and in line with international humanitarian law. “Meaningful human control” has emerged as the major buzzword of international negotiations, serving as a standard for the relationship between humans and AI in military affairs, but it remains controversial. The Political Declaration provides only non-binding guidelines on use, and further elaboration of the legal status of AI in the military domain will be necessary.


Dr. Blanchard emphasised the massive potential uses for AI in national defence, highlighting that ethical considerations become more complex as its application moves from support functions to adversarial use. An approach common in the civilian sphere, developing sets of principles for responsible AI use, has also gained popularity in the military domain, especially in Western countries. This approach comes with its own challenges, notably that it offers little to ensure its standards are met in practice. Looking forward, Dr. Blanchard stressed the need for shared standards across defence ministries and armed forces on the use of AI and emphasised the need for multi-stakeholder involvement to tackle ethical questions.


When asked about the greatest risks posed by the use of AI in the military domain, the panellists highlighted the potential for outsized trust in systems whose inner workings are difficult to understand, opening the door to unintended consequences. A potential benefit is a reduced humanitarian cost of wars, but it remains to be seen whether this benefit will be realised. In response to further questions on the ethical debate and on the lack of synergy between different tracks of discussion on AI in the military domain, panellists noted that the ethical conversation is becoming more global. They warned against letting responsible-use principles overshadow efforts to prohibit certain systems, as well as important debates on how much control it is appropriate to cede to machines.

In their concluding remarks, panellists stressed the need to be mindful of the digital divide between countries that are developing and using AI models and those that are not, which makes regulation more difficult. They also emphasised the need to take the geopolitical context into account when drafting measures and pursuing transparency efforts, and called for the establishment of appropriate institutions, processes, and capabilities to ensure that AI acts as intended.

