The Ethics of AI-Assisted Nuclear Weapons Systems: Violation of the Laws of War or Efficiency Enhancers?

Photo source: “The Militarization of Artificial Intelligence.”

By Anne-Maria Matejas

The fog of war has thickened in the age of artificial intelligence. The 21st century marks a progressive stage of the “Promethean discrepancy,” in which the pace of development of new technologies, chiefly artificial intelligence (AI), outstrips the legal regulation of the boundaries of ethical conduct in warfare. (Anders 496) While AI-assisted military technology is already being deployed in active combat zones, exemplified by the Lavender system utilised by the Israel Defense Forces in Gaza, the potential for its implementation in other weapons systems looms large. Given the increasingly destabilised environment of international relations, the world’s powers can be expected to consider military applications of AI ever more seriously, not least in their nuclear arsenals. The implementation of AI technology in nuclear weapons systems (NWS) nevertheless presents a unique challenge to the laws of war, in particular to the constitutive principle of proportionality: the requirement that the anticipated military advantage of an attack not be excessive in relation to its foreseeable harm. The scale of destruction promised by the prospect of AI-assisted nuclear war stretches the boundaries of proportionality’s application, calling into question the extent to which the potential implementation of AI in NWS can be seen as compatible with the principle at all.

To begin with, judgements of proportionality are uniquely difficult to make in the context of nuclear warfare for at least two reasons. Firstly, the scale of harm to be expected from the deployment of nuclear weapons exceeds the collateral damage routinely assessed in the use of conventional weapons. Beyond the immediate destruction upon detonation, the ‘invisible’ consequences of nuclear weapons use, such as radiation and environmental destruction, render any proportionality calculus made in the moment highly complex. (Granoff and Granoff 57) Secondly, the assessment of foreseeable harm is burdened by the inherent uncertainty of predicting the impact of a nuclear detonation. As the Castle Bravo test conducted by the United States in 1954 demonstrated, the yield of a nuclear weapon is difficult to estimate: changes in environmental conditions can produce disastrous consequences that cannot always be accurately predicted. (Brown 41) This inability to reliably estimate foreseeable harm from NWS use is antithetical to the consequentialist pragmatism of the proportionality principle. In short, if the ends cannot be predicted given the nature of the chosen means, justifying those means becomes exceedingly challenging.
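
To make the difficulty concrete, the consequentialist calculus this paragraph describes can be rendered schematically. The sketch below is illustrative only and is not drawn from the cited sources: the threshold k and the symbols H and V are assumptions introduced for exposition, and the Castle Bravo figures (a predicted yield of roughly six megatons against an actual yield of about fifteen) are the widely reported ones.

```latex
% Schematic rendering of the proportionality condition (illustrative only;
% requires amsmath and amssymb). An attack with anticipated military
% advantage V is proportionate only if its foreseeable harm H is not
% "excessive", modelled here by an exposition-only threshold k:
\[
  \underbrace{\mathbb{E}[H]}_{\text{foreseeable harm}}
  \;\le\;
  k \cdot \underbrace{V}_{\text{military advantage}}
\]
% For nuclear weapons the estimate of H is itself unreliable: Castle Bravo's
% realised yield (~15 Mt) was roughly 2.5 times the predicted ~6 Mt,
\[
  H_{\text{actual}} \approx 2.5\, H_{\text{predicted}},
\]
% so a judgement that satisfies the condition ex ante may be violated ex
% post by the realised harm, which is the unpredictability the essay notes.
```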

Furthermore, the volatile outcomes that the use of NWS could bring upon the aggressor themselves create another obstacle to issuing proportionality judgements. The military advantage to be gained from their use might entail unforeseeable harm to self, further complicating the articulation of a robust ad bellum argument in the context of the “necessity to protect the future existence of both political societies.” (Colonomos 234) Indeed, some scholars have argued that nuclear warfare is inherently incompatible with the proportionality principle on account of there being “no value” for which, given its consequences, nuclear warfare could represent a justifiable strategy. (Forge 31; Wigg-Stevenson 40) Applying the proportionality principle to NWS use, even without the implementation of AI technology, thus emerges as complicated, calling into question whether proportionality judgements about the use of NWS can be made in good faith at all, given their circumstantial complexity and uncertainty.

Introducing AI technology into proportionality calculi about the use of NWS presents several practical risks as well. The automation of decision-making that any such implementation would entail aggravates the instability of potential NWS use and carries dangers of inadvertent escalation. AI operates on an executive timeline far faster than a human’s, effectively accelerating every step of the decision-making process and thus “heightening the speed of warfare” at moments when hesitation might yield a greater good than efficiency. (Johnson 18) The unpredictability of human behaviour can present a paradoxical antidote to the mistake-avoidant automation of AI in nuclear crisis management. Indeed, acts of disobedience like that of Stanislav Petrov in 1983 represent a mixture of luck and an inherently human inclination for doubt that AI lacks. (Chernavskikh and Palayer 8) One can well imagine that if the Soviet early-warning system had been monitored by an AI, whose proportionality judgements can be expected to involve no emotional stakes, rather than by a disobedient officer’s hesitation and “gut instinct,” the efficiency of the response would have been rewarded with disastrous escalation. (Topychkanov 70) Hence, the implementation of AI in NWS can be seen as effectively erasing the benefits of human unpredictability, which can serve as a de-escalating element in the making of proportionality judgements about the use of NWS.

Additionally, the potential implementation of AI in NWS can be understood as an omen of instability, given that it might exacerbate the fragility of defence responses and the susceptibility to foul play while increasing mistrust in global politics. Despite foreseeable improvements in the efficiency of defensive NWS, such as AI-enabled drone-swarming tactics, implementing AI automation in nuclear defence could significantly undermine the robustness of the defence apparatus, since a single vigorous cyberattack could instantaneously disable a considerable segment of the defence infrastructure. (Johnson 23) Combined with the ‘time is of the essence’ imperative that underpins nuclear contingency strategies, this exposes a severely fragile framework of responsiveness brought about by the implementation of AI. Such fragility could then be expected to translate into rash proportionality judgements driven by retribution rather than restraint, strengthening the pull of last-resort imperatives to use offensive NWS. Beyond cyberattacks, AI can be expected to degrade the enemy’s “situational awareness” and, in the process, lay the groundwork for a “deception-dominant world” in which efforts to apply the proportionality principle become further enmeshed in strategic uncertainty. (Geist 8) In brief, an assessment of the practical risks of implementing AI in NWS outlines bleak prospects for its compatibility with principled observance of the proportionality principle.

This rather dystopian assessment of practical risks introduces the need to consider the ethical implications of the matter. On the deontological position, all life is inherently valuable and ultimately non-commensurable, regardless of whether it is that of one’s own population or the enemy’s. Crucially, the proportionality judgements produced by AI algorithms seem susceptible to biases and preferences that do not align with this position. For instance, it can be assumed that different states’ AI algorithms would be trained on vastly different understandings of the proportionality principle, some of which might discriminate in their assessments of the value of the lives of different parties to the conflict and be ridden with biases. (Colonomos 221) Beyond the susceptibility to bias, given that there is “no one ‘right’ way to reason about uncertainty,” AI might not propel the world towards more nuanced proportionality measurements but instead cause it to waste vast resources “without any guarantee that we can find a good enough answer.” (Geist 5) Indeed, given that deontologically minded humans confront the problem of the non-commensurability of human lives in proportionality judgements, and AI is trained on information produced by humans, it seems naive to assume that AI could miraculously overcome that problem simply by virtue of its greater-than-human processing capacity.

Furthermore, the in bello application of the proportionality principle stumbles upon an insurmountable hurdle when it comes to potential AI-assisted NWS deployment. It has been suggested that the current practice of applying the proportionality principle often results in “[counting] evils of all the kinds [war] will cause, with no limits on their content.” (Hurka 46) Under the deontological imperative, however, one must consider the expected harm not just in terms of its end result, the loss of lives, but also in terms of the means through which that end is achieved. In the case of NWS deployment, the harm to human dignity illustrated by the WWII atomic bombings of Hiroshima and Nagasaki inspires a categorical rejection of subjecting civilians to such long-lasting bodily and psychological harm when other means of seeking military advantage remain. Moreover, proportionality assessments of foreseeable harm ought not merely to consider the immediate collateral damage but must also treat such damage as a crime against the future, owing to the intergenerational and interspecies nature of the harm caused by nuclear warfare, which stunts the blossoming of future generations as much as that of the environment. In short, if warfare is to be understood as a domain in which ethical considerations have a place, the imperative of efficiency ought to be replaced by one of respect for human dignity in the production of proportionality calculi, an understanding that cannot presently be expected from a non-human agent like AI.

Lastly, when devoid of deontological considerations, the principle of proportionality emerges as rather normatively thin, leaving room for the assumption that it is compatible with AI implementation in NWS. This is especially evident when considering the deployment of tactical low-yield nuclear weapons in narrowly defined and legally admissible scenarios, such as the destruction of an isolated nuclear submarine. (ICJ 301) In such scenarios, the principle of proportionality seems compatible with AI-assisted use of NWS, since it envisions a low degree of “foreign collateral damage” and is currently cautiously accepted by some nuclear-weapon states, such as the United States. (Sagan and Weiner 129) However, such acceptance reflects a misunderstanding of the core of the proportionality principle rather than true compatibility. (Sagan and Weiner 141) The misunderstanding lies in interpreting the proportionality principle as fundamentally one of symmetry, a lex talionis of offence, rather than one whose substance is restraint. Understood in that way, observance of the principle of proportionality rests on the assumption that responsible agents, including AI, would be perfectly guided by restraint, while the reality seems to indicate that restriction on use is far from a “guarantee.” (Royden 137)

Likewise, the grey zone surrounding the deployment of low-yield NWS seems to constitute a radical failing of the international legal regime, given the aforementioned missing considerations of the invisible and long-lasting harm inflicted on human dignity and the environment. The apparent loophole reconciling AI implementation in NWS with proportionality can then only be deemed appropriate if our faith in the responsibility and foresight of the military-industrial complex is equally strong, a bar that is, at best, difficult to clear. On balance, when read as compatible with AI-assisted NWS use, the proportionality principle seems insufficiently ethically robust to prevent the exercise of excessive force. Given that attention to proportionality alone still makes it “too easy, too acceptable, even too legal, to kill (…) millions of innocent civilians,” the better principle to observe might be one of “nuclear necessity,” which would greatly reduce the scope for disastrous judgement mistakes. (Lewis and Sagan 67–71) Indeed, when nuclear weapons and AI are involved, weighing lives in a cost-benefit analysis devoid of moral feeling seems in itself categorically disproportionate to the expected scale of harm.

Overall, the potential implementation of AI systems in NWS can be judged incompatible with the principle of proportionality on account of the practical security risks and the ethical concerns that follow from the deontological position on the commensurability of human lives and dignity. Rather than aiding the production of well-rounded proportionality judgements, AI is an agent of impairment within the ethics of warfare, indicating that the emphasis on the norm of proportionality is perhaps ill-advised in the context of potential nuclear escalation and ought to be replaced by an emphasis on necessity. Perhaps overcoming the Promethean gap in which we find ourselves lies in embracing the “courage to fear” and channelling that fear into devising novel means of facing the challenges of emerging technologies that promise to make the fog of war thicker than ever before. (Anders 498)

References 

– Anders, Günther. “Theses for the Atomic Age.” The Massachusetts Review, vol. 3, no. 3, 1962, pp. 493–505. 

– Brown, April L. “Looking Back: No Promised Land: The Shared Legacy of the Castle Bravo Nuclear Test.” Arms Control Today, vol. 44, no. 2, 2014, pp. 40–44.

– Chernavskikh, Vladislav, and Jules Palayer. “Impact of Military Artificial Intelligence on Nuclear Escalation Risk.” Stockholm International Peace Research Institute, 2025, pp. 1–12.

– Colonomos, Ariel. “Proportionality as a Political Norm.” Weighing Lives, edited by Claire Finkelstein, Jens David Ohlin, and Larry May, Oxford University Press, 2017, pp. 217–237.

– Forge, John. “Proportionality, Just War Theory and Weapons Innovation.” Science and Engineering Ethics, vol. 15, no. 1, Mar. 2009, pp. 25–38. 

– Geist, Edward. Deterrence under Uncertainty: Artificial Intelligence and Nuclear Warfare. Oxford University Press, 2023. 

– Granoff, Dean, and Jonathan Granoff. “International Humanitarian Law and Nuclear Weapons: Irreconcilable Differences.” Bulletin of the Atomic Scientists, vol. 67, no. 6, Nov. 2011, pp. 53–62. 

– Hurka, Thomas. “Proportionality in the Morality of War.” Philosophy & Public Affairs, vol. 33, 2005, pp. 34–66.

– Johnson, James S. “Artificial Intelligence: A Threat to Strategic Stability.” Strategic Studies Quarterly, vol. 14, no. 1, 2020, pp. 16–39. 

– Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, International Court of Justice, ICJ Rep. 1996, p. 301. Justia, https://law.justia.com/cases/foreign/international/1996-icj-rep-66.html.

– Lewis, Jeffrey G., and Scott D. Sagan. “The Nuclear Necessity Principle: Making U.S. Targeting Policy Conform with Ethics & the Laws of War.” Daedalus, vol. 145, no. 4, 2016, pp. 62–74. 

– Royden, Alexa. “An Alternative to Nuclear Weapons? Proportionality, Discrimination, and the Conventional Global Strike Program.” The Future of Just War: New Critical Essays, edited by Caron E. Gentry and Amy E. Eckert, University of Georgia Press, 2014, pp. 124–138.

– Sagan, Scott D., and Allen S. Weiner. “The Rule of Law and the Role of Strategy in U.S. Nuclear Doctrine.” International Security, vol. 45, no. 4, Apr. 2021, pp. 126–66.

– Topychkanov, Petr. “Autonomy in Russian Nuclear Forces.” The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I: Euro-Atlantic Perspectives, edited by Vincent Boulanin, SIPRI, 2019, pp. 68–76.

– Wigg-Stevenson, Tyler. “More Than Moralism: How Values Matter to Nuclear Security.” The Review of Faith & International Affairs, vol. 9, no. 3, Sept. 2011, pp. 37–44.