
AI as a Force Multiplier: The Ethics of Autonomous Weapons

By Shravan Krishnan Sharma

Napoleon is credited with saying: “he who is on the side of technology, wins.” It is a lesson that military planners have taken to heart time and time again, seeking out the best weapons of war with which to unleash death and damnation upon their enemies. The progression of this concept has taken militaries from strength to strength as they have adapted force structures to integrate everything from bows and arrows to hypersonic cruise missiles. Technology will remain a force to be reckoned with for the foreseeable future, most importantly as a force multiplier. “Force multiplication” is a concept that arose during the Cold War: the targeted use of battlefield technologies to multiply the effect of a given number of combatant troops. For a battlefield commander, this means being able to deploy effective, expendable technology alongside a smaller number of troops and still achieve the effect of deploying a larger force. As the world pivots towards the use of artificial intelligence and autonomous weapons technologies on the battlefield, those of us in the business of strategic studies are left asking what the future of force multiplication looks like, given the legal and ethical issues surrounding the use of artificial and autonomous fighting systems.

While autonomous fighting systems are making their debut across land, air and sea, the most interesting, perhaps, are those in the air. Unmanned aerial vehicles (UAVs) are not a new phenomenon. Militaries have fielded the technology since at least the end of the Cold War, and the platform became infamous during the War on Terror as it took on more offensive roles, carrying out airstrikes instead of its traditional C4ISTAR (Command, Control, Communications, Computers, Intelligence, Surveillance, Target Acquisition and Reconnaissance) roles. The shift from using what should have been a tool to using it as a weapon raised a number of ethical questions, particularly about the removal of the human element in combat, i.e., the disconnection of pilots from their targets. But there is no denying that UAVs are effective. They have a long loiter time, they are small, easily deployable, relatively easy to maintain, and far more expendable than a multi-million-dollar fighter aircraft with a human life strapped inside it. It was therefore no surprise that militaries would move towards autonomous aerial systems, eliminating the human element entirely in order to maximise efficiency.

There are two prominent examples of autonomous aircraft platforms developed in the Western world: the Northrop Grumman X-47B UCAS demonstrator and the Boeing Australia Airpower Teaming System (aka the “Loyal Wingman”). The X-47B is a carrier-based autonomous strike aircraft, capable of carrying out strike missions independently and taking only mission-parameter orders from its command station. The Loyal Wingman is designed along similar lines, to operate alongside conventional fighter aircraft as a “loyal wingman”, taking orders from the aircraft it is paired with and helping to “pave the way” into a combat zone for a flight complement. Both aircraft are, in simple terms, advanced killer robots whose only interest is to serve the mission and protect those designated as their friendlies. Proponents argue that employing such platforms allows militaries to expand both their pilot complements and their fleet sizes while increasing mission capability by an even greater proportion, i.e., force multiplication – and nobody would dispute this. Anyone with even a basic understanding of resource allocation will be able to see the clear operational benefits of employing these systems. From a strategic standpoint, the deployment of a cold, emotionless, fast-calculating computer armed with Hellfire missiles into a combat zone with a clear mission parameter to send enemy combatants to kingdom come is the ultimate Clausewitzian dream. Coupled with the ability to linger on the battlefield longer and do the force security job that would otherwise take a whole other unit to fulfil, autonomous unmanned combat platforms like the Loyal Wingman and the X-47B are, without a doubt, the future of the warfighting business.

My concern, however, is ethical. It is often said that in statecraft there must be a willingness to do objectively unethical things in service of a larger goal. The oft-misattributed quote that “good people sleep soundly because rough men stand ready to do violence on their behalf” is a ringing testament to this. War is fundamentally the business of killing, and it is endemic to the human condition, but as we have evolved, we have developed rules of war and ethics around the conduct of the killing machine. We take those rules seriously as a recognition of the common humanity of combatants on both sides, and we hold those who fail to fight by them to account. Pilots, soldiers, sailors, marines – these are all people. They are human beings, and as much as they are flawed and capable of making errors of judgement, they can be held to account when they commit errors or wilfully ignore the rules. Even the most intelligent machine can be programmed to make value judgements, but a machine is not inherently a conscious, thinking being.

Asimov’s Three Laws of Robotics hold that a machine must not harm humans under any circumstances, that it must obey the orders of humans except where doing so would put humans in harm’s way, and that it must protect itself except where doing so conflicts with the first two laws. How, then, can an artificially intelligent machine be programmed to kill? Would it be ethical for it to protect friendly humans by harming other humans designated as unfriendly? Does an autonomous machine have the basic intuition to accept an enemy combatant’s surrender? Asimov may have written science fiction, but the dilemmas he posed with respect to artificially intelligent, autonomous machines are not irrelevant in the real world.

Humanity debates questions of ethics and morality because we are self-aware and conscious that our actions have consequences. Does a machine possess similar capabilities? And what does it say about the principles of a country that chooses to kill not even by having a human, however minimally involved, seated at a screen to pull the trigger, but by leaving the decision to lines of code, a program designed to make value judgements? What if that machine is wrong, acting on incorrect information that the entire intelligence apparatus behind it has produced? Can an autonomous strike aircraft be court-martialled? Who becomes responsible for that mistaken strike? The ethical quandaries are numerous, and to condense them into a single piece of writing would be reductive at the very least.

I am a military man myself, and I would welcome with open arms any technology that makes my team that much more effective, that much more lethal, and that much safer on the battlefield. If we can risk fewer lives to achieve the same effect, the objective value judgement would be to adopt the technology that does so without hesitation. In that respect, autonomous weapons are a no-brainer – they cost less, put at least one fewer life at risk, are expendable, and have greater mission endurance than even the best-trained humans. There is no doubt that as force multipliers, from the perspective of securing objectives and mission success, autonomous weapons platforms are the way forward. But war is still fundamentally a human activity, requiring human judgement and human intuition. If our society is already plagued by the ethical concerns of fighting a war from behind a computer screen, even with a human pulling the trigger, can we really accept a line of code doing the same?

The views expressed in this article are the author’s own, and may not reflect the opinions of the Sciences Po Cybersecurity Association.

Image source: https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html