
AI in Military Defense: Risks and Rewards

By Jordan Rosenquist, 19 March 2023


Defense systems enhanced with AI technologies will revolutionize the art of war. AI is a powerful tool, as demonstrated by its ability to outperform humans in a variety of settings, but it can also be deceptively effective at undermining strategic goals. Its risks and weaknesses must be exhaustively understood. The potential risks and benefits of integrating AI-enhanced technologies into military and defense systems stretch across the tactical, operational, and strategic levels.

Understanding AI in the Military Context 

Militaries have an inherent interest in staying ahead of their competitors, which often involves pursuing the capabilities of emerging technologies. Adopting emerging AI technology in military and defense systems requires a comprehensive assessment of the risks associated with its implementation, such as an increased risk of war and threats to international stability.

AI can enhance the performance of existing military technologies and revolutionize the development of future defense and weapon systems. Analyzing these risks is complicated by the fact that AI is not a specific platform, such as a submarine or missile, but rather a general-purpose technology with a far more diverse range of implications. It is a force multiplier in the sense that it gives its users a distinct advantage, dramatically increasing projected power while reducing the force required to complete an objective. These advantages can be undermined if militaries fail to incorporate it strategically, or at all. Because these inventions yield a significant battlefield advantage, it can be tempting to incorporate them too hastily, overlooking potentially significant vulnerabilities such as a lack of ample training data for machine learning systems. AI is a revolutionary tool, but when used outside its designated context or built on technical errors, such as inadequate training data, it can be misleading and dangerous. Although the Department of Defense takes the correct approach in its AI strategy summary by calling for AI systems that are "resilient, robust, reliable, and secure," this may prove challenging given the reliability issues caused by dynamic war conditions and the algorithmic kinks that must be promptly ironed out.

Today, warfare is conducted by humans through physical machinery; the decision-making is almost always reserved for humans. But if we were to relinquish some low-stakes decisions to an AI algorithm, we could optimize the performance of that machinery. The integration of these algorithms into military decision-making has a wide range of implications spanning the operational, tactical, and strategic levels. For instance, AI will be used to shorten the kill chain, an attractive objective for every military. AI's optimization function will provide human decision-makers with more accurate and comprehensive information, whereas autonomous AI systems could largely remove humans from the process and execute kill orders instantaneously. There is no doubt that AI will change the art of war as it evolves from increasing the efficiency of military operations to powering lethal autonomous weapon systems.
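To make the distinction concrete, the sketch below contrasts an advisory, human-in-the-loop pipeline with a fully autonomous one. Everything here is a hypothetical illustration: the function names, confidence scores, and timings are invented, and no real targeting system is modeled. The point is purely structural: removing the human approval step collapses the decision cycle from human timescales to machine timescales.

```python
import random
import time

def model_detect(num_candidates=5):
    """Stand-in for an AI model scoring candidate contacts (0.0 to 1.0)."""
    return [(f"contact-{i}", random.random()) for i in range(num_candidates)]

def human_review(contact, score):
    """Simulated human operator: slow, but applies independent judgment."""
    time.sleep(0.5)  # stand-in for seconds-to-minutes of operator latency
    return score > 0.9

def advisory_pipeline():
    # AI proposes, a human disposes: every decision waits on a person.
    return [c for c, s in model_detect() if human_review(c, s)]

def autonomous_pipeline(threshold=0.99):
    # No human step: the loop closes in milliseconds, but so does oversight.
    return [c for c, s in model_detect() if s >= threshold]

start = time.time()
print("advisory:", advisory_pipeline(), f"({time.time() - start:.2f}s)")
start = time.time()
print("autonomous:", autonomous_pipeline(), f"({time.time() - start:.4f}s)")
```

On this toy run, the advisory pipeline takes seconds because every candidate waits on a simulated operator, while the autonomous pipeline finishes in fractions of a millisecond. That latency gap is the entire appeal, and the entire danger, of shortening the kill chain.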

Challenges Facing AI Integration 

AI is used mainly in two contexts. The first is its impressive optimization capability: making processes and actions more efficient and giving its user a better chance of success. These systems are, however, limited by human abilities. The second context is autonomous AI systems, most notably lethal autonomous weapon systems (LAWS), which can make their own decisions, minimize human error, and expand beyond characteristically human limitations.

The main challenges of developing secure and reliable AI-enhanced defense systems include the need for large sets of training data, the risk of algorithmic errors, and the potential for AI to outpace human decision-making. The significant battlefield advantage that AI technologies bring can be undermined if they do not have ample training data. These systems rely on very large training sets; when data does not exist in sufficient quantity, the reliability of the system is weakened.
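A small, self-contained experiment can illustrate the point. The sketch below uses synthetic data and scikit-learn, both of which are stand-ins: real defense systems would involve entirely different data and models, and the dataset here is purely hypothetical. It shows how the same model becomes markedly less reliable as its training set shrinks.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "sensor" data standing in for scarce battlefield training data.
X, y = make_classification(n_samples=20000, n_features=30, n_informative=10,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=5000,
                                                  random_state=0)

for n in (50, 200, 1000, 5000, 15000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_pool[:n], y_pool[:n])  # train on progressively more data
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>6} examples -> test accuracy {acc:.3f}")
```

On this toy task, accuracy climbs steadily with training-set size. The analogous concern for military systems is that rare or adversarial battlefield conditions may never be represented in sufficient quantity to train a reliable model.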

Conversely, AI agents playing real-time computer strategy games have already shown levels of coordination, precision, and tenacity that are uncharacteristic of human behavior. Humans would have trouble replicating these decisions simply because, in this context, an AI algorithm is more accurate and automatic; it can quickly adjust to unexpected variables and take calculated risks, both tactically and strategically, unconstrained by fear or by human cognitive and emotional biases. But today's AI systems have their own limitations, such as major difficulty interpreting broad guidance and commands, and a lack of the contextual understanding necessary for "common sense." These limitations will likely be overcome in the near future. However, overreliance on AI technology becomes problematic when the technology fails to meet its performance objectives. To mitigate this risk, users should be trained to operate the technology without AI enhancements so that they are never completely dependent on it.

Autonomous systems may also pose challenges relating to an adversary's interpretation of whether the systems' behavior is consistent with human intent. But this is not a new problem. Regardless of the technology, nations may not know whether the actions of their opponents are fully aligned with the guidance of their leaders. The difference here is that the ambiguity lies not in which human an action was aligned with, but in whether it was consistent with any human's command. This issue is most acute in the context of lethal autonomous weapon systems; however, the same problems could arise operationally with algorithms designed strictly for optimization.

Another challenge of integrating AI technologies is the fear of relinquishing control to trusted machines in decision-making contexts. If AI technologies cause warfare to evolve to a point where the pace of combat outstrips the rate at which humans can respond, control over military operations would essentially be delegated to those trusted machines. Entrusting an AI system with control of the escalation pace could mean that even minor tactical missteps spiral before humans can intervene.

To mitigate these risks, it is essential to maintain human oversight and to establish and enforce clear ethical guidelines for AI use in military operations. However, there is a tradeoff between the extent of human intervention and the performance of the AI systems, and where that balance sits is heavily influenced by a nation's competitors. The AI arms race will put any nation's military at a distinct and significant disadvantage if it cannot match the capabilities of its adversaries.
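One way to make "human oversight" more than a slogan is to encode it as a hard constraint in the control logic. The sketch below is a hypothetical illustration, not a real architecture: actions above a configurable consequence ceiling always route to a human, regardless of the model's confidence, so the oversight tradeoff becomes an explicit, auditable parameter rather than an emergent behavior.

```python
from enum import Enum
from queue import Queue

class Consequence(Enum):
    ADVISORY = 1    # e.g., route planning, logistics, intelligence summaries
    KINETIC = 2     # anything that applies force
    ESCALATORY = 3  # anything that could widen a conflict

# Actions awaiting a human decision, however long that takes.
human_review_queue: "Queue[str]" = Queue()

def dispatch(action: str, consequence: Consequence, confidence: float,
             autonomy_ceiling: Consequence = Consequence.ADVISORY) -> None:
    """Execute autonomously only at or below the ceiling; queue the rest."""
    if consequence.value <= autonomy_ceiling.value and confidence >= 0.95:
        print(f"auto-executing: {action}")
    else:
        human_review_queue.put(action)
        print(f"queued for human review: {action}")

dispatch("reroute supply convoy", Consequence.ADVISORY, confidence=0.97)
dispatch("engage radar site", Consequence.KINETIC, confidence=0.99)
```

The tradeoff described above lives in the hypothetical autonomy_ceiling parameter: competitive pressure pushes it upward, while oversight concerns hold it down. Making that choice explicit, in code and in policy, is exactly the kind of practical tool that ethical guidelines need in order to be enforceable.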

Ethical Concerns 

With the adoption of this technology come concerns about its ethical implications. These concerns are minimal for AI optimization systems. But autonomous AI has the potential to be used in ways that violate human rights: programming lethal autonomous weapon systems to make deadly decisions without human intervention, for example, could open the door to targeting people based on certain demographics. The technology could also be vulnerable to hacking or malicious manipulation.

AI is simply a tool. It can be used to achieve noble or evil objectives. Personifying AI technologies as "good" or "bad" is equivalent to labeling a pen as good or bad based on the words and symbols its user has produced. The responsibility lies with users and how they utilize, regulate, and understand the tool in their possession. Ethical guidelines and regulations are inherently limited in that they are best at addressing select issues rather than taking a holistic approach to the technology as a whole. Ethical development must be woven into technological development rather than playing catch-up. To achieve this lofty goal, abstract ethical principles must be distilled into practical tools and approaches that can be readily implemented.

AI Technologies and the Legislative Branch 

The law is notoriously slow to react to technological advances, and legislation governing AI is certainly no exception. If policymakers lack comprehensive, or even adequate, education about AI systems, they will likely either enact restrictions that are ineffective at curbing the negative impacts and uses of AI, or impose restrictions that address what they perceive as the current issues without accounting for the long-term strategic picture.

Either outcome would render their legislation ineffective, because the technology will already have surpassed the purview of the restriction. Even when the risks and capabilities of certain technologies have been accurately predicted, lawmakers have often produced regulations ill-suited to the specific forms those inventions took as they matured within military defense systems. Since policymakers tend to assess the situation based on capabilities, intentions, and limitations, this technology will make military potential more difficult to judge, inviting inaccurate information and costly miscalculations.

But the problem is the solution. Instead of relying on people to anticipate the outcome of a conflict powered by AI-enhanced technologies, use AI to complete that very task. Integrating AI into the legislative branch could provide substantial benefits, including a reduction in human biases and a better-educated decision-making population; although worth noting, a full analysis of these benefits is beyond the scope of this discussion. As legislators' trust in AI systems increases, their risk threshold will also rise. Using AI to collect data and provide intelligent analysis could increase domestic transparency regarding military power, making it easier for policymakers to craft effective legislation.

Conclusion 

The implementation of AI technologies in military and defense systems requires a comprehensive risk assessment, careful strategic planning, and effective regulation. While AI can, and will, provide unmatched advantages to its users, militaries must not only acknowledge its risks and weaknesses but exhaustively understand them, all while maintaining an advantageous position relative to their adversaries. The implications of AI integration stretch across the strategic, operational, and tactical levels; therefore, military strategists and policymakers must work together to ensure that the implementation of these technologies is effective, strategic, and democratic.


Sources 

  • Horowitz, Michael C., and Paul Scharre. “Military Uses of AI: A Risk to International Stability?” AI and International Stability: Risks and Confidence-Building Measures. Center for a New American Security, 2021. http://www.jstor.org/stable/resrep28649.5.