
Calibrating NATO’s Vision of AI-Enabled Decision Support

Photo: metamorworks/Adobe Stock

Commentary by Ian Reynolds and Yasir Atalan

Published July 8, 2024

This series, featuring scholars from the Futures Lab, the International Security Program, and across CSIS, explores emerging challenges and opportunities that NATO is likely to confront after its 75th anniversary.

In the future, NATO will be the global test case for the integration of artificial intelligence (AI) with military operations, setting the standards for interoperability, organizational applications, and responsible use of AI.

In the spring of 2024, NATO released a promotional video laying out aspects of how the alliance is thinking about the role of AI in decisionmaking. According to the release, NATO conceptualizes AI as being fundamental for "precise and timely decision-making" and vital for contending with the complexity of modern war. These are understandable goals for the alliance, but NATO's efforts to integrate AI cannot lose sight of ongoing practical roadblocks in favor of streamlined dreams of technical performance.

There are three roadblocks that should remain at the top of NATO officials' minds related to AI and decisionmaking. First is the ongoing problem of system interoperability. While the importance of interoperability, or the ability of systems and their components to communicate successfully and efficiently, is recognized in the NATO AI strategy, it remains an ongoing challenge. If AI is going to be used to support NATO decisionmaking, then ensuring relevant data can be successfully accessed and shared when and where it is needed will be fundamental, a fact highlighted by Under Secretary of the Army Gabe Camarillo. Without such capabilities, decisions could be made on stale or irrelevant data, and coordination between allies could be inhibited, significantly limiting some of the perceived advantages of AI-enabled decision support systems. Part of the challenge will be resolving the relationships among data quality, model training, and downstream model performance. Moreover, while integrating AI can improve situational awareness, data sharing between partner systems will not be straightforward.

Second, NATO must contend with the common assumption that AI necessarily brings clarity and precision to war. Research has shown the problems with assuming a direct relationship between advanced information technology and victory. It has likewise exposed that AI-enabled systems, if applied haphazardly, could just as easily contribute to further confusion in conflict environments. Such findings help to better contextualize AI as a tool that could be helpful, while underscoring that this outcome is not predetermined by simplistic notions of technological capacity alone.

Third, NATO faces the challenge of setting standards for the responsible use of AI in the military domain. While the 2021 AI strategy lays down the main principles (lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation), each of these principles will require NATO to provide detailed guidance. This task is complicated by varying levels of ethical and threat assessments as well as the differing infrastructure capabilities of member states. Progress on standard setting is made difficult because the alliance has no enforcement mechanism for AI standards.

In combination, these challenges demonstrate issues that NATO will have to contend with on the path toward integrating AI. While challenges remain, NATO can take practical steps to begin addressing some of these issues. First is conceptualizing and tackling interoperability as one of the bedrock challenges for integrating AI. Coordination from the start is key to avoiding divergent national approaches to system architecture and integration, as well as to resolving how data will be shared between allies. NATO committees, such as the Digital Policy Committee, should continue their coordination efforts on interoperability, integrate lessons learned from NATO member experiments and wargames, such as the U.S. military's Combined Joint All Domain Command and Control experiments, and continue work to establish AI and digital standards across the alliance.

Equally important is the need to create and curate targeted and well-tailored data sets to support decisionmaking use cases and operational planning efforts. Such initiatives could be coordinated by NATO's Data and AI Review Board, which is currently tasked with overseeing responsible AI implementation throughout the alliance as well as serving as a forum for discussion between industry, government, and academia. A one-size-fits-all approach to data collection, model training, and use-case implementation will likely push models to failure. As AI scholars suggest, there remains no "master" benchmarking data set against which AI's generalized performance can be measured.

Second, NATO must recognize that while AI may be an important component, it is not a catch-all solution to political and military problems. AI is not simply a technology divorced from human experience and judgment. Left unchecked, model outputs are commonly documented to reflect bias or to offer plausible but incorrect information. Productive collaboration between AI and human decisionmakers will require close integration between developers and military experts, beginning from square one of system development. Moreover, it will require decisionmakers to be well versed in the possible failure modes of AI models and ready to use their own expert judgment in such contexts. While AI can be a tool for making sense of the world, it can also impart confusion and uncertainty back onto an organization. Driving the outcome in a direction beneficial to NATO will therefore require cultivating sustained expertise on how AI-enabled systems and humans interact at the nexus of the technical and social spheres.

Third, as NATO builds norms for the alliance and globally, it will need to be more transparent about the nature of the six principles for the responsible use of AI. While NATO countries must communicate openly to establish a common understanding of these principles, the alliance must ensure that these principles do not compromise member states' military competency. Additionally, this task will require more effective engagement between NATO and the European Union. Such engagement has thus far been limited. While the recent EU AI Act excludes military applications of AI, the dual-use nature of these systems will likely subject them to the law. Therefore, the legal framework should be clearly delineated when setting the standards for these systems.

Faster and better decisionmaking is not a predetermined result of integrating AI into NATO operations. The alliance should focus on coordination issues between allies, issues of data quality and model training, and standard-setting processes, as well as contextualizing any assumptions that AI will necessarily bring greater clarity and precision to war. In doing so, NATO can uncover productive uses of AI that can assist in decisionmaking contexts.

Ian Reynolds is a postdoctoral fellow with the Futures Lab at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Yasir Atalan is an associate fellow with the Futures Lab at CSIS.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2024 by the Center for Strategic and International Studies. All rights reserved.

Ian Reynolds
Postdoctoral Fellow, Futures Lab

Yasir Atalan
Associate Data Fellow, Futures Lab, International Security Program