
AI Integration Is a Strategic Imperative for National Security

Artificial intelligence offers immense potential for our fighting forces.
By Dmitry Blatchley-Mikhailov
Analysis

Today, we find ourselves on the cusp of a new technological revolution: artificial intelligence (AI). As a strategic imperative for national security, AI presents unparalleled opportunities to strengthen our defense capabilities, much as space and cyberspace technologies transformed our approach to warfare and reconnaissance.

These technologies have the potential to revolutionize military operations, serving as force multipliers that augment existing capabilities and enable the development of novel operational concepts. Consequently, it is essential for military leaders and policymakers to recognize the strategic importance of AI and integrate it into planning and decision-making processes.

Although advancements in AI have transformed many sectors of modern society, including business, finance and manufacturing, AI's strategic importance to national security is still not adequately reflected in the published strategies of the various branches of government. Despite the memorandum issued by Deputy Secretary of Defense Kathleen Hicks in May 2021, which directed the Department of Defense's (DoD's) holistic, integrated and disciplined approach to responsible AI, many military leaders have yet to integrate AI strategies into their decision-making processes.

Many of these documents, including the aforementioned memorandum, concentrate on responsible AI. That emphasis is important, but it leaves little room for exploring how AI can be leveraged as a warfighting capability.

    AI and machine learning (ML) technologies have the potential to transform many aspects of military operations. We have already seen a rise in AI-generated malware produced from little more than a description of the target system. While many of the tools employed by the U.S. Air Force and other branches leverage components of ML, they have yet to fully exploit the potential of AI. Examples include:

    • Counter-unmanned aerial systems: AI can be employed to detect, track and counteract unauthorized or hostile unmanned aerial systems.
    • Joint training exercises: AI-driven simulations can replicate complex conflict scenarios, providing valuable insights into potential outcomes and enabling the refinement of strategies and tactics. These simulations take into account various factors, such as terrain, weather and enemy capabilities, to create realistic and dynamic scenarios that challenge military planners and decision-makers.
    • Submarine warfare: AI has the potential to revolutionize undersea warfare by enhancing the capabilities of submarines and other underwater systems. AI can also be used to improve the effectiveness of submarine communication systems, such as extremely low frequency (ELF) and very low frequency (VLF) systems, which are essential for maintaining command and control while submerged.

    AI technology is continuously advancing at an astonishing pace, leading to innovative integrations and capabilities that are emerging daily. Some applications may seem futuristic, but recent developments showcase the potential impact of AI in the near term.

    For instance, Palantir, a government-focused data analytics company, recently conducted a demonstration highlighting the potential of generative AI in tactical operations. The demonstration employed advanced AI models such as Google's Fine-Tuned Language Net-5 (FLAN-T5) XL, EleutherAI's GPT-NeoX-20B and Databricks Incorporated's Dolly-v2-12b large language models. In the scenario, an operator receives an alert regarding enemy activity and consults an AI chatbot for further intelligence and potential courses of action. The chatbot then provides pertinent information and proposes various tactical options, such as deploying an F-16, employing long-range artillery or launching Javelin missiles. Palantir's system streamlines and automates numerous aspects of warfare, with operators primarily seeking guidance from the chatbot and approving its suggestions.

    However, as we increasingly integrate large language models into military operations, it is crucial to swiftly address and mitigate the inherent challenges and risks associated with their deployment. For instance, large language models are prone to “hallucinating” or fabricating information, which could have dire consequences in the field. Moreover, we must account for the unique vulnerabilities AI deployments present, such as the potential for adversarial exploitation, to ensure a reliable and secure AI-driven future for our armed forces. By proactively tackling these challenges, we can harness the full potential of AI in advancing military capabilities and national security.

    The integration of AI into military operations, while presenting significant advantages, is not without inherent risks. It is crucial to safeguard AI technologies against adversaries seeking to undermine our capabilities through cyber attacks targeting our AI deployments. A multifaceted approach is essential for protecting sensitive information while still harnessing the benefits of AI advancements.

    AI and machine learning technologies have the potential to transform aspects of military operations. Credit: Shutterstock/SeventyFour

    The following principles form the foundation of securing and deploying AI in a manner that mitigates mission risks associated with implementation. This list is not hierarchical or exhaustive but serves as a starting point for strategists aiming to incorporate AI as a core capability within our military forces:

    • Federated learning enables collaborative AI model training across multiple devices or organizations while preserving data privacy. By sharing only model updates and not raw data, federated learning reduces the risk of data leakage and ensures that sensitive information remains secure.
    • Robust adversarial training defends against adversarial examples, which are malicious inputs crafted to deceive AI algorithms into incorrect predictions or classifications. Incorporating adversarial examples into the training process helps AI models become more resilient against such attacks.
    • Differential privacy adds carefully calibrated noise to data or query results to protect individual data points’ privacy. By employing differential privacy, we can prevent adversaries from extracting sensitive information through model inversion attacks, which aim to reveal training data from the model’s output.
    • Secure enclaves are protected areas within a processor that prevent unauthorized access to data and code execution. By deploying AI models within secure enclaves, we can protect them from attacks such as memory probing, which attempts to extract sensitive information from a model’s internal memory.
    • Model watermarking embeds unique, imperceptible signatures within AI and ML models, enabling their origin and ownership to be traced. This technique can detect model theft or unauthorized use, helping protect intellectual property and ensure the integrity of AI systems.
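As a concrete illustration of the differential privacy principle above, the sketch below releases a simple count query with Laplace noise calibrated to the query's sensitivity. This is a minimal sketch, not a production mechanism; the sensor-flag data, the epsilon value and the function names are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw a sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list[bool], epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(records)
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: how many sensor records flag hostile activity.
flags = [True, False, True, True, False, True]
noisy = private_count(flags, epsilon=0.5)
```

Smaller epsilon values add more noise and give stronger privacy; the analyst sees only the noisy count, which limits what an adversary can infer about any single underlying record.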

    By understanding and addressing the challenges of AI integration, we can better harness its strategic and tactical potential, ensuring our nation remains at the forefront of technological advancement. A comprehensive and actionable road map for adopting and implementing AI technologies in the military domain should focus on specific applications, risks and solutions, allowing us to maintain our competitive edge while safeguarding our AI infrastructure.

    Military leaders and policymakers must address the following strategic imperatives, building upon existing DoD strategies and taking into consideration new perspectives:

    • Prioritize investment in AI research, development and collaboration to maintain technological superiority and stay at the forefront of advancements in national security contexts.
    • Formulate a national cyber strategy that includes clear objectives, milestones and metrics for the integration and advancement of AI technologies in the military.
    • Implement adversarial training methods to improve the robustness of AI and ML systems against attacks, reduce vulnerability to malicious input data and ensure seamless integration across different branches of the military.
    • Develop an AI framework that integrates across all domains of military operations, promoting interoperability and collaboration.
    • Develop and retain a diverse and talented workforce capable of supporting AI and ML technologies in national security operations.
    • Update military processes, concepts and doctrines to accommodate AI perspectives and risks, facilitating successful adoption and implementation of these technologies.

    As we reflect upon our accomplishments in space and cyber capabilities, we must recognize AI's immense potential for our fighting forces and national security. Integrating AI into our defense infrastructure will help us tackle complex challenges with greater efficiency, accuracy and speed. It is essential to develop a comprehensive AI strategy that addresses the urgency and intricacy of modern warfare, fosters collaboration among the military, academia and industry, and ensures sustained follow-through on development to maintain our position as a foremost leader in military capabilities.

    AI stands as a strategic imperative for our national security, just as space and cyber technology did in the past. By leveraging the lessons from these advancements, we can effectively harness the power of AI to maintain our strategic edge, protect our nation’s interests and ensure a safer and more secure future. The integration of AI into our defense capabilities will not only revolutionize the way we conduct operations but also serve as a testament to our unwavering commitment to technological innovation in the pursuit of peace and security.


    Dmitry Blatchley-Mikhailov, a senior member of IEEE, is a consultant for PMG’s Cyber Threat Management group. He also is an Air & Space Force Association CyberPatriot technical mentor and serves on the Technology Policy Council of the Association for Computing Machinery.

    The opinions expressed in this article are not to be construed as official or reflecting the views of AFCEA International.