AI ‘Prompt Engineering’ for Senior Military Leaders

The integration of artificial intelligence (AI) into organizational operations has revolutionized industries and driven efficiency, spurring military leaders to prompt their workforce to take advantage of the well-publicized opportunities that AI presents. However, the successful implementation of these new technologies depends on their ability to address specific organizational challenges.
These two prompts are significantly different:
- Find things we can use AI for.
- Find a specific AI tool that will solve this particular problem.
When leaders seek to adopt AI for AI’s sake, they fail to harness the full potential of today’s technological advancements, or worse, they burden the workforce with tools that work in theory but fail in practice. In the world of AI, prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs. While senior leaders are less likely to interface with actual software, their ability to provide effective “prompts” for the workforce will enable their service to best leverage AI for strategic advantage.
The first and most critical step in aligning AI with organizational needs is a thorough definition of the existing problem. This requires engagement with end users to understand their perspective, as well as a comprehensive assessment of existing processes, data availability and overall objectives. That information can then be weighed against the relative applicability of various AI techniques. In short, engage with your workforce first to identify what you want to do better, faster or cheaper, and you will be well prepared to include AI in the list of potential solutions without mistaking every challenge for a nail to fit the proverbial AI hammer.
Once the problem is defined, a baseline knowledge of various AI techniques will help you determine which, if any, are suitable for your specific needs. Here are a few AI and advanced analytic techniques to keep in mind:
Machine learning algorithms can uncover patterns and trends in data, enabling predictive modeling and anomaly detection. Machine learning goes beyond earlier “rules-based” AI models by learning directly from data, allowing users to uncover insights they may not have known existed. These models can analyze vast amounts of data very quickly and provide insights that would not be possible through human analysis alone. However, they require large amounts of organized data and often cannot explain how they reach their conclusions, creating a “black box” challenge for decision-makers.
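For readers curious what this looks like in practice, the following is a minimal sketch of machine learning anomaly detection in Python, assuming the open-source scikit-learn library; the data is randomly generated and purely illustrative.

```python
# Minimal sketch: flagging anomalies in activity data with scikit-learn's
# IsolationForest. The data below is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
routine_records = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # typical activity
unusual_records = rng.uniform(low=4.0, high=6.0, size=(10, 2))    # outlying activity
data = np.vstack([routine_records, unusual_records])

# The model learns what "normal" looks like and scores each record against it.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(data)   # -1 marks likely anomalies, 1 marks normal records

print(f"Flagged {int((labels == -1).sum())} records for analyst review")
```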
Deep learning, a more complex subset of machine learning, is particularly effective for tasks involving complex data structures, such as image and speech recognition. It uses neural networks, loosely modeled on the human brain and structured in layers of processing, that learn from large amounts of data and identify intricate relationships that would be difficult for traditional machine learning algorithms to detect. Deep learning offers many of the same costs and benefits as other machine learning approaches but requires even more data to train effectively.
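As an illustration only, the sketch below trains a small layered neural network on scikit-learn’s built-in handwritten-digit images; real deep learning systems use far larger networks, specialized hardware and much more data.

```python
# Minimal sketch: a small multilayer neural network recognizing handwritten
# digits, using scikit-learn's built-in 8x8 digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()   # 1,797 small grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Two hidden layers of artificial neurons learn intermediate features of the images.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"Held-out accuracy: {net.score(X_test, y_test):.2f}")
```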
Natural language processing is a subset of AI that allows machines to understand, interpret and generate human language, facilitating tasks from text classification and translation to sentiment analysis. Modern natural language processing often relies on large language models, neural networks trained on huge amounts of text, to process and generate human language. Once trained, these powerful generative AI models can be fine-tuned for specific tasks, customizing the format and tone of what they produce. Natural language processing may be a useful way to create efficiency in administrative tasks or, when integrated into the user interface, to improve how users interact with existing systems.
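The short sketch below illustrates a classic, pre-large-language-model form of natural language processing: classifying short free-text reports by topic with scikit-learn. The example sentences and labels are invented.

```python
# Minimal sketch: text classification with scikit-learn. The reports and
# labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Engine failure reported during routine patrol",
    "Crew completed scheduled maintenance ahead of time",
    "Navigation radar intermittently losing contact",
    "All systems nominal after inspection",
]
labels = ["maintenance_issue", "routine", "maintenance_issue", "routine"]

# The vectorizer turns text into numeric features; the classifier learns from them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

print(model.predict(["Radar contact lost again during transit"]))
```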
Generative AI uses existing data to create new content in the form of text, images, code or even videos. It uses machine learning algorithms to identify patterns in existing data and then generates new content that resembles those patterns. These models require access to all the data you want them to draw on, which can be a challenge with sensitive information, and, depending on the data used to develop and train them, they can present bias in their outputs. They also require massive amounts of computing power and large data sets to be useful. Despite these limitations, these tools are some of the most popular in commercial markets and are emerging on .mil networks, where they are used to summarize material or generate images from text.
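The toy sketch below is not a real generative AI model, but it shows the underlying idea of learning patterns from existing content and producing new content that follows them, using a simple word-chain model in standard Python; the training text is invented.

```python
# Toy illustration of the generative idea: learn which words follow which in
# existing text, then generate new text from those patterns. Real generative
# models are vastly more sophisticated; the "corpus" below is invented.
import random
from collections import defaultdict

corpus = (
    "the unit reported ready the unit reported delayed "
    "the convoy departed on schedule the convoy arrived on schedule"
).split()

# Learn the patterns: which words tend to follow each word.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate new text by sampling from the learned patterns.
random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(transitions[word])
    output.append(word)

print(" ".join(output))
```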
Computer vision allows AI systems to process and analyze visual information, enabling applications in areas like object detection and autonomous vehicles. These tools provide maximum benefit when used to augment tasks traditionally borne by human analysts, especially finding or identifying items within a visual field. However, they require significant amounts of data to train the model on correct identification, will likely need a person to “check” their work and take considerable computing power to run effectively. They may be a good solution if you have a large amount of visual data you aren’t using due to workforce limitations.
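As a hedged illustration, the sketch below uses a pre-trained face detector that ships with the open-source OpenCV library to flag objects in a single image; the file name is hypothetical, and modern systems rely on far more capable deep learning detectors.

```python
# Minimal sketch: object detection with OpenCV's bundled Haar cascade face
# detector. "surveillance_frame.jpg" is a hypothetical file name.
import cv2

frame = cv2.imread("surveillance_frame.jpg", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise FileNotFoundError("Provide a real image path to run this sketch")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Each detection is a bounding box: (x, y, width, height).
detections = detector.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
print(f"Flagged {len(detections)} candidate objects for analyst review")
```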
Econometric models use traditional statistical techniques to reveal causal relationships in real or experimental data. These models are repeatable, giving the same results from the same inputs, which allows users to adjust variables to determine the influence each has on a given outcome. Given their relative simplicity, they are inexpensive compared to machine learning solutions. However, because they are applied statistical models, they will likely require assumptions about some causal relationships as problems become more complex.
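The sketch below shows a simple econometric model: an ordinary least squares regression estimated with the open-source statsmodels library. The variables and data are synthetic, invented purely to illustrate how each coefficient remains interpretable and the results repeatable.

```python
# Minimal sketch: ordinary least squares regression with statsmodels on
# synthetic data. The variables and "true" effects below are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)
flight_hours = rng.uniform(50, 200, size=100)
crew_experience = rng.uniform(1, 20, size=100)
maintenance_cost = (500 + 12 * flight_hours - 30 * crew_experience
                    + rng.normal(0, 50, size=100))

# The same inputs always produce the same coefficients, and each coefficient
# has a direct interpretation (estimated effect per unit of that variable).
X = sm.add_constant(np.column_stack([flight_hours, crew_experience]))
results = sm.OLS(maintenance_cost, X).fit()
print(results.params)
```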
Rule-based automation uses simple “if-then” models applied to data to provide insights. It works best where problems are well-defined and questions are not open-ended. For that reason, these systems are limited to the context and rules their designers give them and can be very inflexible. The good news is that these relatively simple algorithms are straightforward, transparent and among the lowest-cost types of AI to implement. A minimal sketch of that kind of transparent “if-then” logic appears after this list; the field names and thresholds in it are hypothetical.
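```python
# Minimal sketch: rule-based automation as plain "if-then" logic. The record
# fields and thresholds are hypothetical.
def flag_for_review(record: dict) -> bool:
    """Apply fixed, human-written rules; the same input always gives the same answer."""
    if record["days_overdue"] > 30:
        return True
    if record["cost"] > 10_000 and not record["approved"]:
        return True
    return False

print(flag_for_review({"days_overdue": 45, "cost": 2_000, "approved": True}))   # True
print(flag_for_review({"days_overdue": 5, "cost": 12_000, "approved": False}))  # True
print(flag_for_review({"days_overdue": 5, "cost": 2_000, "approved": True}))    # False
```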

With an understanding of the capabilities and limitations of these models, leaders can place their problem in context and begin to ask the questions that will tailor potential AI solutions to the organization’s problem. How much and what kind of data do you have to analyze? Do the solutions provided by your AI tool need to be repeatable, or can they vary with the same inputs? Do you need to be able to explain the analysis behind your AI’s solution, or is a “black box” process acceptable? Answering these questions will help you avoid overpaying for a tool that exceeds your requirements and recognize tools that promise results beyond their performance capability.
One critical intermediate step comes before adopting any AI solution. To ensure successful implementation, organizations must prioritize computing capacity as well as data quality and accessibility. As noted in the descriptions above, many AI models require significant computing power, which presents a barrier to entry for .mil networks that may lack investment in storage and bandwidth. Additionally, high-quality data is the foundation of effective AI models. Data cleansing, normalization and feature engineering (organizing your data in a format an AI tool can use) are essential steps to prepare data for analysis. Finally, your organization will need to establish a “data plan” to ensure new information is formatted and stored effectively. Once the infrastructure and workforce are adequately prepared, investment in tailored AI solutions will deliver its maximum benefit. Be warned, however: these steps often take longer and cost more than many leaders want to hear.
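For illustration, the sketch below uses the open-source pandas library to show the cleansing, normalization and feature engineering steps described above on a small invented data set; the column names are hypothetical.

```python
# Minimal sketch: data cleansing, normalization and feature engineering with
# pandas. The records and column names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "asset_id": ["A1", "A2", "A3", "A4"],
    "engine_hours": [120.0, None, 300.0, 180.0],
    "last_inspection": ["2024-01-15", "2024-03-02", None, "2024-02-20"],
})

clean = records.dropna()   # cleansing: drop incomplete records
clean = clean.assign(
    # normalization: rescale engine hours to a 0-1 range
    engine_hours_scaled=(clean["engine_hours"] - clean["engine_hours"].min())
    / (clean["engine_hours"].max() - clean["engine_hours"].min()),
    # feature engineering: turn a date string into a model-friendly number
    days_since_inspection=(pd.Timestamp("2024-06-01")
                           - pd.to_datetime(clean["last_inspection"])).dt.days,
)
print(clean)
```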
One final note on user interfaces (UIs). UIs are a notoriously overlooked aspect of technology acquisition within the government sector. Perhaps because requirements focus on the sheer utility of products throughout development, end users are often left with a tool that performs its tasks as required but is so unintuitive or cumbersome that completing the task is as difficult as, or harder than, it was before. Make sure UIs are part of your requirements from the start, and aim to involve end users in the design and review of the solutions you consider. An effective interface will play a critical role in the workforce’s successful adoption of the tool.
In summary, the successful integration of AI into organizational operations requires a clear and concise view of the problem as well as a strategic approach that aligns AI solutions with specific challenges. By understanding problems first, evaluating applicable AI techniques, and prioritizing data quality and infrastructure, organizations can avoid adopting AI for AI’s sake and make the best use of tools to create competitive advantage.
Hopefully, this brief set of guidelines can serve as a quick-reference card for senior leaders in the early stages of AI education. An effective next step would be to expand your knowledge through one of the many online or in-person trainings available and broaden your base of technical expertise. By keeping pace with the rapidly evolving capabilities AI presents, senior leaders can develop better “prompts” to guide innovators throughout the workforce to create tailored solutions and get the best use of the capabilities that emerging AI technologies provide.
Capt. M. Scott Austin is a Fellow at the Carnegie Mellon Institute for Strategy and Technology. Over the course of more than 20 years, he served as a deck watch officer, direct action team leader, Rescue and Airborne Use of Force Helicopter pilot, Senate fellow and intelligence capability developer. He has a bachelor’s degree in government from the U.S. Coast Guard Academy and a master’s in homeland defense and security studies from the Naval Postgraduate School. His military decorations include a number of awards commensurate with his time in service and unique experiences.