
Games More Than Contest

Playing games may do more than simply train an operator in a particular skill. Experts are using games to discover how participants behave in certain situations that only recently defied analysis.

The game SCUDHunt directs players to employ limited assets to locate Scud launch systems in a grid. But it also provides analysts with a means of determining how effectively players work as a team to improve their chances of success.

Assessment and analysis open up a new realm in modeling and simulation.

The use of games to train people is not new. What is new is that having analysts observe participants playing simple games is yielding insight into specific decision-making and team-forming processes. In turn, this capability is opening up the possibility of using games to model terrorist and counterterrorist activities. Future iterations may mirror the ongoing military force transformation in terms of both new technologies and new roles.

Researchers are finding different ways to exploit game technology based on small-, medium- or large-scale applications. New technologies are enabling greater use of games both by aiding in development of new types of games and by extending the options available to participants. However, the rush of technology also imperils the fidelity of gaming, as complexity threatens to overwhelm accuracy.

With the end of the Joint Simulation System (JSIMS), the U.S. Defense Department began to explore a variety of alternatives for simulation and training. One alternative was the use of different types of training methods such as games and collaboration tools. Studies found that many approaches based on games could serve vital roles for small teams that fall in the yawning gap between large-scale exercises and distance learning.

Dr. Peter P. Perla is on the research staff at CNA Corporation, Alexandria, Virginia. He works primarily with the corporation’s Center for Naval Analyses, which is a federally funded research and development center for the U.S. Navy. Perla emphasizes that he distinguishes between war gaming, modeling and simulation, and exercises. War gaming focuses on decision making; modeling and simulation draws upon physical causes and effects; and exercises help provide information on how actual forces carry out decisions made by commanders.

He currently is working with the Naval War College on new ideas for war gaming, both for the Navy and for the Defense Department. CNA is beginning a new project with the Naval War College to develop new techniques for gaming in fourth-wave warfare. This will require determining the basic ideas behind the current views of the future of warfare. Perla relates that the war college is receiving requests to explore avenues that differ from the traditional style of war gaming featuring large forces going toe-to-toe. Instead, users want games to deal with terrorism and counterterrorism operations, effects-based operations and advanced aspects of network-centric warfare.

“A lot of the old principles and techniques of gaming don’t seem to be quite as applicable or desirable,” Perla declares.

The coming innovations in gaming, modeling and simulation may not depend very much on technology, he continues. The real revolution will emerge from new approaches and applications, not from new simulation technologies. “The game is in the mind of the players, and not in the computer,” he emphasizes.

Among the new approaches already in use is gaming for analysis rather than for effects. “The goal of analytical gaming is to understand the why,” Perla points out.

He adds that gaming often enables users “to learn what you didn’t know you didn’t know.” He describes this as a key element that is extracted from the game by having analysts observe and collect the data from the game.

Gaming analysis can occur at the macro level or at a more detailed scientific level. In the latter application, the game serves as a scientific testbed where analysts can learn about microprocesses. A middle level, which has not been explored fully yet, focuses on the interface between the individual person and the system. Closer to the macro level than to the scientific level, this approach largely has been tried among nonmilitary clients, Perla notes.

Julia Loughran, president of ThoughtLink Inc., Vienna, Virginia, relates how scientific analysis of gaming is helping to unveil methodologies that normally would have been overlooked. Her company works with U.S. government agencies and nongovernmental organizations (NGOs) on how to use technology to improve organizational effectiveness. Some of the firm’s most popular tools are games and collaboration technologies.

“You can get a rich data set from a very simple game,” Loughran declares. “You don’t have to have a multimillion-dollar simulation with 100 different contractors supporting it.”

For example, one of the company’s methods is to employ gaming using collaboration tools, including putting synthetic simulation events in a collaborative environment. In one effort, company officials added tools that had videoconferencing, text chat and e-mail capabilities into a collaborative environment. The firm then compared results from a group using these tools with those from a group using a standard tabletop seminar game. While the degree of learning was the same for the two groups, the products developed by the group using collaboration tools in a distributed environment were clearly superior to those from the tabletop environment, Loughran reports.

The SCUDHunt controller station provides a record of how players approach the need to combine assets to locate the targeted missile launchers. Analysts have been surprised by some of the players’ actions as they track their activities.

Her company has examined how commercial games might match requirements for the U.S. Department of Homeland Security or the Defense Department. Loughran relates that the firm extensively reviewed more than 100 models, simulations and games over a two-year span for the Department of Homeland Security alone. After collecting more than 1,200 exercise and training requirements from the department, the company determined which requirements mandated teams and the types of tools that they needed.

One result was the establishment of an online tutorial that helps inform and instruct state and local officials. This tutorial included case studies of how other groups had used specific products, either in tabletop or full-scale exercises. The tutorial also aims at building a community of practice for using these models and simulations, and it includes a decision-support system to walk officials through a step-by-step process for determining which system or product is best for their needs and available resources.

The company also seeks to determine how distributed groups build shared awareness. Issues include shared visualization screens and how efforts are affected by poor-quality information, for example.

One of the company’s most useful tools is a game known as SCUDHunt, which ThoughtLink developed in conjunction with Perla. Based on the concept of the classic game Battleship, SCUDHunt challenges players to find hidden Scud launch systems in a grid environment.

But as the players seek to locate the Scuds, SCUDHunt allows analysts to gauge the ability of teams to build a shared picture of a situation, including a look at communications and shared visualization. Loughran’s team employed it to determine what players think as they collect information.

Different people had different information as they played different roles. An individual playing the role of a spy, for example, could see only a small section of the board, but that person’s information was of higher quality than that of other players. Players serving as remote sensing systems saw more of the board, but their information was of lower quality. So, all of these players had to share information to achieve their goal of finding the Scuds.

The players thought they were trying to win the game by finding Scuds, but they did not know that they were being measured for the way they played the game. Had the players known the game’s true metric, their behavior and the analysis might have been affected.

Loughran relates that the game’s planners developed a quantitative value for defining shared situational awareness. After every turn—where players had placed their assets on the board and received information from their assets—each player was instructed to offer a recommendation to their higher command, indicating three or more squares where Scuds might be hidden. If all four players on a team listed the same three squares, then that added up to perfect shared situational awareness. However, if the players voted for different squares, then their situational awareness score would be low.
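The article does not give the planners’ exact formula, but the idea can be sketched: score each turn by how much the players’ nominated squares overlap. A minimal Python illustration, assuming a Jaccard-style average over player pairs (the function name and scoring rule are illustrative, not SCUDHunt’s actual metric):

```python
from itertools import combinations

def shared_awareness(nominations):
    """Score a turn from 0.0 to 1.0; 1.0 means every player
    nominated exactly the same squares."""
    pairs = list(combinations(nominations, 2))
    # Average the Jaccard overlap across every pair of players.
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Perfect agreement: all four players list the same three squares.
team = [{"B2", "C4", "D1"}] * 4
print(shared_awareness(team))   # 1.0

# Divergent nominations drive the score down.
split = [{"B2", "C4", "D1"}, {"B2", "A1", "E5"},
         {"C4", "A1", "E5"}, {"D1", "E5", "A3"}]
print(shared_awareness(split))
```

Tracking such a score turn by turn would show whether the team’s shared picture is converging.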

Analysts discovered many unexpected results. For example, given poor information, players who recorded false positives would go back to the square in question and re-check it, often with another asset. However, a false negative would not send a player back to re-check a square.

One version of SCUDHunt took an agent-based approach. Because SCUDHunt team members built trust over time, this version’s agents also built trust over time. Ultimately, observers viewing a game playback could not distinguish between the effects from a team of players or from agents, Loughran reports.

Loughran relates that the Army Research Institute has used SCUDHunt for training of teams and increasing understanding of other team members’ roles. Her company has used it for other Defense Department and Department of Homeland Security team analysis and training.

And, her firm has come up with an idea for a more complex game called TerrorHunt. This game would feature intelligence agencies, such as the FBI and the CIA, all with different capabilities that emphasize strengths under different conditions. The goal would be for players to share their information successfully before terrorists strike the U.S. homeland.

“I see a trend in general toward accepting games as a viable alternative for doing both analysis and training,” she declares.

While analysis of gaming on the scientific level continues to advance, middle-level analysis is still in its infancy. However, Perla describes its effects in one game conducted with personnel from the U.S. Centers for Disease Control and Prevention (CDC). This game simulated an attack directly on a CDC facility, and it measured how individuals would react. The game showed how that response occurred, but it also gave insight into how the people tasked with protecting the U.S. public would react to an attack on their own organization. Describing it as a cross between a game and an exercise, Perla relates how participants discussed many factors concerning how they deal with interpersonal issues.

The past 15 years have seen growth in the science of complexity and agent-based modeling, Perla observes. In one example, each pixel on a computer screen represented an individual. As software set them in motion, the pixels would encounter each other and conflicts would arise. Ultimately, sophisticated military-style behavior such as envelopments and breakthroughs would begin to appear in the pixels’ movements. Just a handful of inputs drove the entire model as the individual pixels, acting as agents, chose their moves based on their own weighing of options.
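A toy version of such a model, with every name and parameter invented for illustration (this is not the system Perla describes): two sides of grid agents, each weighing only a single "aggression" input when choosing between advancing on the nearest opponent and wandering.

```python
import random

random.seed(1)  # reproducible runs

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, enemies, aggression):
        # With probability `aggression`, advance one cell toward the
        # nearest enemy; otherwise drift randomly.
        target = min(enemies,
                     key=lambda e: abs(e.x - self.x) + abs(e.y - self.y))
        if random.random() < aggression:
            self.x += (target.x > self.x) - (target.x < self.x)
            self.y += (target.y > self.y) - (target.y < self.y)
        else:
            self.x += random.choice((-1, 0, 1))
            self.y += random.choice((-1, 0, 1))

red = [Agent(random.randint(0, 9), random.randint(0, 19)) for _ in range(5)]
blue = [Agent(random.randint(10, 19), random.randint(0, 19)) for _ in range(5)]

for _ in range(20):  # a few turns of movement
    for a in red:
        a.step(blue, aggression=0.8)
    for b in blue:
        b.step(red, aggression=0.3)
```

Plotting positions over many runs is where clustering and envelopment-like patterns would begin to emerge from that single input.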

Modeling this emergent behavior can provide new insight into cause and effect. However, Perla warns against over-modeling. Some researchers are striving to build agent-based models featuring millions of agents. The problem that arises from that large a model is that too many variables will affect the outcome. Not only does this slow the game process, it also increases the complexity and decreases the ability to understand the outcome.

Loughran echoes Perla’s concern over simulations that may represent every possible type of data that can be input. Simulations must focus on what is important, she says, or the effect cannot be measured amid all the variables. “By creating simple abstractions, you can actually understand a problem much better than by trying to re-create a mirror image of the world.”

She supports collaborative gaming environments that feature multiplayer games as opposed to single-user games. People play either their roles or someone else’s roles. “But, we’re not quite there in games doing behavioral representations, of building agents that can play an NGO, for example,” she offers.

“Models and simulations and games are tools,” Perla warrants. “And, like any tool, if you know how to use it effectively and safely, you can do a good job. If you don’t, you are going to hurt yourself.”

The force transformation affecting military doctrine may be simply another type of military evolution, but it is taking place at a faster pace on a rising curve, Perla offers. In the same manner, the techniques for gaming must advance as fast as the emerging technologies and concepts that must be represented in a simulation.

Force transformation is exerting another influence over gaming. Where modeling and simulation once used computer representations of combat systems to explore their interactions, information systems now must be factored into the equation. “Computer systems within a computer system become part of the simulation by acting the way they act in the real world,” Perla relates.

Many new military information technology systems actually serve as simulators. Engineers feed simulated sensor input—such as radar returns—into the system to train users on the actual gear. While the new technology is making the system representation more realistic, it requires that the inputs fed into the simulation be more accurate. Some of these inputs are command and control decisions.
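As a rough sketch of that stimulation idea (the data shapes and noise model here are invented for illustration, not drawn from any fielded system), synthetic contacts with injected sensor error can be handed to the same interface the live radar would feed:

```python
import random

def simulated_returns(targets, range_noise=0.05):
    """Yield synthetic (range_km, bearing_deg) radar contacts.

    Gaussian noise stands in for sensor error; a real stimulator
    would model the specific radar far more closely.
    """
    for rng, brg in targets:
        yield (rng + random.gauss(0, range_noise * rng),
               (brg + random.gauss(0, 1.0)) % 360)

# Ground truth the trainee never sees directly.
truth = [(42.0, 315.0), (18.5, 90.0)]
for rng, brg in simulated_returns(truth):
    print(f"range {rng:6.1f} km  bearing {brg:5.1f} deg")
```

The point of the sketch is the separation: the consuming system processes these contacts exactly as it would live returns, which is why the fidelity of the inputs matters so much.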

However, the changes in technology and capabilities are affecting the ability to simulate these systems in a game. The ways of representing information technology systems must change as quickly as new capabilities or peripheral technologies come on line. Another challenge is determining how those systems can be used to represent and to help develop ideas about the future.

“The danger continues to be an overreliance on technology and tending to create models that you think represent the world better—because you can do fancier things in more detail,” Perla says.

The classic conundrum always has been the tension between detail and scope, he continues. One problem with the development of war games has been the tendency of some designers to generate a realistic environment for decision making by creating an accurate physical representation of a war. While that approach offers many positive returns, a drawback is that this dedication to detail comes at the expense of the goal of helping a commander make more realistic decisions. If those decisions are dictated primarily by the representation of physical reality, then an inaccurate physical depiction could generate a wrong decision.

Another issue is whether a game should model every conceivable outcome or just focus on likely scenarios. “Some thinking remains to be done on how exactly do you integrate the randomness and the unforeseen into the construct of what you are trying to do with the game,” Perla maintains.

“In some sense it’s a philosophy of technology issue,” he offers. “You need to understand the foundations of what you are trying to do, and then how you translate that into the technologies that you use to do it.

“If the players interact with the game in the same way that real decision makers interact with the real world, [then] that is a realistic game.”


Web Resources
CNA Corporation: www.cna.org
ThoughtLink: www.thoughtlink.com
SCUDHunt: www.scudhunt.com