Interactive Data Display Devices Help Commanders Get the Picture
Virtual reality takes mission planners on a ride through three-dimensional battlespace data.
Holodecks may only exist in the realm of science fiction, but work underway at the U.S. Air Force Research Laboratory will allow military personnel not only to view a deluge of data but also to interact with it. Many of the technologies that are key to this effort are still in their infancy; however, researchers are examining some currently available commercial products that meet requirements identified by commanders. Today’s data display systems allow military personnel to view substantial amounts of data on one interactive screen. Tomorrow’s systems would invite commanders to step inside a scenario virtually and become immersed in situational awareness.
Developing technologies that facilitate the collection of battlespace data continues to be a priority for the armed forces. However, once the information is acquired, it must be displayed in the most comprehensive manner possible. This approach allows commanders to move away from a fragmented view of the battlespace and gain more insight into activity by organizing pieces into a complete picture. Despite the deployment of a plethora of technologies, in recent operations commanders still had to move from workstation to workstation to view incoming data, then use traditional paper maps to plot out strategies. Large display systems would vastly improve this practice by allowing decision makers to view all of the data and manipulate it in one place.
Two projects currently under development at the information directorate of the Air Force Research Laboratory (AFRL), Rome, New York, aim to provide these capabilities to commanders. According to Peter A. Jedrysik, advanced display and intelligent interfaces team leader, AFRL, the groups working on these programs have kept abreast of developments in commercial hardware and software so that they can leverage as many of these items as possible. This approach keeps costs down while allowing for modifications without total redesign work, he explains.
The interactive datawall, essentially a 3-foot by 12-foot computer screen, acts as a huge canvas on which large amounts of information can be displayed contiguously. The datawall is powered by a Silicon Graphics Onyx machine with three RealityEngine graphics pipelines. Three horizontally tiled video projectors provide a combined resolution of 4,800 pixels by 1,200 pixels. Providing high-quality images has been a top priority for the team because the goal is to replace traditional paper maps with these devices, and the quality must be comparable, Jedrysik explains. By networking several systems, command center operators will be able to work collaboratively on the same physical display screen, he adds.
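The tiling scheme is conceptually simple: the three projector feeds behave as one contiguous logical screen, and software decides which projector owns any given pixel. The following Python sketch illustrates the idea; the per-tile resolution and the helper function are assumptions for illustration, not details of the AFRL implementation.

```python
# Hypothetical sketch: mapping a global datawall coordinate to the
# projector tile responsible for drawing it. Assumes three projectors,
# each 1,600 x 1,200 pixels, tiled horizontally into one 4,800 x 1,200
# logical screen (tile counts and resolutions are illustrative).

TILE_WIDTH = 1600
TILE_HEIGHT = 1200
NUM_TILES = 3

def locate_pixel(x: int, y: int) -> tuple[int, int, int]:
    """Return (tile_index, local_x, local_y) for a global pixel."""
    if not (0 <= x < TILE_WIDTH * NUM_TILES and 0 <= y < TILE_HEIGHT):
        raise ValueError("coordinate outside the datawall")
    tile = x // TILE_WIDTH          # which projector owns this column
    return tile, x % TILE_WIDTH, y  # local coordinates within that tile

# A window at global x = 3,900 lands on the third projector.
print(locate_pixel(3900, 600))      # -> (2, 700, 600)
```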
Although the display is an improvement over current systems, the project’s team wanted to provide more than just a larger monitor to commanders. With this goal in mind, the group began examining additional technologies that could increase the usefulness of the device.
One of the most impressive novelties of the datawall is a light-pen interface that lets users standing near to or far from the datawall manipulate items on the screen, Jedrysik says. Cameras behind the display are fitted with red filters, so they see only a black screen until a laser pointer strikes the front of the screen. Laboratory-developed software converts the camera data into cursor positions, enabling the user to move a cursor on the screen the same way a mouse moves the cursor on a traditional computer monitor. “Anything you can do with a conventional mouse, you can now do with a laser pointer, and you can do it from a distance,” he explains. This feature was important to the development team because one goal was to allow users to stand away from the datawall yet still interact with it, he adds.
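Conceptually, the tracking loop is straightforward: the red-filtered camera sees essentially nothing but the laser dot, so finding the brightest pixel and scaling it into datawall coordinates yields a cursor position. The Python sketch below illustrates this idea under assumed camera and screen resolutions; the function name and threshold are hypothetical, and a fielded system would use a calibrated mapping rather than simple scaling.

```python
# Hypothetical sketch of the laser-tracking loop described above: a
# red-filtered camera sees only the laser dot, whose camera-frame
# position is scaled into datawall coordinates to drive the cursor.
# Names, thresholds, and resolutions are illustrative assumptions.

import numpy as np

CAM_W, CAM_H = 640, 480            # camera resolution (assumed)
WALL_W, WALL_H = 4800, 1200        # datawall resolution
THRESHOLD = 200                    # minimum brightness to count as the dot

def track_laser(frame: np.ndarray) -> tuple[int, int] | None:
    """Map the brightest camera pixel to a datawall cursor position.

    `frame` is a (CAM_H, CAM_W) array of red-channel intensities as
    seen through the red filter; only the laser dot exceeds THRESHOLD.
    """
    cam_y, cam_x = np.unravel_index(np.argmax(frame), frame.shape)
    if frame[cam_y, cam_x] < THRESHOLD:
        return None                              # no dot on screen
    # Scale camera coordinates to wall coordinates. A fielded system
    # would use a calibrated homography per camera instead.
    return int(cam_x * WALL_W / CAM_W), int(cam_y * WALL_H / CAM_H)
```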
Mission personnel can also interact with the datawall using voice commands. The device’s designers chose Nuance 6, a speaker-independent, continuous speech recognition system developed by Nuance Communications Incorporated, Menlo Park, California. “First, we define a set of commands, such as open file, and the system reacts the same way as if the person uses a mouse,” Jedrysik explains. Most speech systems require a training period to learn a user’s speech patterns, but this project needed a speaker-independent capability because so many people could be involved in an operation. “Basically, anyone can come in. As long as they know the terms, they can use it. So, any commander can come in and use it immediately,” he adds.
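The speaker independence follows from the design: because the vocabulary is a small, fixed set of command phrases rather than free-form speech, the recognizer does not need to learn any individual’s voice. The sketch below illustrates the command-dispatch idea in Python; the phrases and handler functions are invented for illustration and are not the actual AFRL grammar or the Nuance API.

```python
# Illustrative sketch of a fixed command vocabulary like the one
# Jedrysik describes: because the grammar is a small, predefined set
# of phrases, no per-speaker training is needed. The command names and
# handler functions here are assumptions, not the actual AFRL grammar.

def open_file(target: str) -> None:
    print(f"opening {target}")

def close_file(target: str) -> None:
    print(f"closing {target}")

# Each recognized phrase maps to the same handler a mouse click would
# invoke, so speech and pointer input stay interchangeable.
COMMANDS = {
    "open file": open_file,
    "close file": close_file,
}

def handle_utterance(utterance: str) -> None:
    for phrase, handler in COMMANDS.items():
        if utterance.startswith(phrase):
            handler(utterance[len(phrase):].strip() or "current selection")
            return
    print(f"unrecognized command: {utterance!r}")

handle_utterance("open file mission_plan")   # -> opening mission_plan
```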
Project participants believe the interactive datawall enables dynamic data exchange, increased operator collaboration, a comprehensive yet concise view of the battlespace, and reduced duplication of effort. It is built almost entirely from commercially available products. “We did not want to be specialized because we wanted to run things without any modification. If a group runs on something like UNIX, it could run it here. JIVE [joint air operations center information viewing environment, SIGNAL, July 2000, page 37] is a PC application, and we are now working on developing a PC wall,” he states.
At this time, the interactive datawall is being used only for demonstrations. However, Jedrysik says that once it is connected to a network, applications could be run remotely and their output sent to the datawall. “If you can run an application between two computers, you could run it between a computer and the datawall. Think of it as a monitor, but the datawall is interactive,” he offers.
Program team members envision several military and civilian applications for the system. On the military front, the technology would integrate and enhance command center activities; provide a collaborative battle management environment; allow for mission planning, monitoring and rehearsal; and enable sortie simulations. “We’re hoping it’s going to make the mission-planning process more effective. It would leave a smaller footprint. Fewer people would be required in an area. It provides a better means of displaying information and a more intuitive way of interacting with the information,” Jedrysik says. Civilian applications include air traffic control, classroom enhancement and entertainment.
While the interactive datawall is a near-term project, the AFRL’s information directorate also is working on devices with more extensive capabilities that it envisions for deployment in several years. The advanced displays and intelligent interfaces (ADII) technology team has been researching virtual reality-related devices with a goal of designing information visualization techniques in immersive, nonimmersive and augmented virtual environments.
According to Jedrysik, the group is exploring stereoscopic displays to determine how commanders could use them for command and control. The project is known as the virtual worlds environment. “This offers an effective means for visualizing the battlespace as well as collaborative planning, surveillance and battle assessment to improve situational awareness,” he says.
Researchers can currently demonstrate nonimmersive virtual environments at the laboratory. In this system, users view a three-dimensional (3-D) scene through a portal: a very large display provides the 3-D stereoscopic effect, but users can still see activity taking place around them in the room. Dual rear projectors overlay imagery across the same screen area via a dual liquid crystal display projection system. Each projector provides either a left-eye or a right-eye view of the scene at a resolution of 1,024 pixels by 768 pixels. “This tricks the mind into thinking that it is seeing things with a depth to the field,” Jedrysik explains.
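The underlying technique is to render the same scene from two virtual cameras separated by roughly the distance between human eyes, sending one image to each projector. A minimal Python sketch of the eye-offset calculation follows; the coordinate conventions and the 6.5-centimeter separation are typical assumptions, not published details of the AFRL system.

```python
# Minimal sketch of the stereo trick described above: render the same
# scene twice from two cameras separated by the interocular distance,
# and send one image to each projector. Values are illustrative.

import math

EYE_SEPARATION = 0.065   # meters between human pupils (typical)

def eye_positions(head: tuple[float, float, float],
                  yaw_radians: float) -> tuple[tuple, tuple]:
    """Offset the head position half the eye separation to each side,
    perpendicular to the viewing direction, to get the two camera
    positions for the left- and right-eye renders."""
    hx, hy, hz = head
    # Right-pointing unit vector perpendicular to the view direction
    # (convention: yaw = 0 looks down the -z axis).
    rx, rz = math.cos(yaw_radians), math.sin(yaw_radians)
    half = EYE_SEPARATION / 2
    left = (hx - rx * half, hy, hz - rz * half)
    right = (hx + rx * half, hy, hz + rz * half)
    return left, right

left_cam, right_cam = eye_positions((0.0, 1.7, 2.0), 0.0)
print(left_cam, right_cam)   # two viewpoints 6.5 cm apart
```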
To experience the immersive environment, a user must wear a helmet-mounted display. This tool was designed and produced by the AFRL human effectiveness directorate. “Now, you’re totally immersed in that synthetic world. You can only see what’s inside the helmet, and with a head tracker, you can even look behind you,” he offers.
As in the datawall project, team participants are leveraging as much commercial technology as possible so that systems can be upgraded quickly as new technologies are developed.
The ADII virtual reality laboratory consists of a helmet-mounted display and the large-screen stereoscopic projection system. The helmet presents separate left- and right-eye images through a pair of monochrome cathode ray tubes, each with a resolution of 1,280 pixels by 1,024 pixels. The helmet-mounted display also is equipped with a magnetic position tracker produced by Polhemus, a subsidiary of Kaiser Aerospace and Electronics, Colchester, Vermont. The tracker follows head movements so that, as the user looks to the rear, the view changes to reflect this action.
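In principle, head-tracked viewing reduces to one operation per frame: read the tracker’s reported orientation and aim the virtual camera the same way. The short Python sketch below shows a yaw-and-pitch version of that conversion; the conventions are illustrative assumptions rather than the Polhemus interface.

```python
# Hypothetical sketch of head-tracked viewing: each frame, the magnetic
# tracker reports the helmet's orientation, and the renderer points the
# virtual camera the same way, so looking behind you shows the scene
# behind you. The angle conventions here are assumptions.

import math

def view_direction(yaw: float, pitch: float) -> tuple[float, float, float]:
    """Convert tracker yaw/pitch (radians) to a unit view vector
    (convention: yaw = 0, pitch = 0 looks down the -z axis)."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            -math.cos(pitch) * math.cos(yaw))

# Facing forward, then turning the head 180 degrees.
print(view_direction(0.0, 0.0))        # -> (0.0, 0.0, -1.0)
print(view_direction(math.pi, 0.0))    # -> roughly (0.0, 0.0, 1.0)
```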
To provide voice interaction, the ADII group chose the same Nuance 6 technology employed in the interactive datawall.
Magellan, a navigational device manufactured by Logitech, Fremont, California, allows the user to navigate through the virtual environment. “You have full freedom to move around the 3-D scene. It is similar to a joystick but very sensitive, and you have a lot more movement capability. In addition to the conventional x, y and z planes, you also have roll, pitch and yaw,” Jedrysik states.
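Navigation with such a device amounts to polling six small increments per frame, three of translation and three of rotation, and accumulating them into the viewer’s pose. The Python sketch below illustrates that accumulation; the Pose structure and the sample values are assumptions for illustration, not the Logitech driver interface.

```python
# Illustrative sketch of six-degree-of-freedom navigation like the
# Magellan provides: each poll of the device yields small translation
# (x, y, z) and rotation (roll, pitch, yaw) increments that are
# accumulated into the viewer's pose. The structure and sample values
# are assumptions, not a real device driver.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

    def apply(self, dx, dy, dz, droll, dpitch, dyaw) -> None:
        """Accumulate one frame's worth of 6-DOF input."""
        self.x += dx; self.y += dy; self.z += dz
        self.roll += droll; self.pitch += dpitch; self.yaw += dyaw

viewer = Pose()
viewer.apply(0.0, 0.0, -0.1, 0.0, 0.02, 0.0)  # glide forward, tilt up
print(viewer)
```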
The VTi CyberGlove, created by Virtual Technologies Incorporated, Palo Alto, California, is the final component of the immersive environment. The device detects hand gestures, enabling the user to provide 3-D input.
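Gesture detection of this kind can be as simple as reading a bend angle for each finger and matching the pattern of curled and extended fingers against a small library of named poses. The Python sketch below shows that idea; the gesture names, threshold and mappings are invented for illustration and do not reflect the actual VTi software.

```python
# Hypothetical sketch of glove-based gesture input: the glove reports
# a bend angle per finger, and a simple classifier maps finger poses
# to named gestures that the application treats as 3-D input events.
# Gesture names and thresholds are illustrative assumptions.

BEND_THRESHOLD = 60.0   # degrees beyond which a finger counts as curled

def classify(finger_bend: list[float]) -> str:
    """Map five finger-bend angles (thumb..pinky) to a gesture name."""
    curled = [angle > BEND_THRESHOLD for angle in finger_bend]
    if all(curled):
        return "fist"          # e.g., grab the object under the cursor
    if curled == [True, False, True, True, True]:
        return "point"         # index extended: select along a 3-D ray
    if not any(curled):
        return "open"          # release / flat hand
    return "unknown"

print(classify([80.0, 10.0, 75.0, 70.0, 82.0]))   # -> "point"
```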
Laboratory personnel create the battlespace imagery for this immersive environment. According to Jedrysik, the National Imagery and Mapping Agency has digital terrain elevation data that could be used to create real-world scenarios; however, the current virtual reality setup does not have the capability to use these data sets to build the environment. “The experience is still very real,” he adds.
“The virtual worlds environment technology has great potential for simulation of an environment that is known or needs to be reconstructed based on full or partial information. The ability to create objects and construct entire environments from those objects in real time, as well as making changes just as quickly, makes this technology an excellent candidate for several applications,” Jedrysik says. Law enforcement applications, for example, could include simulations for hostage extraction scenarios or crime-scene re-enactments.
The augmented virtual reality environment combines the nonimmersive and immersive techniques (SIGNAL, December 1999, page 17). Although the user wears a helmet-mounted display, he or she is able to see the real world while also viewing synthetic images. While this is the most challenging of the virtual reality environments, it offers some of the most potentially useful applications, Jedrysik says. For example, in the medical field, doctors could view images gathered from magnetic resonance imaging while concurrently looking at the patient. In the operating room, the physician could use the images as a guide during a procedure. “We’re not at that point yet. Even some of the most augmented systems are still far from being this finely tuned. There may be some places where there would be more leeway for error. The trick is coming up with something portable,” he states.
Some combination of these techniques would enable telepresence applications, where engineers would be able to use virtual reality to perform tasks at remote locations as if they were physically present at the site. “For example, with a satellite in orbit the only thing that operators have in terms of redirecting a satellite is a textual representation. Imagine having a 3-D representation of what is happening. Or imagine a robot on an assembly line that is at a location that is not accessible. This technology would allow the operator to more effectively control the robot because the person would be able to see what is going on as if he or she were standing next to the robot or even within it,” Jedrysik says.
Although reluctant to predict when these capabilities will be available, Jedrysik offers that they are probably at least 10 years away because the display technologies are not advanced enough today, and portability and high-resolution capability have not yet been achieved.