Technology Speeds Intelligence Imagery

Software tool creates direct link from camera to monitor without format conversion or bandwidth delays.

The U.S. Defense Department has developed an imagery system that allows full-motion video inputs from unmanned aerial vehicles, handheld cameras and similar devices to move directly from a sensor to an analyst’s workstation. Based on recent advances in hardware and commercially available software, intelligence agencies can now capture and process uncompressed imagery in real time with sophisticated off-the-shelf products.

The air campaign in Kosovo demonstrated the effective use of unmanned aerial vehicles (UAVs) as battlefield awareness tools. One of the achievements of that operation was the establishment of live video links from various reconnaissance sources back to command centers in the United States. The acquisition and processing of video data was a key part of the daily management of the air war against Serbian forces.

The situation also revealed deficiencies in the old proprietary imagery capture systems used by the Defense Department, Henry Dardy observes. He is the chief scientist for advanced computing at the Naval Research Laboratory’s (NRL’s) Center for Computational Science in Washington, D.C. Because the old systems were expensive and difficult to upgrade, there was a need for new methods to capture and analyze video data.

One of the major advances occurred in the first half of this year when it became technically feasible to deliver imagery directly from a sensor or camera to an analyst’s monitor, Dardy says. Prior to this, information had to be recorded on tape and then transferred to a format that is recognizable to computers.

Funded by the National Reconnaissance Office (NRO), the 8-month-old project is a partnership between the Naval Research Laboratory and Silicon Graphics Incorporated (SGI), Mountain View, California. The product of this teaming is a video acquisition and exploitation system based on SGI software and hardware. The system is designed to operate with high-resolution digital cameras. Much of the development work was conducted on an NRL-designed high definition television (HDTV) camera.

Developed by a consortium of academic and corporate sponsors, the camera features a progressive scan HDTV system that produces an image of 1,280 pixels x 720 pixels. Progressive scanning refers to the sequential placement of pixels on the screen, Dardy notes. According to NRL officials, the camera uses a five-sensor charge-coupled device optical block to produce images with enhanced color, resolution and contrast.

Another breakthrough that preceded the program was the development of a high definition input/output (HD I/O) board that allows imagery to flow directly to an HDTV monitor. Once the software and driver technology fell into place, it became important to develop an application to use the system’s capabilities, Dardy says. The NRL contacted SGI’s Alias/Wavefront division to develop a software-based solution.

The video acquisition and exploitation system (VAES) is driven by an SGI Onyx2 visualization supercomputer that provides the system with the bandwidth to collect and preview uncompressed video from a variety of sources in real time. Digital video inputs, including real-time transmissions from satellite or microwave downlinks and tapes from handheld video cameras, are processed through a digital video option (DIVO) board. The HD I/O board processes high definition video in 1,920 pixels x 1,080 pixels interlaced or 1,280 pixels x 720 pixels progressive—the two HDTV formats used worldwide, NRL officials note.

SGI representatives maintain that the Onyx2’s scalable architecture and fibre channel redundant array of independent disks, or RAID, capacity allow the system to easily manipulate terabyte-sized databases. The computer can also be scaled up to 128 processors for additional video acquisition and processing operations.

The software package was adapted for the task by SGI’s Alias/Wavefront division from its Maya Composer product. Used in the commercial video industry, the program had plug-ins added to meet the military’s special intelligence-gathering needs. The interface also allows users to secure video input through the DIVO and HD I/O boards.

Another useful intelligence feature allows tracking of selected groups of pixels through the video stream to show change or movement in identified subjects. For example, footage of an aircraft carrier’s deck can be subdivided to track individual crew members. The software tool can then produce a small video of the individual within the larger broadcast, Alan Dare, SGI solutions manager for imaging systems, says.
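The article does not spell out the tracking method, and Maya Composer’s internals are proprietary; a minimal sketch of one standard approach, block matching by sum of squared differences, shows how a selected group of pixels can be followed from frame to frame. The function and parameter names here are illustrative assumptions, not SGI’s API.

```python
import numpy as np

def track_region(prev_frame, next_frame, box, search=8):
    """Follow a selected group of pixels from one frame to the next.

    prev_frame, next_frame: 2-D grayscale arrays.
    box: (row, col, height, width) of the analyst-selected region.
    search: how far, in pixels, to look for the region in the next frame.
    """
    r, c, h, w = box
    template = prev_frame[r:r + h, c:c + w].astype(np.float64)
    best_score, best_pos = np.inf, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr < 0 or cc < 0 or rr + h > next_frame.shape[0] or cc + w > next_frame.shape[1]:
                continue
            candidate = next_frame[rr:rr + h, cc:cc + w].astype(np.float64)
            # Sum of squared differences: lower means a better match.
            score = np.sum((candidate - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (rr, cc)
    return best_pos  # new top-left corner of the tracked region
```

Applied frame by frame, a routine like this yields the coordinates from which the smaller video of the individual can be cropped out of the larger broadcast.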

This capability is an off-the-shelf, nonlinear video editor with all of the functionality and productivity of an electronic light table, SGI officials say. Analysts can move quickly through captured video using real-time playback from disk. The program also lets users scan forward or backward through the broadcast with a mouse-controlled status bar on the interface, Dare explains. Subjects of interest are located through pan, zoom and rotate functions, while contrast and brightness adjustments increase clarity. The software can stabilize and sharpen images blurred by camera movement and allows analysts to annotate their findings with graphics indicators and text notes to delineate subjects or to add comments and recommendations. The Maya Composer software also performs bilinear and bicubic filtering to smooth out images in zoom mode, he adds.
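Bilinear filtering, one of the two smoothing methods Dare names, estimates each zoomed pixel from the four nearest source pixels. The sketch below illustrates the general technique, not the Maya Composer code; bicubic filtering, which weights sixteen neighbors, is omitted for brevity.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a grayscale image at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    # Weight the four surrounding pixels by their distance from the sample point.
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

def zoom(img, factor):
    """Enlarge an image by `factor`, smoothing with bilinear interpolation."""
    h, w = int(img.shape[0] * factor), int(img.shape[1] * factor)
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = bilinear_sample(img, i / factor, j / factor)
    return out
```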

Dardy believes that the VAES is unique in the Defense Department because it handles HDTV on both ends. The camera stream can be fed into a conventional computer, where it is processed and played back in real time at 60 or 72 frames per second. The system pumps bandwidth at full speed, and it also can run multiple one-and-a-half gigabit streams. Interest also exists in compressing the stream to extremely low levels, but doing so risks sacrificing image quality to save bandwidth, he says.
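Those figures are straightforward to check. Assuming 24-bit color, 8 bits per component (an assumption; the article does not give the camera’s bit depth), the raw data rates for the system’s formats land right around the one-and-a-half gigabit streams Dardy mentions:

```python
# Raw data rates for the HDTV formats the system handles,
# assuming 24 bits per pixel (8 bits each for three color components).
BITS_PER_PIXEL = 24

def raw_gbps(width, height, fps):
    """Uncompressed video data rate in gigabits per second."""
    return width * height * fps * BITS_PER_PIXEL / 1e9

print(raw_gbps(1280, 720, 60))   # ~1.33 Gb/s: 720p at 60 frames/s
print(raw_gbps(1280, 720, 72))   # ~1.59 Gb/s: 720p at 72 frames/s
print(raw_gbps(1920, 1080, 30))  # ~1.49 Gb/s: 1080i (30 full frames/s)
```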

The system also can be ruggedized for use in the field. Dare notes that UAVs require a mobile ground station, and the VAES is suited for battlefield video acquisition and analysis.

A driving factor in the development of the VAES is the acquisition of imagery from UAVs. While the real-time UAV communications links created for the Kosovo air operations were a considerable achievement, the aircraft were using older video technology, Dardy maintains. Specifically, UAV cameras use an interlaced type of HDTV signal. Interlaced HDTV scans alternating rows of pixels onto the screen, then fills in the remaining rows one-sixtieth of a second later. This causes problems for analysts because it creates artifacts, or distortions, in the live image, he says. The most basic of these is a fast-moving object that appears to blur. Other issues arise when the image has to be processed while moving in real time. He notes that standard television broadcasts have imperfections, but the human mind processes the images and blurs over the flaws. A computer notices these artifacts, he says. “You can precondition it [the computer], but you only get back the accuracy you start with. So, we improved the accuracy of the sensor to be the equivalent of film, versus video,” he explains.

The NRL’s HDTV camera incorporates frame transfer technology, which provides the equivalent of a snapshot for every frame. Each frame can be directly printed from the screen, and the imagery does not have to be preconditioned to fit the square pixel shape of a computer. Unlike interlaced systems, frame transfer technology does not have the same difficulty with spatial and temporal artifacts created by movement such as a UAV tracking an object in flight, Dardy observes. Instead of interlacing alternating rows of pixels, progressive HDTV scans the screen from top to bottom, arranging the pixels sequentially.
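The motion artifact Dardy describes is easy to simulate: when the even rows of a frame are sampled one field time before the odd rows, a moving object straddles two positions within a single frame. A toy numpy illustration, with a bright square standing in for a tracked object:

```python
import numpy as np

def scene(t, size=32, speed=8):
    """A bright 8x8 square moving right by `speed` pixels per field time t."""
    img = np.zeros((size, size))
    x = 4 + speed * t
    img[12:20, x:x + 8] = 1.0
    return img

# Progressive (frame transfer): every row sampled at the same instant,
# so each frame is the equivalent of a snapshot.
progressive = scene(t=0)

# Interlaced: even rows from field time 0, odd rows one field time later.
interlaced = np.zeros_like(progressive)
interlaced[0::2] = scene(t=0)[0::2]
interlaced[1::2] = scene(t=1)[1::2]

# The square's edges now alternate between two positions row by row --
# the "combing" distortion that confuses automated analysis.
print(interlaced[12:20, :24])
```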

“We are giving you a quality image from the sensor down through the communications chain. At the other end of the chain, you have to put it in a form for the analyst to use. We worked our way through with the hope of having solved all of the high-risk technical issues. People can now go back and start looking at next uses and retrofits of UAV technology to use the whole processing chain,” Dardy says.

The VAES has advantages over older proprietary systems because its off-the-shelf software and architecture are easy to operate and maintain. By contrast, most proprietary systems use older technology, which is expensive and sometimes impossible to modify. Often, the development teams that created a proprietary system have been split up and moved to other projects by the time the need for a modification arises. Dardy notes that this is why the NRO is so interested in this system. “We have too few systems to support the need, and any time you want to make a change, you have to do a major rewrite. Now we are writing modules for commercial packages that grow over time with reusable software. It’s the right way to move,” he says.

The system is currently being assessed by other agencies with intelligence or imagery interests such as the Defense Intelligence Agency, the National Security Agency, the National Imagery and Mapping Agency and the National Aeronautics and Space Administration’s Goddard Space Flight Center.

The VAES capabilities are available to these groups on the advanced technology demonstration network testbed—a high-speed network that moves images to remote sites at full bandwidth and allows the NRL to run demonstrations at sites with compatible display monitors. For example, a demonstration was run earlier this year in Baltimore. The software package and data remained in the laboratory, but images were displayed remotely on three screens at the Baltimore Convention Center. Three data streams ran at full bandwidth in real time, Dardy says. This also benefits analysts because they do not have to come to the tool—imagery can simply be accessed from compatible desktop systems while the processors and archival data can be stored at a separate site, Dardy observes.

In the future, Dardy sees other venues for this and related types of video acquisition software and hardware. He expects the technology to reach desktop PCs within the next two to three years. At that point, it will have applications beyond the Defense Department, such as telemedicine, teaching and presentations.