Battle Laboratory Seeks Command Data Fusion
Exponential growth in data could mean similar growth in errors.
The U.S. Army’s Brute Force Fusion program, or BUFF, seeks to merge different types of intelligence, surveillance and reconnaissance data into accurate information. Based at the Battle Command Battle Lab at Fort Huachuca, the program aims to empower capabilities that ultimately can be moved from command centers such as this down to individual vehicles.
The U.S. Army is marshalling the forces of supercomputers and superanalysts in an effort to merge diverse battlefield intelligence data into knowledge for commanders. The intention is to establish a technology-based means of fusing vast amounts of sensor data into effective information without magnifying the inescapable errors that creep into data at various stages.
This particular effort to solve the intelligence data fusion puzzle focuses on level II aggregation. Separate bits of data must be combined to produce a larger element that is represented by an icon. Then this information must be presented to the user in a way that displays facts effectively and does not magnify errors.
This is the goal of the Army’s Brute Force Fusion program, or BUFF, which is a research-focused conceptual program. This limited objective experiment grew out of several years of fusion-related experimentation aimed at determining gaps between capabilities and concepts in the future force.
BUFF is centered at the Battle Command Battle Lab at Fort Huachuca, Arizona. The Army is teaming with industry and including elements of academia such as the University of Arizona. Some work is being performed at private sector and academic facilities.
Jason Denno, deputy director of the Battle Command Battle Lab at Fort Huachuca, explains that the future force probably will feature an exponential leap in sensor technology and reporting. This will require automated tools to process that reporting, although there always will be some workload sharing between humans and machines. Because of the amplified workloads that will characterize personnel actions in wartime, data fusion tools will be necessary to meet the demands of new sensors and dataflow. Denno foresees a long-term balancing act between machine and human activities.
“As the force structure goes down, the automated pieces must come up, or else we will have some serious workflow and workload issues,” he says.
Denno relates that one problem in the fusion world is that many of the solutions are aimed at establishing and continually refining an assessment. As new data becomes available, this assessment becomes updated. Consequently, small amounts of error are introduced at every single layer because sensors do not always understand what they observe and reports may have inherent errors. As an error is introduced, it is compounded along the way because the data in the base assessment is not revisited. Human involvement often is necessary to remove or reduce errors and to restore logic to assessments.
BUFF attempts to deal with this issue in a totally different fashion, he continues. The program aims to look at every single bit of data every time an assessment is made or updated. Any new bit of data that disproves an earlier assessment triggers an on-the-spot correction, and users who are affected by this change receive an immediate update.
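The contrast Denno describes—rebuilding every assessment from the full report set rather than incrementally patching a prior conclusion—can be illustrated with a minimal sketch. The entity names, activity labels and confidence scores below are invented for illustration; they do not come from the program.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """One sensor report: the entity it concerns, the activity it
    suggests and a confidence score for that observation."""
    entity: str
    activity: str
    confidence: float

@dataclass
class Assessment:
    entity: str
    activity: str
    supporting: list = field(default_factory=list)

def reassess(reports):
    """Brute-force fusion: rebuild every assessment from ALL reports
    each time, so a new report that disproves an earlier conclusion
    overturns it immediately instead of compounding the old error."""
    by_entity = {}
    for r in reports:
        by_entity.setdefault(r.entity, []).append(r)
    assessments = {}
    for entity, rs in by_entity.items():
        # Choose the activity with the greatest total confidence across
        # the entire report history, not just the latest update.
        totals = {}
        for r in rs:
            totals[r.activity] = totals.get(r.activity, 0.0) + r.confidence
        best = max(totals, key=totals.get)
        assessments[entity] = Assessment(entity, best, rs)
    return assessments
```

An incremental fuser that saw the first two reports would lock in one answer and adjust from there; here, a later contradicting report with enough weight simply produces a different assessment on the next pass, which is the on-the-spot correction the program seeks.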
Until now, the technology to achieve this did not exist. Examining every bit of data and its history would tax processing power and shared memory. The data set would become exponential and unmanageable.
“We’re not sure if it isn’t still unmanageable right now,” Denno allows. “It appears that we may be able to look at enough of the data set to get a good representation of which data we actually need to hold onto.”
Achieving this requires multiple-processor machines, large shared memories and large distributed databases—which now are available. Researchers currently are building the BUFF framework, within which software modules will interact with one another. One of the first benchmarks for BUFF will be the development of modules with entity activity capability. An entity—allied or adversarial—can undertake only activities for which it has a capability. This module will permit software that overlays all of the different entities to determine each entity’s capabilities.
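The rule that an entity can undertake only activities for which it has a capability amounts to filtering candidate activities against a capability table. The entity types and activity names below are hypothetical stand-ins, not terms from the program.

```python
# Hypothetical capability table: entity type -> activities it can perform.
CAPABILITIES = {
    "towed-artillery": {"indirect fire", "displace by prime mover"},
    "mech-infantry": {"direct fire", "rapid road march", "dismounted assault"},
}

def feasible_activities(entity_type, candidates):
    """Keep only the candidate activities this entity type is actually
    capable of; everything else can be discarded before assessment."""
    caps = CAPABILITIES.get(entity_type, set())
    return [a for a in candidates if a in caps]
```

A report suggesting an infeasible activity for a given entity type is an early signal of error, which is exactly what the module overlay is meant to catch.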
Course-of-action generators, similar to a game engine, will continually update possibilities for entities based on the environment. This will generate indicators of what their activities will be, based on the most likely course of action. These indicators in turn will be placed in a huge matrix that will be compared against known data as it is reported.
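The indicator matrix Denno describes can be sketched as rows of hypothesized courses of action scored against observed indicators as reports arrive. The courses of action, indicators and expected-indicator values here are illustrative assumptions only.

```python
# Columns: observable indicators; rows: hypothesized courses of action.
# A 1 means that indicator is expected if the course of action is underway.
INDICATORS = ["bridging assets forward", "artillery massing", "fuel convoys"]

COA_MATRIX = {
    "river crossing": [1, 1, 1],
    "defend in place": [0, 1, 0],
}

def score_courses_of_action(observed):
    """Compare reported observations (1 = seen) against each course of
    action's expected indicators; return the fraction of expected
    indicators actually observed for each hypothesis."""
    scores = {}
    for coa, expected in COA_MATRIX.items():
        hits = sum(1 for o, e in zip(observed, expected) if o and e)
        total = sum(expected)
        scores[coa] = hits / total if total else 0.0
    return scores
```

As new reports flip observation bits, the scores update, and the most likely course of action can change—mirroring the continual regeneration the program envisions.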
All of that complicated work is underway, Denno says. While none of these modules is up and running yet, the battle laboratory has brought together some of the hardware. This hardware includes shared-memory clusters typical of companies such as SGI, Cray and Mercury, and node-structure hardware such as Apple servers.
Denno relates that the software developers have been instructed to ensure that all programming is platform-independent and extendable to multiple platforms. At some point the laboratory will need to conduct experiments in which Linux-based SGI systems must interoperate with Unix- and PC-based systems. This may be necessary for the system to operate in future Army platforms that could be using different hardware or operating systems.
One of the platforms on which the system will be tested comes from SGI, Mountain View, California. Its technology currently is serving as the backbone for program development, Denno notes. Paul Temple, senior manager of business development for Defense Department, homeland security and intelligence at SGI, explains that his company’s approach is to attack the problem as a data fusion challenge in a massive memory architecture that is capable of fusing intelligence, surveillance and reconnaissance data from a wide variety of sources. The goal is not merely to fuse data by applying algorithms for a common operating display but instead to engage in media fusion to present information in a visual sense.
The SGI approach offers scalability in both processing and memory. The system employs a single file system that eliminates latency problems. It features a NUMAflex shared-memory system architecture in which memory is plugged into a central NUMAhub.
SGI has provided two 16-processor Altix 350 systems along with a Prism visualizing system. The company’s machines employ the Linux operating system, ATI graphics cards and Intel Itanium 2 chips.
This system offers 14 terabytes of random access memory, which permits planners to set specific guidelines for targeted items in the data pool. The parallel processing and massive memory in a core computing resource would allow Army officials to consolidate individual stovepipe systems into a central hub.
“The idea is to take from strategic national intelligence capabilities, to [provide to] tactical or operational capabilities, into analytical or mission planning and [to] be able to have uninterrupted access and a multidirectional flow of tactical data married through operational to strategic data in a fixed or a deployable tactical hub,” Temple says.
A functional model of BUFF could be ready by early fall 2005. This model would include algorithmic development based on data rules. But reaching this and future benchmarks will require meeting some challenges.
At the top of BUFF’s technology wish list is a high-performance computing capability for the lower end of the intelligence hierarchy. To perform distributed processing across nodes throughout the battlespace and around the world, the nodes “at the tip of the spear” will need the capability to take control if the network jams or crashes. Virtually every platform will need a limited processing capability for when the data stream stops and individual users lose the ability to share processors across the network.
However, achieving this goal will require overcoming serious constraints. Placing a high-performance computer in a vehicle will mandate processors that ordinarily consume considerable power, generate heat and easily run afoul of sand, mud or rain—situations that are anathema to effective battlefield performance. This equipment must be ruggedized and must draw power efficiently.
Software is a more complex issue. Potential solutions offered have ranged from multifaceted intelligent agents to all-purpose algorithms, Denno relates. It is extremely difficult to convert an intelligence analyst’s innate abilities into software code, and this may prove to be the most difficult aspect of automating fusion. Temple offers that software must enable asynchronous connectivity among all of the network capabilities, including the visual area networking portals.
Denno predicts that if BUFF is successful, it will be able to show that data must be viewed constantly to determine whether information is correct. BUFF’s success might point the way to incorporating the brute force approach across all data fusion levels, not just a single one.
He offers that, win or lose, the program will help the Army see its way to the future. Even if some of the software modules do not show promise for fixing the problem, the tools alone can help the Army advance toward a semiautomated environment that reduces personnel workload in the force structure.
“This program will be a success no matter what,” Denno says, “because if we go down this road and find that even with the most current hardware available we still cannot get to these data sets, then that at least is a triage within experimentation. It will allow the rest of the acquisition development communities to avoid going down this path the way we did.”