Independent Testing Keeps the Bugs at Bay
Outside inspection helps with software, equipment interoperability.
A third-party testing and verification regimen allows program managers and directors to save time and money by efficiently integrating commercial systems into mission-critical environments. When it is initiated at the beginning of a program, the practice offers an additional means of detecting faults in systems before they are deployed.
The growing use of commercial off-the-shelf (COTS) products is a boon to government agencies seeking to use widely available technologies in their computer and communications networks. But this convenience comes at a price: COTS software and hardware are more prone to faults, and commercial and proprietary products may have difficulty interoperating with each other. A rigorous system of testing and compliance permits system architects to study how these technologies interact before any unpleasant surprises occur.
Known as independent verification and validation (IV&V), the process has deep roots in U.S. Defense Department programs. Before the widespread use of commercial products, organizations began bringing in outside parties to run final tests on their systems. According to Frances R. Pierce, chairman and chief executive officer of Data Systems Analysts (DSA) Incorporated, Fairfax, Virginia, when she began her career in the mid-1960s, IV&V was already a standard practice. However, the concept still had to be sold to many clients because they could not grasp the necessity of testing and validation, she says.
The designers of early command and control systems often would develop sophisticated test plans and procedures that were passed to the customer as part of the deliverable products. But self-testing is not foolproof and sometimes creates conflicts of interest.
Third-party inspectors provide an unbiased view of a system, she says, which is important because developers often test against their knowledge of the inner workings of a program or piece of equipment. “A lot of times people developing a system assume users will do the so-called ‘right thing,’ but they do not prohibit users from making mistakes,” Pierce observes. For example, mistakes are commonly made when data is manually keyed into a network. Third-party testers are more likely to make these mistakes, either deliberately to probe safety features or inadvertently. Defects the designers were unaware of are often detected through this type of testing, she explains.
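Deliberately keying in bad data is the essence of what testers call negative testing: feeding a system input it should reject and confirming it actually rejects it. A minimal sketch of the idea in Python follows; the `parse_track_id` function and its "TRK-1234" identifier format are invented for illustration, not drawn from any system described in the article.

```python
import re

def parse_track_id(raw: str) -> int:
    """Parse an operator-keyed track identifier of the form 'TRK-1234'.

    Raises ValueError on malformed input rather than guessing, so a
    keying mistake cannot silently become a different track number.
    """
    match = re.fullmatch(r"TRK-(\d{4})", raw.strip())
    if match is None:
        raise ValueError(f"malformed track id: {raw!r}")
    return int(match.group(1))

def negative_tests() -> list[str]:
    """Feed deliberately bad input; return any case the parser accepted."""
    failures = []
    for bad in ["TRK-12", "TRK-12345", "trk-1234", "TRK 1234", "", "1234"]:
        try:
            parse_track_id(bad)
            failures.append(bad)  # accepted input it should have rejected
        except ValueError:
            pass  # correctly rejected
    return failures
```

An independent tester's value, in this framing, is choosing `bad` inputs the original developers never thought to try.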
The rapid growth of COTS products in the government sector and their use in mission-critical command and control systems have created several challenges for program managers. One issue is how commercial and government software products and equipment treat fault tolerance. Military systems are designed to meet requirements such as data integrity, reliability, failure modes, accountability and throughput. “All those issues count a lot for sophisticated mission-critical environments, but that’s not where typical commercial products come from,” she says.
Because speed to market and mass-market dominance drive commercial firms, their products often are released with known or unknown defects. “They expect the users to report the bugs back, and they’ll fix them in the next version. The commercial marketplace is much more tolerant of that because users are willing to exchange enriched functionality for the inconvenience of some idiosyncrasies in the system,” Pierce observes.
Interoperability is a major concern when COTS products are introduced into government systems. For example, configuration management issues may arise when a new software release becomes available. The original version was completely tested and integrated into the platform, but the new one may or may not work with other programs. Because of the potential effect on network functionality, extensive testing is often required before software upgrades can be fully adopted, she says.
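The configuration-management concern above amounts to a pre-upgrade gate: before a new release is adopted, its declared requirements are checked against the component versions already fielded on the platform, and any conflict blocks the upgrade pending further testing. A small sketch of such a gate, with invented component names and version ranges:

```python
# Sketch of a pre-upgrade compatibility gate. Component names and the
# required version ranges below are hypothetical examples, not real products.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '2.4.1' into (2, 4, 1) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def satisfies(fielded: str, minimum: str, below: str) -> bool:
    """True if the fielded version lies in the range [minimum, below)."""
    return parse_version(minimum) <= parse_version(fielded) < parse_version(below)

# Versions already tested and integrated into the platform.
FIELDED = {"msg-router": "2.4.1", "map-server": "5.0.3"}

# What the candidate new release declares it needs from each component.
NEW_RELEASE_REQUIRES = {
    "msg-router": ("2.0.0", "3.0.0"),  # needs any 2.x -- satisfied
    "map-server": ("6.0.0", "7.0.0"),  # needs 6.x -- not yet fielded
}

def upgrade_conflicts() -> list[str]:
    """List the components whose fielded versions block the upgrade."""
    return [
        name
        for name, (lo, hi) in NEW_RELEASE_REQUIRES.items()
        if not satisfies(FIELDED[name], lo, hi)
    ]
```

A non-empty conflict list is the signal Pierce describes: the new release may or may not work with the rest of the platform, so it is held back until testing clears it.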
The difference between the civilian world’s faster product cycles and government verification and approval practices can create a time lag for software applications. Administrators may be tempted to hold back on deploying a new product until patches are available or they have worked around the bugs. “What this means is everybody’s screaming for the new functionality, which for many reasons needs to be held back from certain communities of users until the entire integrity of the platform is assured,” Pierce explains.
Many commercial products also have features and functions that are not desirable in a command and control environment. System administrators can either test those functions even though they will not be used or block them from operating in the network. But blocking features creates its own difficulties. Pierce notes that users such as warfighters are very skilled with commercial products. “They’re pretty good at enabling the stuff again and playing around with it,” she points out.
Although these usage issues can be solved through the application of strict access and privilege rules, the widespread use of commercial systems presents program managers and system designers with many choices and problems. Based on anecdotal discussions with IV&V professionals, Pierce speculates that commercial products do not reduce development efforts and may in fact increase them from a testing and integration perspective. But the wide availability of these products makes their use a necessity. “Once you started getting software into the hands of everyone, it raised the issue—why are we building all this [proprietary] stuff when it exists commercially?” she explains.
Pierce adds that commercial products have many benefits, but they create a certain level of risk in areas such as security. Managers must balance the benefits of cost savings and increased delivery speed against risk mitigation.
Manufacturers of commercial software and hardware products are becoming more aware of the government’s needs, explains Richard Lorenz, DSA’s director of business development. The need for interoperability extends beyond the federal level, with integrated systems connecting state and local governments and first responders. All of this integration must be tested systematically.
Program managers and directors can take steps to establish a good internal IV&V regimen, Lorenz offers. Managers must think about the IV&V process from the very beginning, planning it in from a program’s inception through to its completion. This does not require a full-time employee or an organization dedicated to IV&V from the outset, he says. It involves earmarking resources at a certain level, depending on the program’s size, to keep IV&V interests at the forefront of the design and development process. “Because if it’s done correctly from the very beginning, the process tends to be very, very smooth, and you end up with a better product,” he says.
However, if an IV&V process is inserted into a mature program halfway through the development phase, then the third-party testers may be seen as intruders, resulting in an adversarial relationship with the program staff, Lorenz explains.
It is difficult to quantify the cost benefits of a robust IV&V program, he adds. When IV&V is run correctly, the savings are invisible to the user because the testing is simply part of the overall program. “We only have track records for when things fail,” he explains. Lorenz offers the example of recent telephone system failures around the United States that resulted from a lack of IV&V on switching equipment. The result was millions of dollars in lost revenue to affected individuals and businesses, he says.
Security testing is another concern. As more software products are designed overseas, integrating them into secure command and control products presents new risks. Because of the millions of lines of code generated for these applications, detecting bugs or malicious programs becomes an increasingly difficult task. Lorenz adds that offshore software is available in a range of COTS products from database programs to server applications—all of which may become part of a mission-critical platform. “I don’t want to cry wolf here, but it is not out of the realm of possibility that a Trojan horse could be planted in software that ends up finding its way into a command and control environment,” he says.
There is no going back to the old days of developing systems with proprietary equipment and software, notes Pierce. Program managers and directors must think more about developing methodologies to prevent and minimize problems caused by integrating COTS products, she says. This planning is important when rapid action is necessary, such as when a vendor detects a security problem with its software. By the time a solution is published, hackers already know about it. Because there is a time lag between the release of a patch and its integration into a system, a period of vulnerability may exist. “Traditionally, vendors have been reactive rather than proactive in many of these areas. Now they’re getting better at it,” Lorenz adds.
Organizations within the government and defense community are solving IV&V issues internally. Pierce notes that one of DSA’s customers, a defense and aerospace organization, is developing its own procedures for managing and evaluating commercial products. This approach extends beyond technical issues to cover areas such as finance, security and intellectual property. “When they make a product selection, before they worry about testing, they have to worry about designing the platform. They have to take all these things into consideration, and they’re actually developing a methodology to enable their people to do that because they want to institutionalize all these lessons learned,” she says.
Pierce speculates that a common set of issues exists for many different types of systems. She is sanguine about organizations’ ability to institutionalize these lessons into a methodology providing a degree of protection. “It would be perfect if the more you mitigate, the better off you are. Then you could enjoy the benefits of being able to use COTS products,” she says.