
Locking the Door From the Inside

By Kevin Holmes, John Henry and Ray Steffey

A new methodology evaluates the integrity of source code.

A review of U.S. Defense Department information systems using a code analysis process has found no evidence of deliberate infusion of vulnerabilities into applications, but it has found instances of bad coding practices and programmer shortcuts that have left systems open to attack. The vulnerabilities found would not have been easily detected by an outside source, but they were open doors for an insider who wished to exploit them. The systems were hosted on extremely critical networks where a breach could have catastrophic consequences.

Many organizations view unauthorized outside users as the greatest threat to their systems. However, some experts believe that it is the insider, not the unknown outside hacker, who can do the most damage to an organization. In fact, computer technology consultant Howard Millman states that insiders commit three of every four attacks. “Whether it’s a denial of service attack, a malicious break-in or data theft, most likely the perpetrator is an employee or a former employee. Yet companies continue to focus their attention on preventing external attacks,” he states.

Recent Federal Bureau of Investigation data indicate that the average cost of an insider security breach is nearly $2.4 million, roughly 50 times the average cost of a breach from the Internet. Steven Aftergood, a defense and intelligence analyst, Federation of American Scientists, Washington, D.C., says the record seems clear. “The most devastating threats to computer security have come from individuals who were deemed trusted insiders,” he notes.

A snack food company found this out when all of its sales force data was destroyed. A disgruntled programmer within the organization wrote malicious code into its computer system before resigning. The code destroyed all of the sales databases on all salespersons’ computers, costing the company millions of dollars.

Unlike the commercial sector, the U.S. Defense Department not only has to worry about threats posed by an unhappy employee but also those posed by well-financed adversaries. As the world learned on September 11, 2001, adversaries of the United States are quite capable of executing a plan that takes many years to formulate. The armed forces must guard against this possibility every time new systems are added into the department’s infrastructure.

Frances Karamouzis, an analyst with the Gartner Group, states that 300 of the Fortune 500 companies now develop systems overseas, and she estimates that 40 to 50 percent of all software development will be done overseas within the next 10 years. Consequently, reviewing actual code through vulnerability assessment is crucial to assuring the availability and correctness of information.

The Joint Interoperability Test Command (JITC), a Defense Information Systems Agency organization located in Indian Head, Maryland, has developed a methodology to audit software before it is installed on a network. Code assessment and validation analysis (CAVA) can be used to evaluate commercial software as well as software developed in-house. It provides an objective assessment of the vulnerability of software-related products and processes in terms of the integrity of the developed source code.

CAVA identifies risks to the application, provides a scope of analysis to be performed, provides a scope of the source code that needs to be analyzed and demonstrates whether the software’s functionality is compromised by deliberate inclusion of code that subverts intended functionality or bypasses security controls. It also facilitates early detection and correction of software errors, enhances management insight into risk, ensures compliance with program requirements and assesses satisfaction of standards, practices and conventions.

Code vulnerability assessments can be performed on any application. The CAVA methodology uses the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP) as its overall framework. DITSCAP requires that software applications be reviewed and analyzed for vulnerabilities. CAVA can be applied as part of a full DITSCAP program, or it can be applied separately.

The process is divided into seven tasks within two phases. The methodology’s first phase is a thorough assessment of the risks and vulnerabilities of the system and its operational environment. This determines the type and depth of code assessment to be conducted and defines a set of products to be created by the assessment. The second phase is an analysis of the source code for coding issues related to potential security problems such as worms, viruses, Trojan horses, hostile mobile code, backdoors, trapdoors, time bombs, coding errors and deviations from standard coding practices.

The procedure begins with a system evaluation to ascertain what needs to be analyzed and the level of analysis detail. The CAVA level determines what part of the source code will be reviewed, and the DITSCAP level determines the depth of the analysis to be performed on the identified modules.

The CAVA level of a system is set by reviewing several items. Where and how the system is installed within the entire network architecture is one area. This assessment addresses the operational environment of the fielded system and not the system itself. For instance, a vulnerability in an application that keeps track of planned vacations may not be important, but if that application resides on a network that also hosts the Pentagon command and control systems, the vulnerability would be very important.

The scope of the impact of a security breach also is reviewed. The CAVA team works with developers to determine if a security breach of an application has global, national or local implications.

In addition, the criticality of a system and its environment to overall readiness is evaluated to determine the impact of a security error. This procedure addresses the impact a security breach could have on the overall application and system environment. For instance, a breach of a system that is global in nature but would yield access only to unclassified read-only information is not as critical as a breach of a system that controls the air defense posture of the eastern seaboard of the United States.

The CAVA level also includes an analysis of the probability that malicious code might be inserted into a system. The development environment and processes used for the application are assessed. A security-conscious development team that requires Top Secret clearances, performs regular code reviews and checks all software on its systems will have a lower probability of error than a team that manages a system developed offshore by employees who are not alert to security concerns and do not conduct code reviews.

The process uses DITSCAP certification level and weighting criteria to identify the depth and complexity of the assessment to be conducted for each CAVA level. The system security authorization agreement is the primary document within DITSCAP that guides actions and documents decisions, specifies information assurance requirements, documents certification tailoring and level of effort, identifies potential solutions, and maintains operational systems security throughout the system’s life cycle. If the system has not been involved in a DITSCAP certification, JITC engineers work with the developers to complete the DITSCAP minimal security activity checklist.

The areas reviewed for a DITSCAP level determination include the interfacing mode, which determines the impact of data interaction with other systems. Examiners check for containment of risks within defined interfaces and look at the interrelationships between the application and the applications with which it interfaces.

Next, they examine the processing mode, determining how the application is configured in relation to security, users and processes. If an application works only within a set of parameters, such as a system-high security mode, the impact of a security vulnerability is reduced.

In evaluating the attribution mode, examiners define the accountability of the system and assess the need to identify all transactions within the system. For instance, a Web-based, read-only application does not have update transactions and does not need attribution; however, for a system that processes financial transactions, information on personnel as well as the processes they used to update the data is required.

An evaluation of mission reliance determines the extent to which the organization depends on the application and defines the ability of the organization to carry out its functions without the system. In addition, an availability analysis examines the amount of time the system must be available from a security perspective and how long an organization can function without the application. The integrity of operation from a security risk viewpoint also is examined as are information categories to determine the security level of data processed.

For systems that have undergone or are involved in DITSCAP certification, a copy of the most current version of the system security authorization agreement should be provided to the CAVA assessment team. The team uses data from this agreement to determine the depth of assessment to be conducted.

Once the scope of the analysis is determined, the engineers begin reviewing the source code using different tools, databases and hardware configurations, depending on the CAVA and DITSCAP levels and the language and architecture of the system. The JITC uses the WebInspect tool to assist with Web-based applications. This tool has a database and analysis engine that check for known vulnerabilities, including cross-site scripting, buffer overflow and structured query language (SQL) injection. JITC CAVA engineers use the tool’s output to investigate areas identified as susceptible to attack, verify vulnerabilities and offer recommendations to the developers.
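
A query assembled directly from user input is the classic construct such a scan flags. The C fragment below is a hypothetical sketch rather than code from any assessed system; the sqlite3 calls noted in the closing comment stand in for whatever parameter-binding interface the application’s database driver actually provides.

    #include <stdio.h>

    /* Hypothetical illustration of a construct a SQL injection scan
     * flags: untrusted input spliced directly into the query text.
     * Input such as  ' OR '1'='1  would subvert the WHERE clause. */
    void build_query_unsafe(char *out, size_t outlen, const char *username)
    {
        snprintf(out, outlen,
                 "SELECT * FROM users WHERE name = '%s';", username);
    }

    /* The reviewed alternative keeps the SQL text fixed and passes the
     * value through the driver's parameter binding, for example:
     *   sqlite3_prepare_v2(db, "SELECT * FROM users WHERE name = ?;", ...);
     *   sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);
     */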

For legacy systems and systems that exist on the back end of Web-based systems, JITC has combined known vulnerabilities and suspect code constructs into an automated tool that scans the code, highlighting suspect code for the analysts to review manually. During this manual review process, the analysts look for password protection, including required passwords that are not properly safeguarded, code that sends passwords in the clear and hard-coded passwords for specific user names. They examine networking items such as code that provides excessive access to files across the network or that opens ports that do not need to be open, as well as networking items that may connect to systems or software subsystems in an unsafe manner. They also study file permissions that are changed unnecessarily, programs that take unauthorized ownership of files and programs that access publicly writable files, buffers or directories with malicious exploitation potential.
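
The hard-coded password finding, for instance, reduces to constructs like the following. This C fragment is a hypothetical illustration; the credential and function name are invented.

    #include <string.h>

    /* Flagged: a credential embedded in the source. Anyone with access
     * to the code, or to the binary via strings(1), recovers it. */
    static const char *MAINT_PASSWORD = "letmein42";

    int check_maintenance_login(const char *supplied)
    {
        /* Flagged: plaintext comparison against an embedded secret. */
        return strcmp(supplied, MAINT_PASSWORD) == 0;
    }

A reviewer would recommend verifying a salted hash of the supplied password against credentials stored outside the code base.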

In addition, analysts scrutinize minimum privilege code that does not prevent abuse of required access privileges, code that is granted more than the minimum privileges necessary to perform its function and programs that provide shell access. These should be considered suspect as they may be used to obtain excessive privileges. They also look for self-replicating or modifying code and perform bounds and buffer checks to identify code that does not have proper bounds and parameter checks for all input data, invalid system calls and unbounded string copies or arguments.
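
Among these findings, the unbounded string copy is the archetypal case. A minimal sketch of the flagged pattern and its bounded replacement, assuming a fixed-size destination buffer:

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 32

    /* Flagged: strcpy() copies until it finds a terminating NUL, so any
     * input longer than the destination buffer overruns the stack. */
    void record_user_unsafe(const char *input)
    {
        char name[NAME_LEN];
        strcpy(name, input);
        printf("user: %s\n", name);
    }

    /* Bounded form: the copy is limited to the destination size and the
     * result is explicitly NUL-terminated. */
    void record_user_safe(const char *input)
    {
        char name[NAME_LEN];
        strncpy(name, input, sizeof(name) - 1);
        name[sizeof(name) - 1] = '\0';
        printf("user: %s\n", name);
    }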

Race-condition checks identify conditions in which one process is writing to a file while another is reading from the same location, code that changes parameters of critical system areas prior to their execution by a concurrent process, improperly handled user-generated asynchronous interrupts, and code that may be subverted by user- or program-generated symbolic links.
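
The symbolic-link case, a time-of-check/time-of-use race, can be sketched in a few lines of C. The example is illustrative only, and the function names are invented.

    #include <fcntl.h>
    #include <unistd.h>

    /* Flagged: between the access() check and the open() call, another
     * process can replace the file with a symbolic link to a sensitive
     * target, so the open acts on a file that was never checked. */
    int append_log_unsafe(const char *path)
    {
        if (access(path, W_OK) != 0)
            return -1;
        return open(path, O_WRONLY | O_APPEND);
    }

    /* Narrower form: drop the separate check and refuse to follow
     * symbolic links at open time, closing the window. */
    int append_log_safer(const char *path)
    {
        return open(path, O_WRONLY | O_APPEND | O_NOFOLLOW);
    }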

Final checks include looking for code that is never executed; such code may run under unknown circumstances or conditions and consume system resources. Analysts also search for implicit trust relationships that could introduce vulnerabilities, code that does not meet its functional security claims or performs malicious activity, and code that uses relative path names that could give unintended access to files, including dynamically linked libraries.
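
The relative path name check, for example, targets library loading of the following kind. The fragment is illustrative; the library name and install path are placeholders.

    #include <dlfcn.h>

    /* Flagged: loading a library by relative name resolves it through
     * the loader's search path, which an attacker who controls that
     * path can use to substitute a hostile library. */
    void *load_plugin_unsafe(void)
    {
        return dlopen("libplugin.so", RTLD_NOW);
    }

    /* Reviewed form: an absolute path under administrator control.
     * The directory shown is a placeholder. */
    void *load_plugin_safer(void)
    {
        return dlopen("/opt/app/lib/libplugin.so", RTLD_NOW);
    }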

Access to source code has become a business decision for companies because security-conscious government organizations are turning to open-source software for highly critical systems. In January 2003, Microsoft signed an agreement with the U.S. and Chinese governments to allow government security agencies access to its source code. As more code is developed by unknown sources in unknown countries, code reviews will become mandatory for all critical software.


Kevin Holmes is the lead engineer for information assurance testing at the Joint Interoperability Test Command. John Henry is technical director at Engineering Documentation Systems Incorporated. Ray Steffey is an independent verification and validation manager for Northrop Grumman Information Technology, Defense Mission Systems.