The U.S. Defense Department has launched a contest to push the boundaries of software development, with the goal of creating programs that can analyze, diagnose and repair flaws they detect in computer networks. The Cyber Grand Challenge, managed by the Defense Advanced Research Projects Agency (DARPA), will see teams from industry, academia and other private-sector groups develop software programs and compete against one another in a series of competitions culminating in 2016, with a $2 million top prize.
DARPA officials recently provided more information about the contest, which its program manager, Mike Walker, hopes could launch a new computer revolution. A major driver behind the challenge has been recent progress in the field of program analysis. Walker noted that systems such as SAGE and Mayhem, which are capable of rapid automated analysis and patching, exemplify how far the field has advanced in recent years.
Program analysis has been around for a while, but the pace of new developments has increased the sense that a real breakthrough in the field, and by extension in computing generally, is close, Walker says. DARPA wants to contribute to any potential industry-changing discoveries through the Cyber Grand Challenge, he explains.
The agency is basing the contest on information technology industry software competitions, such as “capture the flag” contests, where two software programs are pitted against each other in a controlled environment. Computer competitions of the 1970s, where chess-playing programs competed head-to-head, also served as inspiration, Walker says.
The DARPA contest will use this event format to create a high-fidelity competition in which competitors can test their prototype software. Under the contest rules, there will be funded and unfunded participants. A limited number of slots are available for funding, but according to DARPA guidelines, the unfunded category is open to any individual or group that meets the event criteria. During the final rounds of the contest, funded and unfunded software programs will compete directly against one another. The winner will receive a $2 million prize, with $1 million and $750,000 awarded to the second- and third-place finishers, Walker says.
Like many of DARPA’s technology challenges in recent years, the goal is to push the boundaries of a particular field or science, not necessarily to deliver immediate commercial products. Walker notes that DARPA’s robotics challenges greatly advanced self-driving vehicle technologies, but it will still be some years before they become widely available. “This is the Grand Challenge approach … We’re trying to do something that doesn’t currently exist, and we hope to get there,” he says.
DARPA officials cannot yet publicly discuss an exact number of challenge participants, Walker says. He adds that a figure will be released in a few weeks but notes that there has been strong interest from across the information technology community. The solicitation remains open for groups seeking DARPA funding for their software, and there is no cap on the number of unfunded participants, he adds.
All participants will simultaneously be presented with the same set of software that their programs must then analyze for vulnerabilities. All of the code used in the event will be written and compiled in the C programming language. The competitions will take place on a network designed especially for the event. This bespoke environment will feature a single defended host for competition purposes, Walker says.
The event will test the programs’ abilities to dynamically detect and repair real-world software flaws, Walker says. A major goal of the contest is to measure each program’s general-purpose adaptation capability, he adds.
After the contest, Walker says he would like to see a feasible plan to transition the winning technologies to operational use in the commercial or government sectors. However, he adds that this is a high-risk research and development project: it could fail entirely, or its benefits could reshape how software is designed. “If we succeed, it could be the genesis of a new computer revolution,” he says. But he adds that everyone will have to wait until the actual event in 2016 to find out.
Walker notes that some in the media and industry worry that the pace of existing technology might surpass the goals of the event during the three-year build-up to the challenge, but he discounts these concerns, observing that the event represents a highly compressed research schedule. Another factor is that many of the capabilities needed for the software evaluation systems to succeed are still only laboratory proof-of-concept models. “These prototypes don’t exist,” Walker says, adding that a major research and development effort will continue right up to contest time.