
Hands-Off Weaponry Requires Hands-On Planning

As artificial intelligence revolutionizes warfighting, military leaders must recognize the ramifications.
By Justin Sherman and Inés Jordan-Zoob

The cyber realm has redefined the meaning of warfare itself. Conflict in cyberspace is constant, low-cost and uninhibited by traditional definitions of territory and country. Now, governments, militaries and private research groups from America to South Korea are taking cyber capabilities one step further, using developments in artificial intelligence and machine learning to create autonomous weapons that will soon be deployed into battle.

Machine learning already has been used in both cyber and kinetic weapons, from autonomously firing gun turrets to social engineering attacks that outperform human operators. While these advances are noteworthy, the machines behind them are neither truly intelligent nor fully autonomous.

Today, technical constraints prevent deep neural networks and other machine learning implementations from reaching a level of “general intelligence.” Researchers have built models that are remarkably adept at narrow tasks such as image recognition, but broad functionality remains out of reach: sophisticated tasks such as natural language processing or understanding the breadth and depth of human emotion continue to elude these models. This is an important distinction between machine learning and artificial intelligence (AI). Currently, only the former is attainable, though that may not be true for much longer.

But all of this is likely to change with advances in quantum computing. By leveraging quantum phenomena such as superposition, the fast-approaching advent of quantum computing will revolutionize the processing complexity of all modern machines. In addition to jeopardizing modern encryption and pushing breakthroughs in virtual reality, augmented reality and blockchain, this technology will completely reshape the machine learning landscape. Researchers will be able to experiment with neural networks faster, and networks will be able to train and improve themselves more effectively, opening the door to true artificial intelligence.

This is where warfare is headed, and the breadth and speed of these latest developments expose a gap security leaders must address regarding the future of autonomous warfare. They must prepare for the time when gun turrets will scan faces at a border crossing and distinguish among allies, enemy combatants and civilians before firing. Drones will be truly autonomous, piloting themselves and identifying, tracking and even killing targets between self-refuels. Missiles will independently steer through high-activity airspaces, hitting targets with an ease and consistency impossible under today’s technological constraints.

Tanks, submarines, satellites and yes, even robots, will all become similarly self-controlled elements of kinetic warfare. Central command systems, meanwhile, will process and analyze this breadth of signals and other forms of intelligence at remarkable rates, enabling faster military response times.

AI cyber weapons will wreak even more havoc on the world. They will enable bad actors to breach critical systems and steal sensitive data. Even more broadly, these capabilities will allow adversaries to manipulate human perceptions, sabotage elections, launch disinformation campaigns and falsify intelligence. On the direct-impact level, they might turn off electrical grids, cut off water supplies, disrupt traffic controls, blow up buildings, hijack kinetic weapons and adapt to their digital environment with remarkable speed and sophistication.

The potential benefits of autonomous systems have received significant public attention. AI machines will process information faster than humans and, as a result, will likely be more responsive to their environment. Autonomous planes, drones and other vehicles could safely remove the wounded from active conflict zones, reach previously inaccessible areas and overcome human physiological and mental limits, such as those imposed by prolonged flight at high altitudes. Some experts argue these capabilities will reduce casualties by replacing human warfighters with robots, as well as save money by cutting the costs of the food, clothing and other provisions human warfighters require.

AI weapons will vary not only in their degree of autonomy but also in how they are used. One example is Israel Aerospace Industries’ Harpy, a loitering munition that searches for enemy radar systems and dive-bombs them. The drone is not designed to kill combatants in battle, but nothing in that design ensures that, upon detecting an enemy radar system, it will account for allies or civilians in the target’s vicinity.

These types of issues are difficult to address because the algorithms behind many machine learning models are, in fact, black boxes. However, the sooner military leaders tackle these challenges, the better—because China, South Korea, Turkey and India have purchased the Harpy, and similar devices are being developed.

Advances in capabilities bring with them extreme vulnerability. Machine learning algorithms are already prone to adversarial “injection,” enabling attackers to feed malicious data to a network to disrupt its functionality. Security researchers have tricked cutting-edge image recognition systems into misclassifying photographs and symbols or failing to recognize them at all. In one case, a group at New York University demonstrated how this attack could allow an individual to evade facial recognition.
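To make the idea concrete, the sketch below shows one common research technique for crafting such misclassifying inputs, the fast gradient sign method. It is a minimal, illustrative example only, assuming a PyTorch environment; the pretrained ResNet-18 model, the random stand-in image and the epsilon value are assumptions for demonstration, not the specific systems or attacks cited above.

```python
# Minimal sketch of a fast gradient sign method (FGSM) adversarial example.
# Assumes PyTorch and torchvision are installed; the pretrained ResNet-18,
# random stand-in image and epsilon are illustrative choices only.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` aimed at being misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224)        # stand-in for a real photograph
    label = model(image).argmax(dim=1)        # the model's original prediction
    adversarial = fgsm_attack(model, image, label)
    print("original:", label.item(),
          "adversarial:", model(adversarial).argmax(dim=1).item())
```

The perturbation is often imperceptible to a human observer, which is what makes this class of attack so troubling for weapons that rely on automated perception.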

This vulnerability will grow significantly with autonomous weapons. Militias might trick bombs into self-detonating. Terrorists could dupe guns into firing on civilians. Enemy nation-states could confuse and nullify intelligence systems right before kinetic attacks. High-value targets might hack into drone surveillance protocols to evade detection.

Even today, declining powers can leverage cyber capabilities to increase their influence on the world geopolitical stage, as Russia and Iran have done over the past decade. Malicious actors with no technical skill can purchase cyber weapons on the dark web, and less cyber-capable enemies will steal them from countries such as the United States, Russia or China. Because the code will be entirely in their hands, it won’t take long for those enemies to turn the weapons against their creators. Even if hackers cannot steal the source code, they will reverse-engineer the AI controlling these weapons; they can already do this with today’s machine learning models, when the stakes are far lower than they will be in the future.

Digitally tracking down cyber attackers is incredibly difficult given their ability to mask their identities. And, unlike kinetic warfare, cyber attacks have little historical reference: Breaching a database doesn’t resonate in the same way as bombing a hospital. Even the question of what constitutes an act of war in cyberspace cannot be answered directly by existing military doctrine. Along this vein of vulnerability lie the global implications of autonomous warfare for international relations, diplomacy and national security decision making.

Cyber capabilities are currently changing the weapons of war. Militias communicate using encrypted cellular technology, and terrorists use Google Earth to scout attack sites. Drones are employed in a range of conflict situations, from U.S. surveillance at home and abroad to insurgents dropping bombs on enemies.

But unlike the cyber weapon Stuxnet, which was eventually detected and attributed, and which spread to devices its creators never intended to harm, artificially intelligent cyber weapons will move only as intended. Not only will they remain totally silent, but their authors will prove extremely difficult to identify.

With the enormous growth of the Internet of Things and the total permeation of technology in political systems, global economies, national infrastructure, medicine and militaries, these weapons will be everywhere—all the time. Warfare will be constant and machine-directed.

Because the developers of these weapons operate in extreme secrecy and with enormous financial incentives, they often neglect to consider moral and legal culpability. Consequently, as nation-states and private corporations delve further into the creation of AI weapons, national and military leaders must stop and think: What happens when one of these machines makes a mistake? What happens when one of these machines is hacked? Ultimately, humans create these weapons, so humans must determine their design, their implementation and the effects they will have on humanity.

How war is waged and sustained is changing. If these weapons—in their varying degrees of autonomy—are developed without any ethical or legal guidelines, humanity will be defined and affected by technologies few people truly understand.

Justin Sherman is studying computer science and political science at Duke University, focused on cybersecurity, warfare and governance. Inés Jordan-Zoob is studying political science and art history at Duke University, focused on foreign policy, counterterrorism, cybersecurity, warfare and the intersection of art and politics. The views expressed here are their own and do not represent the views and opinions of Duke University.
