
Leveraging Social Engineering for Successful Cyber Operations: Enhancing the Minds of Decision-Makers

Effectively using cyber warfare could mean the difference between victory and defeat in military operations.

Third Place in The Cyber Edge Writing Contest


As the U.S. military shapes its research and development investment decisions, it is past time to focus on a long-neglected area of cyber warfare—the minds of our military decision-makers—and ensure that they can make better decisions faster and with fewer errors than our adversaries.

When most people hear the word “cyber,” they think of nefarious adversaries attacking and damaging another nation’s computers or networks by conducting denial-of-service attacks, inserting computer viruses, spoofing or other means of gaining an advantage in cyberspace.

This warfare in the cyber realm is devastatingly effective. Witness Russia’s cyber attacks in various conflicts, most recently in Ukraine. A major intent of these assaults is to slow—or completely stop—leaders in the country from making the timely decisions needed to mount an effective defense against military assaults.

Said another way, these efforts are a form of “social engineering,” that is, the use of deception and other means to manipulate individuals into making suboptimal decisions. In military operations, this can mean the difference between victory and defeat. Social engineering cuts both ways, however: it can also be used advantageously to enable warfighters at all levels to prevail in the cyber realm.

This is a crucial aspect of tomorrow’s battles because the speed of warfare today outpaces the decision-making the U.S. military must have to prevail in 21st-century conflicts. Unless government, industry and academia devise an effective taxonomy that helps warfighters overcome an adversary’s social engineering, and harness emerging technologies so that military leaders can make better decisions faster than our adversaries, we will surely lose the next conflict.

Perspective
The domains of combat in the 20th century—air, land, sea and space—have been joined by a dynamic new warfare area in the 21st century: cyber warfare. This new realm of combat is changing conflict in profound ways. Regardless of the name given to this emerging entity—cyber operations, cyber warfare, information warfare or many others—the threat remains the same, and it is real and growing.

As the U.S. military addresses the threats of cyber attacks that happen “in the air,” it must also contend with the challenges that occur “in the minds” of those who control the weapons of war. Military decision-makers must be equipped to make the right decisions in the stress of combat. Today, they are not. Social engineering focused on enabling military personnel, from senior leaders to “strategic corporals,” to absorb and rapidly act on an avalanche of information is imperative.

Air Force Lt. Gen. John “Jack” Shanahan, director, Joint Artificial Intelligence Center, presents a lecture titled “The Challenges and Opportunities of Fielding Artificial Intelligence Technology in the U.S. Military” at the U.S. Naval War College on December 12, 2019. Credit: U.S. Navy photo by Mass Communication Specialist 2nd Class Tyler D. John/Released

The Most Important Ingredient in Warfare
Military history is replete with examples where commanders who made better decisions were victorious, even when their opponent had a geographic or material advantage, and these events need no retelling here. What is important to note is that in centuries past, leaders at all levels had hours, or even days, to make crucial decisions. But by the middle of the last century, warfare changed in ways that dramatically compressed the decision cycle.

During the Korean War, Soviet-built MiG-15s and American F-86 Sabres fought heated battles for mastery of the air. Drawing on lessons from those engagements, U.S. Air Force Col. John Boyd developed the OODA loop. OODA stands for observe, orient, decide and act. Col. Boyd’s concept was that the key to victory is to create situations in which one can make appropriate decisions more quickly than an opponent can react to them.
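The cycle is easiest to see expressed as a continuous loop in software. The sketch below is a minimal illustration of the idea only; the contact fields, thresholds and actions are hypothetical and do not represent any fielded system.

```python
# A minimal sketch of Boyd's OODA loop as a software control cycle.
# All fields, thresholds and actions are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Observation:
    bearing_deg: float  # bearing to the contact, degrees
    range_nm: float     # range to the contact, nautical miles
    closing: bool       # True if the contact is closing


def orient(obs: Observation) -> str:
    """Orient: place the raw observation in context."""
    return "threat" if obs.closing and obs.range_nm < 10 else "unknown"


def decide(assessment: str) -> str:
    """Decide: choose a course of action; ambiguity goes to a human."""
    return "engage" if assessment == "threat" else "refer to commander"


def act(decision: str) -> None:
    """Act: carry out the decision."""
    print(f"Action: {decision}")


def ooda_loop(observations: Iterable[Observation]) -> None:
    # Boyd's point: victory goes to the side that completes this
    # cycle faster and more accurately than its opponent can react.
    for obs in observations:      # observe
        act(decide(orient(obs)))  # orient -> decide -> act


ooda_loop([Observation(45.0, 8.0, True), Observation(120.0, 40.0, False)])
```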

The challenge of making crucial military decisions under stress found its way into popular culture in the 1965 movie “The Bedford Incident.” Loosely based on Cold War confrontations between U.S. Navy ships and Soviet submarines, the movie’s plotline revolves around the cat-and-mouse game between an American destroyer, USS Bedford (DLG-113), and a Soviet submarine.

In the urgency to find the Soviet adversary, Bedford’s captain ignores warnings that his crew is wilting under the pressure. When someone asks the captain if he will take the first shot against his adversary, he replies that he will not, but says, “If he fires one, I’ll fire one.” A tired ensign mistakes his captain’s remarks as a command to “fire one” and launches an anti-submarine rocket that destroys the submarine, but not before it fires a nuclear-armed torpedo that annihilates the ship.

While fiction, “The Bedford Incident” was eerily prescient of a real-world event 55 years later. In January 2020, Iran’s Islamic Revolutionary Guard Corps shot down a Ukrainian jetliner. Iran had just fired a barrage of ballistic missiles at U.S. military forces, and in the stress of that moment the country was on high alert for an American counterattack.

Somewhere in the Iranian chain of command, a warning of incoming cruise missiles was issued. The officer in charge of an anti-air missile battery tried to reach his higher-echelon commander for authorization to shoot. Tragically, he could not get through, and armed with incomplete information, he fired two anti-aircraft missiles and 176 people aboard Ukraine International Airlines Flight 752 died.

These incidents—one fictional and one all-too-real—had one thing in common: Humans were forced to make crucial decisions with insufficient or erroneous information. In the case of “The Bedford Incident,” it was a fatal miscommunication between humans standing a few feet apart. In the case of Flight 752, it was the inability to communicate up the chain of command and an incorrectly perceived threat.

The U.S. Military Track Record of Effective Decision-Making
It would be easy to dismiss incidents like those described above as implausible fiction or as decisions made by militaries inferior to the U.S. military, but that would be a tragic mistake. Bad decisions resulting in loss of life have dogged the American military for several decades:

  • In May 1987, USS Stark (FFG 31) was on patrol near the Iran–Iraq War exclusion boundary. Incorrectly believing neither of the belligerents would target an American warship, the captain was not alarmed when Stark attempted to communicate with an incoming aircraft. The Iraqi Mirage jet fired two Exocet missiles, killing 37 Americans.
  • In July 1988, mindful that the captain of USS Stark had failed to act to protect his ship, and while USS Vincennes (CG 49) was being hounded by Iranian gunboats, her captain mistakenly believed that an approaching aircraft was descending on an attack profile. He fired two SM-2 missiles and shot down Iran Air Flight 655, killing all 290 people onboard.
  • In April 1994, two U.S. Air Force F-15C Eagles shot down two U.S. Army UH-60 Black Hawk helicopters over Iraq, believing them to be Iraqi Mi-24 Hind helicopters and killing all 26 people aboard. Miscommunication between the Air Force Airborne Warning and Control System (AWACS) aircraft and the fighters was the proximate cause of this tragedy.
  • In March and April 2003, U.S. Army Patriot batteries mistakenly shot down a Royal Air Force Tornado GR4 and a U.S. Navy F/A-18C Hornet, killing three aviators. While the batteries were being operated in automated mode, investigators determined that there should have been oversight by a human on the loop.
  • In June 2017, USS Fitzgerald (DDG 62) collided with the container ship MV ACX Crystal, and seven of her crew were killed. Three months later, USS John S. McCain (DDG 56) collided with the Liberian-flagged tanker Alnic MC, and 10 of her crew died as a result of the collision.

While there were multiple reasons behind all these tragic accidents, most notably the fatal collisions involving the USS Fitzgerald and USS John S. McCain, in every case data was available that, if properly used, might have averted catastrophe. Technology was needed to present that data to decision-makers in an operationally relevant and timely manner.

An article in “Defense News” put the issue this way: “We saw a few years ago in the McCain and Fitzgerald destroyer collisions that had there been some kind of artificial intelligence aiding decision-making, which could have been some help to a bridge crew that appeared to be overwhelmed.” The military people who made these suboptimal decisions were doing the best job they could with the tools at hand. What occurred was that the speed of operations exceeded the ability of the human brain to make the right decision.
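To illustrate what such a decision aid might look like in its simplest form, the hypothetical sketch below ranks contacts by time to closest point of approach and surfaces only the most urgent ones, leaving the decision itself with a human on the loop. The fields, threshold and urgency formula are invented for illustration and are not drawn from any fielded Navy system.

```python
# Hypothetical decision-aid sketch: rank vessel contacts by urgency so an
# overloaded bridge crew sees the most dangerous contact first. The fields,
# threshold and urgency formula are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    range_nm: float           # current range, nautical miles
    closing_speed_kts: float  # positive if the contact is closing

    def minutes_to_cpa(self) -> float:
        """Rough time to closest point of approach, in minutes."""
        if self.closing_speed_kts <= 0:
            return float("inf")  # opening contacts are not urgent
        return self.range_nm / self.closing_speed_kts * 60


def triage(contacts: list[Contact], alert_minutes: float = 10.0) -> list[Contact]:
    """Return only the contacts needing attention, most urgent first.

    The aid filters and orders the data; the decision to maneuver or
    engage stays with a human on the loop.
    """
    urgent = [c for c in contacts if c.minutes_to_cpa() < alert_minutes]
    return sorted(urgent, key=lambda c: c.minutes_to_cpa())


picture = [
    Contact("merchant A", range_nm=12.0, closing_speed_kts=18.0),
    Contact("merchant B", range_nm=2.0, closing_speed_kts=20.0),
    Contact("fishing C", range_nm=5.0, closing_speed_kts=-4.0),
]
for c in triage(picture):
    print(f"ALERT {c.name}: closest approach in {c.minutes_to_cpa():.1f} minutes")
```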

What Our Leaders Say We Need
At the highest levels of U.S. strategic guidance, big data, artificial intelligence (AI) and machine learning are vital to providing the American military with a warfighting edge. Increasingly, those with stewardship for integrating these technologies into U.S. military platforms, systems, sensors and weapons have identified decision-making as a crucial area where these technologies can add the most value.

In an address at the U.S. Naval War College, the founding director of the Department of Defense’s Joint Artificial Intelligence Center, Lt. Gen. Jack Shanahan, USAF, described the goal of harnessing AI technologies to help U.S. warfighters “outthink” adversaries when he noted: “The most valuable contribution of AI to U.S. defense will be how it helps human beings to make better, faster and more precise decisions, especially during high-consequence operations.”

A Way Forward
In 20th-century warfare, the unit of measure for military superiority was the number of tanks, ships and aircraft and the ability to “out-gun and out-stick” an opponent. In 21st-century warfare, where military leaders have minutes or even seconds to make crucial decisions, the ability to outthink an adversary will spell the difference between victory and defeat.

Harnessing big data, artificial intelligence and machine learning technology to better equip warfighters to make these critical battlespace decisions is the surest way to achieve this end.

 

Retired U.S. Navy Capt. George Galdorisi is director of strategic assessments and technical futures for the Naval Information Warfare Center (NIWC) Pacific. Prior to joining NIWC Pacific, he completed a 30-year career as a naval aviator, serving as an executive officer, commanding officer, commodore and chief of staff. During his final tour of duty, he led the U.S. delegation for military-to-military talks with the Chinese Navy. Galdorisi was also a winner in The Cyber Edge Writing Contest in 2021.
