Credit Shutterstock/sdecoret

AI Is the Key to Cyber Operations, but With a Caveat

January 14, 2020
By Robert K. Ackerman

Diverse entities must coordinate efforts to maximize effects.

Applying artificial intelligence/machine learning (AI/ML) to cybersecurity is a “hard problem,” but one showing significant and promising progress, according to intelligence experts. Success will require a combination of top-down and bottom-up efforts that leverage both government and industry cooperation, as each can benefit from the unique capabilities and contributions of the other.

These were the key findings released in an unclassified summary emerging from the Classified Cyber Conference held last year by AFCEA. The topic of the conference was, “Artificial Intelligence/Machine Learning and Cyber Security—Myths, Realities, Risks and Opportunities.” The conference was designed to move beyond addressing potential implications of AI/ML for national security and instead focus on the specific implications of AI/ML on cybersecurity. It addressed AI/ML from the perspective of evolving national policy, the threat landscape and likely areas for leveraging AI/ML in both offensive and defensive cyber operations.

The unclassified summary noted that data was a key challenge for AI/ML, as obtaining sufficient quantities of properly curated and managed data is essential to developing effective AI/ML algorithms. This is difficult in the area of cybersecurity, where large quantities of relevant data are scarce. Given the difficulty of developing AI/ML systems, speakers recommended applying them to “big hairy audacious goal” (BHAG)-type problems, where the payoff justifies the effort.

Speakers and attendees agreed that the prospects for AI/ML in defensive cyber operations are particularly bright. An increasing number of AI/ML-based tools are reducing the need for human review and permitting automated defensive action at speed and scale. These AI/ML capabilities are enabling organizations to deal with the growing volume and complexity of cyber threats. Moreover, some speakers asserted that AI/ML is the future of defensive cybersecurity.

Offensive cyber operations also were projected to draw increasing benefits from AI/ML technologies. Speakers indicated that this area is still somewhat immature, as no AI/ML-based offensive capability has yet been seen “in the wild.” However, ample evidence exists that government, industry, and research and development organizations are experimenting with these capabilities. It was sobering to observe that countries such as China and Russia hold advantages over Western countries because they face few legal restrictions on access to the large quantities of data necessary to refine AI/ML technologies. Moreover, China has specifically identified the use of AI/ML as a primary strategic path to achieving global primacy.

In addressing the intersection of offensive and defensive cyber operations leveraging AI/ML, conference speakers predicted that AI/ML soon will result in an advantage for defensive operations over offensive operations. This would reverse the decades-old advantage enjoyed by offensive cyber operations. However, it also was noted that because of the interplay between offensive and defensive operations, organizations need to focus on both to be successful. Specifically, to counter AI/ML-based offensive operations, organizations need to experiment with these capabilities so they can develop effective defensive capabilities.

In addressing the likely future directions of AI/ML for cybersecurity, speakers pointed to the continued evolution of these capabilities, particularly with regard to handling an increased scale of operations. They noted that developing reliable AI/ML-based capabilities is especially complex, as understanding their operation using normal techniques such as auditing and traceability is often not technically possible. Similarly, privacy and ethical concerns must be addressed as AI/ML systems evolve.

Speakers also noted potential unintended operational consequences of AI/ML arising from the difficulty of predicting how AI/ML tools will perform in a given situation. Therefore, they were cautious about application of AI/ML to weapons system platforms. Finally, the conference highlighted the need for increased numbers of people who understand both AI/ML technologies and the human dimension that is attendant with developing AI/ML-based systems. Attendees hoped it would not take a crisis to stimulate the investment in the human capabilities needed for AI/ML.

The conference drew about 300 attendees from government, industry and academia. Presenters represented senior policy officials and thought leaders from the Defense Department, the White House Office of Science and Technology Policy (OSTP), the intelligence community and government-funded think tanks along with a number of large and emerging companies.

The next AFCEA Classified Cyber Conference will focus on the cyber supply chain risk in June 2020.
