AI Cybersecurity: 22 Nations Sign Landmark Agreement
The U.S. has signed a set of artificial intelligence (AI) guidelines alongside 21 other agencies and ministries from around the world, making it the first agreement of its kind globally.
The nonbinding recommendations address AI cybersecurity, aiming to ensure these systems “function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties,” according to the document.
“The guidelines jointly issued today by CISA [Cybersecurity and Infrastructure Security Agency], the U.K. NCSC [National Cyber Security Centre], and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core,” said Secretary of Homeland Security Alejandro Mayorkas, adding that “secure by design” principles are at the heart of the guidelines.
The multilateral agreement follows a raft of regulations from the federal government, including an executive order.
“It builds on existing resources and efforts such as CISA’s Secure-by-Design initiative and NIST’s [National Institute of Standards and Technology's] Secure Software Development Framework. It also cites other resources such as Supply Chain Levels for Software Artifacts, recognizing the role of software supply chain security as it relates to AI,” said Chris Hughes, cyber innovation fellow at CISA.
These principles prioritize:
- Taking ownership of security outcomes for customers
- Embracing radical transparency and accountability
- Building organizational structure and leadership so secure by design is a top business priority
For secure design, the guidelines advise organizations to:
- Raise staff awareness of threats and risks
- Model the threats to your system
- Design your system for security as well as functionality and performance
- Consider security benefits and trade-offs when selecting your AI model
Adoption of the guidelines may already be falling behind, however, as innovation continues to outpace political negotiation.
“Nearly every AI product is cloud-based. These apps often handle sensitive data and can be a prime target for cyber attacks, so securing them is extremely important, but that requires an app-centric approach. The Guidelines for Secure AI System Development don’t mention misconfigurations at all. On top of that, there are a multitude of new attacks and the guidelines don’t get into the details of those,” explained Joseph Thacker, security researcher at AppOmni, a software-as-a-service (SaaS) security company.