AI Experts Urge Security, Model Protection and Global Cooperation

Security and global cooperation are the future of responsible artificial intelligence.

Artificial intelligence (AI) companies should prioritize security with active oversight from executives, protect AI model weights so that adversaries cannot bypass the resource-intensive barriers to building such models, and address the challenges of global governance and cooperation required for the secure international deployment of AI systems, according to a panel of experts who spoke at a public event on Thursday in Washington, D.C.

“Senior leaders need to make security a top business priority,” said Lisa Einstein, senior advisor for AI and executive director of the cybersecurity advisory committee at CISA, the Cybersecurity and Infrastructure Security Agency.

Protecting AI models is key, especially their weights, according to Sella Nevo, director of the RAND Meselson Center and a senior information scientist at RAND.

Weights in AI models determine the importance of inputs and how they contribute to the output. They are crucial for the model's performance and accuracy. If someone tampers with the weights, it can lead to incorrect predictions, degraded performance or malicious behavior. This can result in mistrust, security breaches or misuse of the AI system.
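
To make that concrete, consider a minimal Python sketch (purely illustrative, not code discussed at the event). A model's output is driven by weighted sums of its inputs, so an attacker who can alter the weights can change its predictions; the feature names and weight values below are hypothetical.

def predict(inputs, weights, bias=0.0):
    # Each weight scales how much its input contributes to the output.
    score = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if score > 0 else 0  # simple binary decision

# Hypothetical features: [has_attachment, known_sender, suspicious_link]
inputs = [1.0, 0.0, 1.0]
trained_weights = [0.6, -0.9, 0.8]
print(predict(inputs, trained_weights))   # 1: input is flagged as malicious

# Tampering with the weights silently inverts the model's behavior:
tampered_weights = [-0.6, 0.9, -0.8]
print(predict(inputs, tampered_weights))  # 0: the same input now passes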

“If you have access to the weights, the weights are usually easier to secure than the code base. And so, if you've managed to get into the weights, [you] likely have managed to get into the code base,” Nevo said.

Because malicious actors can attack AI applications at multiple stages of their life cycle, one speaker called for international regulation to prevent misuse.

"We still need to think about what other institutions and institutional functions we might need in the long run," stressed Joslyn Barnhart, senior research scientist, Google DeepMind.

 

 

 

 

 

Image
Lisa Einstein, CISA
Senior leaders need to make security a top business priority.
Lisa Einstein
Executive director, cybersecurity advisory committee, CISA

 

Currently, governments are working on fresh draft legislation, and existing laws apply only within individual countries or regions; no international regime is yet in place.

Returning to the discussion of AI safety, Einstein drew a parallel between the current state of AI security and the technology industry's traditional approach to building software.

“This means the ways that AI security have echoed decades of software security vulnerabilities where people are incentivized to rush products to market without security in mind, and that has been amplified with the AI gold rush that we've all been watching," Einstein told the audience.

The event, “Safeguarding Large Language Models and Why This Matters for the Future of Geopolitics,” was organized by the think tank RAND and moderated by Jim Mitre, vice president and director, RAND Global and Emerging Risks. Tara Michels, research, development and evaluation lead at the National Security Agency's AI Security Center, also participated in the panel.