
New AI Regulation To Create Standards for Developers

President Biden issued an executive order requiring AI innovations to meet NIST standards, emphasizing cybersecurity and global collaboration.

Artificial intelligence (AI) innovation in defense will need to adhere to National Institute of Standards and Technology (NIST) standards as well as comply with other federal regulations.

“Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests,” the White House said in a release on Monday about the new regulation.

“President Biden's AI executive order represents a pivotal stride in addressing AI-related cybersecurity challenges by focusing on bolstering safety and security. It aims to protect critical infrastructure and sensitive data from mounting AI-driven cyber threats while recognizing the need for global collaboration in establishing cybersecurity standards,” said Lisa Plaggemier, executive director at the National Cybersecurity Alliance.

This executive order was issued as the European Union readies its own AI law.

As formal legislation works its way through the U.S. Congress, the Biden administration brought in several government stakeholders to introduce this legal framework. The order is expected to be fully implemented within one year.

"It’s more like a work plan than a rule, but it’s a good work plan. AI governance is a crowded space with the EU, the UK, the UN and others all putting out rules, but the [executive order] has real heft because it has U.S. industry largely behind it," said James Andrew Lewis, director of the Strategic Technologies Program at the Center for Strategic and International Studies, a Washington-based think tank.

Still, Congress is not bound by the order. "One big obstacle is Congress," Lewis said.

“EOs can only go so far and especially for issues like privacy, Congress needs to act. We’re unlikely to see that happen," Lewis told SIGNAL Media.

Significant legal drafting lies ahead as federal agencies implement the executive order and Capitol Hill continues its own process.

“The comprehensive resolution of AI-related cybersecurity concerns requires further regulatory development,” Plaggemier told SIGNAL Media.


The White House and the National Security Council will issue a National Security Memorandum to ensure that the country’s military and intelligence community use AI safely, ethically and effectively in their work, and will direct actions to counter adversaries’ military use of AI.

“Continued regulatory development, specific guidelines and global collaboration are essential to keep pace with this evolving technology,” Plaggemier said.

The Department of Homeland Security issued its own guidance on Monday to accompany the White House's order. One of its main focuses is the use of AI in critical infrastructure and cyberspace.