
Regulations, Rules and Laws

“We’ve been in the ‘Wild West’ with AI [artificial intelligence] for quite a while,” said Joel Krooswyk, federal chief technology officer at GitLab. But this status is changing through layers of regulation in the pipeline.

The highest-profile measure was the executive order on AI issued by the White House in October 2023. It has received wide attention and analysis.

The executive order aims to cover most aspects of AI activity, from critical infrastructure to applications in biology. Still, it should be noted that the text calls for action by the Secretary of Defense in at least 17 instances, placing AI at the center of national security concerns.

Shortly after the executive order’s publication, Deputy Secretary of Defense Kathleen Hicks gave a speech underscoring data integrity and the Joint Warfighting Cloud Capability, or JWCC, which brings together large cloud providers to supply computing, storage, infrastructure and advanced data services to meet the department’s demand.

AI development is crucial for Combined Joint All-Domain Command and Control, or CJADC2, and for autonomous weapon development, according to Hicks.

 

Three days after the executive order was published, the Pentagon released its data and AI adoption strategy.

This document urges embracing “the need for speed, agility, learning, and responsibility.”

The strategy aims to achieve:

• Battlespace awareness and understanding
• Adaptive force planning and application
• Fast, precise and resilient kill chains
• Resilient sustainment support
• Efficient enterprise business operations

The U.S. National Institute of Standards and Technology (NIST) created a framework for managing AI risks.

“AI risks—and benefits—can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed,” stated the NIST document.

According to this publication, the four key functions that help organizations mitigate risks are the following:

• Govern: This function includes implementing a culture of risk management, documenting procedures and assessing impacts.
• Map: AI creation and use involve several stages in the production chain whose participants do not necessarily share full information, or even awareness of one another. Approaching this holistically helps reveal where vulnerabilities may lie.
• Measure: This function “employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts,” the document states.
• Manage: This function involves dealing with potential risks and incidents by allocating resources to mapped risks.
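Purely as an illustrative sketch, and not part of the NIST framework itself, the four functions above could be modeled as stages of a minimal risk-register workflow: govern (documented procedures), map (recording risks), measure (scoring them) and manage (allocating limited resources to the highest-scoring risks). All names, scores and scales below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    # Measure: a mapped risk carries hypothetical likelihood/impact scores.
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # A simple quantitative benchmark: likelihood x impact.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    # Govern: documented procedures live alongside the risks they cover.
    procedures: list = field(default_factory=list)
    risks: list = field(default_factory=list)

    def map_risk(self, risk: Risk) -> None:
        # Map: record where in the production chain a vulnerability may lie.
        self.risks.append(risk)

    def manage(self, budget: int) -> list:
        # Manage: allocate a limited budget to the highest-scoring risks first.
        ranked = sorted(self.risks, key=lambda r: r.score, reverse=True)
        return ranked[:budget]

register = RiskRegister(procedures=["model-review checklist"])
register.map_risk(Risk("training-data poisoning", likelihood=2, impact=5))
register.map_risk(Risk("prompt injection", likelihood=4, impact=3))
top = register.manage(budget=1)
print([r.name for r in top])  # → ['prompt injection']
```

The point of the sketch is only the ordering of the functions: governance artifacts exist before risks are mapped, measurement turns mapped risks into comparable numbers, and management spends resources against those numbers.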