Disruptive By Design: Looking Into the Black Box: Takeaways From the EU’s New AI Act
In June, the European Union passed a landmark piece of legislation, the Artificial Intelligence (AI) Act, which carries significant implications for how AI is leveraged. And while the legislation includes exclusions for national security, military and law enforcement purposes, what about the current moment spurred the EU to define better guardrails?
The legislation centers on sorting technologies into risk categories: "unacceptable risk" (those systems that are banned outright) and "high risk" (those systems that may be less trusted or present higher bias). It clarifies that the AI systems in question rely on machine learning, logic-based and knowledge-based approaches.
The goal of the legislation is not purely to provide governance but to rebalance equity by focusing on privacy and trust. If there is a negative impact on the federal sector, it rests on the assumption that this legislation will tamp down innovation and shrink the pool of dual-use technologies available.
Because of the speed at which this technology is being adopted, today’s AI is quickly and quietly shifting from assistive to authoritative—even though the impact of these systems can be broad and detrimental to some groups.
The intelligence community (IC), Department of Defense (DoD) and many federal agencies have been working closely with sophisticated algorithms that support automation and analysis for some time, so discussions and concerns around black box opacity are not new to those communities. But legacy thinking had been focused more on auditing than on understanding. We need to devise new methods to validate how these algorithms function and what underlying assumptions (including training data) drive the outcomes. We need to explore not just what an algorithm does in its current state but how it can be influenced to produce a different outcome. And this new legislation does a brilliant thing: it does not challenge or debate issues of bias. Rather, it creates a new playbook for how to responsibly use AI based on inherently biased algorithms.
Because these algorithms can potentially affect people’s livelihoods and outcomes, we must ensure we are not solely dependent on the algorithm—hence the banning of specific technologies or use cases. The debate should shift from the ban to how a technology company can rebuild trust and combat these biases to have a viable commercial strategy.
Overall, the impact of this legislation should be minimal across the DoD, IC and broader government sectors. But if we view it as a surrogate for how we should approach the governance and insertion of machine learning-based AI, we can explore concepts that allow our technology providers to advance AI in more meaningful ways—all while living up to the standards set by these new laws. Compliance with the EU’s AI Act would inherently provide the United States with more trustworthy systems, underscoring the existing mission to protect the privacy and rights of citizens.
Perhaps this legislation gives us a backdrop to rethink internal processes and control systems. This includes the classification markings and processes along with identity and access controls utilized in conjunction with our systems. For example, is there a metadata standard that needs to evolve to manage and articulate where and how AI was utilized? Is there a new set of controls that should be developed to provide guidelines on how assumptions were tested when an analyst reviews data derived from AI?
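To make the metadata question concrete, here is a minimal sketch, in Python, of what an AI-provenance record accompanying an analytic product might capture. The AIProvenanceRecord class and every field name in it are illustrative assumptions for discussion, not an existing DoD, IC or EU standard.

```python
# A minimal sketch of what an AI-usage metadata record might look like.
# The schema and field names are hypothetical illustrations, not an
# existing government or EU standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIProvenanceRecord:
    """Hypothetical metadata attached to an analytic product that used AI."""
    product_id: str                 # identifier of the derived report or data product
    model_name: str                 # which model or algorithm produced the output
    model_version: str              # exact version, so results can be reproduced
    training_data_sources: list[str] = field(default_factory=list)  # known lineage of training data
    assumptions_tested: list[str] = field(default_factory=list)     # how key assumptions were validated
    human_reviewed: bool = False    # did an analyst review the AI-derived output?
    classification_marking: str = "UNCLASSIFIED"
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the record so it can travel with the product it describes."""
        return json.dumps(asdict(self), indent=2)


# Example: tagging a report so a downstream analyst can see where and how AI was used.
record = AIProvenanceRecord(
    product_id="report-001",
    model_name="entity-resolution-model",
    model_version="3.2.1",
    training_data_sources=["open-source corpus A", "licensed dataset B"],
    assumptions_tested=["checked label balance across demographic groups"],
    human_reviewed=True,
)
print(record.to_json())
```

The point of such a record is not the particular fields but the principle: if AI-derived data carries its own lineage and review history, an analyst downstream can judge how much weight to give it.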
Moreover, a potential solution along these lines already exists in our playbook: leverage public-private partnerships. But do so with the private sector in mind. The only way to ultimately understand and unpack the black box, or understand and counter bias, is to play with more data.
Chitra Sivanandam is co-founder of Rohirrim, a generative artificial intelligence company based in Reston, Virginia. Sivanandam, a Wharton MBA, is a former Emerging Leader in AFCEA’s Emerging Professionals in the Intelligence Community, or EPIC Committee.