Musings on the AI Act
Following three lengthy days of talks, the resolution of the AI Act Trilogue marked a crucial moment for Europe and for the global artificial intelligence (AI) community.
Members of the European Parliament finally reached a provisional agreement with the Council on the AI Act. The act aims to guarantee the safety of AI in Europe, uphold fundamental rights and democracy, and ensure environmental sustainability, while fostering the growth, innovation and expansion of European businesses.
Needless to say, this has always been a big ask, and whether it truly achieves this finely balanced equilibrium, only time will tell. What it has achieved immediately, in my opinion, is to deliver a robust message about the European Union’s (EU’s) continued effectiveness as a significant authority in international technology regulation.
Some of the great achievements of the AI Act are as follows: it provides safeguards for general-purpose AI; allows legal limits on the use of biometric identification systems by law enforcement; bans social scoring and manipulative AI; and gives consumers the right to file complaints. It also introduces a revised system of governance for AI at the EU level in the form of an AI Office, which has some enforcement powers. And it strengthens the protection of citizens’ rights by obliging deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting such a system into use.
The EU approach is unusual in that it seeks a single overarching piece of legislation that sits across all sectors of the economy. This differs from many jurisdictions, where each sector typically develops its own legislation to regulate AI. That is not to say that individual pieces of legislation, guidelines, codes of conduct and best practices will not appear in Europe in the future across various industries and sectors to support and complement the AI Act, but it does mean that the AI Act is far more complex and all-encompassing than most legislation.
What is important in Europe’s approach is that it draws on previous legislation, such as product safety legislation and policy. This approach recognizes that AI is not just one technology; it has multiple uses and applications and thus far wider implications for society. The EU has therefore taken a risk-based approach to legislating AI, recognizing that different types of systems in different contexts present different types of risk.
The only reservation about this provisional agreement concerns the regulation of foundation models (FMs). Admittedly, the act makes great progress in regulating these models; however, whether this progress will be sufficient is still unclear. The provisional agreement stipulates that foundation models must adhere to designated transparency requirements before being introduced to the market.
A more rigorous framework has been instituted for “high-impact” foundation models. These high-impact models, characterized by extensive data training, advanced complexity and capabilities surpassing the norm, possess the potential to spread systemic risks down the value chain, hence the need for tougher controls. Very few existing FMs, however, fall into this high-impact category, which, in my opinion, leaves many FMs lightly regulated even though they still have the potential to cause significant harm.
In summary, the AI Act has been two and a half years in the making, and remarkably, we are still in the early stages of understanding its full impact. One thing that is certain is that it is seen as hugely significant at both the EU and the global level. The EU is not alone in trying to work out how best to regulate AI systems and provide appropriate oversight while facilitating innovation, keeping people safe and upholding fundamental rights. It is, however, the first to reach a provisional political agreement, something that all EU citizens can be satisfied with.
Maria Moloney is chief privacy officer for the European COST Action in "Fintech and Artificial Intelligence in Finance" and senior privacy researcher and consultant at PrivacyEngine. Her role extends to policy-making as the vice chair of the CEDPO Working Group in AI, notably contributing to the EU's AI Act. Her academic background in computer science and management underpins her expertise in technology and privacy, complemented by her influential research on ethical AI, data protection policy, and the nexus of AI and privacy at the European policy level.