The European Union has told the world that its AI Act is about fundamental rights. This represents a notable shift, as such rights have traditionally protected individuals against state power rather than governing relationships between private parties. Read the small print and you find a very different story.
Yiyang Mei and Matthew Sag argue that the Union’s AI settlement uses the language of rights to do the work of risk management. Their comparative reading across domains shows a pattern. Rights rhetoric dresses up administrative tools that coordinate twenty-seven legal orders and many regulators. The point is not to dismiss the Union. It is to understand the Act on its own institutional terms. Once we do that, the text becomes legible.
Begin with the hard core of the instrument. Providers of high-risk systems must maintain a risk management system that identifies and evaluates risks, applies controls, and then accepts what the Act itself calls residual risk, so long as it is judged acceptable.
This is the idiom of product safety, not a bill of rights. Deployers carry their own set of obligations. They must use systems in line with instructions, monitor performance, and keep logs. Only a narrow class of deployers must carry out a Fundamental Rights Impact Assessment before use. That duty falls on public bodies and private providers of public services for most high-risk use cases, and on all deployers for two specific Annex III categories. Even then, the instrument asks for a summary to be filed in the Union database rather than a full adversarial rights test.
Rights supervision operates through administrative intermediaries rather than direct enforcement mechanisms. National authorities that protect fundamental rights are given powers to request the technical file and to ask market surveillance authorities to organise testing. These powers create information-gathering rights and testing procedures. They do not, however, establish a standalone rights tribunal with authority to halt deployment based on fundamental rights concerns alone. I set out the same concern in my earlier post, ‘Secrecy Without Oversight: How Trade Secrets Could Potentially Undermine the AI Act's Transparency Mandate’, where I explained how the Act can dress administrative access in the language of openness without delivering true oversight.
The real engine room is standards and conformity. Harmonised standards and, failing those, Commission common specifications are the preferred route to show compliance. Certificates can be granted and withdrawn. This is the New Legislative Framework transposed into software. It feels closer to ISO and CE marking than to a constitutional charter.
Contrast the Fundamental Rights Impact Assessment (FRIA) with the Data Protection Impact Assessment (DPIA) under the GDPR. The GDPR mandates a DPIA when processing is likely to result in a high risk to individuals. If the risk remains high even after mitigations, the controller must consult the supervisory authority before going ahead.
This creates a meaningful pause in the process, a moment where an external authority can halt deployment if risks are unacceptable. It operates within a legal order that treats data protection as a fundamental right, albeit not an absolute one. As the Court of Justice emphasised in Pankki S, recital 4 tells us to balance rights against other values, but the institutional design still places a supervisory authority between a risky plan and execution. The AI Act’s FRIA, by design, is woven into deployment practice and documentation. It is far less of a red light.
Now set the Act next to two familiar EU regimes that also speak the language of risk. First, anti-money laundering law. The Fourth and Fifth Anti-Money Laundering Directives oblige banks and other entities to carry out business-wide risk assessments, keep them under review, and have them approved by the management body. Internal policies and controls, continuous monitoring, and reporting of suspicious activity to financial intelligence units follow as routine. Board accountability is explicit. This is operational compliance built on documented risk judgements. It is not framed as rights protection and it does not pretend to be. Second, environmental law.
The Environmental Impact Assessment Directive requires assessment before development consent is given and folds in public participation. The Treaty on the Functioning of the European Union establishes a precautionary approach for environmental policy. Article 191(2) TFEU makes clear that when faced with scientific uncertainty, action must err on the side of prevention. The default is earlier scrutiny and transparent justification, not reactive remedies. This stands in marked contrast to product safety regimes built around post-market surveillance and periodic quality checks, where oversight is triggered after market entry rather than before harm arises.
Seen in that light, the AI Act sits closer to the AML model than to EIA. It asks providers to manage risk within their own quality systems—to document, to register, and to accept residual risk within bounds—while supervisors and market surveillance authorities check files and can trigger testing. It is not an environmental style instrument that forces a decision point in public before the system is put to use.
The stated ambition is to safeguard fundamental rights. The texture of the text delivers something else. The obligations that bite hardest are those you can audit: risk management plans, quality systems, harmonised standards applied in full, technical documentation in Annex IV form, registration in the Union database, and conformity assessments. Fundamental rights language appears at each turn, but the enforcement pathways run through market surveillance and standards rather than through an authority that can say you simply may not deploy. The Act even spells out how compliant systems that still present a risk should be handled, which is classic product safety logic.
There are moments when the architecture does turn toward rights. The FRIA duty for public service contexts is one of them. The ability of fundamental rights authorities to demand documentation and to draw in market surveillance to test is another. Yet neither amounts to a free-standing merits review in which an individual right can defeat a system that otherwise passes its technical audit. The centre of gravity remains administrative.
If you take the Act at its word, you build a rights programme and wait for detailed jurisprudence to arrive. If you read it as an administrative instrument, you build an evidence machine. You invest in quality management, traceability, post-market monitoring, and the ability to explain residual risk in a way a market surveillance authority will accept. You treat the FRIA as a decision record that fits inside your risk system rather than as a separate moral tribunal. You watch the standards process, because that is where compliance becomes visible.
The July 2025 General-Purpose AI Code of Practice confirms that the Commission itself wants compliance to flow along these channels. The Code sits as a voluntary tool to help model providers show that they meet their legal duties, and it promises a lower administrative burden and greater legal certainty to those who use it. That is the language of administration, not of inalienable rights.
Many practitioners would feel an instinct to map the FRIA onto the DPIA. This instinct, while understandable, obscures important distinctions between the two regimes. A DPIA has a clear chain: identify a high risk to individuals’ rights and freedoms; if mitigation still leaves the risk high, take your plan to the regulator before you start. The AI Act’s FRIA is narrower in scope, attaches to certain deployers and certain use cases, is filed in summary form, and sits next to product-style obligations such as CE marking and a declaration of conformity. In practice, the FRIA records how your risk management system addressed foreseeable harms, rather than posing an external threshold question of permissibility.
The AML regime is candid about what it is. It asks boards to own the risk assessment, empowers compliance functions, and accepts that it will generate continuous reporting and record keeping. It is unapologetically administrative. The AI Act borrows much of that toolkit but wraps it in rights language. The result can mislead newcomers into expecting a principled veto in favour of individuals. What they find, aside from the specific prohibitions on banned AI practices, is a documentation and testing regime that lives or dies on dossiers and standards.
EIA, on the other hand, makes you do the thinking before consent, and it makes you do it in public. The precautionary principle sets the tone. The AI Act largely leaves the thinking inside the provider's quality system and inside standards committees. The public sees a summary registration and the outputs of enforcement.
From this angle, the Act looks more like a risk and assurance statute. Effective compliance will require robust governance models supported by active risk management systems. These structures must align with the requirements of Article 9 and accept that this regulatory settlement involves justifying residual risk rather than eliminating all possible harms.
There is a very English honesty in saying what a thing is and not what we wish it to be. The AI Act is less Magna Carta and more a careful ledger—part safety manual, part standards catalogue—with a conscience never far away. If we understand that, we can hold the Union to its public promise of dignity and fairness using the tools it actually gives us, and we can avoid the perennial European mistake of assuming our rules are universal simply because we wrote them down with feeling.
Ian Gauci is the Managing Partner at GTG, Malta.
The author wishes to acknowledge the foundational work of Yiyang Mei and Matthew Sag in developing the analytical framework that underpins this article.
This article was first published on the Oxford Business Law Blog on 8 September 2025.