AI is moving. Fast. Too fast for some. Too recklessly for others. In finance, it’s approving loans and denying mortgages in the blink of an algorithm. In gambling, it’s personalizing experiences so well it knows your weaknesses better than you do. In healthcare, it’s making life-or-death recommendations with zero human oversight.
And for a while, no one stopped to ask: Where do we draw the line? Now, the EU AI Act does exactly that. It came into force in August 2024, regulating activities across the full AI lifecycle to ensure fair and ethical usage, safeguard fundamental rights and create a transparent AI ecosystem. Even though most of its provisions will only become generally applicable in August 2026, it is already shaping the AI landscape: the first tranche of key provisions took effect on 2 February 2025. Not with soft suggestions. Not with vague guidelines. But with red lines so sharp they cut through the noise.
Certain AI systems are not just problematic, they’re now banned. Not regulated. Not monitored. Banned. Full stop. Let’s break it down. Not just what AI is banned, but why - and what happens if companies ignore the rules.
The EU AI Act categorizes AI by risk. Most AI falls into the high-risk (heavily regulated), limited-risk (transparency obligations) or minimal-risk (little oversight) categories. But then there's a category so dangerous, so ethically corrosive, that it lands in a league of its own: unacceptable-risk AI. This is the AI that crosses the line. That shapes decisions without consent, exploits vulnerabilities, or silently judges people without accountability.
Here are some quick suggestions on how the players in the AI arena can plan and stay ahead.
Stakeholder | Recommended Actions |
Developers | Build transparency into AI decision-making. Audit training data to eliminate bias. Ensure AI has human oversight. |
Businesses Using AI | Conduct regular compliance audits. Ensure high-risk AI is explainable and fair. Provide opt-out and appeal mechanisms. |
Consumers | Know your rights regarding AI decisions. Report unethical AI practices. Be cautious of overly personalised AI interactions. |
The cost of breaking these laws is high: aside from reputational loss and civil and contractual liability, there are hefty fines:
Violation Type | Penalty |
Severe Breach of the AI Act | Up to €35 million or 7% of global revenue |
Non-compliance with other obligations under the Act | Up to €15 million or 3% of global revenue |
Supplying incorrect, incomplete or misleading information | Up to €7.5 million or 1% of global revenue |
Repeated violations | Regulatory blacklisting from the EU market |
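For undertakings, the Act caps fines at the fixed amount or the percentage of total worldwide annual turnover, whichever is higher. As an illustrative sketch only (not legal advice, and actual fines are set case by case by regulators), that "whichever is higher" rule works like this:

```python
def fine_ceiling(fixed_cap_eur: float, pct: float, global_revenue_eur: float) -> float:
    """Upper bound of an EU AI Act administrative fine for an undertaking.

    The ceiling is the fixed amount or the given percentage of total
    worldwide annual turnover, whichever is HIGHER. Figures mirror the
    penalty table above; this is a simplified illustration.
    """
    return max(fixed_cap_eur, pct * global_revenue_eur)

# A company with €2 billion global revenue committing a severe breach:
# 7% of €2bn = €140m, which exceeds the €35m fixed cap.
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # 140000000.0

# A smaller firm with €100m revenue: 7% = €7m, so the €35m cap applies.
print(fine_ceiling(35_000_000, 0.07, 100_000_000))  # 35000000.0
```

In other words, large companies cannot treat the fixed amounts as the worst case: the percentage-of-turnover limb scales the exposure with their size.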
AI isn’t just a tool. It’s a force, one that can shape economies, societies, and individual lives. The EU AI Act isn’t about slowing AI down - it’s about keeping AI in check. Because the best AI doesn’t just work. It works for us. Not against us.
Feel free to contact us on info@gtg.com.mt for any assistance in planning your AI compliance.
Author: Dr Ian Gauci
You may also wish to read:
Transparency and Human-Centric AI: The EU AI Act and Literacy Obligations