Banned AI

AI is moving. Fast. Too fast for some. Too recklessly for others. In finance, it’s approving loans and denying mortgages in the blink of an algorithm. In gambling, it’s personalizing experiences so well it knows your weaknesses better than you do. In healthcare, it’s making life-or-death recommendations with zero human oversight.

And for a while, no one stopped to ask: Where do we draw the line? Now, the EU AI Act does exactly that. It came into force in August 2024, regulating activities across the full AI lifecycle to ensure fair and ethical usage, safeguard fundamental rights and create a transparent AI ecosystem. Although most of its provisions only become generally applicable in August 2026, the Act is already shaping the AI landscape: the first tranche of key provisions, including the prohibitions discussed below, took effect on 2 February 2025. Not with soft suggestions. Not with vague guidelines. But with red lines so sharp they cut through the noise.

Certain AI systems are not just problematic, they're now banned. Not regulated. Not monitored. Banned. Full stop. Let's break it down: not just which AI is banned, but why - and what happens if companies ignore the rules.

The EU AI Act categorizes AI by risk. Most systems fall into the high-risk (heavily regulated) or limited- and minimal-risk (lighter oversight) categories. But then there's a category so dangerous, so ethically corrosive, that it lands in a league of its own: unacceptable-risk AI. This is the AI that crosses the line. That shapes decisions without consent, exploits vulnerabilities, or silently judges people without accountability.

    1. AI That Manipulates You Without You Knowing (Article 5(1)(a)): AI that distorts decisions using subliminal tricks, leading to financial loss, addiction, or emotional harm. Example: A gambling platform that knows you're losing control and, instead of warning you, nudges you to keep playing. A finance app that uses subtle psychological prompts to push users into high-risk trades.
    2. AI That Exploits Vulnerabilities (Article 5(1)(b)): AI that takes advantage of people's weaknesses, whether due to age, disability, or financial distress. Example: A loan AI that deliberately targets low-income users with predatory interest rates. A healthcare chatbot that steers elderly users toward unnecessary, expensive treatments.
    3. AI That Judges Your Worth - Social Scoring (Article 5(1)(c)): AI that assigns you a reputation score based on behavior, finances, or social circles, then uses it to limit your opportunities. Example: A banking AI that lowers your credit score based on who your friends are, what you buy, or where you live.
    4. AI That Watches You in Public Without Consent - Biometric Surveillance (Article 5(1)(e)-(h)): AI-driven facial recognition and biometric analysis tracking people in public spaces without clear legal justification. Example: Casinos secretly using AI-powered facial analysis to track high-risk gamblers. Banks profiling customers as financial risks just by scanning their faces.
    5. AI That Predicts Crime Without Evidence - Predictive Policing (Article 5(1)(d)): AI that flags people as criminals or financial risks based on data profiling rather than hard evidence. Example: A fraud detection AI that denies financial services to an entire demographic group just because they fit a statistical pattern.
    6. AI That Lies About Being Human - Deepfake Deception (caught by the manipulation ban in Article 5(1)(a) where it distorts behaviour, with disclosure duties under Article 50): AI systems pretending to be real people, manipulating politics, markets, or personal decisions. Example: AI-generated customer service reps designed to trick users into thinking they're human, making them trust decisions they otherwise wouldn't.

Here are some quick suggestions on how the players in the AI arena can plan and stay ahead (a short illustrative sketch in code follows the table).

Developers: Build transparency into AI decision-making; audit training data to eliminate bias; ensure AI has human oversight.
Businesses using AI: Conduct regular compliance audits; ensure high-risk AI is explainable and fair; provide opt-out and appeal mechanisms.
Consumers: Know your rights regarding AI decisions; report unethical AI practices; be cautious of overly personalised AI interactions.
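For developers and businesses, "human oversight" and "appeal mechanisms" can start as simply as recording every automated decision and routing the adverse ones to a person before they take effect. Below is a minimal sketch in Python; the Decision record and the needs_human_review rule are hypothetical illustrations of the pattern, not requirements prescribed by the Act:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Decision:
        subject_id: str
        outcome: str              # e.g. "loan_approved" / "loan_denied"
        reasons: list[str]        # plain-language factors, for explainability
        model_version: str        # so any decision can be traced and audited
        reviewed_by_human: bool = False
        decided_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def needs_human_review(d: Decision) -> bool:
        # Route adverse or unexplained outcomes to a person before they take effect.
        return d.outcome.endswith("denied") or not d.reasons

    d = Decision("applicant-42", "loan_denied",
                 ["income below threshold"], "credit-model-v3")
    if needs_human_review(d):
        d.reviewed_by_human = True  # a person confirms or overturns the outcome

The point of the pattern is the paper trail: every automated outcome carries its reasons and model version, so a consumer's appeal can actually be answered.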

 

The cost of breaking these laws is high: aside from reputational damage and civil or contractual liability, there are hefty fines (a quick worked example follows the table):

Use of a prohibited AI practice (severe breach): up to €35 million or 7% of global annual turnover, whichever is higher.
Non-compliance with most other obligations under the Act, including transparency rules: up to €15 million or 3% of turnover.
Supplying incorrect, incomplete or misleading information to regulators: up to €7.5 million or 1% of turnover.
Persistent non-compliance: the offending AI system can be restricted, withdrawn or recalled from the EU market.
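To see how these ceilings scale, here is a minimal illustrative calculation. It assumes the Act's "whichever is higher" rule for undertakings; the figures are the caps from the table above, not a prediction of any actual fine:

    def max_fine_cap(turnover_eur: float, fixed_cap_eur: float,
                     pct_cap: float) -> float:
        # For undertakings, the ceiling is the fixed cap or the percentage
        # of worldwide annual turnover, whichever is higher.
        return max(fixed_cap_eur, pct_cap * turnover_eur)

    # A company with EUR 2 billion in global turnover deploying a prohibited
    # system: max(EUR 35m, 7% of EUR 2bn) = EUR 140m ceiling.
    print(max_fine_cap(2_000_000_000, 35_000_000, 0.07))  # 140000000.0

In other words, for large firms the percentage cap, not the fixed figure, is what bites.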

 

AI isn’t just a tool. It’s a force, one that can shape economies, societies, and individual lives. The EU AI Act isn’t about slowing AI down - it’s about keeping AI in check. Because the best AI doesn’t just work. It works for us. Not against us.

Feel free to contact us on info@gtg.com.mt for any assistance in planning your AI compliance.

Author: Dr Ian Gauci

 

You may also wish to read:

Transparency and Human-Centric AI: The EU AI Act and Literacy Obligations

 

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.