AI systems have quietly become the engine behind countless decisions and processes across Europe’s private and public sectors. As reliance on these systems deepens, so too does the responsibility to ensure they operate in line with fundamental rights; a responsibility that carries particular weight within the EU’s own Institutions, Bodies, Offices and Agencies (“EUIs”).
Recognising this, on 11 November the European Data Protection Supervisor (“EDPS”) issued its new Guidance for Risk Management of Artificial Intelligence Systems (the “Guidance”).
At the outset, the Guidance’s scope is deliberately narrow: it addresses EUIs only. More importantly, the EDPS includes an express disclaimer, noting that the document “does not constitute and shall not be relied upon as a set of compliance guidelines” and that it provides “non-exhaustive” technical measures.
This disclaimer aside, the document provides practical value. Built around the risk management principles of ISO 31000:2018, the Guidance structures AI-related data protection risks into a systematic cycle: 1. identification; 2. analysis; 3. evaluation; and 4. treatment. Rather than imposing rigid prescriptions, however, it creates a diagnostic toolkit for EUIs to apply.
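To make the four-step cycle concrete, the sketch below models a hypothetical risk register in Python. The fields, the likelihood-times-impact scoring and the sample entries are illustrative choices of ours, not anything prescribed by the Guidance or ISO 31000:2018.

```python
from dataclasses import dataclass, field
from enum import Enum


class Step(Enum):
    IDENTIFICATION = 1
    ANALYSIS = 2
    EVALUATION = 3
    TREATMENT = 4


@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk register."""
    description: str                   # e.g. "training data not representative"
    likelihood: int                    # 1 (rare) .. 5 (almost certain)
    impact: int                        # 1 (negligible) .. 5 (severe)
    step: Step = Step.IDENTIFICATION   # where the risk currently sits in the cycle
    treatments: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, one common way to rank risks
        return self.likelihood * self.impact


register = [
    RiskEntry("biased training data", likelihood=4, impact=5),
    RiskEntry("model drift in production", likelihood=3, impact=4),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.description}: score {risk.score}")
```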
Comprehension of the AI lifecycle serves as the linchpin. Traditional software development moves in relatively linear phases; AI systems do not. The Guidance cycles through inception and problem definition, data acquisition and preparation, development, verification and validation, deployment, operation and monitoring, continuous validation, re-evaluation and, ultimately, retirement. The key idea is that risks manifest differently at each stage: data quality issues surface during acquisition, overfitting emerges in development, and drift undermines accuracy during operation. By mapping risks to these phases, the Guidance forces EUIs to ask not merely “what risks exist?” but “when and how do they emerge?”.
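A minimal sketch of that mapping exercise is below: lifecycle stage names follow the Guidance, while the risk lists are example entries we have chosen for illustration, not an exhaustive catalogue.

```python
# Illustrative mapping of AI lifecycle stages to the risks they typically surface.
LIFECYCLE_RISKS: dict[str, list[str]] = {
    "data acquisition and preparation": ["poor data quality", "unrepresentative sampling"],
    "development":                      ["overfitting", "bias introduced by feature choices"],
    "operation and monitoring":         ["model drift", "degrading accuracy"],
    "retirement":                       ["orphaned personal data in backups"],
}


def risks_for(stage: str) -> list[str]:
    """Return the example risks recorded for a given lifecycle stage."""
    return LIFECYCLE_RISKS.get(stage, [])


print(risks_for("operation and monitoring"))  # ['model drift', 'degrading accuracy']
```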
The Guidance identifies interpretability and explainability as “cross-cutting” prerequisites. Here lies a nuance often lost in policy debates: it carefully distinguishes interpretability (the degree to which natural persons can understand a model’s logic) from explainability (the ability to justify specific predictions).
Linear models are inherently interpretable: although their outputs remain probabilistic in nature, the resulting outcomes can be regarded as expected and readily explained. Deep neural networks, by contrast, rarely are, but they may be made explainable through techniques such as Local Interpretable Model-Agnostic Explanations (“LIME”) and Shapley Additive Explanations (“SHAP”), which help illuminate decision pathways. The point is that interpretability and explainability do not suffice alone: understanding how an AI system works says nothing about whether it produces fair outcomes, and, conversely, explaining a decision clarifies its reasoning but not whether the reasoning was sound.
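For readers who want to see what post-hoc explainability looks like in practice, the sketch below uses the open-source shap package with a scikit-learn tree model. The dataset and model are placeholders of our choosing; the Guidance does not prescribe any particular tooling.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque (non-interpretable) ensemble model on a public dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # SHAP values for tree ensembles
shap_values = explainer.shap_values(X.iloc[:5])  # attribution per feature, per prediction

# Each row attributes one prediction to individual input features, helping a
# controller justify a specific output even though the model itself is opaque.
print(dict(zip(X.columns, shap_values[0].round(2))))
```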
The Guidance then unpacks five data protection principles through a risk lens.

Fairness demands that EUIs identify bias, whether it stems from poor training data, unrepresentative populations or simply design choices, and implement countermeasures. The document suggests technical responses such as data audits, bias mitigation techniques, feature engineering, and human-in-the-loop oversight.
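A data audit can start very simply. The sketch below compares favourable-outcome rates across a protected attribute; the column names, sample data and tolerance threshold are illustrative only, and the Guidance does not mandate any specific fairness metric.

```python
import pandas as pd

# Toy audit data: model decisions (1 = favourable outcome) per protected group
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 1, 0, 0, 0, 1],
})

rates = df.groupby("group")["prediction"].mean()   # favourable-outcome rate per group
disparity = rates.max() - rates.min()

print(rates)
if disparity > 0.2:  # illustrative tolerance, to be set per use case
    print(f"Warning: favourable-outcome rates differ by {disparity:.0%} across groups")
```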
Accuracy receives parallel treatment, distinguishing legal accuracy (i.e. whether data reflects reality) from statistical accuracy (i.e. whether models perform well). The Guidance flags two core risks: inaccurate outputs and unclear provider information, a pragmatic acknowledgment that many entities, not just EUIs, procure rather than build AI systems, and that vendor opacity remains endemic.
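Statistical accuracy, at least, can be verified against a held-out set; this is the kind of evidence an EUI might request from a provider. The metrics and toy labels below are illustrative, not taken from the Guidance, and they say nothing about legal accuracy, which is whether the labels themselves reflect reality.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth labels (their legal accuracy is
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # a separate, non-statistical question)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```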
Data minimisation addresses a fundamental tension within AI development: the systems’ insatiable appetite for large datasets versus the data protection principle that only necessary information should be processed. This creates a genuine conflict. EUIs procuring or developing AI systems must balance the need for sufficient, representative data against the imperative not to collect indiscriminately.
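One practical way to lean towards minimisation is to drop input features that contribute little to the model, so that less personal data is processed at all. The selector, the number of retained features and the dataset below are illustrative choices of ours, not recommendations from the Guidance.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Keep only the 5 most informative features; everything else need not be collected
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
X_minimal = X.loc[:, selector.get_support()]

print(f"kept {X_minimal.shape[1]} of {X.shape[1]} features:",
      list(X_minimal.columns))
```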
Security examines three specific threats: training data reconstruction attacks, where adversaries reverse-engineer sensitive information from model outputs; breaches in storage, which exploit the large attack surface of training datasets; and API exploitation through improperly secured interfaces. The Guidance proposes layered defences (data minimisation, encryption, synthetic data, security audits, secure API design and proactive patching) but emphasises that no single measure suffices.
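As a flavour of just one of those layers, the sketch below encrypts a training dataset at rest using the Python cryptography package; the file name is hypothetical, and key management, access control and the other measures listed above are deliberately out of scope here.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, held in a key management service
fernet = Fernet(key)

with open("training_data.csv", "rb") as f:           # hypothetical dataset file
    ciphertext = fernet.encrypt(f.read())
with open("training_data.csv.enc", "wb") as f:       # encrypted copy for storage
    f.write(ciphertext)

# Decryption is only possible with the key, shrinking the attack surface of
# datasets sitting in storage.
plaintext = fernet.decrypt(ciphertext)
```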
Data subject rights, such as access, rectification, erasure and portability, remain fundamental but are immensely difficult to enforce in AI systems. Identifying personal data is straightforward in structured datasets but complex when the data is embedded within the parameters of neural networks or large language models. Precisely removing an individual’s data from these models requires considerable technical expertise and may be impossible without impairing overall functionality. The Guidance recommends detailed metadata management, tools for data retrieval, and emerging machine unlearning techniques to address these challenges. However, exact unlearning remains costly and approximate methods lack full guarantees, leaving a grey area between legal rights and technical feasibility.
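Metadata management is the least exotic of those recommendations and can begin with simple lineage records: noting which subjects’ data fed which model version, so an access or erasure request can at least be traced. The identifiers and structure below are illustrative; the Guidance does not prescribe a particular scheme.

```python
from collections import defaultdict

training_lineage: dict[str, set[str]] = defaultdict(set)   # model version -> subject IDs


def record_training_run(model_version: str, subject_ids: list[str]) -> None:
    """Log which data subjects' records were used to train a model version."""
    training_lineage[model_version].update(subject_ids)


def models_affected_by(subject_id: str) -> list[str]:
    """List model versions that may need retraining or unlearning after an erasure request."""
    return [version for version, ids in training_lineage.items() if subject_id in ids]


record_training_run("v1.2", ["subj-001", "subj-002"])
record_training_run("v1.3", ["subj-002", "subj-003"])
print(models_affected_by("subj-002"))  # ['v1.2', 'v1.3']
```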
Across these risks, the Guidance proposes non-exhaustive technical measures. Its annexes provide practical metrics and checklists to help translate abstract principles into actionable steps, framing a pragmatic approach for EUIs to manage AI risks responsibly.
--
AI Act compliance? AI related disputes? Do not hesitate to contact us on info@gtg.com.mt for more information.
Authors: Dr Terence Cassar & Dr J.J. Galea