The draft regulation’s aims and intent are captured primarily in the introduction, through Recital 3:

“A legal framework setting up a European approach on artificial intelligence is needed to foster the development and uptake of artificial intelligence that meets a high level of protection of public interests, in particular the health, safety and fundamental rights and freedoms of persons as recognised and protected by Union law. This Regulation aims to improve the functioning of the internal market by creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union.”

Thus, the draft Regulation aims in primis to foster the development and uptake of AI through a supposedly agile framework. This aim must, however, be weighed against the fierce global competition for AI advantage, a race in which Europe is already struggling: the USA leads, China is second, and the EU lags behind. At the heart of this race is the ability of people and firms to engage in data-driven innovation. It is felt that the draft regulation, if adopted as is, would harm Europe’s prospects, as it may compound the serious competitive disadvantage the EU already faces in sourcing AI-related investment opportunities.

In its attempt to address concerns regarding AI, the draft regulation contains important tools for the protection of end-users and society at large, but takes an approach which, it is felt, raises a number of concerns. According to the draft regulation, the following types of AI will be prohibited:

  • AI used for “indiscriminate surveillance,” including systems that directly track individuals in physical environments or aggregate data from other sources
  • AI systems that create social credit scores, i.e. that judge a person’s trustworthiness based on social behaviour or predicted personality traits

The draft regulation also introduces, among other things:

  • A classification of an AI system as high-risk
  • Mandatory requirements for high-risk AI systems placed on the Union market or otherwise put into service
  • Assessments of high-risk systems before they are put into service, including making sure these systems are explicable to human overseers and that they are trained on “high quality” datasets tested for bias
  • Special authorization for using “remote biometric identification systems” like facial recognition in public spaces
  • Notifications required when people are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”
  • An ad hoc punitive regime under which fines of up to 4% of global annual turnover (or €20M, if greater) can be imposed for a set of prohibited use-cases
  • The creation of a “European Artificial Intelligence Board,” consisting of representatives from each Member State, to help the Commission decide which AI systems count as “high-risk” and to recommend changes to the prohibitions

The proposed approach regulates a particular technology rather than addressing its application and its impact on the rights of individuals and society. Even more concerning, in doing so the regulation attempts to define the technology despite outright admitting that “there is no universally agreed definition of artificial intelligence” (Recital 13). Furthermore, the concern which this regulation attempts to address goes beyond AI: the failure of critical systems must be addressed whether or not it involves the use of AI.

Whilst defining AI might still be deemed key for any purported regulation, any definition should avoid intended or unintended biases pegged to a specific technology. To this end, as a starting point before any definition is crafted, one should focus on the various types and uses of AI and how they differ in use and scope. This is also in line with the intended outcome of draft Article 1 of the same regulation, which states that a “… key tenet of the legal framework is that it should not focus on the technology as such. Instead, the legal framework should focus on the concrete utilisation of the technology in the form of AI systems (and the risks potentially deriving therefrom).”

The above exercise is of paramount importance, as it ties directly into the suggested ex-ante conformity assessment frameworks. It is strongly felt that the latter, within the context of the suggested regulatory capture of AI (as presently defined), are neither desirable nor fair, as they will ultimately stifle innovation.

It is also felt that various definitions in the draft regulation are very open to interpretation. On analysis, the draft regulation is also mired in vagueness and loopholes, which detract from the intended aim of an instrument of legal harmonisation: greater legal certainty and transparency. This could also blunt the principles of accountability, responsibility and transparency set out in the same draft.

Regulation should focus on addressing the use of technology in critical applications. In this respect, the draft regulation rightfully addresses critical systems, which we fully support. It is suggested that it would be preferable to focus on a horizontal framework from which several specific vertical interventions would hang. These verticals would concentrate on behaviours which should be prevented and would be designed to tackle certain high-risk uses, whilst the horizontal approach would deal with matters such as product safety, consumer protection, liability issues and guidance to developers, as set out in the assessment list developed in the HLEG Ethical Guidelines.

In this respect, the first, and perhaps most important, recommendation of the AI HLEG is to adopt a risk-based policy framework. This implies that accountability, responsibility, transparency and liability elements be captured in any intended legal regime. The latter should factor in the existing legal and regulatory tools of the current regulatory acquis, and not only with regard to liability issues; without this clear understanding it is not possible to determine where the new regulation needs to be brought to bear. The principles that should drive regulation in this sector are ex-ante quality assurance and ex-post user protection. Technology deployed in critical domains must undergo external due diligence and technical audits or conformity assessments (as they are called in the draft regulation), ensuring that precautions and tests were carried out before deployment, together with clear liability rules to ensure that code is not law: systems are to behave as per their human-readable described behaviour, with safeguards in place which are deemed appropriate for the domain in question. Liability goes beyond the reasonable behaviour of a system (e.g. a health system should not leak an individual’s information to the public); it should also address promised and documented behaviour (e.g. data inputted into the health system will be checked by at least two medical officers).

To reflect societal concerns about the impact of AI, it is felt that the regulation should focus more directly on design aspects. To ensure that AI systems are developed responsibly and incorporate legal, social and ethical values, the characteristics of autonomy, adaptability and interaction should be complemented with design principles that ensure trust.

The draft regulation hints at AI which should be human-centric (see Recital 10) and covers human oversight in Article 11, albeit it fails to mention and capture the notion of legal-by-design principles. Regrettably, this aspect is not specifically mandated, and there is no requirement that AI must, throughout its operational lifespan, observe a design which is coherent with fundamental human rights. This raises the concern that AI could fail to receive adequate oversight under international human rights law.

The draft regulation focuses on conformity assessments within the EU bloc in particular, and does not seem to recognize that AI creates interdependencies with countries outside the EU. We see this as problematic, particularly given the starting point of the current EU AI market, the edge held by countries like China and the USA, and the interdependence that might be required. Aside from these concerns, limiting the regulation and its measures to a regional as opposed to a global approach exacerbates its negative impact on trade, interoperability, innovation and global market presence.

The specific exclusion of military-use AI from the regulation’s scope, as well as the exemptions for public authorities (or those acting on their behalf) for the purpose of safeguarding public security, should be reconsidered. It is our view that these should likewise be captured by this Regulation and that, where exemptions are granted, they should be allowed only after a public consultation and the carrying out of the required impact assessments, including human rights impact assessments.

It is also unclear how the Regulation will affect AI conformity, high-risk AI and banned AI already present in every facet of the market and society prior to the Regulation’s inception, or how Article 56 would apply in this instance. We believe that this should be crystallized, and that any measure should be backed by a rigorous impact assessment, as it will affect several sectors, markets and users.

It might still be early days to introduce a regulation and governance framework for AI. As regards the ecosystem of trust, it is felt that the draft regulation fails to provide a compelling case for the suggested measures. Many areas in which applications utilising AI are under development have yet to achieve significant traction in the market, and many of the risks identified (even in the white paper and the previous consultation) can readily be tackled by existing legislation. Until such traction is achieved, it is very difficult to stand over any perception of risk, because there is insufficient coherent data on which to build an evidence base demonstrating the need for such regulation.

Article written by Managing Partner, Dr Ian Gauci.

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.
