The Act lays out rules for products driven by Artificial Intelligence (“AI”) across every industry. As part of the EU’s digital single market strategy, the Act aims to establish a framework for the development, modification and use of AI products.
The Act broadly defines AI systems as any “software that is developed with one or more techniques and approaches listed /…/ and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
Alongside this broad definition, which brings even minor AI systems within its scope, the Act employs a risk-based classification system with four tiers: i) Minimal Risk, ii) Limited Risk, iii) High Risk, and iv) Unacceptable Risk. Each classification imposes different obligations and limitations on providers, such as transparency obligations. Non-compliance with these obligations may result in penalties of up to 30 million Euros or 6% of annual turnover.
Aside from having the potential to become the de facto global standard for AI legislation, the latest revision of the Act places a strong emphasis on fundamental rights. This is evidenced by the prohibition of systems deemed to pose an unacceptable risk, such as government social credit systems and real-time biometric identification systems in public places. These prohibitions aim to ensure that AI systems can be trusted and, most importantly, to prevent the most harmful outcomes.
For any queries, such as your obligations as a provider under the Act, do not hesitate to contact Dr Ian Gauci.