The European Union (EU) created the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act) to address the challenges and opportunities posed by digital transformation and the rise of artificial intelligence.
The AI Act targets AI systems, particularly those classified as high-risk, setting detailed requirements for their deployment. It complements the DSA by addressing ethical concerns such as biases, discrimination, and lack of transparency. The AI Act mandates robust data governance, transparency, human oversight, and accountability to ensure AI systems are ethical and responsible.
The DSA primarily regulates online intermediaries and platforms, ensuring they operate transparently and responsibly to protect users from illegal content and misinformation. It targets very large online platforms (VLOPs) and very large online search engines (VLOSEs), mandating measures to enhance transparency, accountability, and user rights. While not exclusively focused on AI, the DSA's provisions extend to AI systems used by these platforms.
As the regulatory landscape evolves, the interplay between the DSA and the AI Act will shape how AI is integrated and governed within digital services, fostering a safer and more transparent digital environment.
Let’s have a brief overview of both Acts, starting with the DSA. As noted, the DSA regulates how online intermediaries and platforms operate so as to protect users from illegal content and misinformation, and it predominately captures very large online platforms and very large online search engines.
While the DSA is not specifically designed to regulate AI, its provisions can apply to AI systems used by these platforms. Here is a brief outline of how the DSA intersects with the AI Act:
Many online platforms use AI systems to automate content moderation, detecting and removing illegal content such as hate speech, misinformation, or copyright infringements. The DSA's requirements for transparency and accountability in content moderation processes can also apply to these AI-driven systems. Platforms must provide clear information on how AI tools are used for moderation, ensure users can challenge automated decisions, and maintain oversight to prevent errors or biases.
On the AI Act side, these concerns map onto its core requirements: the Act tackles biases, discrimination, and lack of transparency by mandating data governance, transparency, human oversight, and accountability so that AI systems are ethical and responsible.
The DSA mandates transparency in advertising and content recommendation systems, which often rely on AI algorithms. Platforms must disclose when AI is used to personalise content or adverts, explain how these systems work, and allow users to control their data and preferences. This aligns with broader AI transparency requirements, ensuring users know about, and can manage, AI-driven interactions.
The AI systems used by platforms must comply with the DSA's provisions to protect user rights. This includes offering mechanisms for users to appeal content removal decisions made by AI and ensuring that AI applications do not infringe on users' fundamental rights. For instance, platforms must comply with the DSA's user consent and transparency requirements if an AI system is used for profiling or targeted advertising.
The DSA also caters for systemic risks, which is particularly relevant where AI is deployed: in line with Article 34 of the DSA, VLOPs and VLOSEs must ensure that the design of their services and systems, including any software or AI components, does not give rise to systemic risks.
Similarly, the AI Act is centred precisely on minimising risk: it prohibits AI practices that pose unacceptable risks, imposes strict requirements on high-risk AI systems, and introduces a dedicated regulatory regime for general-purpose AI models, with additional obligations for those presenting systemic risk within the EU.
The DSA provides that very large online platforms and very large online search engines must put in place mitigation measures pursuant to Article 35, which include testing and adapting their algorithmic systems (which may also include AI) for systemic risk. The Digital Services Coordinators and the Commission also have the power to require VLOPs and VLOSEs to explain the design, function, logic, and testing of their algorithms.
High-risk AI systems under the AI Act must be tested to determine adherence with the Act, with the required documentation retained, and must undergo specific testing to ensure the most effective risk management system is in place under Article 9(1).
The DSA establishes oversight mechanisms to ensure compliance with its provisions, including those relevant to AI usage. Platforms using AI for moderation, recommendation, or advertising must adhere to these regulations, subject to oversight by national Digital Services Coordinators and the European Board for Digital Services. Similarly, the AI Act establishes a robust oversight framework and enforcement regime for AI.
The DSA seeks to protect users' fundamental rights, including freedom of expression and the right to privacy. It provides mechanisms for users to appeal content removal decisions and ensures that platforms respect these rights in their operations. Like the DSA, the AI Act seeks to protect fundamental rights and public interests by regulating AI systems that impact critical areas such as healthcare, education, employment, and law enforcement. This ensures that AI is used to benefit society and not harm individuals.
The DSA aims to harmonise the regulatory environment across the EU, ensuring that both small and large online platforms adhere to the same rules, creating a fair competitive landscape and fostering innovation. Similarly, the AI Act aims to harmonise the regulatory environment in the EU for the placing on the market, putting into service, and use of AI systems.
For information or assistance please contact Dr Ian Gauci.