It has long been almost unquestionable that data is the currency of the digital economy, and as digital innovation accelerates, this observation only becomes more pronounced. Technologies such as artificial intelligence, machine learning, big data analytics and the internet of things rely fundamentally on vast volumes of data being collected, analysed, scaled and strategically applied. In this context, the value of data is not simply inherent, but is also derived from the insights it generates and the decisions it enables.
Against this backdrop, this article suggests that within the European Union (“EU”), data is no longer treated only as something to be protected; rather, it now sits at the centre of a broader regulatory architecture. That architecture is made up of diverse legislation: data protection legislation, an increasing body of legislation concerning data generally (not limited to “personal data”), and a growing body of legislation that, while not directly concerned with data in the strict sense, has significant implications for how data is accessed, used, shared, governed and ultimately relied upon.
This article opines that the resulting legal framework reaches well beyond the confines of data protection law, and increasingly shapes how businesses are designed, how digital services are structured, how content is regulated, and how value is created and ultimately governed in the modern economy.
It follows that organisations can no longer approach “data law or data compliance” through a largely GDPR-centred lens alone. While data protection and the GDPR remain a pillar of data-related regulation, they can no longer be treated as the sole legal reference point of strategic importance.
The more pressing question is now broader. Where does a business sit in the data chain, and what obligations follow from that role? A manufacturer of IoT products, an online platform provider, an AI deployer and a critical-sector operator may all deal with data in one way or another, but they do not occupy the same legal position. That, in many ways, is the essence of the complexity created by the new EU data rulebook effectively emerging through legislative developments.
Sector-specific legislation affecting data and content regulation is also on the rise. This article touches only upon some of the key legislation of generic relevance and application regarding data or content, together with some instruments which, although not directly concerned with either, effectively impact relevant aspects in a non-sector-specific way.
A first strand of this wider framework concerns access to and use of data. The Data Governance Act (the “DGA”) and the Data Act are often mentioned together, but they have different scopes. The DGA seeks to increase confidence in data sharing and strengthen the mechanisms that underpin data availability. The Data Act is more interventionist, as it gives users of connected products greater control over the data they themselves generate.
The Data Act is also poised to tackle fairness in business-to-business data sharing, addressing unfair contractual terms while simultaneously introducing rules on cloud switching and interoperability. Read together, these instruments reflect a clear policy direction: data should not remain locked away merely because one actor happens to control the interface, the service environment or the technical means through which it is generated. The legislative direction is instead towards greater usability, greater portability and, ultimately, greater economic value.
The second facet is what occurs when data is mediated through platforms. Here, the Digital Services Act (the “DSA”) and the Digital Markets Act (the “DMA”) mark yet another important divide. The DSA is concerned with accountability in online environments. It addresses the responsibilities of intermediary services and online platforms in areas such as transparency, governance of digital spaces, and the handling of harmful or illegal content. Its relevance to data lies in the fact that data does not operate in the abstract; it is ranked, recommended, moderated and interacted with within digital environments that shape user behaviour and commercial visibility. Critically, the rules governing content takedown, and their implications for illegal content, are set out under the DSA.
The DMA mainly tackles a different, but related, concern. Its focus is on gatekeepers, the monoliths of the digital sphere which function as bottlenecks between businesses and consumers. While the DMA is not a “data law” in the narrow sense, it is highly relevant to data because control over digital markets is often inseparable from control over data flows, user access, ranking mechanisms and platform dependency. In practical terms, the EU is concerned not only with whether data can move, but also with who controls the channels through which digital value is realised.
That same evolution is visible in the law’s treatment of AI. The significance of the AI Act lies not simply in the fact that it regulates AI systems, but in what this says about the legal treatment of data- or content-driven technologies more broadly. The regulatory concern does not begin only at the point of deployment; it also extends to the data that informs training, the reliability and security of the systems built on that data, and the legal frameworks that such technologies now place under strain.
In this sense, the AI conversation is no longer confined to the AI Act alone. It also reopens older legal questions, including whether existing copyright rules under the DSM and InfoSoc acquis can adequately accommodate AI training and AI-generated outputs. Multiple courts have already had to distinguish between the input or training phase and the output phase of AI systems, and the fact that the AI Act recognises the relevance of text and data mining does not mean that AI training fits neatly within pre-existing copyright exceptions. Various intellectual property law questions remain to be addressed regarding data and content in the context of their use by innovative technologies such as AI.
As regards outcomes based on data, however, the AI Act reflects a maturation of the EU digital rulebook. Once data is capable of influencing outcomes, the law is no longer concerned solely with movement or access, but with governance, explainability and human oversight. It is at this juncture that notions such as human-in-the-loop oversight, record-keeping and traceability come into sharper focus.
If data is to be shared, relied upon and operationalised, the surrounding digital environment must also be secure. This is where NIS2 assumes particular importance. Its relevance is not merely that it is a cybersecurity instrument, but that the availability, integrity and confidentiality of data ultimately depend on the resilience of the entities that store and process it.
The Cyber Resilience Act addresses the same trust problem from a different angle, focusing not on the entity but on the product: specifically, whether that technology is secure by design and throughout its lifecycle. The same wider movement is visible in adjacent frameworks such as DORA for the financial sector, the revised eIDAS 2.0 framework for digital identity and trust services, and the Open Data Directive for the re-use of public-sector information. Read together, these instruments show that trust in the digital economy is no longer merely assumed; it is increasingly engineered through law.
Seen in that light, the emerging legislative instruments in Europe are not simply a loose collection; taken together, they constitute a holistic EU data rulebook that ultimately shapes the value, scalability, application and commercialisation of data across the EU.
For any additional information or assistance, please contact us at info@gtg.com.mt
Author: Dr Terence Cassar, Dr JJ Galea and Dr Neil Gauci