Generative AI

Mitigating risk in AI, particularly under the EU AI Act, involves various strategies tailored to different types of AI systems, from high-risk AI to generative AI. In this brief article, I will focus exclusively on generative AI.

What is generative AI? Generative AI, a subset of artificial intelligence, is focused on creating new content or data that mimics real-world examples, from text and images to music and video. These models learn patterns from existing data and use this understanding to produce new, similar content. The history of generative AI spans from early statistical methods and neural networks to modern deep learning models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and transformers, the architecture underlying large language models like GPT-4. This evolution has opened up a wide range of possibilities, from content creation to drug discovery.

Early work in AI focused on rule-based systems and symbolic reasoning, laying the groundwork for more complex models. The Dartmouth Conference in 1956 marked the birth of AI as a field. Generative AI did not capture the world’s attention before 2017, when the transformer architecture was introduced, as earlier systems were confined to narrow tasks. Today, generative AI is used in a wide range of applications, including content creation, drug discovery, and personalised marketing. It has raised important ethical and societal questions, particularly regarding its potential misuse in generating deepfakes or spreading disinformation, and it has attracted regulatory scrutiny because of its perceived risks.

The regulatory frameworks, particularly in the EU and the U.S., reflect growing concerns about the impact of generative AI on society. They are not designed to stifle innovation but to ensure it is balanced with ethical and safety considerations.

The U.S. does not have a comprehensive federal AI regulation akin to the EU AI Act; however, federal agencies have developed fragmented initiatives and guidelines, and state-level regulations are emerging. Generative AI has come under scrutiny due to its potential for misuse in areas like deepfakes, cybersecurity, and privacy. The National Institute of Standards and Technology (NIST) has issued guidelines for trustworthy AI, which include considerations relevant to generative AI.

In October 2023, the Biden administration issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, directing that AI technologies, including generative AI, be developed and used in ways that protect public safety and national security while promoting innovation and fairness. The Order emphasises transparency, accountability, and ethical considerations in AI deployment, underscoring the importance of responsible development and use of generative AI technologies. This balance is crucial for the future of AI, showing that innovation can coexist with ethical and safety standards.

Let me focus on the AI Act and its salient provisions related to generative AI. The AI Act regulates generative AI through a structured framework that pays particular attention to the computational resources used to train a model, measured in floating-point operations (FLOPs), that is, the cumulative amount of computation used for training rather than a processing rate per second. These measures are intended to ensure that generative AI models trained with significant computational resources are appropriately managed to mitigate systemic risks.

The AI Act distinguishes between a “generative AI system” and a “generative AI model.” A generative AI model is an AI model that can generate content such as text, images, audio, or video. It is typically trained on large datasets using methods like self-supervised learning and can perform a wide range of tasks. A generative AI system, by contrast, is built on such a model and adds components beyond the model itself, such as user interfaces, enabling it to function as a standalone system, to be used directly, or to be integrated into other AI systems and broader applications.

The AI Act also imposes different regulatory obligations based on this distinction, with the heavier burden falling on providers of generative AI systems, who must ensure compliance when integrating generative AI models into their systems, particularly if those systems pose high risks. The distinction ensures that the regulatory framework addresses the respective roles and responsibilities associated with the foundational AI models and the complete systems they enable.

Specifically, for generative AI, the AI Act introduces the notion of the cumulative computational power used to train a model. It measures this in FLOPs and uses a FLOPs threshold as a risk metric that brings a model within the scope of additional regulatory obligations.

To give some context, FLOPs are a measure of computational work. Using computing power as a basis for governing software is not new, and the AI Act is not the first framework to rely on it. The need to manage increasingly complex systems drove early efforts: in the 1960s, IBM’s System/360 helped establish the role of operating systems in managing hardware resources and software applications, and the development of real-time systems, particularly in aerospace and defence, required software tasks to be regulated and scheduled to ensure timely and reliable operation.

Regulatory frameworks emerged in the 1980s and 1990s, focusing on software quality, safety, and security. Standards such as ISO 9000 and ISO/IEC 12207 were developed to guide software development and lifecycle processes, and growing computing power was leveraged to automate testing, validation, and verification, ensuring that software met specified requirements.

The AI Act, however, is more targeted and comprehensive. It identifies general-purpose AI models with systemic risk based on their capabilities, using training compute measured in FLOPs as a primary metric. An initial threshold of 10^25 floating-point operations is established, beyond which a model is presumed to have high-impact capabilities and therefore systemic risk. This threshold can be updated over time to reflect technological advancements.
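To illustrate how a compute threshold of this kind operates in practice, the short Python sketch below estimates a model’s cumulative training compute using the widely cited approximation of 6 × parameters × training tokens for dense transformer models and compares the result against the 10^25 FLOP threshold. The approximation, the example figures, and the function names are illustrative assumptions for this article, not part of the AI Act itself.

```python
# Illustrative sketch (not from the AI Act itself): estimating a model's
# cumulative training compute and testing it against the Act's presumption
# threshold for systemic risk. Assumes the common "6 * N * D" heuristic for
# dense transformers (N = parameters, D = training tokens); names are hypothetical.

AI_ACT_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute (Article 51)


def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute exceeds the AI Act presumption threshold."""
    return estimate_training_flop(parameters, training_tokens) > AI_ACT_SYSTEMIC_RISK_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
    compute = estimate_training_flop(70e9, 15e12)
    print(f"Estimated training compute: {compute:.2e} FLOP")
    print(f"Presumed to pose systemic risk: {presumed_systemic_risk(70e9, 15e12)}")
```

In this hypothetical, the model comes in at roughly 6.3 × 10^24 FLOP, below the presumption threshold, whereas doubling the training data would push it above it. In either case, the presumption is only a starting point, as the discussion below explains.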

Irrespective of the above, it is important to note that the AI Act does not regulate generative AI based solely on FLOPs. It also addresses other AI risks, particularly in high-risk applications where generative AI systems form part of a high-risk system. While the capabilities enabled by high computational power are relevant, the primary regulatory considerations are the potential impacts on safety, health, fundamental rights, and society.

Generative AI systems that pose systemic risks, meaning risks specific to the high-impact capabilities of general-purpose AI models that have a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable adverse effects on public health, safety, public security, fundamental rights, or society as a whole, and that can be propagated at scale across the value chain, will thus be regulated more stringently.

In my view, the Act also captures generative AI that has high-impact capabilities irrespective of the FLOPs metric, meaning capabilities that match or exceed those recorded in the most advanced general-purpose AI models. Let me expand a little on this. While part of the risk the AI Act addresses in relation to generative AI is tied to high computational power and FLOPs, the Act is also designed to accommodate innovation and to capture other sources of risk.

With better training methods, higher data quality, and improved architectural designs, all of which reduce reliance on raw computing power, smaller generative AI models trained with fewer FLOPs might outperform larger models trained with more. In a nutshell, the AI Act therefore has a dynamic tool at its disposal and can shift its focus to performance parameters.

This forward-looking and dynamic approach to generative AI balances the promotion of innovation with proportionate regulation based on a dynamic assessment of risks. By looking beyond mere computational metrics, rather than presuming risk solely where high compute exists, and by focusing on risk-based assessment of capabilities and performance, it acknowledges the true nature of this technology and avoids imposing unjustified burdens.

References
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … & Bengio, Y. (2014). Generative Adversarial Nets. Advances in Neural Information Processing Systems, 27.
  • Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
  • McClelland, J. L., Rumelhart, D. E., & Hinton, G. E. (1986). The appeal of parallel distributed processing. In D. E. Rumelhart, J. L. McClelland, & PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Foundations (Vol. 1, pp. 3-44). MIT Press.
  • Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.
  • National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
  • The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Article by Dr Ian Gauci

Disclaimer: This article is not intended to impart legal advice, and readers are asked to seek verification of statements made before acting on them.