As we wrap up another year, it’s hard not to marvel at how artificial intelligence (AI) continues to reshape our world. From education and healthcare to finance, AI is transforming industries at a pace that’s both exciting and, at times, overwhelming. But with great power comes great responsibility, and the rapid growth of AI has brought with it some big questions: How do we ensure AI is ethical? How do we keep it safe? And how do we make sure it works for everyone, not just a select few?
These are the kinds of questions I’ve been reflecting on as we close out the year. And they’re not just theoretical. Around the world, governments, organisations, and experts are working hard to create rules and standards to guide AI’s development. One of the most significant efforts is in Europe, where the AI Act sets the stage for regulating AI. But creating rules for something as complex as AI isn’t easy, and that’s where standards like ISO/IEC 42001 come in.
Let me take you on a journey through what these standards mean, why they matter, and how they’re shaping the future of AI.
Context and Leadership: Every organisation is different, and ISO/IEC 42001 recognises that. It asks organisations to align their AI strategies with their unique context: regulatory requirements, stakeholder expectations, and business goals. Leadership is also a big focus. Top management needs to be on board, ensuring resources are available and fostering a culture of accountability.
Risk Management: AI brings unique risks, like bias, lack of transparency, and unintended societal impacts. ISO/IEC 42001 builds on existing risk management principles (like those in ISO 31000) to help organisations identify and address these risks. It even includes an AI System Impact Assessment, which evaluates how AI might affect individuals and society; I’ve sketched what such an assessment might look like in practice just after this list.
Policies and Documentation: A good AI policy is like a company’s mission statement for responsible AI. ISO/IEC 42001 requires organisations to create and regularly review their AI policies, ensuring they reflect the company’s commitment to ethical practices. Detailed documentation of AI systems is also necessary, covering everything from data sources to compliance measures.
Competence and Resources: AI is only as good as the people behind it. That’s why the standard emphasises the importance of training, resource allocation, and continuous expertise evaluation.
Monitoring and Improvement: AI isn’t static; like other innovative technologies, it continuously evolves. ISO/IEC 42001 encourages organisations to monitor their AI systems, measure their performance, and constantly improve. It’s about staying ahead of the curve, not just keeping up (the sketch after this list includes a simple monitoring check in that spirit).
Ethical AI: ISO/IEC 42001 also strongly focuses on ethics. It draws on principles like fairness, transparency, and human oversight to ensure AI systems align with societal values. In a world where trust in technology is often shaky, this focus on ethics is a game changer.
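None of this prescribes particular tooling, of course; ISO/IEC 42001 is a management system standard, not a technical specification. Still, to make the risk-assessment and monitoring ideas above a little more concrete, here is a minimal, purely illustrative Python sketch of how an organisation might record an impact assessment and run a simple performance check. Every name and field below is my own assumption, not something drawn from the standard.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record loosely inspired by ISO/IEC 42001 themes
# (impact assessment, documentation, monitoring). The standard itself
# prescribes no schema or tooling, so every field here is an assumption.

@dataclass
class AIImpactAssessment:
    system_name: str
    intended_purpose: str
    data_sources: list[str]           # provenance of the training data
    affected_groups: list[str]        # individuals and groups the system may affect
    identified_risks: dict[str, str]  # risk -> mitigation (e.g. bias, opacity)
    human_oversight: str              # how a human can intervene in decisions
    review_date: date                 # assessments must be reviewed periodically

def needs_review(assessment: AIImpactAssessment,
                 baseline_accuracy: float,
                 current_accuracy: float,
                 max_drop: float = 0.05) -> bool:
    """Flag the system for re-assessment if monitored performance has
    drifted beyond a tolerated threshold or the review date has passed."""
    drifted = (baseline_accuracy - current_accuracy) > max_drop
    overdue = date.today() >= assessment.review_date
    return drifted or overdue

# Example: a hypothetical credit-scoring model whose accuracy has slipped
assessment = AIImpactAssessment(
    system_name="credit-scoring-v2",
    intended_purpose="Assess consumer loan applications",
    data_sources=["internal loan history", "credit bureau feed"],
    affected_groups=["loan applicants"],
    identified_risks={"bias": "quarterly fairness audit",
                      "opacity": "reason codes on every decision"},
    human_oversight="Declined applications are reviewed by a credit officer",
    review_date=date(2026, 6, 30),
)
print(needs_review(assessment, baseline_accuracy=0.91, current_accuracy=0.84))  # True
```

The point isn’t the code itself but the habit it represents: assessments, documentation, and monitoring become living artefacts that are revisited, rather than paperwork filed once and forgotten.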
You might rightly ask: how does ISO/IEC 42001 fit into the bigger AI picture?
Let's answer that question and take a peek at the larger picture. ISO/IEC 42001 doesn’t exist in a vacuum. It works alongside other standards to create a comprehensive framework for AI governance. For example:
(a) ISO/IEC 27001: This focuses on information security. Paired with ISO/IEC 42001, it covers both the technical and ethical sides of AI, keeping data safe while ensuring the AI itself is used responsibly.
(b) ISO 9001: This standard revolves around quality management and is being updated to better address AI. It complements ISO/IEC 42001’s focus on ethics and trustworthiness by providing a framework for continuous improvement and consistent quality in processes and products. As it undergoes revisions to reflect new business practices and technologies, including AI, it will likely become even more relevant to the AI Act’s requirements.
(c) ISO/IEC TR 24368: This technical report is like the moral compass that supports ISO/IEC 42001. It takes an overarching view of the ethical and societal issues raised by AI systems, and by identifying core themes and principles it helps organisations align their AI activities with societal values and human rights. Notably, it is deliberately not prescriptive; it guides organisations to create AI systems that mitigate harm while promoting beneficial outcomes.
These standards will be pivotal for implementing the AI Act; however, it’s worth noting that the Act requires a broader safety risk management approach, defining risk as the combination of the probability of an occurrence of harm and the severity of that harm. It aims to mitigate risks specific to AI technologies, which may not be adequately addressed by the current risk management frameworks of ISO/IEC 27001 and ISO/IEC 42001, particularly since both standards allow organisations considerable flexibility in implementing controls.
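To see what that definition amounts to, here is a deliberately simple sketch of the classic risk-matrix combination, probability times severity. Neither the AI Act nor the ISO standards prescribe any such formula; the scales and thresholds below are purely illustrative assumptions.

```python
# Illustrative only: the AI Act defines risk as the combination of the
# probability of an occurrence of harm and the severity of that harm,
# but it prescribes no formula; these ordinal scales are assumptions.
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

def risk_level(probability: str, severity: str) -> str:
    """Classic risk-matrix scoring: score = probability x severity."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A likely-but-minor harm and a rare-but-critical one score the same,
# which is exactly why both dimensions must be weighed together.
print(risk_level("likely", "minor"))   # medium (score 3)
print(risk_level("rare", "critical"))  # medium (score 3)
```

The real exercise is qualitative and context-dependent, but the sketch captures the Act’s core demand: both dimensions must be weighed together, not in isolation.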
That flexibility, unless exercised in line with the AI Act’s requirements, could result in quality management system (QMS) implementations that do not fully align with or support the Act’s conformity requirements, which vary according to the high-risk AI category. This misalignment could be particularly problematic for high-risk AI applications already regulated under a lex specialis, such as medical devices.
In such instances, the AI Act’s conformity requirements may need to operate alongside existing regulations for high-risk AI in specific sectors. Determining which QMS should apply (AI Act or lex specialis) remains challenging, requiring further clarity and guidance.
So, what does all this mean as we head into the new year? Rejoice: a new playbook for AI is slowly coming to life. Together, these standards provide a toolkit for organisations navigating the complex world of AI regulation.
But it’s not all smooth sailing, even if the message is clear: AI governance isn’t just about avoiding risks. It’s about creating a culture where organisations strive to improve, not because they have to but because it’s the right thing to do. Standards like ISO/IEC 42001 are a step in the right direction, but they’re just one piece of the playbook. As the AI landscape evolves, we’ll need to keep asking tough questions, challenging assumptions, and working together to build a future where AI benefits everyone.
As someone who’s been following these developments closely, I’m optimistic. The road ahead won’t be easy, but with the right tools, standards, frameworks, and mindset, we can help write an AI playbook that’s not just about compliance but about societal well-being and welfare.
Here’s to a new year of innovation, responsibility, and progress!
Article by Dr Ian Gauci