The software development cycle

The software development cycle, also known as the software development life cycle (SDLC), is a structured process for developing software applications. It encompasses various phases, each contributing to the software's systematic development and deployment. Understanding the SDLC is crucial for developers and the industry, as it ensures that software products are developed efficiently and meet quality standards.

The concept of the SDLC originated in the early days of computer science. The first formalised model, known as the "Waterfall Model," was introduced by Dr. Winston W. Royce in 1970. The Waterfall Model emphasises thorough documentation and clear project stages, including requirements analysis, system design, implementation, testing, deployment, and maintenance. Since then, various models have emerged to address the limitations of the Waterfall Model, such as iterative and incremental models, agile methodologies, and DevOps.

The SDLC is vital for the software industry for several reasons. It provides a clear roadmap, ensuring that development is organised and systematic. Each phase has defined goals and deliverables, which help manage project scope and timelines. Other benefits include quality assurance, the early identification and mitigation of risks, and structured planning and budgeting of resources.

The recently approved EU AI Act introduces a regulatory framework for AI systems and has profound implications for the SDLC of AI development in the EU, affecting each cycle phase.

These are some of the critical provisions of the AI Act impacting the SDLC:
  1. Developers must assess and classify their AI systems early in the requirement analysis phase to determine if they fall under high-risk categories, triggering specific compliance obligations.
  2. Transparency must be embedded in the design phase, so that the AI system's decisions can be understood and traced back to the underlying data and algorithms. Developers must keep AI systems transparent and traceable, maintaining documentation and logs that record the system's decision-making processes.
  3. High-risk AI systems require robust data governance practices. Developers must ensure data accuracy, relevance, and fairness, implementing measures to prevent biases and ensure data quality. This includes establishing data management protocols, ensuring data provenance, and applying techniques to mitigate bias.
  4. Developers must also incorporate mechanisms for effective human oversight, ensuring humans can monitor and control AI systems, especially in high-risk applications, and must create interfaces and tools that allow human operators to intervene, review, and override AI decisions where necessary.
  5. The AI Act also imposes stringent testing and validation requirements to identify and mitigate risks. Continuous monitoring and updating of risk management measures are crucial. This includes stress testing, scenario analysis, and validating the AI system under different conditions to ensure its robustness and reliability.
  6. Before an AI system is deployed in the EU, the AI Act mandates that developers ensure it complies with ethical standards and legal requirements, particularly for high-risk systems. This involves conducting ethical reviews, obtaining the necessary certifications, and guaranteeing that the system meets regulatory standards.
  7. Another obligation stipulates that comprehensive training and documentation must be provided to users, ensuring they understand how to operate and oversee the AI system effectively. This includes creating user manuals, training programs, and support resources to help users interact with the AI system safely and effectively.
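In practice, several of these obligations — decision traceability and logging, and human override in particular — translate into concrete engineering work within the SDLC. As a purely illustrative sketch in Python (the `AuditedModel` class, its method names, and the log format are all invented here; the AI Act prescribes no specific API), a wrapper that logs every decision to an append-only record and allows a human reviewer to override it might look like this:

```python
import json
import time
import uuid


class AuditedModel:
    """Hypothetical wrapper sketching decision traceability and human
    override. All names and formats are illustrative, not mandated."""

    def __init__(self, model, log_path="decisions.log"):
        self.model = model          # any callable: features -> prediction
        self.log_path = log_path    # append-only decision log

    def _append(self, record):
        # Append-only logging preserves the full decision history for audit.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def predict(self, features):
        decision_id = str(uuid.uuid4())  # traceable identifier per decision
        prediction = self.model(features)
        self._append({
            "id": decision_id,
            "timestamp": time.time(),
            "inputs": features,
            "output": prediction,
            "overridden": False,
        })
        return decision_id, prediction

    def override(self, decision_id, corrected_output, reviewer):
        # Human override is recorded as a new entry; the original decision
        # is never deleted, so both remain available for audit.
        self._append({
            "id": decision_id,
            "timestamp": time.time(),
            "output": corrected_output,
            "overridden": True,
            "reviewer": reviewer,
        })
```

Each prediction receives a traceable identifier, and an override is appended as a separate record rather than rewriting the original entry, preserving the decision history that an audit would expect to see.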

It is important to highlight that, when assessing and implementing the above novel obligations, compliance does not end at deployment: AI systems must be continuously monitored to ensure ongoing conformity with the AI Act. Risk management and data governance measures will likewise need regular updates, which requires setting up tracking systems to detect anomalies, conducting periodic audits, and updating the AI system to address new risks or vulnerabilities.
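What such post-deployment tracking could look like depends entirely on the system; as a minimal sketch, assuming a model that emits numeric scores and using an invented `DriftMonitor` class (nothing here is mandated by the AI Act), an anomaly detector might flag when recent outputs drift away from a validated baseline, prompting human review:

```python
from collections import deque
import statistics


class DriftMonitor:
    """Illustrative post-deployment monitor (names and thresholds are
    invented): flags when the mean of recent model output scores drifts
    beyond a tolerance from a baseline established at validation time."""

    def __init__(self, baseline_mean, tolerance=0.1, window=100):
        self.baseline_mean = baseline_mean      # mean score at validation
        self.tolerance = tolerance              # acceptable drift
        self.recent = deque(maxlen=window)      # sliding window of scores

    def observe(self, score):
        """Record a score; return True when drift exceeds the tolerance."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet to judge drift
        drift = abs(statistics.mean(self.recent) - self.baseline_mean)
        return drift > self.tolerance  # True signals an anomaly to audit
```

A flag from such a monitor would not itself be a compliance action; it is the trigger for the periodic audits and updates described above.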

Existing AI systems already deployed before the AI Act entered into force will generally not need to comply with the new regulations, apart from ensuring that they involve no prohibited AI practices. However, should any substantial modifications be planned or required, the AI Act will apply. Developers must therefore start accounting for the required modifications in their SDLC, considering both direct and indirect impacts. This is very tricky and requires adequate planning to make sure that the AI system being modified can actually satisfy all the obligations mentioned above, and that the required training and plugins are in place before the modified version is deployed.

Given the so-called Brussels Effect of the AI Act, the above will apply to any high-risk AI system provided in the EU or from the EU.

Specifically for developers and companies outside the EU aiming to provide AI systems within the EU, the AI Act imposes additional compliance burdens. These developers must, amongst other obligations, align their development practices with the EU’s regulatory standards, potentially requiring significant changes to their processes. This may involve reworking existing AI systems to meet EU requirements or developing new systems specifically for the EU market.

Non-EU developers must also maintain thorough documentation and be prepared for audits by EU regulators. This includes providing detailed records of the AI system's development, testing, deployment, and maintenance processes.

While the Act imposes additional challenges, it also sets a global benchmark for AI governance. By understanding and adapting to the EU AI Act, developers will not only comply with regulatory standards but also contribute to creating safer, more reliable AI systems.

Article by Dr Ian Gauci.

Disclaimer: This article is not intended to impart legal advice, and readers are asked to seek verification of statements made before acting on them.