Ethics is a term derived from the Greek word ethos, which can mean custom, habit, character or disposition. It is an intrinsic requirement for human life and our means of deciding a course of action. At its simplest, it is a system of moral principles, also described as moral philosophy, based on what is good for individuals and society.

Codes of ethics have always played an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. The late Professor Stephen Hawking opined that new emerging technologies, including Artificial Intelligence (AI), open a new frontier for ethics and risk assessment.

The Institute of Electrical and Electronics Engineers (IEEE) has been working on this front for some time, and to this end published a report titled “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems.”

The Asilomar AI Principles likewise build on ethics and values, which in turn underpin principles capturing elements of safety, failure and judicial transparency, value alignment in design, compatibility with human values, and privacy.

On 16 April 2018, the UK House of Lords Select Committee on Artificial Intelligence published a report entitled “AI in the UK: ready, willing and able?” (Report of Session 2017-19). The report highlights the importance of ethics and sets out five principles, which sit comfortably alongside the Asilomar AI Principles and read as follows:

(1) Artificial intelligence should be developed for the common good and benefit of humanity.

(2) Artificial intelligence should operate on principles of intelligibility and fairness.

(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

From this angle, there is no novelty being proposed here. The report, however, does not stop at these principles; it also seems to stress that AI should not be regulated at this juncture. Such a reading stems from paragraph 375, which quotes the Law Society as stating that there is no obvious reason why AI would require further legislation or regulation, that AI is still relatively in its infancy, and that it would be advisable to wait for its growth and development in order to better understand its forms. In paragraph 373, eminent academics such as Professor Robert Fisher et al. are quoted as saying: “most AI is embedded in products and systems, which are already largely regulated and subject to liability legislation. It is therefore not obvious that widespread new legislation is needed”.

In other chapters, however, the report seems to imply that a form of legislative intervention might be required. Let us park this for a moment and briefly mention one piece of legislation, the General Data Protection Regulation (GDPR), which is also cited in the report and which applies in this ambit. In summary, the GDPR provides that personal data must be processed in a lawful, fair and transparent manner; collected for specified, expressly stated and legitimate purposes and not further processed in a manner incompatible with those purposes; accurate, kept up to date, adequate, relevant and limited to what is necessary for the purposes for which it is processed; not stored in identifiable form for longer than is necessary for those purposes; and processed in a way that ensures adequate protection of the personal data. Any algorithm would need to be coded with all these criteria in mind, thereby following the mandated data protection by design principles, as sketched below. Aside from this, the GDPR also provides for data protection impact assessments (DPIAs), intended as a tool to help organisations identify the most effective way to comply with their data protection obligations and meet individuals’ expectations of privacy.
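By way of illustration only, below is a minimal sketch of what data protection by design might look like at the level of code. Everything in it, the declared purposes, the retention period and the whitelisted fields, is a hypothetical assumption; the point is simply that purpose limitation, storage limitation and data minimisation can be expressed as explicit, checkable constraints rather than afterthoughts.

# A hypothetical sketch of "data protection by design": purpose limitation,
# storage limitation and data minimisation enforced as explicit checks.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

ALLOWED_PURPOSES = {"billing", "fraud_detection"}  # expressly stated purposes (assumed)
RETENTION = timedelta(days=365)                    # assumed retention period
NEEDED_FIELDS = {"amount", "date"}                 # fields the purpose needs (assumed)

@dataclass
class ProcessingRecord:
    subject_id: str
    fields: dict                # the personal data collected
    purpose: str                # the purpose declared at collection time
    collected_at: datetime = field(default_factory=datetime.utcnow)

def process(record: ProcessingRecord) -> dict:
    # Purpose limitation: refuse processing for a purpose never declared.
    if record.purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"purpose '{record.purpose}' was never declared")
    # Storage limitation: refuse identifiable data held beyond retention.
    if datetime.utcnow() - record.collected_at > RETENTION:
        raise PermissionError("retention period exceeded; data must be erased")
    # Data minimisation: process only the fields the purpose actually needs.
    return {k: v for k, v in record.fields.items() if k in NEEDED_FIELDS}

A DPIA would then document why those particular purposes, fields and retention periods are necessary and proportionate.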

Now, back to the contents of the report. Even though the report lists five ethical principles and in certain paragraphs implies that, for the time being, there is no need for further legislation, in other paragraphs it might be suggesting a different policy direction. In paragraph 95, the notions of intelligibility and technical transparency are highlighted, along with the importance of experts (which could be software auditors) being able to understand how an AI system has been put together; according to the same paragraph, “…this might, for example, entail being able to access the source code of an AI system.”

In paragraph 99, the drafters also acknowledge that:

“…achieving full technical transparency is difficult, and possibly even impossible, for certain kinds of AI systems in use today, and would in any case not be appropriate or helpful in many cases. However, there will be particular safety-critical scenarios where technical transparency is imperative, and regulators in those domains must have the power to mandate the use of more transparent forms of AI, even at the potential expense of power and accuracy”.

The above policy suggestions would in principle boil down to regulations or laws mandating more than the observance of ethical standards, as they would promote a higher level of algorithmic transparency, accountability where required, and powers for regulators to enforce them. These are topics also touched upon by Reuben Binns in his paper “Algorithmic Accountability and Public Reason”. I also believe that a close look at the Asilomar Principles, particularly principles 6, 7, 8 and 22, reproduced hereunder, hints at the inclusion of such measures:

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Certain software (composed of algorithmic code) already allows companies to “draw predictions and inferences about personal lives”. A clear case in point is the recent Cambridge Analytica debacle. For example, a machine learning algorithm could successfully identify a data subject’s sexual orientation, political creed and social groups, and use this information to build profiles, deliver services and categorise data subjects. As the code learns patterns in the data, it also absorbs the biases in that data and perpetuates them. In one of the most striking examples, COMPAS, an algorithm used by courts in several US states to assess a defendant’s risk of reoffending, was found to falsely flag black defendants almost twice as often as white defendants.
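To make that finding concrete, the following is a minimal sketch of the kind of fairness audit that surfaced the COMPAS disparity: computing the false positive rate, that is, the share of non-reoffenders wrongly flagged as high risk, separately for each group. The data below is invented for illustration; only the method of comparing group-wise error rates reflects the actual analysis.

# Minimal sketch of a group-wise false-positive-rate audit, in the spirit of
# the published COMPAS analysis. All records below are invented.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# A persistent gap between the two rates is the disparity reported for COMPAS.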

Underlying all of this is AI’s so-called black box problem: our inability to see inside an algorithm and understand how it arrives at a decision. Perhaps the fine underlying message of this report by the UK House of Lords is that if this is left unchecked, particularly in an era where code can be law and where many authors have already sounded the alarm on algorithmic governance, it can have devastating effects on our societies.
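The trade-off the drafters allude to in paragraph 99, transparency at the potential expense of power and accuracy, can also be made concrete. A simple linear scoring model is far less powerful than a deep neural network, but its decision can be read off term by term. The sketch below is purely illustrative; the features and weights are invented.

# A sketch of an inherently transparent model: a linear score whose decision
# can be explained feature by feature. Weights and features are invented.
WEIGHTS = {"prior_offences": 0.8, "age": -0.05, "employed": -0.6}
BIAS = -0.2

def score(person: dict) -> float:
    return BIAS + sum(w * person[f] for f, w in WEIGHTS.items())

def explain(person: dict) -> None:
    # Every contribution is directly inspectable, the opposite of a black
    # box, at the potential expense of predictive power.
    for f, w in WEIGHTS.items():
        print(f"{f}: {w * person[f]:+.2f}")
    print(f"total score: {score(person):+.2f}")

explain({"prior_offences": 2, "age": 30, "employed": 1})

A regulator mandating “more transparent forms of AI” in a safety-critical domain is, in effect, mandating models whose reasoning can be laid out in this way.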

As Professor Nick Bostrom and Eliezer Yudkowsky state in “The Ethics of Artificial Intelligence”: “If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness”.

For more information on the legal aspects and implications of Artificial Intelligence, or if you have any questions, please feel free to contact Dr Ian Gauci on igauci@gtgadvocates.com.

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.
