The emergence of technologies such as artificial intelligence, machine learning models that learn and evolve from big data, and distributed ledger technology, blockchain and smart contracts has amplified the potential and scope for automated as well as autonomous systems in a software-driven world. An autonomous system can learn, adapt, evolve and decide on its own, based on the diverse situations it faces. Here we would be speaking of quasi-intelligence, even sentience. Automated systems, on the other hand, typically run within a well-defined set of parameters and are very restricted in the tasks they can perform. The decisions made or actions taken by these automated systems are based on predefined methods.
Blockchain-based smart contracts without any AI usually fall within the latter category. Smart contracts are automated contracts: self-executing algorithms whose instructions are coded in advance and execute automatically when certain conditions are met.
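To make the idea concrete, the sketch below models the essence of a smart contract in plain Python: a hypothetical escrow arrangement where funds are released automatically once a predefined condition is met, with no human intervention at the point of execution. This is an illustration only; real smart contracts are typically written in dedicated languages such as Solidity and run on a blockchain.

```python
# Minimal, hypothetical sketch of smart-contract logic: a simple escrow
# that releases funds automatically once a predefined condition is met.
# Plain Python, for illustration only; it is not an actual on-chain contract.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False   # the predefined condition
        self.paid = False

    def confirm_delivery(self) -> None:
        # Once the coded condition is met, execution follows automatically:
        # neither party can stop or renegotiate the transfer.
        self.delivered = True
        self._execute()

    def _execute(self) -> None:
        if self.delivered and not self.paid:
            self.paid = True
            print(f"Releasing {self.amount} to {self.seller}")


contract = EscrowContract("Alice", "Bob", 100)
contract.confirm_delivery()  # condition met -> payment executes by itself
```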
These algorithms have huge potential but are also fallible human constructions. They may lack the required testing, or be embedded with bugs, mistakes, limitations and bias. There have already been several incidents, including the infamous DAO hack of 2016, which highlighted these shortcomings.
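The DAO hack exploited a reentrancy flaw: the contract sent funds before updating the caller's balance, so a malicious recipient could call back into the withdrawal routine and drain funds repeatedly. The following is a simplified, hypothetical illustration of that class of bug in plain Python, not the actual DAO code.

```python
# Simplified, hypothetical illustration of the class of flaw exploited in
# the 2016 DAO hack: a withdraw routine that sends funds *before* updating
# the caller's balance, letting a malicious recipient re-enter withdraw()
# and drain funds. Plain Python, not the actual DAO contract.

balances = {"attacker": 100}
vault = 1000  # total funds held by the contract


def send(recipient: str, amount: int) -> None:
    global vault
    vault -= amount
    # On a real blockchain, the recipient's code runs here and can
    # call back into withdraw() before the balance below is zeroed.
    if recipient == "attacker" and vault > 0:
        withdraw("attacker")  # re-entrant call: balance not yet updated


def withdraw(account: str) -> None:
    amount = balances[account]
    if amount > 0:
        send(account, amount)   # BUG: external call first...
        balances[account] = 0   # ...balance update second


withdraw("attacker")
print(f"Vault drained down to {vault}")  # 1000 withdrawn from a 100 balance
```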
It is also not surprising that the EU Blockchain Observatory and Forum, in a recent report entitled Legal and Regulatory Framework of Blockchains and Smart Contracts, highlighted these issues and the risks arising from complex smart contracts, stating the following:
‘Depending on the complexity of the agreement, it can be extremely difficult to correctly or adequately encode contract terms. A smart contract might execute as written and yet still behave in ways not foreseen by its writers’.
To this end, given the lack of transparency, accountability and oversight over such algorithms, the report raises a very important point:
‘…smart contract “audits” – often complex, highly technical processes to check for the validity and viability of smart contract code – become important. That raises the question of whether such audits have to become requirements, or also need legal recognition of some kind to make a smart contract valid?’
The risks stemming from the lack of transparency, auditability and accountability in automated systems have already been highlighted in certain critical areas and have witnessed legal intervention. Let us take one example which, at least on paper, tries to cater for this shortcoming. Automated systems processing personal data and having an impact on data subjects’ rights are already subject to the General Data Protection Regulation (GDPR), where we also have the notions of data protection by design and by default, data protection impact assessments, algorithmic accountability, as well as the right to an explanation and to meaningful information about the logic involved in automated decisions. Regrettably, these provisions are not enough to enforce transparent and accountable algorithmic decision-making: they are limited to instances where personal data is involved, and come with built-in restrictions, for example applying solely to ‘significant’ decisions taken with ‘no meaningful human input’.
Beyond the GDPR, however, and closer to home in Malta, aside from the new AI Framework, as of November last year we already have a law which captures transparency, auditability and accountability for certain types of software falling within the definition of innovative technology arrangements. We have also created an ad hoc Authority, the Malta Digital Innovation Authority (MDIA), to foster innovation and an algorithmic ecosystem based on the principles of transparency, accountability, auditability and governance.
The Innovative Technology Arrangements and Services Act (ITASA) already captures smart contracts, decentralised autonomous organisations and elements of distributed or decentralised ledger technologies, a popular example of which is the blockchain. It will soon capture a wider array of innovative technology, as well as software, like AI, capable of more autonomy. These can be voluntarily submitted for recognition to the MDIA. Prior to this stage, such innovative technology arrangements must be reviewed by a systems auditor, one of the services outlined as an Innovative Service under ITASA. The systems auditor must review the innovative technology arrangement and its blueprint against the MDIA’s control objectives and recognised standards, in line with quality and ethical regulations and based on five key principles: security, process integrity, availability, confidentiality and protection of personal data. These have been reinforced by guidelines issued by the MDIA in conjunction with the provisions of ITASA.
Other Authorities, like the Malta Financial Services Authority (MFSA), have also leveraged the MDIA’s systems audits and auditors by mandating an audit for any smart contract used in connection with a white paper registered for an ICO. This week the MFSA also issued a consultation to extend this obligation to all other VFA licensable activities.
In a software-driven world, the regulatory mindset needs to evolve: aside from having Authorities converge and collaborate, it has become imperative to install proper governance structures with accountability and transparency obligations, run and supervised by an independent Authority. The trust element cannot be forgotten, as it is still a fundamental tenet of innovation. Smart regulation will ensure that trust remains the main lynchpin and that technology is trustworthy by design.
Ian Gauci is the Managing Partner at GTG Advocates and Caledo. He lectures on Legal Futures and Technology at the University of Malta.
This publication is provided for your convenience and does not constitute legal advice.
This publication is protected by copyright © 2019 GTG Advocates.