The Regulation of Algorithms

In law a company is treated as having the rights and obligations of a person. In this era of Artificial Intelligence (intelligent assistants, ‘Robo’-advisors, robots, and autonomous vehicles), ‘algorithms’ are rapidly emerging as artificial persons: legal entities that are not human beings but that, by virtue of statute, are considered for certain purposes to be natural persons. Intelligent algorithms will increasingly require formal training, testing, verification, certification, regulation, insurance and, most importantly, status in law.

For example, in financial services regulators already require firms to demonstrate that trading algorithms have been thoroughly tested, deliver ‘best execution’ and are not engaged in market manipulation. Other interesting cases are healthcare algorithms: ‘medical-assistant’ chatbots and patient-screening systems that will increasingly dispense medical advice and treatment to patients. Regulators, who have traditionally regulated firms and individuals, are raising the status of ‘algorithms’ to that of ‘persons’.
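To make the testing requirement concrete, consider a minimal sketch in Python of the kind of check a firm might run when evidencing ‘best execution’: each fill is compared against its arrival-price benchmark and orders whose slippage exceeds a tolerance are flagged for review. The record layout, field names and 10-basis-point threshold are illustrative assumptions, not a regulatory standard.

    from dataclasses import dataclass

    # Hypothetical fill record; the fields are illustrative assumptions.
    @dataclass
    class Fill:
        order_id: str
        side: str              # "buy" or "sell"
        arrival_price: float   # mid-price when the order arrived (the benchmark)
        exec_price: float      # average price actually achieved
        quantity: int

    def slippage_bps(fill: Fill) -> float:
        """Implementation shortfall versus the arrival price, in basis points.
        Positive values mean the fill was worse than the benchmark."""
        sign = 1.0 if fill.side == "buy" else -1.0
        return sign * (fill.exec_price - fill.arrival_price) / fill.arrival_price * 1e4

    def flag_poor_execution(fills: list, threshold_bps: float = 10.0) -> list:
        """Return the ids of orders whose slippage exceeds the assumed tolerance."""
        return [f.order_id for f in fills if slippage_bps(f) > threshold_bps]

    fills = [
        Fill("A1", "buy", 100.00, 100.05, 500),   # 5 bps of slippage: acceptable
        Fill("A2", "sell", 50.00, 49.90, 200),    # 20 bps of slippage: flagged
    ]
    print(flag_poor_execution(fills))  # -> ['A2']

A production test suite would of course benchmark against consolidated market data across the firm's full order history; the point here is only that ‘best execution’ is a measurable, and hence testable and certifiable, property.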

We consider the emergence of ‘algorithms as artificial persons’ and the consequent need to formally verify, certify and regulate algorithms. Our aim is to start a discussion in the legal profession regarding the legal impact of algorithms on firms, software developers, insurers and lawyers. This paper is written with the expectation that the reader is familiar with law but has limited knowledge of algorithm technologies.

The science fiction writer Isaac Asimov famously proposed “Three Laws of Robotics” (Asimov, 1950): 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. He later added a preceding ‘zeroth’ law: 0) A robot may not injure humanity or, by inaction, allow humanity to come to harm.

Fast forward: in 2007 the South Korean Government proposed a Robot Ethics Charter, and in 2011 the UK research council EPSRC (Boden et al., 2011) published five ethical “principles for designers, builders and users of robots”. More recently the Association for Computing Machinery (ACM, 2017) published seven principles for algorithmic transparency and accountability:

  1. Awareness – Stakeholders of analytic systems should be aware of the potential harm that biases can cause to individuals and society;
  2. Access and Redress – Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions;
  3. Accountability – Institutions should be held responsible for decisions made by the algorithms that they use;
  4. Explanation – Institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made;
  5. Data Provenance – A description of the training data and its potential biases should be maintained by the builders of the algorithms, with public scrutiny being a desirable feature as long as it does not allow malicious actors to game the system;
  6. Auditability – Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected; and
  7. Validation and Testing – Institutions should routinely use rigorous methods to validate their models and document those methods and results, and are encouraged to make the results of such tests public (a minimal sketch of principles 6 and 7 follows this list).
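Principles 6 and 7 translate directly into engineering practice. As a minimal sketch in Python, an institution might append every algorithmic decision to a log together with the model version, inputs, output, timestamp and a content hash, so that the decision can be inspected later if harm is suspected. The schema and the credit-scoring example are hypothetical illustrations, not part of the ACM statement.

    import hashlib
    import json
    from datetime import datetime, timezone

    def audit_record(model_version: str, inputs: dict, output, log_path: str = "decisions.log") -> str:
        """Append one audit record per algorithmic decision; return its content hash."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        # Content hash of the canonical form; storing or chaining these hashes
        # separately would make the log tamper-evident.
        payload = json.dumps(record, sort_keys=True)
        record["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return record["sha256"]

    # Example: recording a hypothetical credit-scoring decision for later audit.
    audit_record("credit-scorer-1.3", {"income": 42000, "postcode": "N1"}, {"approved": False})

Replaying the logged inputs through the recorded model version is then the basis of the routine validation and testing that principle 7 calls for.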

In this era of Artificial Intelligence (AI), ‘algorithms’ are rapidly emerging in law as artificial persons. Already, algorithmic trading systems (Treleaven et al., 2013) account for 70-80% of US equity trades. Apple, Google and Amazon provide ‘intelligent’ virtual assistants and chatbots (Virtual assistant, 2017), such as Apple Siri and Google Assistant, and ‘smart’ devices such as Amazon Echo, Google Home and Apple HomePod, all of which interact through speech. Numerous financial firms provide financial ‘Robo’ investment advisors (Robo-advisor, 2017). Baidu has a medical-assistant advisor currently running in China (Baidu Research, 2017). And Google, Uber, Tesla and most car manufacturers are working on autonomous vehicles (Autonomous car, 2017). In response, governments and regulators are modifying national laws to encourage innovation, with lawyers and insurers scrambling to absorb the implications.