
Algorithms, Artificial Intelligence, and the Law

by Lord Sales, Justice of the Supreme Court of the United Kingdom

Vol. 105 No. 1 (2021) | The Courts Held

Much attention is paid to our brave new world wrought by algorithms and artificial intelligence, one in which many societal functions are accelerated and made more efficient — and more impersonal. Not enough attention is being paid to how legal doctrine should adapt to accommodate this new world — and how quickly it must do so.

This topic is a huge one. But it is so important that I think lawyers generally — and that includes judges — should be trying to think through the issues that are already with us and those that are coming very fast down the track toward us.

At least in some technologies, there is some human agency in the background, guiding processes through admittedly complex computer programming or perhaps evaluating the results an algorithm produces. But how should legal doctrine adapt to processes governed without human agency, by artificial intelligence — that is, by autonomous computers generating their own solutions, free from any direct human control? We need to think now about the implications of making human lives subject to these processes, for fear of the frog-in-hot-water effect. We, like the frog, sit pleasantly immersed in warm water with our lives made easier in various ways by information technology. But the water, little by little, gets imperceptibly hotter and hotter, until we find we have gone past a crisis point and our lives have changed irrevocably, in ways outside our control and for the worse, without us even noticing. The water becomes boiling and the frog is dead.1

Often there is no one to blame. As James Williams points out in his book Stand Out of Our Light:

At “fault” are more often the emergent dynamics of complex multiagent systems rather than the internal decision-making dynamics of a single individual. As W. Edwards Deming said, “A bad system will beat a good person every time.”2

This aspect of the digital world and its effects poses particular problems for legal analysis.

For purposes of this discussion, I draw upon the distinction between algorithmic analysis, on the one hand, and artificial intelligence on the other. The tech world defines an algorithm as simply an automated instruction, or a “coded recipe that gets executed when it encounters a trigger.” It can be as simple as a mere “if, then” statement. AI, by contrast, is a configuration of algorithms that can self-modify and create new algorithms “in response to learned inputs and data as opposed to relying solely on the inputs it was designed to recognize as triggers.” That capacity to “change, adapt and grow based on new data” is AI.3 The main substance of my article is directed to algorithmic analysis. But many of my comments apply also to artificial intelligence more broadly.

Algorithms and AI

An algorithm is a process or set of rules to be followed in problem solving. It is a structured process. It proceeds in logical steps. This is the essence of processes programmed into computers. They perform functions in logical sequence. Computers are transformational in so many areas because they are mechanically able to perform these functions at great speed and in relation to huge amounts of data, well beyond what is practicable or even possible for human beings. They give rise to a form of power that raises new challenges for the law, in its traditional roles of defining and regulating rights and of finding controls for illegitimate or inappropriate exercise of power. At the same time, alongside its duty to control abuse of power and abuse of rights, law has a function to provide a framework in which this new power can be deployed and used effectively for socially valuable purposes. In that sense, law should “go with the flow” and channel this power, rather than merely resist it.
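By way of a concrete (and deliberately trivial) illustration, the “coded recipe” idea mentioned above can be written out in a few lines of code. The following sketch, in Python with invented names, is offered purely as an example of a fixed sequence of logical steps, executed identically every time its trigger occurs:

    # A plain algorithm: a fixed "if, then" recipe executed in logical steps
    # whenever its trigger occurs. It never varies from run to run.
    def overdraft_alert(balance):
        if balance < 0:              # the trigger
            return "send alert"      # the coded response
        return "do nothing"

    # Computers add nothing conceptually here; their power lies in executing
    # such steps at great speed over huge amounts of data.
    print(overdraft_alert(-12.50))   # "send alert"
    print(overdraft_alert(100.00))   # "do nothing"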

The potential efficiency gains are huge, across private commercial activity and governmental, legislative, and judicial activity. Information technology provides platforms for increased connectivity and speed of transacting. So-called smart contracts are devised to allow self-regulation by algorithms, in order to reduce the costs of contracting and of policing the agreement. Distributed ledger technology, such as blockchain, can create secure property and contractual rights with much reduced transaction costs and reduced need for reliance on state enforcement.4 Fintech is being devised to allow machines to assess credit risks and insurance risks at a fraction of the cost of performing such exercises by human agents.5 In this way, access to credit and to insurance can be greatly expanded, enhancing human capacity to take action to create prosperity and protect against risk.

The use of digital solutions to deliver public welfare assistance offers the prospect of greatly reduced cost of administration as well as, in theory, the possibility of diverting the savings into more generous benefits. It also offers the potential to tailor delivery of assistance in a more fine-grained way in order to deliver resources to those who need them most. The emergence of online courts through use of information technology offers the potential to improve access to justice and greatly reduce the time and cost taken to achieve resolution of disputes.

More widely, people increasingly live their lives in fundamentally important ways online, via digital platforms. They find it convenient, and then increasingly necessary, to shop online, access vital services online, and to express themselves and connect with other humans online.

Artificial intelligence is part of this brave new world. It stands at a stage beyond mere algorithmic analytical processes. I use the term as a shorthand for self-directed and self-adaptive computer activity. It arises, for instance, where computer systems perform more complex tasks that previously required human intelligence and the application of on-the-spot judgment, such as driving a car. In some cases, AI involves machine learning, whereby an algorithm optimizes its responses through experience as embodied in large amounts of data, with limited or no human interference.6 AI can involve machines that are capable of analyzing situations to learn for themselves and then generating answers that may not even be foreseen or controlled by their programmers. AI arises from algorithmic programming but, due to the complexity of the processes it carries out, the outcome of the programming cannot always be predicted by humans, however well informed they may be. Here, the machine itself seems to be interposed between any human agency and what it, the machine, does.
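To illustrate what optimizing responses through experience means in the simplest possible case, here is a toy example (invented data, illustrative only): a one-parameter model fitted by gradient descent. The programmer writes the learning procedure, but the rule that finally operates is determined by the data, which is why even a well-informed human cannot always predict the outcome in advance.

    # Toy machine learning: fit y ≈ w * x by gradient descent.
    # The programmer specifies how to learn, not the final rule; the value
    # of w that emerges depends on the training data, not on any line of code.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, output) pairs

    w = 0.0                       # initial parameter
    learning_rate = 0.01
    for _ in range(1000):         # optimize the response through "experience"
        for x, y in data:
            error = w * x - y
            w -= learning_rate * error * x   # gradient step

    print(f"learned parameter: {w:.3f}")     # close to 2.0, discovered rather than coded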

Agency, in the sense of intelligence-directed activity performed for reasons, is fundamental to legal thought. For legal regulation of this sort of machine activity, we need to think not just of control of power, but also of how agency should be conceptualized. Should we move to ascribe legal personality to machines? And perhaps use ideas of vicarious liability? Or should we stick with human agency, but work with ideas of agency regarding risk creation, on a tort model, rather than direct correspondence between human thought and output in the form of specific actions intended by a specific human agent?

Underlying all these challenges is a series of interconnected problems regarding: (i) the lack of knowledge, understanding, and expertise on the part of lawyers (I speak for myself, but I am not alone), and on the part of society generally; (ii) unwillingness on the part of programming entities, mainly for commercial reasons, to disclose the program coding they have used, so that even with technical expertise it is difficult to dissect what has happened and is happening; and (iii) a certain rigidity at the point of the interaction of coding and law, or rather where coding takes the place of law.

These problems play out in a world in which machine processing is increasingly pervasive, infiltrating all aspects of our lives; intangible, located in functions away in the cloud rather than in physical machines sitting on our desks; and global, unbound by geographical and territorial jurisdictional boundaries. All these features of the digital world pose further problems for conventional legal approaches.

Law is itself a sort of algorithmic discipline: If factors A, B, and C are present, then by a process of logical steps legal response Z should occur. Apart from deliberate legislative change, legal development has generally resulted from minor shifts in legal responses. These responses take place to accommodate background moral perspectives on a case, perspectives which themselves may be changing over time. With algorithms in law, as applied by humans, this evolution happens naturally in the context of implementation of the law. But algorithms in computer code are not in themselves open to this kind of change in the course of implementation.
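The contrast can be shown in miniature. In the sketch below (an invented example, not a real legal rule), law’s “algorithm” is rendered as code: the qualifying factors and the response are fixed, and there is no input through which shifting moral perspectives or unforeseen circumstances could modify the outcome at the point of application.

    # A legal rule as a coding algorithm: if factors A, B, and C are present,
    # legal response Z follows. Nothing outside A, B, and C can enter the decision.
    def legal_response(factor_a, factor_b, factor_c):
        if factor_a and factor_b and factor_c:
            return "Z"               # the stipulated response, without exception
        return "no liability"

    # A human official applying the same rule might bend it in a hard case to
    # reflect its underlying purpose; this function offers no path for doing so.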

Richard Susskind brought this home to me with an analogy from the card game Patience. It has set rules, but a human playing with cards can choose not to follow them. There is space to try out changes. But when playing Patience in a computer version, it is simply not possible to make a move outside the rules of the game.7 Similarly, coding algorithms create a danger of freezing particular relationships in set configurations with set distributions of power, which seem to be natural and beyond any question of contestation. The wider perceptual control that is noticeable as our world becomes increasingly digital also tends to freeze categories of thought along tramlines written in code.8 Unless resisted, this can limit imagination and inspiration even for legislative responses to digitization.

All this erodes human capacities to question and change power relations.9 Coding will reflect the unspoken biases of the human coders, in ways that seem beyond challenge. Moreover, coding algorithms are closed systems. As written, they may not capture everything of potential significance for the resolution of a human problem. With the human application of law, the open-textured nature of ideas like justice and fairness creates the possibility for immanent critique of the rules being applied and leaves room for wider values not explicitly encapsulated in law’s algorithm to enter the equation leading to a final outcome. That is true not just for the rules of the common law, but in the interstices of statutory interpretation.10 These features are squeezed out when using computer coding. There is a disconnect between the understanding available in the human application of a legal algorithm and the understanding embodied in the coding algorithm in the machine.

This rigidity enters at the point of the intersection of law and coding. It is a machine variant of the old problem of law laid down in advance, identified by Aristotle: The legislator cannot predict all future circumstances in which the stipulated law will come to be applied, and so cannot ensure that the law will always conform to its underlying rationale and justification at the point of its application. His solution was to call for a form of flexibility at the point of application of the law, which he called epieikeia (usually translated as equity), to keep the law aligned with its rationale while it is being applied and enforced.11

A coding algorithm, like law, is a rule laid down in advance to govern a future situation. However, equity or rule modification or adjustment in the application of law is far harder to achieve in a coding algorithm under current conditions.

It may be that at some point in the future, AI systems, at a stage well beyond simple algorithmic systems, will be developed with a fine-grained sensitivity to rule application to allow machines to take account of equity informed by relevant background moral, human rights, and constitutional considerations. Machines may well develop to a stage at which they can recognize hard cases within the system and operate a system of triage to refer those cases to human administrators or judges, or indeed decide the cases themselves to the standard achievable by human judges today.12 Application of rules of equity or recognition of hard cases, where different moral and legal considerations clash, is ultimately dependent on pattern recognition, which AI is likely to be able to handle.13 But we are not there yet.
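A sketch of how such triage might be structured is set out below (my own illustrative assumption, in Python, not a description of any existing system): routine cases are decided automatically, and any case in which the machine’s confidence falls below a fixed floor is referred to a human. The genuinely hard part, which we cannot yet code, is the pattern recognition that would make the confidence measure reliable in hard cases.

    # Triage sketch: decide routine cases automatically, refer hard ones.
    # "Hardness" is proxied here by low model confidence, a crude stand-in
    # for the kind of pattern recognition the text contemplates.
    CONFIDENCE_FLOOR = 0.85

    def triage(case_id, decision, confidence):
        if confidence >= CONFIDENCE_FLOOR:
            return f"case {case_id}: decided automatically ({decision})"
        return f"case {case_id}: referred to a human administrator or judge"

    print(triage("A-101", "grant benefit", confidence=0.97))
    print(triage("A-102", "refuse benefit", confidence=0.61))  # a hard case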

As things stand, using the far more crude forms of algorithmic coding that we do, there is a danger of losing a sense of code as something malleable, changeable, potentially flawed, and requiring correction. Subjecting human life to processes governed by code means that code can gain a grip on our thinking, which reduces human capacities and diminishes political choice.

Preventing Technocracy

This effect of the rigid or frozen aspect of coding is amplified by the other two elements to which I call attention: (i) ignorance among lawyers and in society generally about coding and its limitations and capacity for error; and (ii) secrecy surrounding coding that is actually being used. The impact of the latter is amplified by the willingness of governments to outsource the design and implementation of systems for delivery of public services to large tech companies, on the footing that they have the requisite coding skills.

Philip Alston, United Nations (UN) Special Rapporteur on Extreme Poverty and Human Rights, recently presented a report on digital welfare systems to the UN General Assembly.14 He identifies two pervasive problems. First, governments are reluctant to regulate tech firms, for fear of stifling innovation. Second, the private sector is resistant to taking human rights systematically into account in designing their systems.

Alston refers to a speech by UK Prime Minister Boris Johnson to the UN General Assembly on Sept. 24, 2019, in which he warned that we are slipping into a world characterized by round-the-clock surveillance, the perils of algorithmic decision-making, the difficulty of appealing against computer determinations, and the inability to plead extenuating circumstances against an algorithmic decision-maker. In this world, the power of the public to criticize and control the systems that are put in place to undertake vital activities in both the private and the public sphere is eroded by the lack of understanding and access to relevant information. Democratic control of law and the public sphere is being lost.

David Runciman argues in How Democracy Ends15 that the appeal of modern democracy has been founded on a combination of, first, providing mechanisms for individuals to have their voice taken into account, thereby being afforded respect in the public sphere; and, second, its capacity to deliver long-term benefits in the form of a chance to share in stability, prosperity, and peace. But, he says, the problem for democracy in the 21st century is that these two elements are splitting apart. Effective solutions to shared problems depend more and more on technical expertise, so that there has been a movement to technocracy, or rule by technocrats using expertise that is not available or comprehensible to the public at large. The dominance of economic and public life by algorithmic coding and AI is fueling this shift as it changes the traditional, familiar ways of aligning power with human interests through democratic control by citizens, regulation by government, and competition in markets.

At the same time, looking from the other end of the telescope, from the point of view of the individual receiving or seeking access to services, one might have a sense of being subjected to power that is fixed and remorseless16 — an infernal machine over which one has no control, and which is immune to any challenge or appeal to consider extenuating circumstances, or to any plea for mercy. For access to digital platforms and digital services in the private sphere, the business model is usually take it or leave it: Accept access to digital platforms on their terms requiring access to your data and on their very extensive contract terms excluding their legal responsibility, or be barred from participating in an increasingly pervasive aspect of the human world. This may be experienced as no real choice at all. The movement begins to look like a reversal of Sir Henry Maine’s famous progression from status to contract. We seem to be going back to status again.

Meanwhile, access to public services is being depersonalized. The individual seems powerless in the face of machine systems and loses all dignity in being subjected to their control. The movement here threatens to be from citizen to consumer and then on to serf.

Malcolm Bull argues in On Mercy17 that it is mercy rather than justice that is foundational for politics. Mercy, as a concession by the powerful to the vulnerable, makes rule by the powerful more acceptable to those on the receiving end and hence more stable. In a few suggestive pages at the end of the book, under the heading “Robotic Politics,” Bull argues that as the world is increasingly dominated by AI, we humans become vulnerable to power outside our knowledge and control; therefore, he says, we should program into the machines a capacity for mercy.18

The republican response to the danger of power and domination, namely of arming citizens with individual rights, will still be valuable. But it will not be enough if the asymmetries of knowledge and power are so great that citizens are in practice unable to deploy their rights effectively. So what we need to look for are ways of trying to close the gap between democratic, public control and technical expertise to meet the problem identified by David Runciman; ways of trying to build into our digital systems a capacity for mercy, responsiveness to human need, and equity in the application of rules to meet the problem identified by Malcolm Bull; and ways of fashioning rights that are both effective and suitable to protect the human interests that are under threat in this new world.

We are not at a stage to meet Malcolm Bull’s challenge, and rights regimes will not be adequate. People are not being protected by the machines and often are not capable of taking effective action to protect themselves. Therefore, we need to create laws that require those who design and operate algorithmic and AI systems to consider and protect the interests of people who are subject to those systems.

Evaluating Technical Systems

Because digital processes are more fixed in their operation than the human algorithms of law and operate with immense speed at the point of application of rules, we need to focus on ways of scrutinizing and questioning the content of digital systems at the ex ante design stage. We also need to find effective mechanisms to allow for systematic ex post review of how digital systems are working and — without destroying the efficiency gains they offer — for ex post challenges to individual concrete decisions to correct legal errors and ensure equity and mercy.

Precisely because algorithmic systems are so important in the delivery of commercial and public services, they must incorporate human values and protections for fundamental human interests.19 For example, systems need to be checked for biases based on gender, sexuality, class, age, and ability. As Jamie Susskind observes in Future Politics,20 progress is being made toward developing principles of algorithmic audit. On Feb. 12, 2019, the European Parliament adopted a resolution declaring that “algorithms in decision-making systems should not be deployed without a prior algorithmic impact assessment . . . .”21
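At its simplest, part of such an audit might look like the following sketch (invented data and thresholds, by way of illustration only): comparing a system’s favourable-outcome rates across groups and flagging a large disparity for closer scrutiny.

    # A minimal bias check: compare favourable-outcome rates across groups.
    # "decisions" stands in for the output of the system under audit.
    decisions = [
        ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
        ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
    ]

    def favourable_rate(group):
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    rate_x = favourable_rate("group_x")          # 0.75
    rate_y = favourable_rate("group_y")          # 0.25
    ratio = min(rate_x, rate_y) / max(rate_x, rate_y)
    print(f"outcome rates {rate_x:.2f} vs {rate_y:.2f}; impact ratio {ratio:.2f}")
    # A ratio well below 1.0 would flag the system for closer human review;
    # a fuller audit would also probe proxies, error rates, and data provenance.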

The question then arises, how should we provide for ex ante review of code in the public interest? One idea is to follow the European Parliament’s resolution that a government department intending to deploy an algorithmic program should conduct an impact assessment, much as it does now for environmental and equality impacts when introducing policy. But government may not have the technical capability to do this well, particularly when one bears in mind that it may have contracted out the coding and design of the system on the grounds that the relevant expertise lies in the private sector. Moreover, those in the legislature who are supposed to be scrutinizing what the government does are unlikely to have the necessary technical expertise either. Further, it might also be said that provision needs to be made for impact assessment of major programs introduced in the private sector, where, again, government is unlikely to have the requisite expert capability. Because of lack of information and expertise, the public cannot be expected to perform their usual general policing function in relation to service providers.

There seems to be a strong argument for creating a new agency to scrutinize AI programs from the perspective of the public interest, which would constitute a public resource for government, legislators, the courts, and the public generally. It would be an expert commission staffed by coding technicians, with lawyers and ethicists to assist them. The commission could be given access to commercially sensitive code on strict condition that its confidentiality is protected. However, it would invite representations from interested persons and groups in civil society and, to the fullest extent possible, it would publish reports from its reviews, to provide transparency in relation to the digital processes.

Perhaps current forms of pre-legislative scrutiny of Acts of Parliament here in the UK offer the beginnings of an appropriate model. For example, the Joint Committee on Human Rights scrutinizes draft legislation for its compatibility with human rights and reports back to Parliament on any problems. But those introducing algorithmic systems are widely dispersed in society and across the globe, so one would need some form of trawling mechanism to ensure that important algorithms are available for scrutiny by the commission. That is by no means straightforward. The emphasis may have to be more on ex post testing and audit checking of private systems after deployment. Also, it cannot be emphasized too strongly that society must be prepared to devote the resources and expertise to perform this scrutiny to a proper standard. It will not be cheap. But the impact of algorithms on our lives is so great that I suggest the likely cost will be proportionate to the risks which this will protect us against.

There should also be scope for legal challenges to be brought regarding the adoption of algorithmic programs, including at the ex ante stage. In fact, this seems to be happening already.22 This is really no more than an extension of the well-established jurisprudence on challenges to adoption of policies which are unlawful23 and is in line with recent decisions on unfairness challenges to entire administrative systems.24 However, the extension will have procedural consequences. The claimant will need to secure disclosure of the coding in issue. If it is commercially sensitive, the court might have to impose confidentiality rings, as happens in intellectual property and competition cases. And the court will have to be educated by means of expert evidence, which on current adversarial models means experts on each side with live evidence tested by cross examination. This will be expensive and time consuming, in ways that feel alien in a judicial review context. I see no easy way round this, unless we create some system whereby the court can refer the code for neutral expert evaluation by an algorithm commission or an independently appointed expert.

The ex ante measures should operate in conjunction with ex post measures. How well a program is working and the practical effects it is having may only emerge after a period of operation. There should be scope for a systematic review of results as a check after a set time, to see if the program needs adjustment.

More difficult is to find a way to integrate ways of challenging individual decisions taken by government programs as they occur while preserving the speed and efficiency that such programs offer. It will not be possible to have judicial review in every case. I make two suggestions. First, it may be possible to design systems whereby if a service user is dissatisfied, they can refer the decision to a more detailed assessment level — a sort of “advanced search option,” which would take a lot more time for the applicant to complete but might allow for more fine-grained scrutiny. Second, the courts and litigants, perhaps in conjunction with an algorithm commission, could become more proactive in identifying cases that raise systemic issues and marshalling them together in a composite procedure, by using pilot cases or group litigation techniques.

The creation of an algorithm commission would be part of a strategy for meeting the first and second challenges I mentioned — (i) lack of technical knowledge in society and (ii) preservation of commercial secrecy in relation to code. The commission would have the technical expertise and all the knowledge necessary to be able to interrogate specific coding designed for specific functions. I suggest it could provide a vital social resource to restore agency for public institutions — to government, legislators, the courts, and civil society — by supplying the expert understanding required for effective lawmaking, guidance, and control in relation to digital systems. It would also be a way of addressing the third challenge — (iii) rigidity in the interface between law and code — because the commission would include experts who understand and can constantly remind government, legislators, and the courts about the fallibility and malleability of code. Already, models exist in academia and civil society that bring together tech experts and ethicists.25 Contributions from civil society are valuable, but they are not sufficient. The issues are so large, and the penetration of coding into the life of society is so great, that the resources of the state should be brought to bear on this as well.

In addition to being an informational resource, one could conceive of the commission as a sort of independent regulator, on the model of regulators of utilities. It would ensure that critical coding services were made available to all and that services made available to the public meet relevant standards.

More ambitiously, perhaps, we should think of it as a sort of constitutional court. There is an analogy with control and structuring of society through law. Courts deal with law, and constitutional courts deal with deeper structures of the law that provide a principled framework for the political and public sphere. The commission would police baseline principles that structure coding and ensure compliance with standards on human rights. One could even imagine a two-way reference procedure between the commission and the courts (when the commission identifies a human rights issue on which it requires guidance) and between the courts and the commission (when the courts identify a coding issue on which they require assistance).

The commission would pose its own dangers arising from an expert elite monitoring an expert elite. To some degree there is no escape from this. The point of the commission is to have experts do on behalf of society what society cannot do itself. The dangers could be mitigated by making the commission’s procedures and its reports as transparent and open as possible.

A further project for the law is to devise an appropriate structure of individual rights, to give people more control over their digital lives and enhance individual agency. One model is proposed by the 5Rights Foundation,26 which calls for five rights to enable a child to enjoy a respectful and supportive relationship with the digital environment: i) the right to remove data they have posted online, ii) the right to know who is holding and profiting from their information and how it is being used, iii) the right to safety and support if confronted by troubling or upsetting scenarios online, iv) the right to informed and conscious use of technology, and v) the right to digital literacy. These need to be debated at a legislative level. Such a rights regime could usefully be extended to adults as well.

In view of the global nature of the digital world, there also has to be a drive for cooperation in setting international standards. Several initiatives are being taken in this area by international organizations. An algorithm commission could be an important resource for an international effort and, if done well, could give the United Kingdom (UK) significant influence in this process.27 Following through on these initiatives might allow for important national standards and values to be better respected in any international rules or dominant technologies. This could provide some counterbalance to the existing geographic bias in the production of digital technologies. Over the years 2013–16, between 70 and 100 percent of the top 25 cutting-edge digital technologies were developed in just five countries: China, Taiwan, Japan, South Korea, and the USA.28

All this is to try to recover human agency and a sense that technology is a tool to improve things, not to rule us. Knowledge really is power in this area. We need to find a way of making the relevant technical knowledge available in the public domain, to civil society, government, courts, and legislatures. Coding is structuring our lives more and more. No longer are the material conditions of nature, molded by industrial society, the main grounding of our existence. Law has been able to operate effectively as a management tool for that world. But now coding is becoming as important as nature in forming the material grounds of our existence.29 It is devised and manipulated by humans and will reflect their own prejudices and interests. Its direction and content are inevitably political issues.30 We need to find effective ways to manage this dimension of our lives collectively, in the interests of all.

Adapting Law

Legal doctrine may have to adapt in the increasingly digital age. Such are the demands of bringing expertise and technical knowledge to bear that it is not realistic to expect the common law, with its limited capacity to change and the slow pace at which it does so, to play a major role.31 It may assist with adaptation in the margins. But the speed of change is so great and the expertise that needs to be engaged is of such a technical nature that the main response must come in legislative form. What is more, the permeability of national borders to the flow of digital technologies is so great that there will have to be international cooperation to provide common legal standards and effective cross-border regulation.

The Challenges of an Algorithmic World

In the space available I offer some thoughts at a very high level of generality in relation to three areas: (1) commercial activity; (2) delivery of public services; and (3) the political sphere.

(1) Commercial Activity. First, there is the attempt to use digital and encryption solutions to create virtual currencies free from state control. However, as Karen Yeung observes, points of contact between these currency regimes and national jurisdictions will continue to exist. The state will not simply retreat from legal control. There will still need to be elements of state regulation in relation to the risks they represent. She maps out three potential forms of engagement, which she characterizes as (a) hostile evasion (or cat and mouse), (b) efficient alignment (or the joys of [patriarchal] marriage), and (c) supporting novel forms of peer-to-peer coordination to reduce transactional friction associated with the legal process (or uneasy coexistence).32

Second, there is the loss of individuals’ control over contracting and the related issue of accessibility to digital platforms. Online contracting has taken old concerns about boilerplate clauses to new extremes. To access digital tools, one has to click to accept terms that are extremely long and rarely read. Margaret Radin has written about the deformation of contract in the information society33 and describes what she calls “massively distributed boilerplate” removing ordinary remedial rights. She argues for a new way of looking at the problem, involving a shift from contract to tort, via a law of misleading or deceptive disclosure. A service provider would be liable for departures from reasonable expectations that are insufficiently signalled to the consumer.

The information and power asymmetries in the digital world are so great that we need a coherent strategic response along a spectrum: from competition law at the macro level, to protect against abuse of dominant positions;34 to rights of fair access to digital platforms; to extended notions of fiduciary obligation in the conduct of relationships35 and an expansion of doctrines of abuse of rights, which in the UK currently exist only in small pockets of the common law36 and statute;37 to control of unfair terms and rebalancing of rights at the micro level of individual contracts.

Third, intellectual property has grown in importance, and will continue to, as economic value shifts ever more to services and intangibles. On one hand, a major project is likely to be the conceptual development of the idea that one’s personal data is one’s own property, which ought to be portable for one’s benefit and over which one has rights to control its commercial exploitation. On the other hand, the veto rights created by intellectual property are likely to become qualified, so as not to impede the interconnected and global nature of the digital world. Intellectual property rights may be subjected to regimes that allow them to be overridden or bought out in return for a fair payment to the property owner. They may become points creating rights of fair return to encourage innovation as economic life flows through and round them, as has happened with patent rights under so-called FRAND regimes. In these regimes, as the price of being part of global operating standards, patent holders give irrevocable unilateral undertakings for the producers and consumers of tech products to use their patents on payment of a fee that is fair, reasonable, and nondiscriminatory.38 It is possible that these sorts of solutions may come to be imposed by law by states operating pursuant to international agreements.

The fourth topic is the use of digital techniques to reduce transaction costs in policing of contracts, through smart contracts that are self-executing without human intervention. An example: If payment for a service delivered and installed on a computer fails to register on time, the computer shuts off the service. Smart contracts will become more sophisticated. They will create substantial efficiencies. But sometimes they will malfunction, and legal doctrine will need to adapt to that in ways that are supportive of the technology and of what the parties seek to do. A recent decision in Singapore, B2C2 Ltd v Quoine Pte Ltd,39 provides an arresting illustration. A glitch arising from the interaction between a currency trader’s algorithmic trading program and a trading platform’s program resulted in automatic trades purchasing currency at about 1/250th of its true value, thereby realizing a huge profit for the trader. The trading platform was not permitted to unravel these trades. Defenses based on implication of contract terms, mistake, and unjust enrichment all failed. The judge at first instance had to make sense of the concept of mistake in contract when two computer programs trade with each other. He did so by looking at the minds and expectations of the programmers, even though they were not involved in the trades themselves.40 The majority in the Court of Appeal followed this approach.41 But in the future, the programs may become so sophisticated and operate so independently that this process of looking back through the programs to the minds of those who created them will seem completely illogical. Legal doctrine is going to have to adapt to this new world.
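The self-executing character of the payment example above can be imitated in ordinary code (a toy simulation with invented names, not a real smart-contract platform): the contractual term is the program, and it enforces itself with no room for notice, negotiation, or mercy at the point of breach.

    import datetime
    from typing import Optional

    # Toy "smart contract": the service continues only while payment registers
    # on time. Enforcement is automatic; no human intervenes at the moment of
    # breach, which is precisely where questions of equity would otherwise arise.
    class ServiceContract:
        def __init__(self, due):
            self.due = due
            self.active = True

        def record_payment(self, paid_on: Optional[datetime.date]):
            # The term "pay by the due date or lose the service" is code: it
            # executes without discretion or any plea of extenuating circumstances.
            if paid_on is None or paid_on > self.due:
                self.active = False          # service shut off

    contract = ServiceContract(due=datetime.date(2021, 3, 1))
    contract.record_payment(paid_on=datetime.date(2021, 3, 2))   # one day late
    print("service active:", contract.active)                    # False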

If human will drops out of the picture in trading, there may have to be a move away from the fetishization of consent as the basic justification for contracts. Benefit-based and reliance-based grounds of obligation may become more important.42 Aspects of contract law that were based on older ideas of fair exchange, and which were pushed to the margins of contract doctrine during the 19th and 20th centuries as that doctrine focused on consent and freedom of contract, may crowd back in. Ideas of consent will still play a significant role, but with a widening margin where fair and reasonable standards of economic exchange may come to govern. Judged by such standards, it must be open to question whether the contract in the B2C2 Ltd case would be upheld.

(2) Public Administration, Welfare, and the Justice System. Digital government has the potential for huge efficiency savings in the delivery of public services and provision of social welfare. But it carries substantial risks as well, in terms of enhancement of state power in relation to the individual, loss of responsiveness to individual circumstances, and the potential to undermine important values that the state should be striving to uphold, such as human dignity and basic human rights, including rights of privacy and fair determination of civil rights and obligations. Philip Alston writes in his report of the “grave risk of stumbling zombie-like into a digital welfare dystopia” in Western countries. He argues that we should take human rights seriously and regulate accordingly; ensure the legality and transparency of processes; promote digital equality; protect economic and social rights in the digital welfare state, as well as civil and political rights; and resist the idea that a digital-only future is inevitable.43

Legal scholars Carol Harlow and Richard Rawlings emphasize that the implications of the emergent digital revolution for the delivery of public services are likely in the near future to pose a central challenge for administrative law.44 Procedures, such as those that allow for transparency, accountability, and participation, are a repository for important values of good governance in administrative law.45 But it is administrative procedures that are coming under pressure with the digitization of government services. The speed of decision-making in digital systems will tend to require the diversion of legal control and judicial review away from the individual decision toward the coding of the systems and their overall design.

Similarly, online courts offer opportunities for enhanced efficiency in the delivery of public services in the form of the justice system, allowing enhanced understanding of rights for individuals and enhanced and affordable access to justice. But the new systems have to allow space for the procedural values at the heart of a fair and properly responsive system of justice.46

(3) The Interface With Politics and Democracy. A number of points should be made here. The tech world clearly places our democracy under pressure. Law is both the product of democracy, in the form of statutes passed by legislators, and a foundation of democracy, in the form of creating a platform of protected rights and capacities that legitimizes our democratic procedures and enables them to function to give effect to the general will.47 I have already mentioned the dilemma identified by David Runciman, namely the problem of disconnection between democracy and technical control in a public space dominated by code.48 There are plainly other strains as well. Here, I am going to call attention to four. Space does not allow me to explore solutions in any detail. As a society we are going to have to be imaginative about how we address them. The task is an urgent one.

First, we are witnessing a fracturing of the public sphere. Democracy of the kind with which we were familiar in the 20th century was effective because lawmakers worked in the context of a communal space for debating issues in the national press, television, and radio, which generated broad consensus around fundamental values and what could be regarded as fact. Jürgen Habermas, for example, gave an attractive normative account of democracy according to which legislation could be regarded as the product of an extended process of gestation of public opinion through debate in the communal space, which then informed the political and ultimately legislative process and was put into refined and concrete statutory form by that process.49 But information technology allows people to retreat from that communal space into highly particularistic echo-chamber siloes of like-minded individuals, who reinforce each other’s views and never have to engage or compromise with the conflicting views of others. What previously could be regarded as commonly accepted facts are denounced as fake news, so the common basis for discussion of the world is at risk of collapse. In elections, detailed information about individuals can be harvested by computing platforms, allowing voters to be targeted by messaging directed to their own particular predilections and prejudices, without the need to appeal to other points of view at the same time. We need to find ways of reconstituting a common public space.

Second, Jamie Susskind points out that the most immediate political beneficiaries of the ongoing tech revolution will be the state and big tech firms:

The state will gain a supercharged ability to enforce the law, and certain powerful tech firms will be able to define the limits of our liberty, determine the health of our democracy, and decide vital questions of social justice.50

There is already concern about the totalitarian possibilities of state control as illustrated by China’s social credit system, in which computers monitor the social behaviour of citizens in minute detail and benefits are awarded or withheld according to how people are marked by the state. But Susskind argues that digital tech also opens up possibilities for new forms of democracy and citizen engagement, and that to protect people from servitude we need to exploit these new avenues to keep the power of the supercharged state in check.51 In relation to the tech companies, he argues for regulation to ensure transparency and structural regulation to break up massive concentrations of power. Structural regulation would be aimed at ensuring individual liberty and that the power of the tech companies does not go unchecked.52

Third, James Williams, in Stand Out of Our Light,53 identifies a further subtle threat to democracy arising from the pervasiveness of information technology and the incessant claims that it makes on our attention. According to him, the digital economy is based on the commercial effort to capture our attention. In what he calls the “Age of Attention,” information abundance produces attention scarcity. At risk is not just our attention, but our capacity to think deeply and dispassionately about issues and hence even to form what can be regarded as a coherent will in relation to action. He points out that the will is the source of the authority of democracy. He observes that as the digital attention economy compromises human will, it strikes “at the very foundations of democracy” and may “directly threaten not only individual freedom and autonomy, but also our collective ability to pursue any politics worth having.”54

He argues that we must reject “the present regime of attentional serfdom” and instead “re-engineer our world so that we can give attention to what matters.”55 That is a big and difficult project. As Williams says, the issue is one of self-regulation, at both individual and collective levels.56 It seems that law will need to support this effort in some way, perhaps through some form of public regulation. We have made the first steps to try to fight another crisis of self-regulation, obesity, through supportive public regulation. Similarly, in relation to the digital world, as Williams points out, it is not realistic to expect people to “bear the burdens of impossible self-regulation, to suddenly become superhuman and take on the armies of industrialized persuasion.”57 But at the moment, it is unclear how public regulation would work and whether there would be the political will to impose it.

Fourth, the law has an important role to play in protecting the private sphere in which individuals live their lives and in regulating surveillance. For example, the case law of the European Court of Human Rights58 and of the UK’s Investigatory Powers Tribunal59 sets conditions for the exercise of surveillance powers by intelligence agencies and provides an effective way of monitoring such exercise.

The Challenges of Artificial Intelligence

Some of the challenges to legal doctrine in relation to AI will be extrapolations from those in relation to algorithmic programming. But some will be different in kind. At the root of these is the interposition of the agency of machines between human agents and events that have legal consequences. A much-discussed example is that of a driverless car that has an accident.

Existing legal doctrine suggests possible analogies on which a coherent legal regime might be based. The merits and demerits of each have to be compared and evaluated before final decisions are made. We should be trying to think this through now. There is already a burgeoning academic literature in this area, engaging with fundamental legal ideas. Legislation at the EU level is beginning to come under consideration, stemming from a European Parliament Resolution and Report in January 2017.60 On the issue of liability for the acts of robots and other AIs, the resolution proposes61 establishing a compulsory insurance scheme, a compensation fund, and, in the case of sophisticated AIs, “a specific legal status for robots in the long run.”

In one approach,62 sophisticated AIs with physical manifestations, such as self-driving cars, could be given legal personhood, like a company.63 However, types of AI differ considerably, and a one-size-fits-all approach is unlikely to be appropriate.64 It may be necessary to distinguish between ordinary software used in appliances, for which a straightforward product liability approach is appropriate, and that used in complex AI products.65

A contrary approach is to maintain the traditional paradigm of treating even sophisticated AIs as mere products for liability purposes.66 A middle way has also been proposed, in which some but not all AIs might be given separate legal personality, depending on their degree of autonomous functionality and social need,67 but personality may be denied “[i]f the practical and legal responsibility associated with actions can be traced back to a legal person.”68 There are concerns about allowing creators or operators of AIs to enjoy a cap on liability for the acts of such machines, which Jacob Turner calls the “Robots as Liability Shields” objection.69 However, legal personality for AIs could be used in conjunction with other legal techniques, such as ideas of vicarious liability and requirements for compulsory insurance.70 These are familiar ways of distributing risk in society.

Conclusion

Algorithms and AI present huge opportunities to improve the human condition. They also pose grave threats. These exist in relation to both of the diverging futures the digital world seems to offer: technical efficiency and private market power for Silicon Valley, on the one hand, and more authoritarian national control, as exemplified by China, on the other.

The digitization of life is overwhelming the boundaries of nation states and conventional legal categories, through the volume of information that is gathered and deployed and the speed and impersonality of decision-making that it fosters. The sense is of a flood in which the flow of water moves around obstacles and renders them meaningless. Information comes in streams that cannot be digested by humans, and decisions flow by at a rate that the court process cannot easily break up for individual legal analysis. Law needs to find suitable concepts and practical ways to structure this world in order to reaffirm human agency at the individual level and at the collective, democratic level. It needs to find points in the stream where it can intervene and ways in which the general flow can be controlled, even if not in minute detail. Law is a vehicle to safeguard human values. The law has to provide structures so that algorithms and AI are used to enhance human capacities, agency, and dignity, not to remove them. It has to impose its order on the digital world and must resist being reduced to an irrelevance.

Analyzing situations with care and precision with respect to legal relationships, rights, and obligations is what lawyers are trained to do. They have a specific form of technical expertise and a fund of knowledge about potential legal solutions and analogies which, with imagination, can be drawn upon in this major task. Lawyers should be engaging with the debates about the digital world now, and as a matter of urgency.

Footnotes:
1 James Williams, Stand Out of Our Light: Freedom and Resistance in the Attention Economy 93–94 (2018).
2 Id. at 102.
3 Kaya Ismail, AI vs. Algorithms: What’s the Difference?, CMSWire (Oct. 26, 2018) https://www.cmswire.com/information-management/ai-vs-algorithms-whats-the-difference/. See also Stephen F. Deangelis, Artificial Intelligence: How Algorithms Make Systems Smart, Wired (last updated May 25, 2018) https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/ (“Rather than follow only explicitly programmed instructions, some computer algorithms are designed to allow computers to learn on their own (i.e., facilitate machine learning).”); Artificial Intelligence Algorithms: All you need to know, Edureka (Nov. 25, 2020) https://www.edureka.co/blog/artificial-intelligence-algorithms (“Generally, an algorithm takes some input and uses mathematics and logic to produce the output. In stark contrast, an Artificial Intelligence Algorithm takes a combination of both — inputs and outputs simultaneously in order to ‘learn’ the data and produce outputs when given new inputs.”).
4 Solvej Karla Krause, Harish Natarajan, & Helen Luskin Gradstein, “Distributed Ledger Technology (DLT) and Blockchain” (World Bank Group, Report No. 122140, 2017), http://documents1.worldbank.org/curated/en/177911513714062215/pdf/122140-WP-PUBLIC-Distributed-Ledger-Technology-and-Blockchain-Fintech-Notes.pdf.
5 See Lord Hodge, Justice of the Supreme Court of the United Kingdom, The Potential and Perils of Financial Technology: Can the Law Adapt to Cope?, The 1st Edinburgh FinTech Lecture (Mar. 14, 2019).
6 Financial Stability Board, “Artificial Intelligence and machine learning in financial services” (Nov. 1, 2017), https://www.fsb.org/2017/11/artificial-intelligence-and-machine-learning-in-financial-service/.
7 Jamie Susskind refers to this effect as “force”: algorithms which control our activity force certain actions upon us, and we can do no other. Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech ch. 6 (2018).
8 Id. at ch. 8.
9 Ben Golder, Foucault and the Politics of Rights (2015).
10 See, e.g., Philip Sales, “A Comparison of the Principle of Legality and Section 3 of the Human Rights Act 1998” (2009) 125 LQR 598. These are but two specific examples of a much wider phenomenon.
11 Aristotle, The Ethics of Aristotle: the Nicomachean Ethics (J.A.K. Thomson trans., 1953) at 198–200.
12 For a discussion of the possibilities, see Richard Susskind, Online Courts and the Future of Justice Part IV (2019).
13 See supra note 7, at 107–10, on the ability of AI to apply standards as well as rules.
14 Human Rights Council Res. 35/19, U.N. Doc. A/74/48037 (Oct. 18, 2019).
15 David Runciman, How Democracy Ends (2018).
16 This sense exists in some contexts, while in others the emerging digital systems may be hugely empowering, enabling far more effective access to a range of goods, such as education, medical guidance and assistance, and help in understanding legal entitlements. See Richard Susskind and Daniel Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (2015). Of course, what is needed are legal structures which facilitate this process of enhancing individuals’ agency while avoiding the possible negative side-effects which undermine it.
17 Malcolm Bull, On Mercy (2019).
18 Id. at 159–61.
19 See Williams, supra note 1, at 106. The goal is “to bring the technologies of our attention onto our side. This means aligning their goals and values with our own. It means creating an environment of incentives for design that leads to the creation of technologies that are aligned with our interests from the outset.”
20 Susskind, supra note 7, at 355.
21 Resolution on a comprehensive European industrial policy on artificial intelligence and robotics, Eur. Parl. Doc. A8-0019/2019, 2018/2088(INI), (2019) https://www.europarl.europa.eu/doceo/document/TA-8-2019-0081_EN.html#ref_1_1.
22 See Henry McDonald, AI system for granting UK visas is biased, rights groups claim, The Guardian (Oct. 29, 2019) https://www.theguardian.com/uk-news/2019/oct/29/ai-system-for-granting-uk-visas-is-biased-rights-groups-claim.
23 Gillick v. West Norfolk and Wisbech Area Health Auth. [1985] Eng. Rep. 402, [1986] 1 AC 112; R (Suppiah) v. SOS for the Home Dep’t [2011] EWHC 2 (Admin), [137]; R (S and KF) v. SOS for Just. [2012] EWHC 1810 (Admin), [37].
24 See, e.g., R (Det. Action) v. First-tier Tribunal (Immigr. & Asylum Chamber) [2015] EWCA Civ 840; [2015] 1 WLR 5341; R (Howard League for Penal Reform) v. LC [2017] EWCA Civ 244; LC v. Det. Action [2017] 4 WLR 92; Fredrick Powell, Structural Procedural Review: An Emerging Trend in Public Law, 22 Judicial Rev. 1 83 (2017).
25 For instance, in the field of digital healthcare systems, the International Digital Health and AI Research Collaborative was established in October 2019 to bring together health experts, tech experts and ethicists to establish common standards for delivery of digital health services. It will have the capacity to review and critique systems adopted by governments or big tech companies.
26 5Rights Foundation, https://5rightsfoundation.com (last visited Feb. 6, 2021).
27 E.g., G20 A.I. Principles (2019), Tsukuba, https://www.g20-insights.org/wp-content/uploads/2019/07/G20-Japan-AI-Principles.pdf; OECD Council Recommendation on A.I., OECD/Legal/0449 (2019) (calling for shared values of human-centredness, transparency, explainability, robustness, security, safety, and accountability); U.N. Secretary-General, High-Level Panel on Digital Cooperation report, The Age of Interdependence (June 2019), https://www.un.org/en/pdfs/DigitalCooperation-report-for%20web.pdf (emphasizing multi-stakeholder coordination and sharing of data sets to bolster trust, policies for digital inclusion and equality, review of compatibility of digital systems with human rights, and the importance of accountability and transparency; the report indicates that the UN’s 75th anniversary in 2020 may be linked to the launch of a “Global Commitment for Digital Cooperation”). See generally Anna Jobin, Marcello Ienca, & Effy Vayena, The global landscape of AI ethics guidelines, 1 Nature Machine Intelligence 9, 389–99 (2019).
28 OECD, Measuring the Digital Transformation: A Roadmap for the Future (2019).
29 Simone Weil, Reflections Concerning the Causes of Liberty and Social Oppression in Oppression and Liberty (Arthur Wills and John Petrie trans., 1958).
30 See Susskind, supra note 7.
31 See also Lord Hodge, supra note 5.
32 Karen Yeung, Regulation by Blockchain: the Emerging Battle for Supremacy between the Code of Law and Code as Law, 82 Modern L. Rev. 207 (2019).
33 Margaret Radin, The Deformation of Contract in the Information Society, 37 Oxford J. of Legal Studies 505 (2017).
34 Autorité de la concurrence & Bundeskartellamt, Algorithms and Competition (Nov. 2019) (working paper) (on file at https://www.bundeskartellamt.de/SharedDocs/Publikation/EN/Berichte/Algorithms_and_Competition_Working-Paper.pdf?__blob=publicationFile&v=5).
35 White v. Jones [1995] 2 AC 207, 271–72 (Browne-Wilkinson, L.J., concurring) (finding fiduciary obligations are imposed on a person taking decisions in relation to the management of the property or affairs of another).
36 See, e.g., John Murphy, Malice as an Ingredient of Tort Liability, 79 Cambridge L.J. 355 (2019).
37 See, e.g., U.K. Pub. Gen. Acts, Companies Act 2006 S. 994 (giving members of a company the right to complain of abuse of rights by the majority where this constitutes unfair prejudice to the interests of the minority).
38 Huawei Tech. Co. Ltd v. Unwired Planet Int’l Ltd [2018] EWCA Civ 2344 (under appeal to the Supreme Court). See also the discussion about FRAND regimes in the communication from the Commission, the Council, and the European Economic and Social Committee dated 2017 (COM (2017) 712 final), referred to at para. 60 in the Court of Appeal judgment.
39 [2019] SGHC (I) 03 (Sing.); on appeal at Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02 (Sing.).
40 B2C2 Ltd [2019] SGHC (I) 03 at para. 210 (Sing.).
41 [2020] SGCA(I) 02 at paras. 98–99 (Sing.); but see [2020] SGCA(I) 02 at paras. 192–98 (Sing.) (Mance IJ, dissenting) (finding the law has to be adapted to the new world of AI in a way which gave rise to results which reason and justice would lead one to expect; relief should be available if it would at once have been perceived by an honest and reasonable trader that some fundamental error had occurred).
42 See P.S. Atiyah, The Rise and Fall of Freedom of Contract, 43 Modern L. Rev. 467 (1979). See also Frederick Wilmot-Smith, Term Limits: What is a Term?, 39 Oxford J. of Legal Studies 705 (making the point that freedom of contract is a public policy which can be open to challenge on policy grounds; there is no one thing that contracts necessarily are); Wilmot-Smith, Term Limits, at 724 (“Rather than asking about the way courts must interpret contracts, we should ask what the law of interpretation should be. Any answer to that question will depend upon the conditions of justified legal responsibility and the values contract law should promote. Contractual scholarship must, therefore, connect with broader debates in legal and political philosophy … we are concerned with the appropriate grounds of legal obligations and, in certain cases, the use of state power to enforce those obligations. Conflict is inevitable.”).
43 Human Rights Council Res. 35/19, supra note 14.
44 Carol Harlow & Richard Rawlings, Proceduralism and Automation: Challenges to the Values of Administrative Law in The Foundations and Future of Public Law, ch. 14 (Elizabeth Fisher, Jeff King, & Alison Young eds., 2020).
45 Id. at 297.
46 See generally Susskind, supra note 12.
47 See Philip Sales, Legalism in Constitutional Law: Judging in a Democracy (2018) Pub. L. 687.
48 Runciman, supra note 15.
49 Jürgen Habermas, Between Facts and Norms, ch. 8 (William Rehg trans., 1996); Christopher Zurn, Deliberative Democracy and the Institutions of Judicial Review 239–43 (2007); Philip Sales, The Contribution of Legislative Drafting to the Rule of Law, 77 Cambridge L.J. 630 (2018).
50 Susskind, supra note 7 at 346.
51 Id. at 347–348 and ch. 13.
52 According to Susskind’s vision, the regulation would implement a new separation of powers, according to which “no firm is allowed a monopoly over each of the means of force, scrutiny, and perception-control” and “no firm is allowed significant control over more than one of the means of force, scrutiny, and perception control together.” Id. at 354–59.
53 Williams, supra note 1.
54 Id. at 47.
55 Id. at 127.
56 Id. at 20.
57 Id. at 101.
58 E.g., Liberty v. UK, app. 58243/00, Eur. Ct. H.R. (2008).
59 E.g., Privacy Int’l v. SOS for Foreign and Commonwealth Aff. [2018] UKIP Trib IPT 15_110_CH 2 and related judgments.
60 Report with Recommendations to the Comm. on Civil Law Rules of Robotics, Eur. Parl. Doc. A8-0005/2017 (Jan. 27, 2017); Civil Law Rules on Robotics, Eur. Parl. Doc. P8_TA(2017)0051 (Jan. 27, 2017).
61 Civil Law Rules on Robotics, supra note 60, at para. 59.
62 See, e.g., Jiahong Chen & Paul Burgess, The boundaries of legal personhood: how spontaneous intelligence can problematise differences between humans, artificial intelligence, companies and animals, 27 A.I. & L. 73, 73–92 (2019); see also Gabriel Hallevy, When Robots Kill: Artificial Intelligence under Criminal Law (2013); Shawn Bayern, The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems, 7 European J. of Risk Reg. 297, 297–309 (2016); Shawn Bayern et al., Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators, 9 Hastings Sci. & Tech. L.J. 135, 135–62 (2017).
63 The common factors being (1) physical location, (2) human creation for a purpose or function, and (3) policy reasons for anchoring liability back to other natural or legal persons. Chen & Burgess, supra note 62, at 81.
64 Id. at 74.
65 Id. at 90.
66 S. M. Solaiman, Legal Personality of Robots, Corporations, Idols, and Chimpanzees: A Quest for Legitimacy, 25 A.I. & L. 155 (2017). Solaiman objects to extending the corporate model to sophisticated AIs, principally on the grounds that it would serve the undesirable aim of exonerating creators and users from liability where AIs cause, or could cause, significant harm to humans, and that a rights-duties analysis cannot be applied to such machines.
67 Robert van den Hoven van Genderen, Legal personhood in the age of artificially intelligent robots, in Research Handbook on the Law of Artificial Intelligence (Woodrow Barfield & Ugo Pagallo eds., 2018).
68 Id. at 245.
69 Jacob Turner, Robot Rules: Regulating Artificial Intelligence 191–93 (2018).
70 See Hodge, supra note 5. “The law could confer separate legal personality on the machine by registration and require it or its owner to have compulsory insurance to cover its liability to third parties in delict (tort) or restitution.”