Buyers beware – AI and the law

Posted by Robert Morley in Excello Law Blogs on Thursday, June 1st, 2017

Artificial intelligence (AI) research has a long history, but Hollywood’s fascination with it is even longer, with more than fifty films to date portraying self-aware intelligent robots, often posing stereotypical threats to humankind. In the real world, the best-known AI is an IBM supercomputer, lovingly named Watson, which won the US game show Jeopardy! in 2011. A Google image search for AI returns countless pictures of humanoid robots, often wearing suits; and the field describes itself in terms such as neural networks, machine learning and artificial intelligence. All of these influences foster a tendency to anthropomorphise (ascribe human form or attributes to) AI computer systems.

This is particularly the case when AI systems involve motion, such as robots and self-driving cars. Even today’s conventional cars are often referred to as “she” or “he” by their owners. This anthropomorphisation naturally leads, in the non-legal mind, to thinking of robots as “persons” and self-driving cars as “agents”, as if they literally had a life of their own. It is this kind of thinking that leads some commentators to speculate on the “human rights” that might need to be afforded to AIs in the future. But today’s reality is that an AI computer system, certainly one born of today’s approaches to AI, is not a legal person, and so cannot be an agent. In simple terms, police officers will not be arresting artificially intelligent machines and charging them with crimes.

This concept was tested recently when a Swiss art group created an automated shopping robot with the sole purpose of making random purchases on the dark web, infamous as the criminal underbelly of the internet. The robot managed to purchase several items, including a stolen Hungarian passport and some Ecstasy pills, before it was “arrested” by Swiss police. In the aftermath, no charges were brought against the robot, nor, interestingly on this occasion, against the artists behind it.

At present, there is no specific body of law devoted to the range of new technologies that fall under the AI heading. Instead, the consequences of AI going wrong or making mistakes fall under existing common law and legislation governing contract, employment, health and safety at work, the environment, and so on. In English law, negligence is established where there is a breach of a duty of care, determined by the issues of remoteness, causation and foreseeability, which either results in, or might lead to, loss, damage or injury. Those found to be at fault are either individuals, companies, or both.

Beyond breaches of civil law, where the penalties are usually financial, criminal law can sometimes cover more serious issues relating to possible injury or even death as a result of defective machinery, construction, equipment or methods of transport. Sanctions beyond heavy fines can be imposed, even leading to imprisonment – the most commonplace example might be the speeding driver who causes injury or death.

Unless it is a genuine accident, or a so-called Act of God, the key element of liability in every case is human, because it is the human brain which is at fault in not meeting a standard or duty of care through actions which are deliberate, reckless or indifferent to potential adverse consequences. At present, the same applies to computer systems: they are only as good as those who program them.

But does the potential for AI take things further, with its much-discussed capacity for original thought and decision-making? If we gaze into the crystal ball, AI law must inevitably come into focus at some point in the not-too-distant future as a distinct body of law. The science fiction writer Isaac Asimov anticipated this in his 1942 short story “Runaround”, in which he set out Three Laws of Robotics from the fictional Handbook of Robotics, 56th Edition, 2058 AD:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although Asimov was a biochemist rather than a lawyer, his Three Laws might reasonably serve as basic principles from which to formulate a future set of AI laws. However, the words ‘must obey the orders’ are key, since they imply that humans are in control, not robots. But if the futurologists are right, then AI will be in control, at least to some degree. What they do not predict is how those who shape and draft the law might respond.

Here, the entire philosophy of law in every jurisdiction faces a fundamental, existential problem. Every law in every modern legal system is based entirely on human conduct. The key point is that artificial intelligence and human intelligence are distinctly different things. Even though AI gives robots some human-like qualities, it cannot make human value judgments about good or bad, fair or unfair, right or wrong. Searching documents, driving cars, evaluating data – AI will be able to do all this and much more. But it cannot and will not make value judgments which are uniquely human, about humans, based on human experience.

What then can we expect from future AI laws? Although they might be drafted to address the capacities and limitations of AI technology insofar as these might adversely affect humans, AI laws will be drafted by humans, for humans, based on human experience and needs. New laws to prevent things going wrong, or to remedy the situation when they do, will have entirely human consequences, because however intelligent a machine may be, it is not animate: it does not breathe, think or feel like a human. You cannot fine a robot or lock it up in prison. You can only disable or dismantle it.

However, its human master or creator can and will continue to face the full force of the law when necessary. Business users of AI therefore need to be aware of the implications of error or mistake in any AI-based system for which they are responsible. New laws are likely to extend the duty-of-care principle to cover intelligent machines, just as it currently embraces existing unintelligent machines. Devise a faulty program that causes driverless cars, or even pilotless planes, to crash, killing or injuring humans, and the legal consequences will no doubt be severe. Use a diagnostic AI tool in surgery which identifies the wrong leg for amputation (as some human doctors have done) and the cost for the Health Authority or insurance company will be enormous.

The European Union is likely to be the first jurisdiction in the world to create a legal framework for AI, with the current intention of putting AI regulations in place by 2018. Beyond the usual questions about what personal data an AI can collect, the European Union appears most concerned with ensuring equality and fairness. With AI making consequential decisions about people that were once made through human bureaucratic processes, concerns arise about how to ensure justice, fairness and accountability. What happens if an AI starts making decisions that are prejudicial to a specific sex, race, age or religious group, even if they are based on objective data?

The thinking behind potential EU legislation is taking an interesting direction, one that I believe suffers from this tendency to anthropomorphise. In January 2017, MEPs voted to propose granting legal status to robots, categorising them as “electronic persons” and warning that new legislation is needed to address how such machines can be held responsible for their “acts or omissions”. The draft report, tabled by Mady Delvaux-Stehres, MEP from Luxembourg, states that current rules are insufficient for what it calls the technological revolution, and suggests the EU should establish basic ethical principles to avoid potential pitfalls.

Microsoft has some experience of AI going, well, AWOL. Developers at Microsoft created ‘Tay’, an AI chatbot modelled to speak like a teenage girl, in order to improve customer service on their voice recognition software. When they introduced this innocent AI chat engine to Twitter, it had to be deleted: after just one day it had transformed into an evil Hitler-loving, incest-promoting, ‘Bush did 9/11’-proclaiming robot. Fortunately, it was just a chatbot, and it was not making decisions that affected people’s lives.

Transparency is another aspect of potential regulation: not only transparency about the data and machine-learning algorithms involved, but also the prospect of authorities requiring an explanation for any decision made by an AI. This is going to be a very difficult area, because there are inherent challenges in trying to understand the behaviour of advanced AI systems that are essentially programmed to go away, learn for themselves, and make their own decisions. When Google DeepMind’s AlphaGo beat Lee Sedol, one of the world’s leading Go players, in 2016, it astounded experienced players, who could not identify the reasoning behind the moves it was making. It often made moves no human would have considered – and that is just a board game.

Think about the complex decision-making that may be required of driverless cars. In some situations, the robot driver might have to deliberately crash, killing its occupant, to avoid killing more people outside the car. One survey found that people were in favour of such a system in principle, but when asked which car they would buy, they told researchers they would choose one that protected its occupants in all circumstances.

The risk is that attempting to regulate for fairness could effectively bar any fully automated system from making any decision about any person. Equally, a “right to an explanation of the decision reached after algorithmic assessment” could prove impossible to implement.

If EU AI legislation does in due course create a specific legal status of “electronic persons” for the most sophisticated autonomous robots, in order to clarify responsibility in cases of death or damage, it will be very interesting to see the unintended consequences that follow. For now, all AI systems are simply the responsibility of the people who make them and use them.

Next year will see the 200th anniversary of the publication of Frankenstein, whose monster wrought enormous havoc and destruction. There is something in this cautionary tale for AI programmers and developers designing new systems: be careful to consider the consequences of what you create. The legal price will be high for errors whose outcomes affect humans adversely. No future AI system will be error-free, but the old legal adage of caveat emptor – buyer beware – might even find a new legal equivalent for AI: caveat fabrica – maker beware.

Published in Access AI – May 2017
