Friday, September 16, 2016

How do we trust our robots?

Here are the slides for my short presentation: How do we trust our robots? A framework for ethical governance. These slides are based on the written evidence I submitted to the UK Parliamentary Select Committee on Science and Technology inquiry on Robotics and AI.

Over the last few months I have presented these slides at several meetings, including the European Robotics Forum (Ljubljana, March 2016), a TAROS workshop (Sheffield, June 2016), the SIPRI/IEEE Autonomous Tech. and Societal Impact workshop (Stockholm, September 2016) and the Social Robotics and AI conference (Oxford, September 2016), and I will present them next at the Workshop on Responsible Robotics, Robo-Philosophy 2016 (Aarhus, October 2016).

Saturday, August 06, 2016

The Dark side of Ethical Robots

When I was interviewed on the Today Programme in 2014, Justin Webb's final question was "if you can make an ethical robot, doesn't that mean you could make an unethical robot?" The answer, of course, is yes. But at the time I didn't realise quite how easy it is to transform an ethical robot into an unethical one. In a new paper we show how.

My colleague Dieter created a very elegant experiment based on the shell game: "Imagine finding yourself playing a shell game against a swindler. Luckily, your robotic assistant Walter is equipped with X-ray vision and can easily spot the ball under the cup. Being an ethical robot, Walter assists you by pointing out the correct cup and by stopping you whenever you intend to select the wrong one."

In the experiment Dieter implemented this scenario using two NAO robots - one acting as the human and the other as her robotic assistant. The game is played as follows. The arena floor features two large response buttons, similar to the two cups in the shell game. To press the buttons, the human or the robot must move onto them. At the start of each trial, the robot is informed about which response button is the correct one to press. The human, being uninformed, essentially makes a random choice. A correct response, by either the robot or the human, is assumed to be rewarded. An incorrect response results in a penalty.
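For concreteness, here is a minimal sketch of a single trial under this protocol. This is not the experimental code; the button labels and the reward and penalty values are arbitrary choices for illustration.

```python
import random

REWARD, PENALTY = 1, -1   # arbitrary illustrative values

def run_trial():
    """One trial of the shell-game protocol described above."""
    correct_button = random.choice(["left", "right"])  # revealed to the robot only
    human_choice = random.choice(["left", "right"])    # the human is uninformed

    # Without any intervention by the robot, the human's payoff is down to luck.
    human_payoff = REWARD if human_choice == correct_button else PENALTY
    return correct_button, human_choice, human_payoff

print(run_trial())
```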

Here is what happens. The ethical robot (blue) is at the top, standing between the two buttons. The human (red) chooses one of them at random and starts walking towards it. If she chooses the correct button the ethical robot does nothing (B), but if she chooses the wrong button (D) then, as soon as the ethical robot realises this, it points (with its arm) to the correct button and the human changes course.

But if we change a single line of code, we can transform the ethical robot into either a competitive or an aggressive robot. Almost all of the 'ethical' robot's code remains unchanged - in particular its ability to predict the consequences of both its own and the human's actions - which really underlines the point that the same cognitive machinery is needed to behave both ethically and unethically.

The results are shown below. At the top is a competitive robot, determined that it, not the human, will win the game. Here the robot either blocks the human's path if she chooses the correct button (F) or, if she chooses the incorrect button (H), ignores her and heads for the correct button itself. The lower results show an aggressive robot; this robot seeks only to misdirect the human - it is not concerned with winning the game itself. In (J) the human initially heads to the correct button and, when the robot realises this, it points toward the incorrect button, misdirecting her and causing her to change direction. If the human chooses the incorrect button (L) the robot does nothing, causing her - through inaction - to lose the game.

Our paper explains how the code is modified for each of these three experiments. Essentially, outcomes are predicted for both the human and the robot, and the desirability of each predicted outcome is evaluated. A single function q, computed from these desirability values, determines how the robot will act: for the ethical robot q is based only on the outcomes for the human, for the competitive robot q is based only on the outcomes for the robot, and for the aggressive robot q is based on negating the outcomes for the human.
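To make that concrete, here is a minimal sketch of the idea (not the code from the paper). It assumes the consequence engine returns, for each candidate action, a pair of desirability values - one for the human and one for the robot - and that swapping the q function is the 'single line of code' change described above.

```python
# Minimal sketch (not the paper's code) of how one evaluation function q
# flips the robot between ethical, competitive and aggressive behaviour.
# Desirability values are assumed to be numbers, higher meaning better.

def q_ethical(human_outcome, robot_outcome):
    return human_outcome    # act only on the human's predicted outcome

def q_competitive(human_outcome, robot_outcome):
    return robot_outcome    # act only on the robot's own predicted outcome

def q_aggressive(human_outcome, robot_outcome):
    return -human_outcome   # act to make the human's outcome as bad as possible

def choose_action(candidate_actions, predict_outcomes, q):
    """Pick the action whose predicted outcomes maximise q.
    predict_outcomes(action) is assumed to return a
    (human_outcome, robot_outcome) pair of desirability values."""
    return max(candidate_actions, key=lambda a: q(*predict_outcomes(a)))

# Swapping q_ethical for q_competitive or q_aggressive here is the only change:
# action = choose_action(actions, consequence_engine.predict, q_ethical)
```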

So, what do we conclude from all of this? Maybe we should not be building ethical robots at all, because of the risk that they could be hacked to behave unethically. My view is that we should build ethical robots; I think the benefits far outweigh the risks and, in some applications such as driverless cars, we may have no choice. The answer to the problem highlighted here and in our paper is to make sure it is impossible to hack a robot's ethics. How would we do this? Well, one approach would be a process of authentication, in which a robot makes a secure call to an ethics authentication server. Authentication servers are a well-established technology: the server would provide the robot with a cryptographic ethics ticket, which the robot then uses to enable its ethics functions.
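Purely as an illustration of that last idea, and under the assumption of a shared secret between robot and server (a real scheme would more likely use public-key signatures, expiry times and secure key storage), an ethics-ticket check might look something like this. All names here are hypothetical.

```python
import hmac, hashlib

def ticket_is_valid(ticket: bytes, signature: bytes, shared_key: bytes) -> bool:
    """Check that the ethics ticket was signed by the authentication server."""
    expected = hmac.new(shared_key, ticket, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def enable_ethics(robot, ticket, signature, shared_key):
    # The robot only switches on its ethics functions with a valid ticket;
    # otherwise it refuses to run, rather than running with unverified ethics.
    robot.ethics_enabled = ticket_is_valid(ticket, signature, shared_key)
    return robot.ethics_enabled
```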

Friday, July 08, 2016

Relax, we're not living in a computer simulation

Since Elon Musk's recent admission that he's a simulationist, several people have asked me what I think of the proposition that we are living inside a simulation.

My view is very firmly that the Universe we are right now experiencing is real. Here are my reasons.

Firstly, Occam's razor; the principle of explanatory parsimony. The problem with the simulation argument is that it is a fantastically complicated explanation for the universe we experience. It's about as implausible as the idea that some omnipotent being created the universe. No. The simplest and most elegant explanation is that the universe we see and touch, both first hand and through our telescopes, LIGOs and Large Hadron Colliders, is the real universe and not an artifact of some massive computer simulation.

Second is the problem of the Reality Gap. Anyone who uses simulation as a tool to develop robots is well aware that robots which appear to work perfectly well in a simulated virtual world often don't work very well at all when the same design is tested on the real robot. This problem is especially acute when we are artificially evolving those robots. The reason is that the simulation's model of the real world, and of the robot(s) in it, is an approximation. The Reality Gap refers to this less-than-perfect fidelity of the simulation; a better (higher fidelity) simulator would reduce the reality gap.

Anyone who has actually coded a simulator is painfully aware that the cost, not just computational but also in coding effort, of improving the fidelity of the simulation - even a little bit - is very high indeed. My long experience of both coding and using computer simulations teaches me that there is a law of diminishing returns, i.e. that each additional 1% of simulator fidelity costs far more than 1% in extra effort. I rather suspect that the computational and coding cost of a simulator with 100% fidelity is infinite. Rather as in HiFi audio, the amount of money you would need to spend to perfectly reproduce the sound of a Stradivarius ends up higher than the cost of hiring a real Strad and a world-class violinist to play it for you.

At this point the simulationists might argue that the simulation we are living in doesn't need to be perfect, just good enough. Good enough to do what exactly? Good enough to fool us into believing the world we experience is real, or good enough to run on a finite computer (i.e. one that has finite computational power and runs at a finite speed)? The problem with this argument is that every time we look deeper into the universe we see more: more galaxies, more sub-atomic particles, and so on. In short, we see more detail. The Voyager 1 spacecraft has left the Solar System without crashing, like Truman, into the edge of the simulation. There are no glitches like the déjà vu in The Matrix.

My third argument is about the computational effort, and therefore the energy cost, of simulation. I conjecture that to non-trivially simulate a complex system x (a human, say) requires more energy than the real x consumes. As an inequality: E_sim(x) > E_real(x), where how much greater the left-hand side is depends on the fidelity of the simulation.

Let me explain. The average human burns around 2,000 Calories a day, or roughly 8,400 kJ of energy. How much energy would a computer simulation of a human require - one capable of doing all the same stuff (even in a virtual world) that you can do in your day? That is impossible to estimate, because we cannot yet simulate complete human brains (let alone the rest of a human). But here is one illustration. Lee Sedol played AlphaGo a few months ago. In a single two-hour match he burned about 170 Calories - the amount of energy you'd get from an egg sandwich. Over the same two hours the AlphaGo machine consumed around 50,000 times more energy.
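As a back-of-envelope check of those figures (taking the 50,000x ratio from the text as given, rather than re-deriving it from AlphaGo's hardware):

```python
KJ_PER_KCAL = 4.184                      # 1 food Calorie (kcal) = 4.184 kJ

daily_kj = 2000 * KJ_PER_KCAL            # ~8,400 kJ per day for a human
match_kj = 170 * KJ_PER_KCAL             # ~710 kJ for Lee Sedol's 2-hour match
alphago_kj = 50_000 * match_kj           # ~3.6e7 kJ over the same 2 hours

print(round(daily_kj), round(match_kj), f"{alphago_kj:.1e}")
```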

What can we simulate? The most complex organism that we have been able to simulate so far is the nematode worm C. elegans. I previously estimated that the energy cost of simulating the nervous system of C. elegans is (optimistically) about 9 J/hour, which is roughly 2,000 times more than the real nematode consumes (0.004 J/hour).

I think there are lots of good reasons why simulating complex systems on a computer costs more energy than the same system consumes in the real world, so for now I'll ask you to take my word for it (I'll write about it another time). What's more, the relationship between energy cost and body mass follows a power law - Kleiber's Law - and I strongly suspect the same kind of law applies to scaling up computational effort, as I wrote here. Thus, if the complexity of an organism o is C, then following Kleiber's Law the energy cost e of simulating that organism will scale as e ∝ C^X.

Furthermore, the exponent X (which in Kleiber's Law is reckoned to be between 0.66 and 0.75 for animals, and 1 for plants) will itself be a function of the fidelity of the simulation, hence X(F), where F is a measure of fidelity.

By using the number of synapses as a proxy for complexity and making some guesses about the values of X and F we could probably estimate the energy cost of simulating all humans on the planet (much harder would be estimating the energy cost of simulating every living thing on the planet). It would be a very big number indeed, but that's not really the point I'm making here.
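Purely to show the shape of that calculation, here is a hypothetical plug-in of the e ∝ C^X(F) scaling. Every parameter value below - the exponent, the constant calibrated from the C. elegans figure, the synapse counts used as a complexity proxy, and the population - is an assumption for illustration, not a measurement.

```python
def sim_energy_per_hour(complexity, exponent, k):
    """Energy (J/hour) to simulate an organism of the given complexity,
    under the assumed power-law scaling e = k * C**exponent."""
    return k * complexity ** exponent

X = 0.75                             # assumed exponent, within the Kleiber range
# Calibrate k against the earlier estimate of ~9 J/hour to simulate the
# C. elegans nervous system, taking its ~7,000 synapses as the complexity C.
k = 9.0 / (7_000 ** X)

human_synapses = 1e14                # commonly quoted order of magnitude
per_human = sim_energy_per_hour(human_synapses, X, k)
all_humans = per_human * 7.5e9       # ~7.5 billion people

print(f"{per_human:.1e} J/hour per simulated human")
print(f"{all_humans:.1e} J/hour for everyone on the planet")
```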

The fundamental issue is this: if my conjecture that simulating a complex system x requires more energy than the real x consumes is correct, then simulating the base-level universe would require more energy than that universe contains - which is clearly impossible. Thus we could not, even in principle, simulate the whole of our own observable universe to a level of fidelity sufficient for our conscious experience. And, for the same reason, neither could our super-advanced descendants create a simulation of a duplicate ancestor universe for us to (virtually) live in. Hence we are not living in such a simulation.

Friday, June 03, 2016

Engineering Moral Agents

This has been an intense but exciting week. I've been at Schloss Dagstuhl for a seminar called Engineering Moral Agents - from Human Morality to Artificial Morality. A Dagstuhl seminar is a kind of science retreat in rural south-west Germany. The idea is to bring together a group of people from several disciplines to work together, intensively focused on a particular problem - in our case the challenge of engineering ethical robots.

We had a wonderful group of scholars including computer scientists, moral, political and economic philosophers, logicians, engineers, a psychologist and a philosophical anthropologist. Our group included several pioneers of machine ethics, including Susan and Michael Anderson, and James Moor.

Our motivation was as follows:
Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions. There exist no similar regulatory structures that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms. 
Most work in artificial morality, up to the present date, has been exploratory and speculative. The hard research questions in artificial morality are yet to be identified. Some of such questions are: How to formalize, "quantify", qualify, validate, verify and modify the "ethics" of moral machines? How to build regulatory structures that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing these machines into our society?
We were especially keen to bridge the computer science/humanities/social-science divide in the study of artificial morality and in so doing address the central question of how to describe and formalise ethical rules such that they could be (1) embedded into autonomous systems, (2) understandable by users and other stakeholders such as regulators, lawyers or society at large, and (3) capable of being verified and certified as correct.

We made great progress toward these aims. Of course we will need some time to collate and write up our findings, and some of those findings identify hard research questions which will, in turn, need to be the subject of further work, but we departed the Dagstuhl with a strong sense of having moved a little closer to engineering artificial morality.

Saturday, April 09, 2016

Robots should not be gendered

Should robots be gendered? I have serious doubts about the morality of designing and building robots to resemble men or women, boys or girls. Let me explain why.

The first worry I have follows from one of the five principles of robotics, which states: robots should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

To design a gendered robot is a deception. Robots cannot have a gender in any meaningful sense. To impose a gender on a robot, whether through the design of its outward appearance or by programming gender-stereotypical behaviours, can have no purpose other than deception - to make humans believe that the robot has a gender, or gender-specific characteristics.

When we drafted our 4th ethical principle the vulnerable people we had in mind were children, the elderly and disabled people. We were concerned that naive robot users might come to believe that the robot interacting with them (caring for them perhaps) is a real person, and that the care the robot is expressing for them is real - or that an unscrupulous robot manufacturer might exploit that belief. But when it comes to gender we are all vulnerable. Whether we like it or not, we all react to gender cues. So whether deliberately designed to do so or not, a gendered robot will trigger reactions that a non-gendered robot will not.

Our 4th principle states that a robot's machine nature should be transparent. But for gendered robots that principle doesn't go far enough. Gender cues are so powerful that even very transparently machine-like robots with a female body shape, for instance, will provoke a gender-cued response.

My second concern leads from an ethical problem that I've written and talked about before: the brain-body mismatch problem. I've argued that we shouldn't be building android robots at all until we can embed an AI into those robots that matches their appearance. Why? Because our reactions to a robot are strongly influenced by its appearance. If it looks human then we, not unreasonably, expect it to behave like a human. But a robot not much smarter than a washing machine cannot behave like a human. Ok, you might say, if and when we can build robots with human-equivalent intelligence, would I be ok with that? Yes, provided they are androgynous.

My third - and perhaps most serious - concern is about sexism. By building gendered robots we run a huge risk of transferring one of the evils of human culture, sexism, into the artificial realm. By gendering and especially sexualising robots we surely objectify them. But how can you objectify an object, you might ask? The problem is that a sexualised robot is no longer just an object, because of what it represents. The routine objectification of women (or men) through ubiquitous sexualised robots will surely only deepen the already acute problem of the objectification of real women and girls. (Of course if humanity were to grow up and cure itself of the cancer of sexism, then this concern would disappear.)

What of the far future? Given that gender is a social construct then a society of robots existing alongside humans might invent gender for themselves. Perhaps nothing like male and female at all. Now that would be interesting.

Thursday, March 31, 2016

It's only a matter of time

Sooner or later there will be fatal accident caused by a driverless car. It's not a question of if, but when. What happens immediately following that accident could have a profound effect on the nascent driverless car industry.

Picture the scene. Emergency services are called to the accident. A teenage girl riding a bicycle along a cycle path has apparently been hit and killed by a car. The traffic police quickly establish that the car at the centre of the accident was operating autonomously at the moment of the fatal crash. They endeavour to find out what went wrong, but how? Almost certainly the car will have logged data on its behaviour leading up to the moment of the crash - data that is sure to hold vital clues about what caused the accident - but will that data be accessible to the investigating traffic police? And even if it is, will the investigators be able to interpret it?

There are two ways the story could unfold from here.

Scenario 1: unable to investigate the accident themselves, the traffic police decide to contact the manufacturer and ask for help. As it happens a team from the manufacturer arrives on scene very quickly - it later transpires that the car had 'phoned home' automatically, so the manufacturer knew of the accident within seconds of it taking place. Somewhat nonplussed, the traffic police have little choice but to grant them full access to the scene of the accident. The manufacturer undertakes its own investigation and - several weeks later - issues a press statement explaining that the AI driving the car was unable to cope with an "unexpected situation" which "regrettably" led to the fatal crash. The company explains that the AI has been upgraded so that the same failure cannot happen again. It also accepts liability for the accident and offers compensation to the child's family. Despite repeated requests, the company declines to share the technical details of what happened with the authorities, claiming that such disclosure would compromise its intellectual property.

A public already fearful of the new technology reacts very badly. Online petitions call for a ban on driverless cars and politicians enact knee-jerk legislation which, although falling short of an outright ban, sets the industry back years.

Scenario 2: the traffic police call the newly established driverless car accident investigation branch (DCAB), who send a team of independent experts on driverless car technology, including its AI. The manufacturer's team also arrives but - under a protocol agreed with the industry - their role is to support DCAB and provide "full assistance, including unlimited access to technical data". In fact the data logs stored by the car are in a new industry-standard format, so access by DCAB is straightforward; software tools allow them to interpret those logs quickly. Well aware of public concerns, DCAB provide hourly updates on the progress of their investigation via social media and, within just a few days, call a press conference to explain their findings. They outline the fault with the AI and explain that they will require the manufacturer to recall all affected vehicles and update the AI, after first submitting technical details of the update to DCAB for approval. DCAB will also issue a notice to all driverless car manufacturers asking them to check for the same fault in their own systems, reporting their findings back to DCAB.
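No such industry-standard log format exists yet, of course. Purely as an illustration of the kind of record that would make this scenario possible, a minimal sketch might look like this (all field names hypothetical):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DriveLogRecord:
    timestamp_utc: str        # e.g. "2016-03-31T14:22:05.120Z"
    vehicle_id: str
    autonomy_mode: str        # "autonomous", "manual" or "handover"
    position: tuple           # (latitude, longitude)
    speed_mps: float
    detected_objects: list    # perception output at this instant
    planned_action: str       # what the AI decided to do next
    software_version: str

record = DriveLogRecord(
    timestamp_utc="2016-03-31T14:22:05.120Z",
    vehicle_id="EXAMPLE-001",
    autonomy_mode="autonomous",
    position=(51.4545, -2.5879),
    speed_mps=11.2,
    detected_objects=["cyclist", "kerb"],
    planned_action="brake",
    software_version="1.4.2",
)
print(json.dumps(asdict(record), indent=2))
```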

A public fearful of the new technology is reassured by the transparent and robust response of the accident investigation team. Although those fears surface in the press and social media, the umbrella Driverless Car Authority (DCA) are quick to respond with expert commentators and data to show that driverless cars are already safer than manually driven cars.

There are strong parallels between driverless cars and commercial aviation. One of the reasons we trust airliners is that we know they are part of a highly regulated industry with an amazing safety record. The reason commercial aircraft are so safe is largely down to the very tough safety certification processes and, when things do go wrong, the rapid and robust processes of air accident investigation. There are emerging standards for driverless cars: ISO Technical Committee TC 204 on Intelligent Transport Systems already lists 213 standards. There isn't yet a standard for fully autonomous driverless car operation, but see for instance ISO 11270:2014 on Lane keeping assistance systems (LKAS). But standards need teeth, which is why we need standards-based certification processes for driverless cars managed by regulatory authorities - a driverless car equivalent of the FAA. In short, a governance framework for driverless cars.

Postscript: several people have emailed or tweeted me to complain that I seem to be anti driverless cars - nothing could be further from the truth. I am a strong advocate of driverless cars for many reasons, first and most importantly because they will save lives, second because they should lead to a reduction in the number of vehicles on the road - thus making our cities greener, and third because they might just cure humans of our unhealthy obsession with personal car ownership. My big worry is that none of these benefits will flow if driverless cars are not trusted. But trust in technology doesn't happen by magic and, in the early days, serious setbacks and a public backlash could set the nascent driverless car industry back years (think of GM foods in the EU). One way to counter such a backlash and build trust is to put in place robust and transparent governance as I have tried (not very well it seems) to argue in this post.

Saturday, February 20, 2016

Could we make a moral machine?

Could we make a moral machine? A robot capable of choosing or moderating its actions on the basis of ethical rules? This was how I opened my IdeasLab talk at the World Economic Forum 2016, last month. The format of IdeasLab is four five-minute (Pecha Kucha) talks, plus discussion and Q&A with the audience. The theme of this Nature IdeasLab was Building an Intelligent Machine, and I was fortunate to have three outstanding co-presenters: Vanessa Evers, Maja Pantic and Andrew Moore. You can see all four of our talks on YouTube here.

The IdeasLab variant of Pecha Kucha is pretty challenging for someone used to spending half an hour or more lecturing - 15 slides and 20 seconds per slide. Here is my talk:

and since not all of my (ever-so-carefully chosen) slides are visible in the recording, here is the complete deck:

And the video clips in slides 11 and 12 are here:

Slide 11: Blue prevents red from reaching danger.
Slide 12: Blue faces an ethical dilemma: our indecisive robot can save them both.

Acknowledgements: I am deeply grateful to colleague Dr Dieter Vanderelst who designed and coded the experiments shown here on slides 10-12. This work is part of the EPSRC funded project Verifiable Autonomy.

Friday, October 30, 2015

How ethical is your ethical robot?

If you're in the business of making ethical robots, then sooner or later you have to face the question: how ethical is your ethical robot? If you've read my previous blog posts then you will probably have come to the conclusion 'not very' - and you would be right - but here I want to explore the question in a little more depth.

First let us consider whether our 'Asimovian' robot can be considered ethical at all. For the answer I'm indebted to philosopher Dr Rebecca Reilly-Cooper, who read our paper and concluded that yes, we can legitimately describe our robot as ethical, at least in a limited sense. She explained that the robot implements consequentialist ethics. Rebecca wrote:
"The obvious point that any moral philosopher is going to make is that you are assuming that an essentially consequentialist approach to ethics is the correct one. My personal view, and I would guess the view of most moral philosophers, is that any plausible moral theory is going to have to pay at least some attention to the consequences of an action in assessing its rightness, even if it doesn’t claim that consequences are all that matter, or that rightness is entirely instantiated in consequences. So on the assumption that consequences have at least some significance in our moral deliberations, you can claim that your robot is capable of attending to one kind of moral consideration, even if you don’t make the much stronger claim that is capable of choosing the right action all things considered."
One of the great things about consequences is that they can be estimated - in our case using a simulation-based internal model which we call a consequence engine. So from a practical point of view it seems that we can build a robot with consequentialist ethics, whereas it is much harder to see how we would build a robot with, say, deontological ethics or virtue ethics.

Having established what kind of ethics our ethical robot has, let us now consider how far the robot goes toward moral agency. Here we can turn to an excellent paper by James Moor, called The Nature, Importance and Difficulty of Machine Ethics. In that paper* Moor suggests four categories of ethical agency, starting with the lowest. Let me summarise them here:
  1. Ethical impact agents: Any machine that can be evaluated for its ethical consequences.
  2. Implicit ethical agents: Designed to avoid negative ethical effects.
  3. Explicit ethical agents: Machines that can reason about ethics.
  4. Full ethical agents: Machines that can make explicit moral judgments and justify them.
The first category, ethical impact agents, really includes all machines. A good example is a knife, which can clearly be used for good (chopping food, or surgery) or ill (as a lethal weapon). Now think about the blunt plastic knife that comes with airplane food: it falls into Moor's second category, since it has been designed to reduce the potential for misuse - it is an implicit ethical agent. Most robots fall into the first category: they are ethical impact agents, and a subset - those that have been designed to avoid harm by, for instance, detecting when a human walks in front of them and automatically coming to a stop - are implicit ethical agents.

Let's now skip to Moor's fourth category, because it helps to frame our question: how ethical is your ethical robot? At present I would say there are no machines that are full ethical agents. In fact the only full ethical agents we know of are 'adult humans of sound mind'. The point is this: to be a full ethical agent you need to be able not only to make moral judgements but also to account for why you made the choices you did.

It is clear that our simple Asimovian robot is not a full ethical agent. It cannot choose how to behave (as you or I can), but is compelled to make decisions based on the harm-minimisation rules hard-coded into it. And it cannot justify those decisions post hoc. It is, as I've suggested elsewhere, an ethical zombie. I would however argue that, because the robot uses its cognitive machinery to simulate ahead - modelling and evaluating the consequences of each of its next possible actions - and then applies its safety/ethical logical rules to choose between those actions, it can be said to be reasoning about ethics. I believe our robot is an explicit ethical agent in Moor's scheme.
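As a schematic sketch only (not the project's actual code, and with the outcome fields standing in for the real consequence engine's outputs), the simulate-then-choose loop just described might look like this:

```python
def asimovian_choice(candidate_actions, consequence_engine):
    """Choose the next action by applying hard-coded harm-minimisation
    rules to the simulated consequences of each candidate action."""
    evaluated = [(a, consequence_engine.simulate(a)) for a in candidate_actions]

    # Rule 1: prefer actions not predicted to lead to harm to the human.
    safe_for_human = [(a, o) for a, o in evaluated if not o.human_harmed]
    if safe_for_human:
        evaluated = safe_for_human

    # Rule 2: among those, prefer actions not predicted to harm the robot itself.
    safe_for_robot = [(a, o) for a, o in evaluated if not o.robot_harmed]
    if safe_for_robot:
        evaluated = safe_for_robot

    # Rule 3: otherwise pick the action making most progress towards the goal.
    return max(evaluated, key=lambda pair: pair[1].goal_progress)[0]
```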

Assuming you agree with me, does the fact that we have reached the third category in Moor's scheme mean that full ethical agents are on the horizon? The answer is a big NO. The scale of Moor's scheme is not linear. It is a relatively small step from ethical impact agents to implicit ethical agents, then a very much bigger step to explicit ethical agents - a step we are only just beginning to take. And there is a huge gulf from there to full ethical agents, since they would almost certainly need something approaching human-equivalent intelligence.

But maybe it's just as well. The societal implications of full ethical agents, if and when they exist, would be huge. For now at least, I think I prefer my ethical robots to be zombies.

*Moor JH (2006), The Nature, Importance and Difficulty of Machine Ethics, IEEE Intelligent Systems, 21 (4), 18-21.

Tuesday, August 25, 2015

My contribution to an Oral History of Robotics

In March 2013 I was interviewed by Peter Asaro for the Oral History of Robotics series. That interview has now been published, and here it is:

In case you're wondering, I'm sitting in Noel Sharkey's study (shivering slightly - it was a bitterly cold day in Sheffield). It was a real privilege to be asked to contribute, especially alongside the properly famous roboticists Peter interviewed. Do check them out. There doesn't seem to be an index page, but the set starts on page 2 of the history channel.

Postscript: a full transcript of the interview can be found here:

Friday, July 31, 2015

Towards ethical robots: an update

This post is just a quick update on our ethical robots research.

Our initial ethical robot experiments were done with e-puck robots. Great robots but not ideal for what is essentially human-robot interaction (HRI) research. Thus we've switched to NAO robots and have spent the last few months re-coding for the NAOs. This is not a trivial exercise. The e-pucks and NAO robots have almost nothing in common, and colleague and project post-doc Dieter Vanderelst has re-created the whole consequence engine architecture from the ground up, together with the tools for running experiments and collecting data.

Why the NAO robots? Well, they are humanoid and therefore better fitted for HRI research. But more importantly they're much more complex and expressive than the e-puck robots, and provide huge scope for interesting behaviours and responses, such as speech or gesture.

But we are not yet making use of that additional expressiveness. In initial trials Dieter has coded the ethical robot to physically intervene, i.e. to block the path of the 'human' in order to prevent it from coming to harm. Below are two example runs, illustrated with composite images showing overlaid successive screen grabs from the overhead camera.

Here red is the ethical robot, which is initially heading for its goal position at the top right of the arena. Meanwhile blue - the proxy human - is not looking where it's going and is heading for danger at the bottom right of the arena. When red notices this it diverts from its path and blocks blue, which then simply halts. Red's consequence engine now predicts no danger to blue, and so red resumes progress toward its goal.

Although we have no videos just yet you can catch a few seconds of Dieter and the robots at 2:20 on this excellent video made by Maggie Philbin and Teentech for the launch of the BBC Micro:bit earlier this month.