Isaac Asimov’s fiction paints a largely utopian technological future in which scientists develop new technology at a distance from society, but with its best interests at heart. All our real-world experience, however, shows that what we really need is a science that is fully engaged with, and held accountable by, society.
Isaac Asimov’s three laws of robotics have been imprinted on popular culture, not least by the 2004 film I, Robot. The first law states that:
a robot may not injure a human being or, through inaction, allow a human being to come to harm.
In Asimov’s stories, the three laws are hard-wired into robots: any robot that tries to break them self-destructs. Humans are therefore safe from robots, and Elon Musk’s killer robots are nothing to be afraid of.
I finally got around to reading Robots and Empire recently. It is set long after the short stories on which I, Robot is based, and one episode undermines the idea that humanity is safe from robots. The reason why is instructive when we think about the need for societal engagement in science and technology innovation in general, and in artificial intelligence (AI) in particular.
The set-up is this. With the help of robots, humans have settled 50 planets, each with a distinct culture and approach to life. One of these, Solaria, appears to have been deserted by its human inhabitants, leaving millions of robots behind. So many abandoned robots are a big prize for traders, yet every ship that lands to pick some of them up is destroyed. The story’s heroine is despatched to find out why. (I summarise brutally, missing out key elements of the plot to distil one essence of the story for my own purposes.)
Almost as soon as she lands, robots made by Solaria’s inhabitants attempt to kill some of the humans in her group. The first law has been broken; humans will never be safe from robots again. Once off the planet, however, the main protagonists realise that this isn’t the case. The laws remain sacrosanct, and the failsafe built into robotic brains still works. Instead, before disappearing, the Solarians had redefined what it means to be human: anyone with a Solarian accent is human; anyone without it is not, and robots can therefore kill them without breaking the first law.
It isn’t robots that are dangerous, but humans themselves bending the three laws to breaking point.
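To put the loophole in software terms, here is a minimal, purely illustrative Python sketch (every name in it is my own invention, drawn from neither Asimov nor any real system). The point it makes is the one the story makes: the first-law check itself is never touched; what the Solarians change is the definition of “human” that the check relies on.

```python
# Illustrative sketch only: a hard-coded rule can be bent by
# redefining the terms it depends on, as the Solarians do.
from dataclasses import dataclass


@dataclass
class Being:
    species: str
    accent: str


def is_human_default(being: Being) -> bool:
    """The intended definition: any human counts as human."""
    return being.species == "human"


def is_human_solarian(being: Being) -> bool:
    """The Solarians' redefinition: only a Solarian accent counts."""
    return being.species == "human" and being.accent == "Solarian"


class Robot:
    def __init__(self, is_human):
        # The first law is fixed; the definition it depends on is injected.
        self.is_human = is_human

    def may_harm(self, being: Being) -> bool:
        # First law: a robot may not injure a human being.
        return not self.is_human(being)


trader = Being(species="human", accent="Settler")

print(Robot(is_human_default).may_harm(trader))   # False: the law protects the trader
print(Robot(is_human_solarian).may_harm(trader))  # True: the "law" no longer applies
```

The guard clause stays intact throughout; only the predicate it trusts has been swapped. That is exactly the kind of rule-bending the rest of this piece is concerned with.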
Asimov doesn’t explore the implications of this much further in Robots and Empire; it’s one element in a science fiction story dealing with themes different from those I read into it. For me, however, it stood out as something that illustrates the challenges society faces when disruptive scientific innovation emerges.
The episode vividly highlights that establishing rules and regulations to guide the development and use of technology, however strong they seem, is not enough. There will always be scientists, companies or governments venal, stupid or simply thoughtless enough to find ways to bend them, often to the point where a minority gains at the cost of the majority. How does society protect itself against those with such power?
Laws, rules and regulations aren’t enough on their own. We need to develop science that is open, whose protagonists are in constant dialogue with society about the implications of new developments, and where the interpretation of the laws is the result of deep societal dialogue rather than being left to a few people behind laboratory benches.
Programmes like Sciencewise are vitally important for bringing a wider range of public voices into policy making at critical moments. However, one-off moments of public engagement will not be enough for technologies such as gene editing, artificial intelligence and data science. These are developing rapidly and have the potential to change our relationships with the economy, with nature and with each other. One-off engagement processes can’t hope to inform the massive array of decisions that need to be taken across the innovation system, in laboratories, by funders, in legislatures, by companies and even in the home, within timeframes often much shorter than the policy development cycle allows.
In our publication Room for a View, we describe how robust democratic systems have high deliberative capacity. By this we mean that a wide range of views and perspectives are visibly expressed, interact across the whole democratic system, and inform the decisions that government and other actors take. AI is a prime example of an innovation for which developing such deliberative capacity will be critical.
At the moment, however, deliberative capacity on the issue is low. There have been some proposals for institutions to support the development of new ethical and governance frameworks for AI, notably from the Royal Society and the British Academy, and from the Nuffield Foundation.
As currently proposed, their aim is to promote expert-led engagement with the ethical and governance issues raised by data and AI. While both are sensible and helpful steps forward, neither goes far enough in providing a basis for building deliberative capacity on the issues raised by AI; they are largely focused upwards, not outwards.
Where they are focused on the public, there is a risk that they will see their role as supporting better public understanding of the technology and its potential. While public understanding is important, it isn’t enough. A wide range of actors across the innovation system will need to engage much more widely with the implications of the technology as people take decisions about experimenting with, building, selling and buying AI-enabled technology.
AI brings with it both threats and promises. Neither the dystopian future sketched out in the Terminator films nor the largely utopian visions of Asimov’s robot series will come true. What is certain, however, is that AI will be as disruptive as steam-powered looms, movable type and the internet. Now is the moment to ensure we have a deep democratic debate that puts control of the direction, ambitions and limits of AI’s development firmly in the hands of the public.
Image credit: StockSnap