Do AIs drive autonomous vehicles?

I’m late to Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, so everybody already knows it’s a super-gloomy warning about the prospect of machine brains overtaking human brains, setting off on a trajectory we can’t control, and being at best indifferent to the interests of humans. Even if the explosion of machine intelligence lies far into the future, Bostrom argues, we should be thinking strategically now about how to manage superintelligence – intellects that outperform the best human minds by a long way, and across a lot of cognitive domains – and ensure it is human-friendly.

The book starts with a warning: “I have tried to make it an easy book to read but I don’t think I have quite succeeded … I tried to produce a book I would have enjoyed reading. This could prove a narrow demographic.” Indeed it is rather a dense read, the author being an immensely knowledgeable chap – so much so that I suspect it has been bought many more times than it has been read cover to cover.

The first few chapters describe in much detail the forms superintelligence might take (not just AI but ‘whole brain emulation’, ie building an artificial brain), and the various potential pathways and dynamics for its progress in multiple scenarios. The point, I think, is to establish that this is going to happen for sure at some stage. Slightly alarmingly in this section the book, written in 2014, predicts AI might beat the world Go champion ‘in about a decade’ – this happened in March 2016, of course, with DeepMind’s AlphaGo.

The second section is about what values machine superintelligence might have and what rules of morality might possibly be encoded. There are some not-very-cheering sections about how getting this wrong in well-meaning ways could backfire disastrously – for instance if the superintelligences act to use as much resource as possible to achieve their objectives, without regard for the existence of other species or the planet. “The first superintelligence may shape the future of Earth-originating life, could easily have non-anthropomorphic final goals, and would be likely to have instrumental reasons to pursue open-ended resource acquisition.”

There is a short final section, with much less detail than the earlier chapters, urging ‘us’ to think strategically, and co-operate to ensure AI can be managed for human good. In fact, there are almost no specifics here. The book is super-scary without proposing any solutions.

For all the detail about machine intelligence and scenarios for its progress and behaviour, the book is silent on what seem to me to be some key questions:

  1. Where do superintelligences get their data input? At present there are huge teams of humans putting sensors on the world to provide information to machines. The machines are clearly improving in their ability to read text and to ‘see’ images including in the world. But will this amount to sense perceptions which are so important for human cognition? And could humans fight back if necessary by removing sensors and RFID tags and so on?
  2. How are superintelligences going to be embodied? Will they be mobile physical entities able to, say, mine their own rare minerals or whatever it is they want to do for their own survival and reproduction? Will an AI drive its own autonomous car? If not embodied, could superintelligence actually become a runaway threat?
  3. In a similar vein, superintelligences would need a supporting infrastructure, such as wireless communications. Given that the major transport arteries in the UK don’t have 4G along much of their route, or even 3G in some parts, the superintelligences won’t be driving up the M6 very soon. Maybe the humans could escape to the Scottish highlands or West Wales if the machines run amok, as there is hardly any mobile or fast broadband there at all. Similarly, the power network will be pretty vital – machines (or their batteries) have to be plugged in. We will have to build the machines’ infrastructure. Could we fight back by unplugging them if they turn on us?

These are very interesting questions. For all that it was slightly heavy going at times, I really enjoyed the book because it made me think. Above all, surely this is all too important to be left to technologists, and social scientists need to start thinking seriously about the implications of machine intelligence, so that whatever happens, there is some human design in the outcome.


One thought on “Do AIs drive autonomous vehicles?”

  1. Disentangling humanity from machine intelligence is likely to be at least as complicated as disentangling the UK economy from the EU. The global economy is dependent on complex cybernetic systems – from algorithmic trading to just-in-time supply chains, automated warehouses, air traffic control, all the troubles of the world. Good luck trying to phase that lot out.
