AIs as the best of us

Another of the many new books out on AI is As If Human: Ethics and Artificial Intelligence by Nigel Shadbolt and Roger Hampson. I found it a very accessible book on AI ethics, possibly because neither author is an academic philosopher (sorry, philosophers…). Generally I’m a bit impatient with AI ethics, partly because it has dominated debate about AI at the expense of thinking about incentives and politics, and partly because of my low tolerance for the kind of bizarre thought experiments that seem to characterise the subject. Nevertheless, I found this book clear and pretty persuasive, with the damn trolley problem appearing only a handful of times.

The key point is reflected in the title: “AIs should be judged morally as if they were humans” (although of course they are not). This implies that any decision made by a machine that affects humans should be transparent and accountable, and should allow for appeal and redress; we should treat AI systems as if they were humans taking the decisions. There may be contested accountability beyond that among other groups of humans – the car manufacturer and the insurer, say – but ‘the system’ can’t get away with no moral accountability. (Which touches on a fantastic new book, The Unaccountability Machine by Dan Davies, that I will review here soon.)

Shadbolt and Hampson end with a set of seven principles for AIs, including ‘A thing should say what it is and be what it says’ and ‘Artificial intelligences are only ethical if they embody the best human values’. They also argue that private mega-corps should not be determining the future of humanity. As they say, “Legal pushback by individuals and consumer groups against the large digital corporations is surely coming.”

As If Human is well worth a read. It’s short and high-level but does have examples (including the point, which seems obvious but is too rarely made, that the insurance industry is steadily putting itself out of business by using data to progressively reduce risk pooling).


AI and us

Code Dependent: Living in the shadow of AI by Madhumita Murgia is a gripping read. She’s the FT’s AI Editor, so the book is well-written and benefits from her reporting experience at the FT and previously Wired. It is a book of reportage, collating tales of people’s bad experiences either as part of the low-paid workforce in low-income countries tagging images or moderating content, or on the receiving end of algorithmic decision-making. The common thread is the destruction of human agency and the utter absence of accountability or scope for redress when AI systems are created and deployed.

The analytical framework is the idea of data colonialism: the extraction of information from individuals for use in ways that never benefit them. The book is not entirely negative about AI and sees the possibilities. One example is the use of AI on a large sample of knee X-rays to look for osteoarthritis. The puzzle being tackled by the researcher concerned was that African American patients consistently reported greater pain than patients of European extraction when their X-rays looked exactly the same to the human radiologists. The explanation turned out to be that the X-rays were scored against a scale developed in mid-20th-century Manchester on white, male patients. When the researcher, Ziad Obermeyer, fed a database of X-ray images to an AI algorithm trained on the patients’ own reports of pain rather than the radiologists’ scores, his model proved a much better predictor of pain. Humans wear blinkers created by the measurement frameworks we have already constructed, whereas AI is (or can be) a blank slate.

However, this is one of the optimistic examples in the book, where AI can potentially offer a positive outcome for humans. It is outnumbered by the counter-examples – Uber drivers short-changed by the algorithm or falsely flagged for some misdemeanour with no possibility of redress, women haunted by deepfake pornography, Kenyan workers traumatised by the images they have to assess for content moderation yet unable even to speak about it because of the NDAs they must sign, data collected from poor and powerless humans to train medical apps whose use they will never be able to afford.

The book brought to life for me an abstract idea I’ve been thinking about pursuing for a while: the need to find business models and financing modes that will enable the technology to benefit everyone. The technological possibilities are there, but the only prevailing models are exploitative. Who is going to work out how to deploy AI for the common good? How can the use of AI models be made accountable? Because it isn’t just a matter of ‘computer says No’, but rather ‘computer doesn’t acknowledge your existence’. And behind the computers stand the rich and powerful of the tech world.

There are lots of new books about AI out or about to be published, including AI Needs You by my colleague Verity Harding (on which I’ll post separately). I strongly recommend both of these; and would also observe that it’s women at the forefront of pushing for AI to serve everyone.


Life, the universe and everything

I’m late to Max Tegmark’s Life 3.0. For all its bestseller status, it didn’t do a lot for me – probably more to do with me than the book. There’s a large chunk about the distant future and existential risk, which I can’t get interested in. There’s also a lot of physics, evolution, philosophy and cognitive science thrown into the mix, at a very simplified level. And then there’s the love-in with Elon Musk – including a back-cover blurb by the billionaire recently referred to by the Daily Star as a ‘car salesman’. Musk funded Tegmark’s Future of Life Institute, and Life 3.0 was published in 2017, before Musk’s Twitter takeover and voyage into questionable political stances. But fundamentally, I couldn’t figure out what the book is trying to say, beyond that AI is changing things a lot.

Having said all that, there were some points that interested me. One is the idea of the substrate-independence of computation. Another – one that jumps out from the examples of AI use cases and how they can go wrong, rather than being made explicitly in the book – is that communication between AIs and humans will be fundamentally important in avoiding terrible mistakes; the UX design here is surely as important as any prompt engineering. The third is a section about the reported argument (by David Vladeck) that self-driving cars should be required to have their own car insurance, which would incentivise safety in their design. This raises the question of whether AIs could own property – although, when you think about it, one could instead require the owners of self-driving cars to take out the insurance. But it’s an interesting thought.

I think Life 3.0 is worth a read nevertheless. (Yuval Noah Harari quite liked it – whatever you make of that.) It ranges widely over the kind of issues societies need to be thinking about as they let AIs operate, and is clearly written – a good flight or train journey book.


AI needs all of us

There’s no way I can be unbiased about Verity Harding’s new book AI Needs You: How we can change AI’s future and save our own, given that it began with a workshop Verity convened and the Bennett Institute hosted in Cambridge a few years ago. The idea – quite some time before the current wave of AI hype, hope and fear – was to reflect on how previous emerging disruptive technologies had come to be governed. After some debate we settled on space, embryology, and ICANN (the internet domain naming body), as between them these seemed to echo some of the issues regarding AI.

These discussions set the scene for Verity’s research into the detailed history of governance in each of these cases, and the outcome is a fascinating book that describes each in turn and reflects on the lessons for us now. The overall message is that the governance and use of technology in the public interest, for the public good, is possible. There is no technological determinism, nor any trade-off between public benefit and private innovation. The ‘Silicon Valley’ zeitgeist of inevitability, the idea that the tech is irresistible and society’s task is to leave its management to the experts, is false.

The implication of this – and hence the ‘Needs You’ of the title – is that “Understanding that technology – how it gets built, why, and by whom – is critical for anyone interested in the future of our society.” How AI develops, what it is used for and how – these are political questions requiring engaged citizens. This is why the historical examples are so fascinating, revealing as they do the messy practicalities and contingency of citizen engagement, political debate, quiet lobbying, co-ordination efforts, events and sheer luck. The embryology example is a case in point: the legislation in the UK was based on the hard work of the Warnock Commission, its engagement with citizens and its tireless efforts to explain the science; but also on years of political debate and a key decision by Mrs Thatcher about its Parliamentary progress. The resulting legislation has since stood the test of time and set an ethical and regulatory framework for other countries too. The lesson is that the governance of AI will not be designed by clever people, but will emerge as the outcome of political and social forces.

The book is beautifully written and a gripping read (more than you might expect for a book about regulating technology). There are quite a few new books on AI out this spring, and there are others I’ve read in proof that are also excellent; but this will definitely be one of the ones that stand the test of time. Not for nothing did Time magazine name Verity one of the 100 most influential people in AI. She is now leading a MacArthur Foundation-funded project at the Bennett Institute on the geopolitics of AI. I’ll be in conversation with her at Waterstones in Cambridge on 14th March.


The path not taken in Silicon Valley

The Philosopher of Palo Alto: Mark Weiser, Xerox PARC, and the original Internet of Things by John Tinnell is a really interesting read in the context of the latest developments in AI. I do have a boundless appetite for books about the history of the industry, and was intrigued by this as I’d never heard of Mark Weiser. The reason for that gap, even though he ran the computer science lab at Xerox PARC, is probably that his philosophy of computing lost out. In a nutshell, he was strongly opposed to tech whose smartness involved making people superfluous.

Based on his reading of philosophers from Heidegger and (Michael) Polanyi to Merleau-Ponty, Weiser opposed the Cartesian mind-body dualism underlying Turing’s 1950 paper and the subsequent development of late 20th-century digital technologies focused on ‘machines that think’ – electronic brains. He aimed to develop computing embedded in the environment to support humans in their activities, rather than computing via screens, which aimed to bring the world to people but through a barrier of processing. In one talk, he gave the analogy of what makes words useful. Libraries gather many words in a central location and are of course very useful. But words that ‘disappear’ into the environment are also useful, like street signs and the labelling on packages in the supermarket. Nobody would be able to shop efficiently if there were no words on the soup cans and they instead had to consult a library’s directory of shelf locations to find the tomato flavour.

Weiser also emphasised the role of the human body in knowledge and communication: “The human body, whatever form it took, was a medium not a machine.” In a dualist conception of mind and body it seems reasonable to think of a machine substituting for the activities of the mind. But the body’s senses are not information processors, and cannot be substituted by digital sensors; embodied human experience in the world is part of human knowledge. Weiser became highly sceptical of the industry’s trajectory, whereby software more and more “dictated what could and could not happen in a place.” Rather than mediating between the physical world and humans, tech should be looking to augment the material world in useful ways (hence the subtitle about the original Internet of Things).

Weiser died young – another possible reason why he is not better known. One can imagine, though, what he would have thought of generative AI. The book’s Introduction ends with a quote from Lewis Mumford: “The machine is just as much a creature of thought as the poem.” These AI products have been imagined as disembodied brains that get in the way of our direct experience of the world, and indeed increasingly limit our ability to shape the world we want. A really interesting read, and one that will send me off to read other things – including the work of a PARC ethnographer who is really the second hero of this book, Lucy Suchman.
